\section{Introduction}
Fundamental geometric objects which have been intensely studied in recent years
are \emph{wave maps}. These maps satisfy a \lq manifold valued wave equation' and can be seen as geometric generalizations
of solutions to the standard linear wave equation on flat Minkowski space. Wave maps are defined as follows.
Let $(M,g)$ be a $(1+d)$--dimensional Lorentzian spacetime, and let $(N,h)$ be a Riemannian manifold. A wave map
$U : M \rightarrow N$ is defined to
be a formal critical point of the action functional
\begin{align}\label{s01}
\mathcal S(U,\partial U) = \frac{1}{2} \int_M g^{\mu \nu} \langle \partial_\mu U, \partial_\nu U \rangle_h dg.
\end{align}
As a critical point of this action, a wave map $U$ satisfies the Euler--Lagrange equations associated to $\mathcal S$, which in local coordinates are
\begin{align}\label{s02}
\Box_g U^i + \Gamma^i_{jk}(U) \partial_{\mu} U^j \partial_{\nu} U^k g^{\mu \nu} = 0,
\end{align}
where $\Box_g := \frac{1}{\sqrt{-g}} \partial_\mu(g^{\mu \nu} \sqrt{-g} \partial_\nu)$ is the D'Alembertian for the
background metric $g$ and $\Gamma^i_{jk}$ are the Christoffel symbols for the target metric $h$. The system
\eqref{s02} is referred to as the \emph{wave map system} or as simply the \emph{wave map equation}. Note that if
$(M,g) = (\mathbb R^{1+d}, \eta)$, flat Minkowski space, and $N = \mathbb R$ then from \eqref{s02} we see that wave maps are simply solutions to the free wave equation
on $\mathbb R^{1+d}$.
The most studied setting of wave maps has been when $(M,g) = (\mathbb R^{1+d}, \eta)$ and $(N,h)$ is a
$d$--dimensional Riemannian manifold (see the classical reference
\cite{shat} and the recent review \cite{sch}). Wave maps are treated as solutions to the initial value problem for \eqref{s02}.
It is known that solutions starting from small initial data (within a certain smoothness space) are global and behave,
in a sense, like solutions to the free wave equation on Minkowski space. Recently, researchers have turned to the problem
of describing the long time dynamics of generic large data solutions. The guiding principle is the so called
\emph{soliton resolution conjecture}. This belief asserts that for most nonlinear dispersive PDE,
generic globally defined solutions asymptotically decouple into a superposition of nonlinear bulk terms (traveling waves,
rescaled solitons, etc.) and radiation (a solution to the underlying linear equation). However, when trying to verify this
conjecture for wave maps on Minkowski space, complications arise due to the scaling symmetry:
\begin{align*}
U(t,x) \mbox{ solves \eqref{s02}} \implies U_{\lambda}(t,x) = U(\lambda t, \lambda x) \mbox{ solves \eqref{s02}.}
\end{align*}
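Indeed, on flat space each term in \eqref{s02} scales the same way: with $U_\lambda(t,x) = U(\lambda t, \lambda x)$, the chain rule gives
\begin{align*}
\Box_\eta U^i_\lambda(t,x) = \lambda^2 (\Box_\eta U^i)(\lambda t, \lambda x), \quad
\Gamma^i_{jk}(U_\lambda) \partial_\mu U^j_\lambda \partial_\nu U^k_\lambda \eta^{\mu \nu}(t,x) = \lambda^2 \left ( \Gamma^i_{jk}(U) \partial_\mu U^j \partial_\nu U^k \eta^{\mu \nu} \right )(\lambda t, \lambda x),
\end{align*}
so the left--hand side of \eqref{s02} for $U_\lambda$ is $\lambda^2$ times that for $U$, evaluated at $(\lambda t, \lambda x)$.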
Due to this scaling symmetry, the long--time dynamics of large data wave maps on $\mathbb R^{1+d}$ can be very complex and one
can have (depending on the geometry of the target and dimension) self--similar solutions, finite time breakdown via energy concentration, dynamic \lq towers' of solitons, and
other interesting scenarios. Therefore, to gain better insight on the role of the soliton resolution
conjecture it is instructive to consider models when the background metric does not admit such a scaling symmetry. Moreover, the case of a curved background metric is still relatively unexplored. These reasons motivated
the following model introduced by Bizo\'n and Kahl \cite{biz2} which we consider in this paper.
In this work, we continue our study of so called equivariant \emph{wave maps on a wormhole} initiated in \cite{cpr}. The
setup is the following. We consider wave maps $U : \mathbb R \times (\mathbb R \times \mathbb S^2) \rightarrow \mathbb S^3$ where the background metric is
given by
\begin{align}\label{s02b}
ds^2 = -dt^2 + dr^2 + (r^2 + 1)(d\theta^2 + \sin^2 \theta d\varphi^2 ), \quad t,r \in \mathbb R, (\theta,\varphi) \in \mathbb S^2.
\end{align}
Each constant time slice is given by the Riemannian manifold $\mathcal M := \mathbb R \times \mathbb S^2$ with metric
\begin{align}\label{s02c}
ds^2 = dr^2 + (r^2 + 1)(d\theta^2 + \sin^2 \theta d\varphi^2 ), \quad r \in \mathbb R, (\theta,\varphi) \in \mathbb S^2.
\end{align}
Since $r^2 + 1 \approx r^2$ for large $r$, $\mathcal M$ has two asymptotically Euclidean ends connected by a 2--sphere
at $r = 0$ (the throat). Because of this, the above spacetime has appeared as a prototype `wormhole' geometry in the
general relativity literature since its introduction by Ellis in the 1970's and popularization by Morris and
Thorne in the 1980's (see \cite{mt}, \cite{fjtt} and the references therein). Due to the rotational symmetry of the background and target, it is natural to consider a subclass of
wave maps $U : \mathbb R \times (\mathbb R \times \mathbb S^2) \rightarrow \mathbb S^3$ such that
\begin{align}\label{s02d}
\exists \ell \in \mathbb N, \quad U \circ \rho = \rho^\ell \circ U, \quad \forall \rho \in SO(3).
\end{align}
Here, the rotation group $SO(3)$ acts on the background and target in the natural way. The integer
$\ell$ is commonly referred to as the \emph{equivariance class} and can be thought of as parametrizing a
fixed amount of angular momentum for the wave map. If we fix spherical coordinates
$(\psi, \vartheta, \phi)$ on $\mathbb S^3$, then from \eqref{s02d} it follows that $U$ is completely determined by the associated
function $\psi = \psi(t,r)$ and the wave map equation \eqref{s02} reduces to the single scalar semilinear wave equation
for $\psi$:
\begin{align}
\begin{split}\label{s04}
&\partial_t^2 \psi - \partial_r^2 \psi - \frac{2r}{r^2 + 1} \partial_r \psi + \frac{\ell(\ell+1)}{2(r^2 + 1)} \sin 2 \psi = 0, \quad (t,r) \in \mathbb R \times \mathbb R,\\
&\overrightarrow \psi(0) = (\psi_0,\psi_1).
\end{split}
\end{align}
Throughout this work we use the notation $\overrightarrow \psi(t) = (\psi(t,\cdot), \partial_t \psi(t,\cdot))$. Solutions $\psi$ to \eqref{s04}
will be referred to as $\ell$--equivariant \emph{wave maps on a wormhole}. The equation \eqref{s04} has the following conserved energy along the flow:
\begin{align*}
\mathcal E_\ell(\overrightarrow \psi(t)) := \frac{1}{2} \int_\mathbb R \left [ |\partial_t \psi(t,r)|^2 + |\partial_r \psi(t,r)|^2 +
\frac{\ell(\ell+1)}{r^2 + 1} \sin^2 \psi(t,r) \right ] (r^2 + 1) dr = \mathcal E_\ell(\overrightarrow \psi(0)).
\end{align*}
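The conservation of $\mathcal E_\ell$ is also easy to observe numerically. The following finite--difference sketch (our own illustration; the scheme, grid parameters, and data are illustrative choices, not taken from \cite{biz2}) evolves \eqref{s04} with $\ell = 1$ and smooth, rapidly decaying degree zero data by a standard leapfrog scheme and reports the relative drift of the discretized energy:

```python
import math

ELL = 1                       # illustrative equivariance class
L, NR = 30.0, 1201            # spatial grid on [-L, L]
DR = 2 * L / (NR - 1)
R = [-L + i * DR for i in range(NR)]
DT, NT = 0.02, 250            # evolve to t = 5

def spatial(psi):
    # Spatial part of (s04): psi_rr + (2r/(r^2+1)) psi_r - l(l+1)/(2(r^2+1)) sin(2 psi).
    out = [0.0] * NR
    for i in range(1, NR - 1):
        w = R[i] * R[i] + 1.0
        psi_rr = (psi[i + 1] - 2.0 * psi[i] + psi[i - 1]) / DR ** 2
        psi_r = (psi[i + 1] - psi[i - 1]) / (2.0 * DR)
        out[i] = psi_rr + 2.0 * R[i] / w * psi_r \
            - ELL * (ELL + 1) / (2.0 * w) * math.sin(2.0 * psi[i])
    return out

def energy(psi, psi_t):
    # Riemann-sum approximation of the conserved energy E_ell.
    e = 0.0
    for i in range(1, NR - 1):
        w = R[i] * R[i] + 1.0
        psi_r = (psi[i + 1] - psi[i - 1]) / (2.0 * DR)
        e += 0.5 * (psi_t[i] ** 2 + psi_r ** 2
                    + ELL * (ELL + 1) / w * math.sin(psi[i]) ** 2) * w * DR
    return e

prev = [0.5 * math.exp(-x * x) for x in R]                   # psi_0: degree zero data
f0 = spatial(prev)
cur = [prev[i] + 0.5 * DT ** 2 * f0[i] for i in range(NR)]   # Taylor start, psi_1 = 0
E0 = energy(prev, [0.0] * NR)
for _ in range(NT):
    f = spatial(cur)
    prev, cur = cur, [2.0 * cur[i] - prev[i] + DT ** 2 * f[i] for i in range(NR)]
# One extra step so the time derivative can be centered at the final slice.
f = spatial(cur)
nxt = [2.0 * cur[i] - prev[i] + DT ** 2 * f[i] for i in range(NR)]
psi_t = [(nxt[i] - prev[i]) / (2.0 * DT) for i in range(NR)]
E1 = energy(cur, psi_t)
print("relative energy drift:", abs(E1 - E0) / E0)
```

The data are supported well inside the grid and the run is short enough that no boundary reflection contaminates the computation; the observed drift is of the size of the truncation error.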
In order for the initial data to have finite energy, we must have for some $m,n \in \mathbb Z$,
\begin{align*}
\psi_0(-\infty) = m\pi \quad \mbox{and} \quad \psi_0(\infty) = n\pi.
\end{align*}
For a finite energy solution $\overrightarrow \psi(t)$ to \eqref{s04} to depend continuously on $t$, we must have that $\psi(t,-\infty) = m
\pi$ and $\psi(t,\infty) = n\pi$ for all $t$. Due to the symmetries $\psi \mapsto m\pi + \psi$ and
$\psi \mapsto -\psi$ of
\eqref{s04}, we will, without loss of generality, fix $m = 0$ and assume
$n \in \mathbb N \cup \{0\}$. Thus, we only consider wave maps which send the left Euclidean end at $r = -\infty$ to the
north pole of $\mathbb S^3$. The integer $n$ is referred to as the topological degree of the map $\psi$ and, heuristically,
represents the minimal number of times $\mathcal M$ gets wrapped around $\mathbb S^3$ by $\psi$. For each $n \in \mathbb N \cup \{0\}$, we denote the
set of finite energy pairs of degree $n$ by
\begin{align*}
\mathcal E_{\ell,n} := \left \{ (\psi_0,\psi_1) : \mathcal E_\ell(\psi_0,\psi_1) < \infty, \quad \psi_0(-\infty) = 0, \quad \psi_0(\infty) = n\pi
\right \}.
\end{align*}
As described in \cite{biz2} and in our earlier work \cite{cpr}, the wave maps on a wormhole equation has features that make it an attractive model in which to study the soliton resolution conjecture. The first feature is that global well--posedness for \eqref{s04} holds for arbitrary data: the geometry of the wormhole breaks the scaling invariance present in the flat case and removes the singularity at the origin, so a simple contraction argument combined with conservation of energy and time stepping shows that every solution to \eqref{s04} is globally defined in time (see Section 3 for more details). Another feature of \eqref{s04} is an abundance of static solutions. Such solutions are more commonly referred to as harmonic maps. More precisely, it can be shown that for every $\ell \in \mathbb N$, $n \in \mathbb N \cup \{0\}$ there exists a unique solution $Q_{\ell,n}
\in \mathcal E_{\ell,n}$ to
\begin{align}\label{s05}
Q'' + \frac{2r}{r^2 + 1} Q' - \frac{\ell(\ell+1)}{2(r^2 + 1)} \sin 2 Q = 0, \quad r \in \mathbb R.
\end{align}
See Section 2 for more details.
In \cite{biz2} the authors gave mixed numerical and analytic evidence for the following formulation of the soliton resolution conjecture for this model: for every $\ell \in \mathbb N$, $n \in \mathbb N \cup \{0\}$, and
for any $(\psi_0,\psi_1) \in \mathcal E_{\ell,n}$ there exist a unique global solution $\psi$ to \eqref{s04}
and solutions $\varphi^{\pm}_L$ to the linearized equation
\begin{align}\label{s06a}
\partial_t^2 \varphi - \partial_r^2 \varphi - \frac{2r}{r^2 + 1} \partial_r \varphi + \frac{\ell(\ell+1)}{r^2 + 1} \varphi = 0,
\end{align}
such that
\begin{align*}
\overrightarrow \psi(t) = (Q_{\ell,n}, 0) + \overrightarrow \varphi^{\pm}_L(t) + o(1),
\end{align*}
as $t \rightarrow \pm \infty$. In our earlier work \cite{cpr}, we verified this conjecture in the so called \emph{corotational} case
$\ell = 1$. In this work, we verify this conjecture for all equivariance classes.
We note here that a model with features similar to wave maps on a wormhole was previously studied in
\cite{ls}, \cite{kls1} and \cite{klls2} which served as further motivation and as a road map for the work carried out here. In these works, the
authors studied $\ell$--equivariant wave maps $U: \mathbb R \times (\mathbb R^3 \backslash B(0,1)) \rightarrow \mathbb S^3$. In their work, an $\ell$--equivariant
wave map $U$ is
determined by the associated azimuth angle $\psi(t,r)$ which satisfies the equation
\begin{align}
\begin{split}\label{s05a}
&\partial_t^2 \psi - \partial_r^2 \psi - \frac{2}{r} \partial_r \psi + \frac{\ell(\ell+1)}{2r^2}\sin 2 \psi = 0, \quad t \in \mathbb R, \ r \geq 1,\\
&\psi(t,1) = 0, \quad \psi(t,\infty) = n \pi, \quad \forall t.
\end{split}
\end{align}
Such wave maps were called $\ell$--equivariant \emph{exterior wave maps}. Similar to wave maps on a wormhole, global well--posedness
and an abundance of harmonic maps hold for the exterior wave map equation \eqref{s05a}. In the works
\cite{ls}, \cite{kls1}, and \cite{klls2}, the authors proved the soliton resolution conjecture for $\ell$--equivariant exterior
wave maps for arbitrary $\ell \geq 1$. We point out that the geometry of the background $\mathbb R \times (\mathbb R^3 \backslash B(0,1))$
is still flat and could be considered as artificially removing the scaling symmetry present in the flat case. On the other hand,
the curved geometry of the background considered in this
work is what removes scaling invariance. This makes wave maps on a wormhole more geometric in nature
while still retaining the properties that make them attractive for studying the soliton resolution conjecture. However, due to the
asymptotically Euclidean nature of the wormhole geometry, we are able to adapt techniques developed for the flat case
to this curved geometry.
We now state our main result. In what follows we use the following notation. If $r_0 \geq -\infty$ and
$w(r)$ is a positive continuous function on $[r_0,\infty)$, then we denote
\begin{align*}
\| (\psi_0,\psi_1) \|_{\mathcal H([r_0,\infty); w(r)dr)}^2 :=
\int_{r_0}^\infty \left [ |\psi_0'(r)|^2 + |\psi_1(r)|^2 \right ] w(r) dr.
\end{align*}
The Hilbert space $\mathcal H([r_0,\infty); w(r)dr)$ is then defined to be the completion of the vector space of $C^\infty_0(r_0,\infty)$ pairs
with respect to the norm $\| \cdot \|_{\mathcal H([r_0,\infty); w(r)dr)}$. Let $\ell \in \mathbb N$ be a fixed equivariance class, and let $n \in \mathbb N \cup \{0\}$ be a fixed topological degree.
In the $n=0$ case, the natural space to place the solution $\overrightarrow \psi(t)$ to \eqref{s04} in is the \emph{energy space}
$\mathcal H_0 := \mathcal H((-\infty,\infty); (r^2 + 1)dr)$. Indeed, it is easy to show that $\mathcal E_{\ell}(\overrightarrow \psi) \simeq
\| \overrightarrow \psi \|_{\mathcal H_0}^2$ for $(\psi_0,\psi_1) \in \mathcal E_{\ell,0}$. For $n \geq 1$, we measure distance relative to $(Q_{\ell,n},0)$ and define $\mathcal H_{\ell,n} := \mathcal E_{\ell,n} - (Q_{\ell,n},0)$
with \lq norm'
\begin{align*}
\| \overrightarrow \psi \|_{\mathcal H_{\ell,n}} := \| \overrightarrow \psi - (Q_{\ell,n},0) \|_{\mathcal H_0}.
\end{align*}
Note that $\psi(r) - Q_{\ell,n}(r) \rightarrow 0$ as $r \rightarrow \pm \infty$. The main result of this work is the following.
\begin{thm}\label{t01}
For all $(\psi_0,\psi_1) \in \mathcal E_{\ell,n}$, there exists a unique global solution $\overrightarrow \psi(t)
\in C(\mathbb R; \mathcal H_{\ell,n})$ to \eqref{s04} which scatters forwards and backwards in time to the harmonic map $(Q_{\ell,n},0)$, i.e. there exist
solutions $\varphi^{\pm}_L$ to the linearized equation \eqref{s06a} such that
\begin{align*}
\overrightarrow \psi(t) = (Q_{\ell,n}, 0) + \overrightarrow \varphi_L^{\pm}(t) + o_{\mathcal H_0}(1),
\end{align*}
as $t \rightarrow \pm \infty$.
\end{thm}
We now give an outline of the proof and the paper. The proof is a generalization of that for the corotational case
$\ell = 1$ in \cite{cpr} and draws from the work \cite{klls2}. For this model, the setup is as follows. We first note that
the existence and uniqueness
of the harmonic map $Q_{\ell,n}$ follows nearly verbatim from the arguments in \cite{cpr} for the special case $\ell = 1$ which are classical ODE type arguments. This is discussed more in Section 2. In the remainder of Section 2 we give
an equivalent reformulation of Theorem \ref{t01} which is simpler to work with. Instead of studying the azimuth angle $\psi$, we
study the function $u$ defined by the relation $\psi = Q_{\ell,n} + \langle r \rangle^\ell u$. A simple computation shows that $u$ satisfies a radial semilinear wave equation on a higher dimensional wormhole $(\mathcal M^d,g)$
\begin{align}
\begin{split}\label{s06}
&\partial_t^2 u - \Delta_g u + V(r) u = N(r,u), \quad (t,r) \in \mathbb R \times \mathbb R, \\
&\overrightarrow u(0) = (u_0,u_1) \in \mathcal H := \dot H^1 \times L^2(\mathcal M^d).
\end{split}
\end{align}
Here, $d = 2\ell + 3$ and the potential $V$ and nonlinearity $N$ are explicit with $V$ arising from linearizing \eqref{s04}
about the harmonic map $Q_{\ell,n}$. Our main result, Theorem \ref{t01}, is shown to be equivalent to the statement that every solution
to \eqref{s06} is global and scatters to free waves on $\mathcal M^d$ as $t \rightarrow \infty$ (see Theorem \ref{t21} for the precise
statement). The remainder of the work is then devoted to proving Theorem \ref{t21} (the `$u$--formulation' of our main
result). In particular, we use the concentration--compactness/rigidity method introduced by Kenig and Merle in their work on the energy--critical Schr\"odinger and wave equations
\cite{km06} \cite{km08}. The method has three main steps and is by contradiction. In the first step, we show that solutions to
\eqref{s06} starting from small initial data scatter to free waves as $t \rightarrow \pm \infty$. In the second step, we then
show that if our main result fails, then there exists a nonzero solution $u_*$ to \eqref{s06} which doesn't scatter in
either direction and is, in a certain sense, minimal. This minimality imposes the following compactness property on $u_*$: the set
\begin{align*}
K = \{ \overrightarrow u_*(t) : t \in \mathbb R \}
\end{align*}
is precompact in $\mathcal H$. These two steps are carried out in Section 3. We remark here that in the work \cite{klls2} the authors
established these steps by using delicate estimates and arguments developed in \cite{bulut} and \cite{cpr1} for the energy--critical wave equation on flat space in high dimensions. This is done by using a Strauss estimate to reduce the nonlinearity to an energy--critical
power on $\mathbb R^{1 + (2\ell+3)}$. However, the arguments we give in this work are much simpler and bypass all of this technical machinery by using only basic Strichartz and Strauss estimates (in fact, our argument also applies to the analogous step in the exterior wave map problem). In the final and most
difficult step, we establish the following rigidity result: if $u$ solves \eqref{s06} and
\begin{align*}
K = \{ \overrightarrow u(t) : t \in \mathbb R \}
\end{align*}
is precompact in $\mathcal H$ then $\overrightarrow u = (0,0)$. This step contradicts the second step and we conclude that our main result
Theorem \ref{t01} holds. This is proved in Section 4. In particular, we show that such a solution $u$ must be a
static solution to \eqref{s06} which implies $\psi = Q_{\ell,n} + \langle r \rangle^\ell u$ is a harmonic map. By the
uniqueness of $Q_{\ell,n}$, it follows that $\overrightarrow u = (0,0)$ as desired. The proof that $u$ must be a static
solution to \eqref{s06} uses channels of energy arguments rooted in \cite{dkm4} which were then generalized and
used in the works \cite{kls1} \cite{klls2} on exterior wave maps. These arguments
focus only on the behavior of solutions in regions exterior to light cones, and this is what allows us to adapt them to our asymptotically
Euclidean setting.
\textbf{Acknowledgments}: This work was completed during the author’s doctoral studies at the University of Chicago. The author
would like to thank his adviser, Carlos Kenig, for his invaluable patience, guidance and careful reading of the original manuscript.
\section{Harmonic Maps and a Reduction to Higher Dimensions}
For the remainder of the
paper we fix an equivariance class $\ell \in \mathbb N$, topological degree $n \in \mathbb N \cup \{0\}$ and study solutions
to the wave map on a wormhole equation
\begin{align}\label{s21}
\begin{split}
&\partial_t^2 \psi - \partial_r^2 \psi - \frac{2r}{r^2 + 1} \partial_r \psi + \frac{\ell(\ell+1)}{2(r^2 + 1)} \sin 2 \psi = 0, \quad (t,r) \in \mathbb R \times \mathbb R\\
&\psi(t,-\infty) = 0, \quad \psi(t,\infty) = n \pi, \quad \forall t, \\
&\overrightarrow \psi(0) = (\psi_0,\psi_1).
\end{split}
\end{align}
We recall that the energy
\begin{align}\label{s21e}
\mathcal E_\ell(\psi) = \frac{1}{2} \int \left[ |\partial_t \psi|^2 + |\partial_r \psi|^2 + \frac{\ell(\ell+1)}{r^2 + 1}
\sin^2 \psi \right] (r^2 + 1)dr
\end{align}
is conserved along the flow, and so we take initial data $(\psi_0,\psi_1)$ in the metric space
\begin{align*}
\mathcal E_{\ell,n} = \left \{
(\psi_0,\psi_1) : \mathcal E_\ell(\psi_0,\psi_1) < \infty, \quad \psi_0(-\infty) = 0, \quad \psi_0(\infty) = n\pi
\right \}.
\end{align*}
In this section we review the theory of static solutions to \eqref{s21} (i.e. harmonic maps) and reduce the study of $\ell$--equivariant
wave maps on a wormhole to the study of a semilinear wave equation on a higher dimensional wormhole.
\subsection{Harmonic Maps}
In this subsection, we briefly review the theory of harmonic maps for \eqref{s21}. The main result is the following.
\begin{ppn}\label{pa21}
There exists a unique solution $Q_{\ell,n} \in \mathcal E_{\ell,n}$ to the equation
\begin{align}\label{sa21}
Q'' + \frac{2r}{r^2 + 1} Q' - \frac{\ell(\ell+1)}{2(r^2 + 1)} \sin 2 Q = 0.
\end{align}
In the case $n = 0$, $Q_{\ell,0} = 0$. If $n \in \mathbb N$, then $Q_{\ell,n}$ is increasing on $\mathbb R$, satisfies
$Q_{\ell,n}(r) + Q_{\ell,n}(-r) = n\pi$ and there exists $\alpha_{\ell,n} \in \mathbb R$ such that
\begin{align*}
Q_{\ell,n}(r) &= n\pi - \alpha_{\ell,n} r^{-\ell -1} + O(r^{-\ell - 3}), \quad \mbox{as } r \rightarrow \infty, \\
Q_{\ell,n}(r) &= \alpha_{\ell,n} |r|^{-\ell -1} + O(r^{-\ell - 3}), \quad \mbox{as } r \rightarrow -\infty.
\end{align*}
The $O(\cdot)$ terms satisfy the natural derivative bounds.
\end{ppn}
The proof of Proposition \ref{pa21} is nearly identical to the proof of the corresponding statement,
Proposition 2.1, in \cite{cpr} which was inspired by arguments in \cite{mctr}. We briefly sketch the argument.
\begin{proof}[Sketch of Proof]
We first can use simple ODE arguments to show that every solution $Q$ to \eqref{sa21} is defined on $\mathbb R$
and has limits $Q(\pm \infty)$ in $\mathbb Z \pi$ or $(\mathbb Z + \frac{1}{2}) \pi$. Moreover, if
$Q(\pm \infty) \in \mathbb Z \pi$, then $Q$ is monotonic and there exist $\alpha,\beta \in \mathbb R$ such that
\begin{align}
\begin{split}\label{sa22}
Q(r) &= Q(\infty) + \alpha r^{-\ell - 1} + O(r^{-\ell - 3}), \quad \mbox{as } r \rightarrow \infty, \\
Q(r) &= Q(-\infty) + \beta r^{-\ell - 1} + O(r^{-\ell - 3}), \quad \mbox{as } r \rightarrow -\infty.
\end{split}
\end{align} For existence, we use a classical shooting argument. For $b > 0$, we consider the solution $Q_b$ to
\begin{align*}
&Q_b'' + \frac{2r}{r^2 + 1} Q_b' - \frac{\ell(\ell+1)}{2(r^2 + 1)} \sin 2 Q_b = 0, \\
&Q_b(0) = \frac{n\pi}{2}, \quad Q_b'(0) = b,
\end{align*}
and show the existence of a special value $b_*$ for the shooting parameter $b$ such that
$Q_{b_*}(\infty) = n\pi$. Indeed, using the properties of general solutions to \eqref{sa21} already outlined
and simple ODE arguments, we can show that the sets
\begin{align*}
B_< &= \{ b > 0 : Q_b(\infty) < n\pi \}, \\
B_> &= \{ b > 0 : Q_b(\infty) > n\pi \},
\end{align*}
are both nonempty, open, proper subsets of $(0,\infty)$. By connectedness, there exists $b_*$ such that $Q_{b_*}(\infty) =
n\pi$. From the initial condition $Q_{b_*}(0) = \frac{n\pi}{2}$ and the symmetry $Q(r) \mapsto n\pi - Q(-r)$ of \eqref{sa21}, we conclude that
$Q_{b_*}(r) = n\pi - Q_{b_*}(-r)$ as well as $Q_{b_*}(-\infty) = 0$. We then set $Q_{\ell,n} = Q_{b_*}$.
For the uniqueness of $Q_{\ell,n}$, suppose that there are two solutions $Q_1, Q_2$ to \eqref{sa21}. By the previous discussion each solution
is monotonic increasing on $\mathbb R$ and satisfies \eqref{sa22}. We change variables to $x = \arcsinh r$ so that
\eqref{sa21} becomes
\begin{align}
Q'' + \tanh x \, Q' - \frac{\ell(\ell+1)}{2}\sin 2 Q = 0. \label{sa23}
\end{align}
Based on \eqref{sa22} (in the $x$--variable) and \eqref{sa23}, we can then show that if we assume, without loss of generality, that
$\frac{dQ_2}{dx} > \frac{dQ_1}{dx}$ for $x$ large and positive, then
\begin{align*}
\frac{dQ_2}{dx} > \frac{dQ_1}{dx}, \quad \forall x \in \mathbb R.
\end{align*}
However, this can easily be shown to be incompatible with \eqref{sa22} as $x \rightarrow -\infty$. Thus, $Q_1 = Q_2$, and the solution
$Q_{\ell,n}$ constructed is unique. For the full details of the argument in the $\ell =1$ case, see Section 2 in \cite{cpr}.
\end{proof}
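The shooting argument sketched above is easy to illustrate numerically. The following sketch (our own illustration; the equivariance class, degree, step size, and integration window are illustrative choices) integrates \eqref{sa23} with $\ell = 1$, $n = 1$ by a hand--rolled RK4 scheme and bisects on the shooting parameter $b$, using whether the trajectory ever exceeds $n\pi$ as a proxy for membership in $B_>$:

```python
import math

ELL, N = 1, 1                 # illustrative equivariance class and degree
TARGET = N * math.pi

def deriv(x, q, p):
    # First-order system for (sa23): Q' = p, p' = -tanh(x) p + (l(l+1)/2) sin(2Q).
    return p, -math.tanh(x) * p + 0.5 * ELL * (ELL + 1) * math.sin(2.0 * q)

def overshoots(b, x_max=15.0, h=0.01):
    """RK4-integrate Q_b from x = 0; return True if Q_b ever exceeds n*pi."""
    x, q, p = 0.0, 0.5 * TARGET, b
    while x < x_max:
        k1q, k1p = deriv(x, q, p)
        k2q, k2p = deriv(x + h / 2, q + h / 2 * k1q, p + h / 2 * k1p)
        k3q, k3p = deriv(x + h / 2, q + h / 2 * k2q, p + h / 2 * k2p)
        k4q, k4p = deriv(x + h, q + h * k3q, p + h * k3p)
        q += h / 6 * (k1q + 2 * k2q + 2 * k3q + k4q)
        p += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        x += h
        if q > TARGET:
            return True
    return False

# Bisect on b: small b settles below n*pi, large b shoots past it.
b_lo, b_hi = 0.01, 10.0
for _ in range(60):
    b_mid = 0.5 * (b_lo + b_hi)
    if overshoots(b_mid):
        b_hi = b_mid
    else:
        b_lo = b_mid
print("shooting parameter b_* is approximately", b_hi)
```

Near the limiting value $b_*$ the trajectory hovers near $n\pi$ for a long time before committing to one of the two behaviors, the numerical signature of the heteroclinic orbit $Q_{\ell,n}$.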
A fact that will be essential in the final section of this work is that we may always find a unique solution to \eqref{sa21}
with prescribed asymptotics as either $r \rightarrow \infty$ or $r \rightarrow -\infty$ (but not necessarily both).
\begin{ppn}\label{pa22}
Let $\alpha \in \mathbb R$. Then there exists a unique solution $Q^+_{\alpha}$ to \eqref{sa21} such that
\begin{align*}
Q^+_\alpha = n \pi + \alpha r^{-\ell - 1} + O( r^{-\ell - 3}), \quad \mbox{as } r \rightarrow \infty.
\end{align*}
Similarly, given $\beta \in \mathbb R$, there exists a unique solution $Q^-_{\beta}$ to \eqref{sa21} such that
\begin{align*}
Q^-_\beta = \beta r^{-\ell - 1} + O( r^{-\ell - 3}), \quad \mbox{as } r \rightarrow -\infty.
\end{align*}
\end{ppn}
\begin{proof}
The proof is nearly identical to the proof of Proposition 2.4 in \cite{cpr} and we omit the details.
\end{proof}
\subsection{Reduction to a Wave Equation on a Higher Dimensional Wormhole}
In this subsection we reduce the study of the large
data solutions to \eqref{s21} to the study of large data solutions to a semilinear wave equation on a
higher dimensional wormhole geometry. This process is a generalization of the analogous
step in the corotational case in \cite{cpr}.
By Proposition \ref{pa21}, there exists a unique static solution $Q_{\ell,n}(r) \in \mathcal E_{\ell,n}$ to \eqref{s21}.
For a solution $\psi$ to \eqref{s21}, we define $\varphi$ by
\begin{align*}
\psi(t,r) = Q_{\ell,n}(r) + \varphi(t,r).
\end{align*}
Then \eqref{s21} implies that $\varphi$ satisfies
\begin{align}\label{s23}
\begin{split}
&\partial_t^2 \varphi - \partial_r^2 \varphi - \frac{2r}{r^2 + 1} \partial_r \varphi + \ell(\ell+1)\frac{\cos 2 Q_{\ell,n}}{r^2 + 1} \varphi = Z(r,\varphi), \\
&\varphi(t,-\infty) = \varphi(t,\infty) = 0, \quad \forall t, \\
&\overrightarrow \varphi(0) = (\psi_0-Q_{\ell,n},\psi_1),
\end{split}
\end{align}
where
\begin{align*}
Z(r,\varphi) = \frac{\ell(\ell+1)}{2(r^2 + 1)} \left ( \left [ 2 \varphi - \sin 2\varphi \right ] \cos 2 Q_{\ell,n} + ( 1 - \cos 2 \varphi)
\sin 2 Q_{\ell,n} \right ).
\end{align*}
The left--hand side of \eqref{s23} has more dispersion than a free wave on $\mathcal M^3$ due to the repulsive potential
\begin{align*}
\ell(\ell+1)\frac{\cos 2 Q_{\ell,n}}{r^2 + 1} = \frac{\ell(\ell+1)}{r^2 + 1} + O(\langle r \rangle^{-2\ell-4} )
\end{align*}
as $r \rightarrow \pm \infty$. Here and throughout this work, we use the Japanese bracket notation $\langle r \rangle = (r^2 + 1)^{1/2}$. The $O(\cdot)$ term is a consequence of the
asymptotics from Proposition \ref{pa21}. We now make a standard
reduction that incorporates this extra dispersion. We define $u$ and $d$ via the relations
\begin{align*}
\varphi &= \langle r \rangle^{\ell} u, \\
d &= 2 \ell + 3.
\end{align*}
We define the $d$--dimensional wormhole $\mathcal M^d = \mathbb R \times \mathbb S^{d-1}$ with metric
\begin{align*}
ds^2 = dr^2 + (r^2 + 1) d\Omega^2_{d-1},
\end{align*}
where $d\Omega^2_{d-1}$ is the standard round metric on $\mathbb S^{d-1}$. Since we will only be dealing with
functions depending solely on $r$, we will abuse notation slightly and denote
the radial part of the Laplacian on $\mathcal M^d$ by $-\Delta_g$, i.e.
\begin{align*}
-\Delta_g u = - \partial_r^2 u - \frac{(d-1)r}{r^2 + 1} \partial_r u.
\end{align*}
By \eqref{s23}, $u$ satisfies
the radial semilinear wave equation
\begin{align}\label{s24}
\begin{split}
&\partial_t^2 u - \Delta_g u + V(r) u = N(r,u), \\
&u(t,-\infty) = u(t,\infty) = 0, \quad \forall t, \\
&\overrightarrow u(0) = (u_0,u_1),
\end{split}
\end{align}
where the potential term is given by
\begin{align}\label{s25}
V(r) = \frac{\ell^2}{\langle r \rangle^4} + \ell(\ell+1) \frac{ \cos 2 Q_{\ell,n} - 1 }{\langle r \rangle^2},
\end{align}
and the nonlinearity $N(r,u) = F(r,u) + G(r,u)$ is given by
\begin{align}
\begin{split}\label{s26}
F(r,u) &= \frac{\ell(\ell+1)}{\langle r \rangle^{\ell+2}} \sin^2 (\langle r \rangle^\ell u) \sin 2 Q_{\ell,n}, \\
G(r,u) &= \frac{\ell(\ell+1)}{2 \langle r \rangle^{\ell+2}} \left [ 2 \langle r \rangle^\ell u - \sin (2 \langle r \rangle^\ell u) \right ] \cos 2 Q_{\ell,n}.
\end{split}
\end{align}
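The formulas \eqref{s25} and \eqref{s26} follow from the elementary conjugation identity: for $\varphi = \langle r \rangle^\ell u$ and $d = 2\ell + 3$,
\begin{align*}
\partial_r^2 \varphi + \frac{2r}{r^2+1} \partial_r \varphi = \langle r \rangle^\ell \left ( \partial_r^2 u + \frac{(d-1)r}{r^2+1} \partial_r u + \left [ \frac{\ell(\ell+1)}{\langle r \rangle^2} - \frac{\ell^2}{\langle r \rangle^4} \right ] u \right ),
\end{align*}
which is verified directly using $\partial_r \langle r \rangle^\ell = \ell r \langle r \rangle^{\ell - 2}$ and $r^2 = \langle r \rangle^2 - 1$. Inserting this identity into \eqref{s23} and moving the zeroth order terms into the potential yields \eqref{s24} with $V$ as in \eqref{s25}.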
By Proposition \ref{pa21}, the potential $V$ is smooth and satisfies
\begin{align}\label{s27}
V(r) = \frac{\ell^2}{\langle r \rangle^4} + O ( \langle r \rangle^{-2\ell - 4} ).
\end{align}
Also, by Proposition \ref{pa21}, $Q_{\ell,n}(-r) + Q_{\ell,n}(r) = n\pi$, which implies that $V(r)$ is an even function. The nonlinearities $F$ and $G$ satisfy
\begin{align}
F(r,u) &= \left ( \ell(\ell+1) \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \right ) u^2 + F_0(r,u), \label{s28a}
\end{align}
where
\begin{align}
|F_0(r,u)| &\lesssim \langle r \rangle^{2\ell - 3} |u|^4, \label{s28b}
\end{align}
and
\begin{align}
|G(r,u)| &\lesssim \langle r \rangle^{2\ell - 2} |u|^3, \label{s29}
\end{align}
where the implied constants depend only on $\ell$. Since the original azimuth angle $\psi = Q_{\ell,n} + \langle r \rangle^\ell u
\in \mathcal E_{\ell,n}$
, we take initial data $(u_0,u_1)
\in \mathcal H(\mathbb R; \langle r \rangle^{d-1} dr)$ for \eqref{s24}. For the remainder of this section and the next we denote
\begin{align*}
\mathcal H_0 := \mathcal H(\mathbb R; \langle r \rangle^2 dr), \quad \mathcal H:= \mathcal H(\mathbb R; \langle r \rangle^{d-1} dr),
\end{align*}
and note that
$\mathcal H_0$ is simply the space of radial functions in $\dot H^1 \times L^2(\mathcal M^3)$ while $\mathcal H$ is the space
of radial functions in $\dot H^1 \times L^2(\mathcal M^d)$.
In the remainder of the paper, we work in the \lq $u$--formulation' rather than with the original azimuth angle
$\psi$. We first show that a solution $\overrightarrow \psi(t) \in C(\mathbb R; \mathcal H_{\ell,n})$ to \eqref{s21} with initial data $(\psi_0,\psi_1) \in
\mathcal E_{\ell,n}$ yields a solution $\overrightarrow u(t) \in C(\mathbb R; \mathcal H)$ with initial data $(u_0,u_1) = \langle r \rangle^{-\ell}( \psi_0 - Q_{\ell,n}, \psi_1)
\in \mathcal H$ and vice versa. The only fact that needs to be checked is that
\begin{align}\label{s210}
\| \overrightarrow u \|_{\mathcal H} \simeq \left \| \overrightarrow \psi - (Q_{\ell,n},0) \right \|_{\mathcal H_0}.
\end{align}
We define $\varphi = \psi - Q_{\ell,n} = \langle r \rangle^\ell u$ and compute
\begin{align}
\partial_r \varphi = \langle r \rangle^\ell \partial_r u + \ell r \langle r \rangle^{\ell-2} u. \label{s211}
\end{align}
We first note that by the fundamental theorem of calculus, we have the Strauss estimates
\begin{align}
\begin{split}\label{s211s}
|\varphi(r)| &\lesssim \langle r \rangle^{-1/2} \left ( \int |\partial_r \varphi|^2 \langle r \rangle^2 dr \right )^{1/2}, \\
|u(r)| &\lesssim \langle r \rangle^{(2-d)/2} \left ( \int |\partial_r u|^2 \langle r \rangle^{d-1} dr \right )^{1/2}.
\end{split}
\end{align}
Using the Strauss estimates and integration by parts, we obtain the following Hardy inequalities:
\begin{align}
\int |\varphi|^2 dr &\lesssim \int |\partial_r \varphi|^2 \langle r \rangle^2 dr, \notag \\
\int |u|^2 \langle r \rangle^{d-3} dr &\lesssim \int |\partial_r u|^2 \langle r \rangle^{d-1} dr. \label{s211h}
\end{align}
Recalling that $d$ and $\ell$ are related by $d = 2 \ell + 3$, we see that the relation \eqref{s211} and the two Hardy's inequalities
immediately imply \eqref{s210}. Hence, the two Cauchy problems \eqref{s21} and \eqref{s24} are equivalent.
The equivalent $u$--formulation of our main result, Theorem \ref{t01}, is the following.
\begin{thm}\label{t21}
For any initial data $(u_0,u_1) \in \mathcal H$, there exists a unique global solution $\overrightarrow u(t) \in C(\mathbb R; \mathcal H)$ to \eqref{s24}
which scatters to free waves on $\mathcal M^d$, i.e. there exist solutions $v_L^{\pm}$ to
\begin{align*}
\partial_t^2 v - \partial_r^2 v - \frac{(d-1)r}{r^2 + 1} \partial_r v = 0, \quad (t,r) \in \mathbb R \times \mathbb R,
\end{align*}
such that
\begin{align*}
\lim_{t \rightarrow \pm \infty} \| \overrightarrow u(t) - \overrightarrow v_L^{\pm}(t) \|_{\mathcal H} = 0.
\end{align*}
\end{thm}
The remainder of this work is devoted to proving Theorem \ref{t21}.
\section{Small Data Theory and Concentration--Compactness}
In this section we begin the proof of Theorem \ref{t21} and the study of the nonlinear evolution introduced in the previous section:
\begin{align}
\begin{split}\label{s31}
&\partial_t^2 u - \Delta_g u + V(r) u = N(r,u), \quad (t,r) \in \mathbb R \times \mathbb R, \\
&\overrightarrow u(0) = (u_0,u_1) \in \mathcal H,
\end{split}
\end{align}
where $\mathcal H := \mathcal H(\mathbb R; \langle r \rangle^{d-1} dr)$, $d = 2 \ell + 3$, $-\Delta_g$ is the (radial) Laplace operator on the $d$--dimensional
wormhole $\mathcal M^d$, and $V(r)$ and
$N(r,u)$ are given in \eqref{s25} and \eqref{s26}.
As summarized in the introduction, the proof of Theorem \ref{t21}, or equivalently
Theorem \ref{t01}, uses the powerful concentration--compactness/rigidity
methodology introduced by Kenig and Merle in their study of energy--critical dispersive equations \cite{km06} \cite{km08}.
This methodology was used in the corotational case, $\ell = 1$, $d = 5$, in \cite{cpr}. The general situation $\ell \in \mathbb N$
requires many refinements due to the growing dimension $d$.
The proof of Theorem \ref{t21}
is split up into three main steps and is by contradiction. In the first step, we establish small data global well--posedness
and scattering for \eqref{s31}. In particular, we establish Theorem \ref{t21} if $\| (u_0,u_1) \|_{\mathcal H} \ll 1$. In the
second step, we use the first step and a concentration--compactness argument to show that the \emph{failure} of Theorem \ref{t21} implies
that there exists a nonzero \lq critical element' $u_*$: a minimal, non--scattering global solution to \eqref{s31}.
The minimality of $u_*$ imposes the following compactness property on $u_*$: the trajectory
\begin{align*}
K = \left \{ \overrightarrow u_*(t) : t \in \mathbb R \right \}
\end{align*}
is precompact in $\mathcal H$. In the third and final step, we establish the following rigidity theorem: every solution $u$ with $\{ \overrightarrow u(t) : t \in \mathbb R \}$
precompact in $\mathcal H$ must be identically 0. This contradicts the existence of the critical element produced in the second step, and we conclude that Theorem \ref{t21} holds.
In this section
we complete the first two steps in the program: small data theory and concentration--compactness.
The proofs for these steps are either straightforward generalizations of, or nearly identical to, those in the corotational case in \cite{cpr}. We will therefore only outline the main steps and refer
the reader to the relevant proofs in \cite{cpr} for full details.
\subsection{Small Data Theory}
In this subsection, we establish global well--posedness and scattering for small data solutions to \eqref{s31}. The key
tools for establishing this and facts found later in this section are Strichartz estimates
for the inhomogeneous wave equation with potential
\begin{align}
\begin{split}\label{s32}
&\partial_t^2 u - \Delta_g u + V(r) u = h(t,r), \quad (t,r) \in \mathbb R \times \mathbb R, \\
&\overrightarrow u(0) = (u_0,u_1) \in \mathcal H.
\end{split}
\end{align}
Here, as in the previous section,
\begin{align*}
-\Delta_g u = - \partial_r^2 u - \frac{(d-1)r}{r^2 + 1} \partial_r u,
\end{align*}
and the potential $V$ is given by
\begin{align*}
V(r) = \frac{\ell^2}{\langle r \rangle^4} + \ell(\ell+1) \frac{ \cos 2 Q_{\ell,n} - 1 }{\langle r \rangle^2},
\end{align*}
where $Q_{\ell,n}$ is the unique $\ell$--equivariant harmonic map of degree $n$. The conserved energy
for the homogeneous problem, $h \equiv 0$ in \eqref{s32}, is given by
\begin{align*}
\mathcal E_V(\overrightarrow u) = \frac{1}{2} \int_\mathbb R \left ( |\partial_t u|^2 + |\partial_r u|^2 + V(r) |u|^2 \right ) \langle r \rangle^{d-1} dr.
\end{align*}
In exactly the same fashion as in the corotational case, it can be shown that the operator $-\Delta_g + V(r)$, defined (densely) on $L^2(\mathcal M^d) = L^2(\mathbb R; \langle r \rangle^{d-1}dr)$, is a nonnegative self--adjoint operator and that 0 is neither an eigenvalue nor a resonance. From this spectral information we conclude that $\| \overrightarrow u \|_{\mathcal H}^2 \simeq
\mathcal E_V(\overrightarrow u)$, along with the following Strichartz estimates (see
Section 4 and Section 5 of \cite{cpr} for full details of the arguments).
We say that a triple $(p,q,\gamma)$ is \emph{admissible} if
\begin{align*}
p > 2, \quad q \geq 2, \quad \frac{1}{p} + \frac{d}{q} = \frac{d}{2} - \gamma, \quad
\frac{1}{p} \leq \frac{d-1}{2} \Bigl ( \frac{1}{2} - \frac{1}{q} \Bigr ).
\end{align*}
In the sequel, we use the notation for spacetime norms over $I \times \mathcal M^d$ via
\begin{align*}
\| u \|_{L^p_t L^q_x(I)} := \left ( \int_I \left (
\int_\mathbb R |u(t,r)|^q \langle r \rangle^{d-1} dr \right )^{p/q} dt \right )^{1/p}.
\end{align*}
\begin{ppn}\label{p31}
Let $(p,q,\gamma)$ and $(r,s,\rho)$ be admissible. Then any solution $u$ to \eqref{s32} satisfies
\begin{align*}
\| |\nabla|^{-\gamma} \nabla u \|_{L^p_t L^q_x(I)} \lesssim \| \overrightarrow u(0) \|_{\mathcal H} +
\| |\nabla|^{\rho} h \|_{L^{r'}_t L^{s'}_x(I)},
\end{align*}
where $r'$ and $s'$ denote the conjugate exponents of $r$ and $s$.
\end{ppn}
Proposition \ref{p31} with $V = 0$ was proved in Section 3 of \cite{cpr}. Using the spectral information
for $-\Delta_g + V$ we can then transfer these estimates to the perturbed wave operator $\partial_t^2 - \Delta_g + V$.
This is done by first reducing Proposition \ref{p31} to a pair of local energy estimates. These estimates are then established using the spectral information and a distorted Fourier basis for $-\Delta_g + V$ (the fact that $V$ is even also plays a role in the analysis). Again, for full details see Section 4 of \cite{cpr}.
For $I \subseteq \mathbb R$, we denote the following spacetime norms
\begin{align*}
\| u \|_{S(I)} &:= \Bigl \| \langle r \rangle^{(d-5)/3} u \Bigr \|_{L^3_t L^6_x(I)}
+ \| u \|_{L^3_t L^{\frac{3d}{2}}_x(I)}, \\
\| u \|_{W(I)} &:= \| u \|_{L^3_t \dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x(I)}, \\
\| h \|_{N(I)} &:= \| h \|_{L^1_tL^2_x(I) + L^{3/2}_t \dot W^{\frac{1}{2},\frac{6d}{3d+5}}_x(I)}.
\end{align*}
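We note that the $W(I)$ norm corresponds to the admissible triple $(p, q, \gamma) = \bigl ( 3, \frac{6d}{3d-5}, \frac{1}{2} \bigr )$. Indeed, the scaling relation holds since
\begin{align*}
\frac{1}{3} + d \cdot \frac{3d-5}{6d} = \frac{3d - 3}{6} = \frac{d}{2} - \frac{1}{2},
\end{align*}
and the condition $\frac{1}{p} \leq \frac{d-1}{2} \bigl ( \frac{1}{2} - \frac{1}{q} \bigr )$ reads $\frac{1}{3} \leq \frac{5(d-1)}{12d}$, which holds for all $d \geq 5$.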
We first use Proposition \ref{p31} to show that for any solution $u$ to \eqref{s32}, we have the estimate
\begin{align}\label{s33a}
\| u \|_{S(I)} + \| u \|_{W(I)} \lesssim \| \overrightarrow u(0) \|_{\mathcal H} + \| h \|_{N(I)}.
\end{align}
Indeed, we have directly from Proposition \ref{p31}
\begin{align}\label{s33}
\| u \|_{W(I)} \lesssim \| \overrightarrow u(0) \|_{\mathcal H} + \| h \|_{N(I)}.
\end{align}
We claim that for all radial $f \in C^\infty_0(\mathcal M^d)$, we have
\begin{align}\label{s34a}
\Bigl \| \langle r \rangle^{(d-5)/3} f \Bigr \|_{L^6_x} + \| f \|_{L^{\frac{3d}{2}}_x} \lesssim \| f \|_{\dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x}
\end{align}
(we recall that the volume element is $\langle r \rangle^{d-1} dr$). Define $m_0, m_1 > 1$ by the relations
\begin{align}
\begin{split}\label{s34}
\frac{1}{2} \cdot \frac{1}{3} + \frac{1}{2} \cdot \frac{1}{m_0} &= \frac{3d - 5}{6d}, \\
\frac{1}{2} \cdot \frac{4}{3d} + \frac{1}{2} \cdot \frac{1}{m_1} &= \frac{3d - 5}{6d},
\end{split}
\end{align}
i.e. $\frac{1}{m_0} = \frac{2d - 5}{3d}$ and $\frac{1}{m_1} = \frac{3d-9}{3d}$. By the fundamental theorem of calculus
\begin{align*}
|f(r)| &\lesssim \| f \|_{\dot W^{1,m_0}} \langle r \rangle^{-\frac{2}{3}(d-4)}, \\
|f(r)| &\lesssim \| f \|_{\dot W^{1,m_1}} \langle r \rangle^{-(d-4)}.
\end{align*}
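Indeed, for the first estimate we write $f(r) = -\int_r^\infty \partial_\rho f(\rho) \, d\rho$ and apply H\"older's inequality in the measure $\langle \rho \rangle^{d-1} d\rho$:
\begin{align*}
|f(r)| \leq \int_r^\infty |f'(\rho)| \langle \rho \rangle^{\frac{d-1}{m_0}} \langle \rho \rangle^{-\frac{d-1}{m_0}} \, d\rho
\lesssim \| f \|_{\dot W^{1,m_0}} \Bigl ( \int_r^\infty \langle \rho \rangle^{-\frac{(d-1)m_0'}{m_0}} \, d\rho \Bigr )^{\frac{1}{m_0'}}
\lesssim \| f \|_{\dot W^{1,m_0}} \langle r \rangle^{\frac{1}{m_0'} - \frac{d-1}{m_0}},
\end{align*}
and a direct computation using the value of $m_0$ from \eqref{s34} gives $\frac{d-1}{m_0} - \frac{1}{m_0'} = \frac{(d-1)(2d-5) - (d+5)}{3d} = \frac{2}{3}(d-4)$. The second estimate follows in the same way with $m_1$ in place of $m_0$.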
Thus, we have the embeddings
\begin{align}
\begin{split}\label{s35}
\Bigl \| \langle r \rangle^{\frac{2}{3}(d-4)} f \Bigr \|_{L^\infty_x} \lesssim \| f \|_{\dot W^{1,m_0}}, \\
\Bigl \| \langle r \rangle^{d-4} f \Bigr \|_{L^\infty_x} \lesssim \| f \|_{\dot W^{1,m_1}}.
\end{split}
\end{align}
From the trivial embedding $L^3_x \hookrightarrow L^3_x$, \eqref{s35}, \eqref{s34} and interpolation we conclude that
\begin{align*}
\Bigl \| \langle r \rangle^{(d-4)/3} f \Bigr \|_{L^6_x} \lesssim \| f \|_{\dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x}
\end{align*}
which implies
\begin{align*}
\Bigl \| \langle r \rangle^{(d-5)/3} f \Bigr \|_{L^6_x} \lesssim \| f \|_{\dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x}.
\end{align*}
Similarly, from the trivial embedding $L^{\frac{3d}{4}}_x \hookrightarrow L^{\frac{3d}{4}}_x$, \eqref{s35}, \eqref{s34} and interpolation we conclude that
\begin{align*}
\Bigl \| \langle r \rangle^{(d-4)/2} f \Bigr \|_{L^{\frac{3d}{2}}_x} \lesssim \| f \|_{\dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x}
\end{align*}
which implies
\begin{align*}
\| f \|_{L^{\frac{3d}{2}}_x} \lesssim \| f \|_{\dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x}.
\end{align*}
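For the reader's convenience, we record the interpolation bookkeeping in the two cases above: interpolating at parameter $\frac{1}{2}$ between the trivial embedding and the corresponding weighted estimate in \eqref{s35}, the Lebesgue exponents and weights combine according to
\begin{align*}
\frac{1}{6} &= \frac{1}{2} \cdot \frac{1}{3} + \frac{1}{2} \cdot \frac{1}{\infty}, \quad \frac{d-4}{3} = \frac{1}{2} \cdot 0 + \frac{1}{2} \cdot \frac{2(d-4)}{3}, \\
\frac{2}{3d} &= \frac{1}{2} \cdot \frac{4}{3d} + \frac{1}{2} \cdot \frac{1}{\infty}, \quad \frac{d-4}{2} = \frac{1}{2} \cdot 0 + \frac{1}{2} \cdot (d-4),
\end{align*}
while in both cases the regularity is $\frac{1}{2} \cdot 0 + \frac{1}{2} \cdot 1 = \frac{1}{2}$ and, by \eqref{s34}, the right--hand integrability exponent is $\frac{1}{s} = \frac{3d-5}{6d}$, i.e. $s = \frac{6d}{3d-5}$.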
This proves the claim. In particular, $\| u \|_{S(I)} \lesssim \| u \|_{W(I)}$ which along with \eqref{s33}
proves \eqref{s33a}. Although it may seem redundant to also use the $S(I)$ norm along with the $W(I)$ norm, it is essential in later
concentration--compactness arguments to use the weaker norm $\| \cdot \|_{S(I)}$ rather than $\| \cdot \|_{W(I)}$ to measure errors.
We now use \eqref{s33a} to establish an a priori estimate for solutions to \eqref{s31}. The case $\ell = 1$, $d = 5$
was covered in \cite{cpr}, so we assume that $d \geq 7$.
By the conservation of energy \eqref{s21e}, the Strauss estimate \eqref{s211s}, and
Hardy's inequality \eqref{s211h} it is easy to show by a
contraction mapping/time--stepping argument that given $(u_0,u_1) \in \mathcal H$, there exists
a unique global solution $\overrightarrow u(t) \in C(\mathbb R ; \mathcal H) \cap L^\infty(\mathbb R, \mathcal H)$ to \eqref{s31}. By the Strichartz estimate \eqref{s33a}, we
have that if
$u$ solves \eqref{s31}, then for any $I \subseteq \mathbb R$,
\begin{align}
\| u \|_{S(I)} + \| u \|_{W(I)} &\lesssim \| \overrightarrow u(0) \|_{\mathcal H} + \| N(\cdot, u) \|_{N(I)} \notag \\
&\lesssim \| \overrightarrow u(0) \|_{\mathcal H} + \| F(\cdot, u) \|_{N(I)} + \| G(\cdot, u) \|_{N(I)}, \label{s36a}
\end{align}
where the nonlinearities $F,G$ are given by \eqref{s26}. By \eqref{s29} and the relation $d = 2\ell + 3$, we may estimate
\begin{align}\label{s36}
\| G(\cdot, u) \|_{N(I)} \lesssim \Bigl \| \langle r \rangle^{d - 5} u^3 \Bigr \|_{L^1_t L^2_x(I)} \lesssim \| u \|_{S(I)}^3.
\end{align}
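The second inequality in \eqref{s36} follows directly from the definition of the $S(I)$ norm:
\begin{align*}
\Bigl \| \langle r \rangle^{d-5} u^3 \Bigr \|_{L^1_t L^2_x(I)} = \Bigl \| \bigl ( \langle r \rangle^{\frac{d-5}{3}} u \bigr )^3 \Bigr \|_{L^1_t L^2_x(I)} = \Bigl \| \langle r \rangle^{\frac{d-5}{3}} u \Bigr \|_{L^3_t L^6_x(I)}^3 \leq \| u \|_{S(I)}^3.
\end{align*}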
By \eqref{s28a}, \eqref{s28b} and the Strauss estimate \eqref{s211s} we have that
\begin{align}
\| F(\cdot, u) \|_{N(I)} &\lesssim \Bigl \|
\left ( \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \right ) u^2
\Bigr \|_{L^{3/2}_t \dot W^{\frac{1}{2},\frac{6d}{3d+5}}_x(I)} + \| F_0 \|_{L^1_t L^2_x(I)} \notag \\
&\lesssim \Bigl \|
\left ( \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \right ) u^2
\Bigr \|_{L^{3/2}_t \dot W^{\frac{1}{2},\frac{6d}{3d+5}}_x(I)} + \| \overrightarrow u \|_{L^\infty_t \mathcal H} \| u \|_{S(I)}^3. \label{s37}
\end{align}
By Proposition \ref{pa21} we have
\begin{align*}
\langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} = O ( \langle r \rangle^{-3} ), \\
\frac{d}{dr} \Bigl ( \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \Bigr ) = O ( \langle r \rangle^{-4} ),
\end{align*}
so that
\begin{align}\label{s38}
\Bigl \| \left ( \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \right )
\Bigr \|_{L^d_x \cap \dot W^{\frac{1}{2},d}_x} < \infty
\end{align}
by interpolation. By the Leibniz rule for
Sobolev spaces (see \cite{coul} for asymptotically conic manifolds) and \eqref{s38}, we conclude that
\begin{align*}
\Bigl \|
\left ( \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \right ) u^2
\Bigr \|_{\dot W^{\frac{1}{2},\frac{6d}{3d+5}}_x}
&\lesssim
\Bigl \| \left ( \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \right )
\Bigr \|_{\dot W^{\frac{1}{2},d}_x} \| u^2 \|_{L^{\frac{6d}{3d-1}}_x} \\
&\:+ \Bigl \| \left ( \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \right )
\Bigr \|_{L^d_x} \| u \|_{L^{\frac{3d}{2}}_x} \| u \|_{\dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x} \\
&\lesssim \| u \|^2_{L^{\frac{12d}{3d-1}}_x} + \| u \|_{L^{\frac{3d}{2}}_x} \| u \|_{\dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x}.
\end{align*}
By H\"older's inequality and the fact that $d \geq 7$,
\begin{align*}
\Bigl (
\int_\mathbb R |u|^{\frac{12d}{3d-1}} \langle r \rangle^{d-1}
\Bigr )^{\frac{3d -1}{12d}} &\leq \Bigl ( \int_\mathbb R \Bigl | \langle r \rangle^{\frac{d-5}{3}}
u \Bigr |^6 \langle r \rangle^{d-1} dr \Bigr )^{\frac{1}{6}} \Bigl ( \int_\mathbb R \langle r \rangle^{\frac{20d - 4d^2}{d-1}} \langle r \rangle^{d-1}dr \Bigr )
^{\frac{d-1}{3d-1}} \\
&\lesssim
\Bigl ( \int_\mathbb R \Bigl | \langle r \rangle^{\frac{d-5}{3}}
u\Bigr |^6 \langle r \rangle^{d-1} dr \Bigr )^{\frac{1}{6}}.
\end{align*}
Thus,
\begin{align*}
\Bigl \|
\left ( \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \right ) u^2
\Bigr \|_{\dot W^{\frac{1}{2},\frac{6d}{3d+5}}_x} \lesssim
\Bigl \| \langle r \rangle^{\frac{d-5}{3}} u \Bigr \|^2_{L^{6}_x} + \| u \|_{L^{\frac{3d}{2}}_x} \| u \|_{\dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x}
\end{align*}
so that by H\"older's inequality in time
\begin{align}\label{s39}
\Bigl \|
\left ( \langle r \rangle^{\ell - 2} \sin 2 Q_{\ell,n} \right ) u^2
\Bigr \|_{L^{3/2}_t \dot W^{\frac{1}{2},\frac{6d}{3d+5}}_x(I)} \lesssim
\| u \|_{S(I)}^2 + \| u \|_{S(I)} \| u \|_{W(I)}.
\end{align}
Combining \eqref{s39} with \eqref{s37} we obtain
\begin{align}\label{s310}
\| F(\cdot, u) \|_{N(I)} \lesssim \| u \|_{S(I)}^2 + \| u \|_{S(I)} \| u \|_{W(I)} + \| \overrightarrow u \|_{L^\infty_t \mathcal H}
\| u \|^3_{S(I)}.
\end{align}
The estimates \eqref{s36a}, \eqref{s36}, and \eqref{s310} imply the following a priori estimate for $u$:
\begin{align}\label{s311}
\| u \|_{S(I)} + \| u \|_{W(I)}
\lesssim \| \overrightarrow u(0) \|_{\mathcal H} + \| u \|_{S(I)}^2 + \| u \|_{S(I)} \| u \|_{W(I)} + \| \overrightarrow u \|_{L^\infty_t \mathcal H}
\| u \|^3_{S(I)} + \| u \|_{S(I)}^3.
\end{align}
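We briefly indicate how \eqref{s311} yields small data global spacetime bounds. Set $X(T) := \| u \|_{S([0,T])} + \| u \|_{W([0,T])}$, which is continuous in $T$ and vanishes at $T = 0$. Since $\| \overrightarrow u \|_{L^\infty_t \mathcal H} \lesssim \| \overrightarrow u(0) \|_{\mathcal H}$ for small data, \eqref{s311} implies that
\begin{align*}
X(T) \leq C \left ( \| \overrightarrow u(0) \|_{\mathcal H} + X(T)^2 + X(T)^3 \right ).
\end{align*}
If $\| \overrightarrow u(0) \|_{\mathcal H} = \delta$ is sufficiently small, then a standard continuity argument shows that $X(T) \leq 2C\delta$ for all $T > 0$, whence $\| u \|_{S(\mathbb R)} + \| u \|_{W(\mathbb R)} \lesssim \delta$.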
Based on \eqref{s311} and continuity arguments we have the following small data theory and long--time perturbation theory
for \eqref{s31}. For full details, see the proofs of Proposition 5.1 and Proposition 5.2 respectively in \cite{cpr}.
\begin{ppn}\label{p32}
For every $(u_0,u_1) \in \mathcal H$, there exists a unique global solution $u$ to \eqref{s31}
such that $\overrightarrow u(t) \in C(\mathbb R; \mathcal H) \cap L^\infty(\mathbb R; \mathcal H)$. A solution $u$ scatters to a free wave on $\mathcal M^d$ as $t \rightarrow \infty$, i.e.
there exists a solution $v_L$ to
\begin{align*}
\partial_t^2 v - \partial_r^2 v - \frac{(d-1)r}{r^2 + 1} \partial_r v = 0, \quad (t,r) \in \mathbb R \times \mathbb R,
\end{align*}
such that
\begin{align*}
\lim_{t \rightarrow \infty} \| \overrightarrow u(t) - \overrightarrow v_L(t) \|_{\mathcal H} = 0,
\end{align*}
if and only if
\begin{align*}
\| u \|_{S(0,\infty)} < \infty.
\end{align*}
A similar characterization of $u$ scattering to a free wave on $\mathcal M^d$ as $t \rightarrow -\infty$ also holds. Moreover, there
exists $\delta > 0$ such that if $\| \overrightarrow u(0) \|_{\mathcal H} < \delta$, then
\begin{align*}
\| \overrightarrow u \|_{L^\infty_t \mathcal H} + \| u \|_{S(\mathbb R)} + \| u \|_{W(\mathbb R)} \lesssim \| \overrightarrow u(0) \|_{\mathcal H}.
\end{align*}
\end{ppn}
\begin{ppn}[Long--time perturbation theory]\label{p33}
Let $A > 0$. Then there exists $\epsilon_0 = \epsilon_0(A) > 0$ and $C = C(A) > 0$ such that the following holds. Let $0 < \epsilon
< \epsilon_0$, $(u_0,u_1) \in \mathcal H$, and $I \subseteq \mathbb R$ with $0 \in I$. Assume that $\overrightarrow U(t) \in C(I; \mathcal H)$ satisfies on $I$
\begin{align*}
\partial_t^2 U - \Delta_g U + V U = N(\cdot, U) + e,
\end{align*}
such that
\begin{align}
\sup_{t \in I} \| \overrightarrow U(t) \|_{\mathcal H} + \| U \|_{S(I)} &\leq A, \notag \\
\| \overrightarrow U(0) - (u_0,u_1) \|_{\mathcal H} + \| e \|_{N(I)} &\leq \epsilon. \label{s312}
\end{align}
Then the unique global solution $u$ to \eqref{s31} with initial data $\overrightarrow u(0) = (u_0,u_1)$ satisfies
\begin{align*}
\sup_{t \in I} \| \overrightarrow u(t) - \overrightarrow U(t) \|_{\mathcal H} + \| u - U \|_{S(I)} \leq C(A) \epsilon.
\end{align*}
\end{ppn}
\subsection{Concentration--Compactness}
In this subsection we complete the second step of the concentration--compactness/rigidity method outlined
in the beginning of this section. A crucial tool
used in completing this step is the following linear \emph{profile decomposition} of a bounded sequence
in $\mathcal H$.
\begin{lem}[Linear Profile Decomposition]\label{l34}
Let $\{ (u_{0,n}, u_{1,n}) \}_n$ be a bounded sequence in $\mathcal H$. Then
after extraction of subsequences and relabeling, there exist a sequence of solutions $\left \{ U_L^j \right \}_{j \geq 1}$ to
\eqref{s32} with $h \equiv 0$ which are bounded in $\mathcal H$ and a sequence of times $\{ t_{j,n} \}_n$ for $j \geq 1$ that
satisfy the orthogonality condition
\begin{align*}
\forall j \neq k, \quad \lim_{n \rightarrow \infty} |t_{j,n} - t_{k,n}| = \infty,
\end{align*}
such that for all $J \geq 1$,
\begin{align*}
(u_{0,n},u_{1,n}) = \sum_{j = 1}^J \overrightarrow U^j_L(-t_{j,n}) + (w^J_{0,n},w^J_{1,n}),
\end{align*}
where the error $w_n^J(t) := S_V(t)(w^J_{0,n},w^J_{1,n})$ satisfies
\begin{align}
\lim_{J \rightarrow \infty} \varlimsup_{n \rightarrow \infty} \| w^J_n \|_{L^\infty_t L^r_x(\mathbb R) \cap S(\mathbb R)} = 0,
\quad \forall \: \frac{2d}{d-2} < r < \infty. \label{s313}
\end{align}
Moreover, we have the following Pythagorean expansion of the energy
\begin{align}\label{e314}
\mathcal E_V( \overrightarrow u_n) = \sum_{j = 1}^J \mathcal E_V( \overrightarrow U^j_L ) + \mathcal E_V ( \overrightarrow w^J_n ) + o(1),
\end{align}
as $n \rightarrow \infty$.
\end{lem}
The proof is exactly the same as in the corotational case, and it follows from the proof of Lemma 3.2 in \cite{ls}. However,
we will explain why the error $w_n^J$ satisfies \eqref{s313} since the reasoning is subtle. The $d = 5$ case is contained in \cite{cpr}, so
we assume that $d \geq 7$. The proof of Lemma 3.2 in \cite{ls} shows that we have
\begin{align}\label{e315}
\lim_{J \rightarrow \infty} \varlimsup_{n \rightarrow \infty} \| w^J_n \|_{L^\infty_t L^r_x(\mathbb R)} = 0, \quad \forall \: \frac{2d}{d-2} < r < \infty,
\end{align}
as well as
\begin{align}\label{e316}
\varlimsup_{J \rightarrow \infty} \varlimsup_{n \rightarrow \infty} \| \overrightarrow w^J_n \|_{\mathcal H} < \infty.
\end{align}
We recall that in proving \eqref{s34a}, we in fact proved the stronger claim that
\begin{align}\label{e317}
\Bigl \| \langle r \rangle^{(d-4)/2} f \Bigr \|_{L^{\frac{3d}{2}}_x} + \Bigl \| \langle r \rangle^{(d-4)/3} f \Bigr \|_{L^6_x} \lesssim \| f \|_{\dot W^{\frac{1}{2},\frac{6d}{3d-5}}_x}.
\end{align}
We also observe that the admissible triple $\Bigl ( 3, \frac{6d}{3d - 5}, \frac{1}{2} \Bigr )$ is not sharp if
$d \geq 7$, i.e.
\begin{align}\label{e318}
\frac{1}{3} < \frac{d-1}{2} \Bigl ( \frac{1}{2} - \frac{3d - 5}{6d} \Bigr ).
\end{align}
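Indeed, \eqref{e318} is elementary arithmetic:
\begin{align*}
\frac{d-1}{2} \Bigl ( \frac{1}{2} - \frac{3d - 5}{6d} \Bigr ) = \frac{d-1}{2} \cdot \frac{5}{6d} = \frac{5(d-1)}{12d} > \frac{1}{3} \iff 15(d-1) > 12d \iff d > 5.
\end{align*}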
The two observations \eqref{e317} and \eqref{e318} and continuity imply the following. Let $r > \frac{2d}{d-2}$
and $0 < \theta < 1$, and define a triple $(p,q,\gamma)$ and exponent $s$ by
\begin{align}
\begin{split}\label{e319}
\gamma &= \frac{1}{\theta} \frac{1}{2}, \\
\frac{1}{p} &= \frac{1}{\theta} \frac{1}{3}, \\
\frac{1}{p} + \frac{d}{q} &= \frac{d}{2} - \gamma, \\
\frac{1}{s} &= \theta \frac{1}{q} + (1 - \theta) \frac{1}{r}.
\end{split}
\end{align}
Then as long as $r$ is sufficiently large and $\theta$ is sufficiently close to 1, we have that
$(p,q,\gamma)$ is admissible, $s > 1$, and
\begin{align}\label{e320}
\| f \|_{L^{\frac{3d}{2}}_x} + \Bigl \| \langle r \rangle^{(d-5)/3} f \Bigr \|_{L^6_x} \lesssim \| f \|_{\dot W^{\frac{1}{2}, s}_x }, \quad \forall
f \in C^\infty_0.
\end{align}
Indeed, the fact that $(p,q,\gamma)$ defined by \eqref{e319} are admissible for $r$ large and $\theta$ close to 1 follows from
\eqref{e318} and continuity in $\theta$. Similarly, by \eqref{e317} if $r$ is large and $\theta$ is close to 1, then we can find
$m_0 = m_0(\theta)$ and $m_1 = m_1(\theta)$, analogous to $m_0, m_1$ from \eqref{s34}, so that
\begin{align*}
\frac{1}{2} \cdot \frac{1}{3} + \frac{1}{2} \cdot \frac{1}{m_0} &= \frac{1}{s}, \\
\frac{1}{2} \cdot \frac{4}{3d} + \frac{1}{2} \cdot \frac{1}{m_1} &= \frac{1}{s},
\end{align*}
with
\begin{align*}
|f(r)| &\lesssim \| f \|_{\dot W^{1,m_0}} \langle r \rangle^{-\alpha}, \\
|f(r)| &\lesssim \| f \|_{\dot W^{1,m_1}} \langle r \rangle^{-\beta},
\end{align*}
where $\alpha > \frac{2(d-5)}{3}$ and $\beta > 0$. By interpolation we conclude \eqref{e320}. We now fix $r$ sufficiently large
and $\theta$ sufficiently close to 1 so that if $(p,q,\gamma)$ and $s$ are defined as in \eqref{e319}, then
$(p,q,\gamma)$ is an admissible triple and \eqref{e320} holds. Then by \eqref{e320}, interpolation,
and Strichartz estimates we have that the errors satisfy
\begin{align*}
\| w^J_n \|_{S(\mathbb R)}
&\lesssim \| w^J_n \|_{L^3_t \dot W^{\frac{1}{2}, s}_x (\mathbb R)} \\
&\lesssim \| w^J_n \|_{L^p_t \dot W^{\gamma, q}_x(\mathbb R)}^\theta \| w^J_n \|_{L^\infty_t L^r_x(\mathbb R)}^{1 - \theta} \\
&\lesssim \| w^J_n \|_{\mathcal H}^{\theta} \| w^J_n \|_{L^\infty_t L^r_x(\mathbb R)}^{1 - \theta}
\end{align*}
whence by \eqref{e315} and \eqref{e316}
\begin{align*}
\lim_{J \rightarrow \infty} \varlimsup_{n \rightarrow \infty} \| w^J_n \|_{S(\mathbb R)} = 0
\end{align*}
as desired. We remark here that it is unclear whether or not the errors satisfy the stronger
condition $\lim_J \varlimsup_n \| w^J_n \|_{W(\mathbb R)} = 0$. It is for this reason that we used the weaker $S(I)$ norm
in the previous subsection.
Using Lemma \ref{l34} and Proposition \ref{p33}, we establish that if our main result, Theorem \ref{t21}, fails, then
there exists a nonzero \lq critical element.' In particular, we establish the following.
\begin{ppn}\label{p34}
Suppose that Theorem \ref{t21} fails. Then there exists a nonzero global solution $u_*$ to \eqref{s31} such that the
set
\begin{align*}
K = \left \{ \overrightarrow u_*(t) : t \in \mathbb R \right \}
\end{align*}
is precompact in $\mathcal H$.
\end{ppn}
The proof of Proposition \ref{p34} is the same as in the corotational case; see the proof of Proposition 5.3 in \cite{cpr}
for full details. We remark that proving Proposition \ref{p34} uses the nonlinear perturbation theory, Proposition \ref{p33}, applied to
the linear profile decompositions provided by Lemma \ref{l34}. What makes this possible is that the perturbation theory is
established with certain errors measured in the weaker
norm $\| \cdot \|_{S(\mathbb R)}$ (see \eqref{s312}) and the errors $w^J_n$ in the linear profile decomposition satisfy
$\lim_J \varlimsup_n \| w^J_n \|_{S(\mathbb R)} = 0$ (but possibly not $\lim_J \varlimsup_n \| w^J_n \|_{W(\mathbb R)} = 0$).
\section{Rigidity Theorem}
In this section we prove that the critical element from Proposition \ref{p34} does not exist and conclude the proof of our main
result Theorem \ref{t21} (equivalently Theorem \ref{t01}). The main result of this section
is the following.
\begin{ppn}\label{p51}
Let $u$ be a global solution of \eqref{s31} such that the trajectory
\begin{align*}
K = \{ \overrightarrow u(t) : t \in \mathbb R \}
\end{align*}
is precompact in $\mathcal H := \mathcal H(\mathbb R; \langle r \rangle^{d-1} dr)$. Then $\overrightarrow u = (0,0)$.
\end{ppn}
We first note that for a solution $u$ as in Proposition \ref{p51}, we have the following uniform control of the energy
in exterior regions.
\begin{lem}\label{l52}
Let $u$ be as in Proposition \ref{p51}. Then we have
\begin{align}
\begin{split}\label{s51}
\forall R \geq 0, \quad \lim_{|t| \rightarrow \infty} \| \overrightarrow u(t) \|_{\mathcal H(|r| \geq R + |t|; \langle r \rangle^{d-1} dr )} &= 0, \\
\lim_{R \rightarrow \infty} \left [ \sup_{t \in \mathbb R} \| \overrightarrow u(t) \|_{\mathcal H(|r| \geq R + |t|; \langle r \rangle^{d-1} dr)} \right ] &= 0.
\end{split}
\end{align}
\end{lem}
To prove that $\overrightarrow u = (0,0)$, we proceed as in the corotational case \cite{cpr} and show that $u$ is a finite energy static solution to \eqref{s31}.
\begin{ppn}\label{static soln}
Let $u$ be as in Proposition \ref{p51}. Then there exists a static solution $U$ to \eqref{s31} such that $\overrightarrow u = (U,0)$.
\end{ppn}
We will first show that $\overrightarrow u$ is equal to static solutions $(U_{\pm},0)$ on $\pm r > 0$ separately. The proof for $r < 0$ is identical to the proof for $r > 0$ so we will only consider the case $r > 0$. The major part of this section is devoted to proving the following.
\begin{ppn}\label{p53}
Let $u$ be as in Proposition \ref{p51}. Then there exists a static solution $(U_+,0)$ such that $\overrightarrow u(t,r) = (U_+(r),0)$ for all $t \in \mathbb R$ and $r > 0$.
\end{ppn}
\subsection{Proof of Proposition \ref{p53}}
Let $\eta > 0$ be arbitrary, and let $u$ be as in Proposition \ref{p51}.
As in \cite{cpr}, we will show that $\overrightarrow u(t,r)$ is equal to a static solution $(U_+(r),0)$ to \eqref{s31} on $\{ t \in \mathbb R, r \in (\eta,\infty)\}$. We now introduce a function related to $u$ that will play a central role in the proof. Define
\begin{align*}
u_e(t,r) := \frac{\langle r \rangle^{(d-1)/2}}{r^{(d-1)/2}} u(t,r), \quad (t,r) \in \mathbb R \times (0,\infty).
\end{align*}
If $u$ solves \eqref{s31} then $u_e$ solves the following radial semilinear wave equation on $\mathbb R^{1+d}$
\begin{align}\label{s52e}
\partial_t^2 u_e - \partial^2_r u_e - \frac{d-1}{r} \partial_r u_e + V_e(r) u_e = N_e (r,u_e), \quad (t,r) \in \mathbb R \times (0,\infty),
\end{align}
where
\begin{align}
V_e(r) = V(r) - \frac{(d-1)(d-4)}{2} r^{-2} \langle r \rangle^{-2} + \frac{(d-1)(d-5)}{4} r^{-2} \langle r \rangle^{-4} , \label{s52}
\end{align}
and $N_e(r,u_e) = F_e(r,u_e) + G_e(r,u_e)$ with
\begin{align}
F_e(r,u_e) &= \frac{\langle r \rangle^{(d-1)/2}}{r^{(d-1)/2}} F \left (r, \frac{r^{(d-1)/2}}{\langle r \rangle^{(d-1)/2}} u_e \right ), \label{s53}\\
G_e(r,u_e) &= \frac{\langle r \rangle^{(d-1)/2}}{r^{(d-1)/2}} G \left (r, \frac{r^{(d-1)/2}}{\langle r \rangle^{(d-1)/2}} u_e \right ), \label{s54}
\end{align}
where $F$ and $G$ are given in \eqref{s26}.
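The form of \eqref{s52e} follows from a standard conjugation computation. For $L = \partial_r^2 + a(r) \partial_r$ and $u = \varphi u_e$, we have
\begin{align*}
L u = \varphi \left ( \partial_r^2 u_e + \Bigl ( 2 \frac{\varphi'}{\varphi} + a \Bigr ) \partial_r u_e + \Bigl ( \frac{\varphi''}{\varphi} + a \frac{\varphi'}{\varphi} \Bigr ) u_e \right ).
\end{align*}
With $\varphi(r) = r^{(d-1)/2} \langle r \rangle^{-(d-1)/2}$ and $a(r) = \frac{(d-1)r}{\langle r \rangle^2}$, we compute $\frac{\varphi'}{\varphi} = \frac{d-1}{2} \bigl ( \frac{1}{r} - \frac{r}{\langle r \rangle^2} \bigr ) = \frac{d-1}{2 r \langle r \rangle^2}$, so that
\begin{align*}
2 \frac{\varphi'}{\varphi} + a = \frac{(d-1)(1 + r^2)}{r \langle r \rangle^2} = \frac{d-1}{r},
\end{align*}
which produces the Euclidean radial derivative term in \eqref{s52e}, while a similar computation shows that
\begin{align*}
\frac{\varphi''}{\varphi} + a \frac{\varphi'}{\varphi} = \frac{(d-1)(d-4)}{2} r^{-2} \langle r \rangle^{-2} - \frac{(d-1)(d-5)}{4} r^{-2} \langle r \rangle^{-4},
\end{align*}
which yields \eqref{s52}.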
Note that for all $R > 0$, we have
\begin{align}\label{s55}
\| \overrightarrow u_e(t) \|_{\mathcal H( r \geq R; r^{d-1} dr)} \leq C(R) \| \overrightarrow u \|_{\mathcal H( r \geq R; \langle r \rangle^{d-1} dr)},
\end{align}
so that by Lemma \ref{l52}, $u_e$ inherits the compactness properties
\begin{align}
\begin{split}\label{s56}
\forall R > 0, \quad \lim_{|t| \rightarrow \infty} \| \overrightarrow u_e(t) \|_{\mathcal H( r \geq R + |t|; r^{d-1} dr)} = 0, \\
\lim_{R \rightarrow \infty} \left [ \sup_{t \in \mathbb R} \| \overrightarrow u_e(t) \|_{\mathcal H( r \geq R + |t|; r^{d-1} dr)} \right ] = 0.
\end{split}
\end{align}
We also note that due to \eqref{s27}--\eqref{s29} and the definition of $V_e, F_e,$ and $G_e$, we have for all $r > 0$,
\begin{align}
| V_e(r) | &\lesssim r^{-4}, \label{s57} \\
|F_e(r,u_e)| &\lesssim r^{-3} |u_e|^{2}, \label{s58} \\
|G_e(r,u_e)| &\lesssim r^{d-5}|u_e|^3, \label{s59}
\end{align}
where the implied constants depend on $Q_{\ell,n}$ and $d$.
To prove Proposition \ref{p53}, we use channels of energy arguments that originate in the seminal work \cite{dkm4} on the $3d$ energy--critical wave equation. These arguments have since been used in the study of equivariant exterior wave maps \cite{kls1} \cite{klls2} and in the proof of the corotational case of Theorem \ref{t01} in \cite{cpr}. The arguments of this section are
derived from those in \cite{klls2}.
The proof is split into three main steps. In the first two steps, we determine the precise asymptotics of
$$(u_{e,0}(r)
,u_{e,1}(r)) := (u_e(0,r), \partial_t u_e(0,r)) \quad \mbox{as } r \rightarrow \infty.$$
In particular, we show that there exists $\alpha \in \mathbb R$ such that
\begin{align
r^{d-2} u_{e,0}(r) &= \alpha + O(r^{-2}), \label{s510} \\
\int_r^\infty u_{e,1}(\rho) \rho^{2j-1} d\rho &= O(r^{2j - d - 1}), \quad j = 1, \ldots, \left \lfloor \frac{d}{4} \right \rfloor, \label{s511}
\end{align}
as $r \rightarrow \infty$. In the final step, we use this information and channels of energy arguments to conclude the proof of Proposition \ref{p53}. In the remainder of this subsection we denote $\mathcal H(r \geq R) := \mathcal H (r \geq R; r^{d-1} dr)$.
As in the study of corotational wave maps on a wormhole, the key tool used in
establishing \eqref{s510} and \eqref{s511} is the following exterior energy estimate for radial free waves
on Minkowski space $\mathbb R^{1+d}$
with $d$ odd. The case $d = 5$, used for corotational wave maps on a wormhole and exterior wave maps, was
proved in \cite{kls1}, and the general case of odd $d \geq 3$ was proved in \cite{klls1}.
\begin{ppn}[Theorem 2, \cite{klls1}] \label{p54}
Let $d \geq 3$ be odd. Let $v$ be a radial solution to the free wave equation in $\mathbb R^{1 + d}$
\begin{align*}
&\partial_t ^2 v - \Delta v = 0, \quad (t,x) \in \mathbb R^{1+d}, \\
&\overrightarrow v(0) = (f,g) \in \dot H^1 \times L^2 ( \mathbb R^d).
\end{align*}
Then for every $R > 0$,
\begin{align}\label{s512}
\max_{\pm} \inf_{\pm t \geq 0} \int_{r \geq R + |t|} |\nabla_{t,r} v(t,r)|^2 r^{d-1} dr \geq \frac{1}{2} \| \pi^{\perp}_R (f,g)
\|^2_{\mathcal H(r \geq R)},
\end{align}
where $\pi_R = I - \pi_R^{\perp}$ is the orthogonal projection onto the plane
\begin{align*}
P(R) = \mbox{span} \Bigl \{ (r^{2i - d},0), (0,r^{2j - d}) : i = 1,\ldots, \Bigl \lfloor \frac{d+2}{4} \Bigr \rfloor,
j = 1,\ldots, \Bigl \lfloor \frac{d}{4} \Bigr \rfloor \Bigr \}
\end{align*}
in $\mathcal H( r \geq R)$. The left--hand side of \eqref{s512} is identically 0 for data satisfying $(f,g)|_{r \geq R} \in P(R)$.
\end{ppn}
We remark here that Proposition \ref{p54} states, quantitatively, that generic solutions to the free wave equation
on $\mathbb R^{1+d}$ with $d$ odd emit
a fixed amount of energy into regions exterior to light cones. However, this property
fails in the case $R = 0$ for general data $(f,g)$ in even dimensions (see \cite{cks}).
In the remainder of this subsection, we denote
\begin{align*}
\tilde k := \left \lfloor \frac{d+2}{4} \right \rfloor, \quad
k := \left \lfloor \frac{d}{4} \right \rfloor.
\end{align*}
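For instance, in the first case beyond the corotational one, $\ell = 2$ and $d = 7$, we have $\tilde k = 2$ and $k = 1$, so that
\begin{align*}
P(R) = \mbox{span} \left \{ (r^{-5}, 0), (r^{-3}, 0), (0, r^{-5}) \right \};
\end{align*}
in general $\tilde k + k$, and hence the number of projection coefficients, grows linearly in $\ell$.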
For $R \geq 1$, we define the projection coefficients $\lambda_i(t,R), \mu_j(t,R)$ for $i = 1, \ldots, \tilde k$, $j = 1, \ldots, k$, via
\begin{align}
\pi^{\perp}_R \overrightarrow u_e (t,r) = \left ( u_e(t,r) - \sum_{i = 1}^{\tilde k} \lambda_i(t,R) r^{2i-d},
\partial_t u_e(t,r) - \sum_{j = 1}^{k} \mu_j(t,R) r^{2j-d} \right ). \label{s522}
\end{align}
We now give identities relating $u_e$ to the coefficients $\lambda_i(t,r), \mu_j(t,r)$ and an equivalent way of
expressing the relative size of $\| \pi_R \overrightarrow u_e(t) \|_{\mathcal H(r \geq R)}$ and $\| \pi_R^\perp \overrightarrow u_e(t) \|_{\mathcal H(r \geq R)}$ using the projection
coefficients.
\begin{lem}[Lemma 4.5, Lemma 5.10, \cite{klls2}]\label{l54}
For each fixed $R > 0$, and $(t,r) \in \{ r \geq R + |t| \}$, we have the following identities:
\begin{align*}
u_e(t,r) &= \sum_{j = 1}^{\tilde k} \lambda_j(t,r) r^{2j-d}, \\
\int_r^\infty \partial_t u_e(t,\rho) \rho^{2i-1} d\rho &= \sum_{j = 1}^k \mu_j(t,r) \frac{r^{2i+2j-d}}{d-2i-2j}, \quad \forall 1
\leq i \leq k, \\
\mu_j(t,r) &= \sum_{i = 1}^k r^{d-2i-2j} \frac{c_i c_j}{d-2i-2j} \int_r^\infty \partial_t u_e(t,\rho)
\rho^{2i-1} d \rho, \quad \forall 1 \leq j \leq k, \\
\lambda_j(t,r) &= \frac{d_j}{d-2j} \left ( u_e(t,r)r^{d-2j} +
\sum_{i = 1}^{\tilde k -1} \frac{(2i)d_{i+1} r^{d-2i-2j}}{d-2i-2j} \int_r^\infty u_e(t,\rho) \rho^{2i-1} d\rho \right ),
\end{align*}
where the last identity holds for all $j \leq \tilde k$ and
\begin{align*}
c_j &:= \frac{\prod_{1 \leq l \leq k} (d - 2j - 2l)}{\prod_{1 \leq l \leq k, l \neq j} (2l-2j)}, \quad 1 \leq j \leq k, \\
d_j &:= \frac{\prod_{1 \leq l \leq \tilde k} (d + 2- 2j - 2l)}{\prod_{1 \leq l \leq \tilde k, l \neq j} (2l-2j)}, \quad 1 \leq j \leq \tilde k.
\end{align*}
Also, the following estimates hold
\begin{align*}
\| \pi_R \overrightarrow u_e(t) \|^2_{\mathcal H(r \geq R)}
&\simeq
\sum_{i = 1}^{\tilde k} \Bigl ( \lambda_i(t,R) R^{2i - \frac{d+2}{2}} \Bigr )^2
+ \sum_{j = 1}^{k} \Bigl ( \mu_j(t,R) R^{2j - \frac{d}{2}} \Bigr )^2, \\
\| \pi^\perp_R \overrightarrow u_e(t) \|^2_{\mathcal H(r \geq R)}
&\simeq \int_R^\infty
\sum_{i = 1}^{\tilde k} \Bigl ( \partial_r \lambda_i(t,r) r^{2i - \frac{d+1}{2}} \Bigr )^2
+ \sum_{j = 1}^{k} \Bigl ( \partial_r \mu_j(t,r) r^{2j - \frac{d-1}{2}} \Bigr )^2 dr,
\end{align*}
where the implied constants depend only on $d$.
\end{lem}
We now proceed to the first step in proving Proposition \ref{p53}.
\subsubsection*{Step 1: Decay rate for $\pi^{\perp}_R \overrightarrow u_e(t)$ in $\mathcal H(r \geq R)$}
In this step we establish the following decay estimate for $\pi_R^{\perp} \overrightarrow u_e(t)$.
\begin{lem}\label{l55}
There exists $R_0 > 1$ such that for all $R \geq R_0$ and for all $t \in \mathbb R$ we have
\begin{align}\label{s513a}
\begin{split}
\| \pi_R^{\perp} \overrightarrow u_e(t) \|_{\mathcal H(r \geq R)} \lesssim R^{-2} \| \pi_R \overrightarrow u_e(t) \|_{\mathcal H(r \geq R)} +
R^{-d/2} \| \pi_R \overrightarrow u_e(t) \|_{\mathcal H(r \geq R)}^2 + R^{-1}\| \pi_R \overrightarrow u_e(t) \|_{\mathcal H(r \geq R)}^3.
\end{split}
\end{align}
\end{lem}
Since we are only interested in the behavior of $\overrightarrow u_e(t,r)$ in exterior regions $\{ r \geq R + |t| \}$, we first consider a modified Cauchy problem.
In particular, we can, by finite speed of propagation, alter $V_e$, $F_e$, and $G_e$ appearing in \eqref{s52e} in the interior region
$\{ r \leq R + |t| \}$ without affecting the behavior of $\overrightarrow u_e$ on the exterior region $\{r \geq R + |t|\}$.
\begin{defn}\label{d56}
Let $R \geq \eta$. For a function $f = f(r,u) : [\eta,\infty) \times \mathbb R \rightarrow \mathbb R$, we define
\begin{align*}
f_R(t,r,u) :=
\begin{cases}
f(R + |t|, u) \quad &\mbox{if } \eta \leq r \leq R + |t| \\
f(r,u) \quad &\mbox{if } r \geq R + |t|
\end{cases}, \quad (t,r,u) \in \mathbb R \times [\eta,\infty) \times \mathbb R.
\end{align*}
\end{defn}
We now consider solutions to a modified version of \eqref{s52e}:
\begin{align}\label{s513}
\begin{split}
&\partial_t^2 h - \partial_r^2 h - \frac{d-1}{r} \partial_r h = N_R (t,r,h), \quad (t, r) \in \mathbb R \times (\mathbb R \backslash B(0,\eta)), \\
&\overrightarrow h(0) = (h_0,h_1) \in \mathcal H_0(r \geq \eta),
\end{split}
\end{align}
where $\mathcal H_0(r \geq \eta) = \{ (h_0,h_1) \in \mathcal H(r \geq \eta) : h_0(\eta) = 0 \}$ and
\begin{align*}
N_R(t,r,h) = - V_{e,R}(t,r) h + F_{e,R}(t,r,h) + G_{e,R}(t,r,h).
\end{align*}
We note that from Definition \ref{d56} and \eqref{s57}, \eqref{s58}, and \eqref{s59}, we have
\begin{align}
| V_{e,R}(t,r) | &\lesssim
\begin{cases}\label{s514}
(R + |t|)^{-4} \quad &\mbox{if } \eta \leq r \leq R + |t|, \\
r^{-4} \quad &\mbox{if } r \geq R + |t|,
\end{cases}\\
|F_{e,R}(t,r,h)| &\lesssim
\begin{cases}\label{s515}
(R + |t|)^{-3}|h|^2 \quad &\mbox{if } \eta \leq r \leq R + |t|, \\
r^{-3}|h|^2 \quad &\mbox{if } r \geq R + |t|,
\end{cases}\\
|G_{e,R}(t,r,h)| &\lesssim
\begin{cases}\label{s516}
(R + |t|)^{d-5}|h|^3 \quad &\mbox{if } \eta \leq r \leq R + |t|, \\
r^{d-5}|h|^3 \quad &\mbox{if } r \geq R + |t|.
\end{cases}
\end{align}
\begin{lem}\label{l57}
There exist $R_0 > 0$ large and $\delta_0 > 0$ small such that for all $R \geq R_0$ and all $(h_0,h_1) \in \mathcal H_0(r \geq \eta)$ with
\begin{align*}
\| (h_0,h_1) \|_{\mathcal H(r \geq \eta)} \leq \delta_0,
\end{align*}
there exists a unique globally defined solution $h$ to \eqref{s513} such that
\begin{align}\label{s517}
\left \| r^{(d-4)/3} h \right \|_{L^3_tL^6_x(\mathbb R \times (\mathbb R^d \backslash B(0,\eta)))}
\lesssim \| \overrightarrow h(0) \|_{\mathcal H(r \geq \eta)}.
\end{align}
Moreover, if we define $h_L$ to be the solution to the free equation $\partial_t^2 h_L - \Delta h_L = 0$, $(t,x) \in \mathbb R \times (
\mathbb R^d \backslash B(0,\eta))$, $\overrightarrow h_L(0) = (h_0,h_1)$, then
\begin{align}\label{s518}
\begin{split}
\sup_{t \in \mathbb R} \| \overrightarrow h(t) - \overrightarrow h_L(t) \|_{\mathcal H(r \geq \eta)} \lesssim R^{-2} \| \overrightarrow h(0) \|_{\mathcal H(r \geq \eta)} +
R^{-d/2} \| \overrightarrow h(0) \|_{\mathcal H(r \geq \eta)}^2 + R^{-1} \| \overrightarrow h(0) \|_{\mathcal H(r \geq \eta)}^3.
\end{split}
\end{align}
\end{lem}
\begin{proof}
For the proof, we use the shorthand notation $\mathbb R^d_* = \mathbb R^d \backslash B(0,\eta)$.
The small data global well--posedness and spacetime estimate \eqref{s517} follow from standard contraction mapping and continuity
arguments using the following Strichartz estimate:
if $h$ is a radial solution to $\partial_t^2 h - \Delta h = F$ on $\mathbb R \times \mathbb R^d_*$ with $h(t,\eta) = 0$, $\forall t$, then
\begin{align*}
\left \| r^{(d-4)/3} h \right \|_{L^3_tL^6_x(\mathbb R \times \mathbb R^d_*)}
\lesssim \| \overrightarrow h(0) \|_{\mathcal H(r \geq \eta)} + \| F \|_{L^1_t L^2_x(\mathbb R \times \mathbb R^d_*)}.
\end{align*}
This estimate follows from \cite{hmssz} and an argument similar to the one used to establish \eqref{s33a}. Rather than
give the details for proving \eqref{s517}, we
prove \eqref{s518} since the argument is similar. By the Duhamel formula and Strichartz estimates we have
\begin{align*}
\sup_{t \in \mathbb R} \| \overrightarrow h(t) - \overrightarrow h_L(t) \|_{\mathcal H(r \geq \eta)} &\lesssim \| N_R(\cdot, \cdot, h) \|_{L^1_t L^2_x(\mathbb R \times \mathbb R^d_*)} \\
&\lesssim \| V_{e,R} h \|_{L^1_t L^2_x(\mathbb R \times \mathbb R^d_*)} + \| F_{e,R}(\cdot, \cdot, h) \|_{L^1_t L^2_x(\mathbb R \times \mathbb R^d_*)} \\ &\: +
\| G_{e,R}(\cdot, \cdot, h) \|_{L^1_t L^2_x(\mathbb R \times \mathbb R^d_*)}.
\end{align*}
The third term is readily estimated using \eqref{s516} and \eqref{s517}:
\begin{align*}
\| G_{e,R}(\cdot, \cdot, h) \|_{L^1_t L^2_x(\mathbb R \times \mathbb R^d_*)}
\lesssim R^{-1} \| r^{d-4} h^3 \|_{L^1_t L^2_x(\mathbb R \times \mathbb R^d_*)}
\lesssim R^{-1} \| \overrightarrow h(0) \|_{\mathcal H(r \geq \eta)}^3.
\end{align*}
For the first term we have
\begin{align*}
\| V_{e,R} h \|_{L^1_t L^2_x(\mathbb R \times \mathbb R^d_*)} \leq
\left \| r^{-(d-4)/3}V_{e,R} \right \|_{L^{3/2}_t L^{3}_x(\mathbb R \times \mathbb R^d_*)}
\left \| r^{(d-4)/3} h \right \|_{L^3_t L^6_x(\mathbb R \times \mathbb R^d_*)}.
\end{align*}
By \eqref{s514}
\begin{align*}
\left \| r^{-(d-4)/3}V_{e,R} \right \|_{L^{3/2}_t L^{3}_x(\mathbb R \times \mathbb R^d_*)} \lesssim R^{-2}.
\end{align*}
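Indeed, splitting into the regions $\{\eta \leq r \leq R + |t|\}$ and $\{ r \geq R + |t| \}$ and using \eqref{s514}, we have for each fixed $t$
\begin{align*}
\left \| r^{-(d-4)/3}V_{e,R}(t) \right \|^3_{L^{3}_x(\mathbb R^d_*)}
\lesssim (R+|t|)^{-12} \int_\eta^{R+|t|} r^{3} dr + \int_{R+|t|}^\infty r^{-9} dr
\lesssim (R+|t|)^{-8},
\end{align*}
whence
\begin{align*}
\left \| r^{-(d-4)/3}V_{e,R} \right \|_{L^{3/2}_t L^{3}_x(\mathbb R \times \mathbb R^d_*)}
\lesssim \left ( \int_{\mathbb R} (R + |t|)^{-4} dt \right )^{2/3} \simeq R^{-2}.
\end{align*}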
Thus, by \eqref{s517}
\begin{align*}
\left \| r^{-(d-4)/3}V_{e,R} \right \|_{L^{3/2}_t L^{3}_x(\mathbb R \times \mathbb R^d_*)}
\left \| r^{(d-4)/3} h \right \|_{L^3_t L^6_x(\mathbb R \times \mathbb R^d_*)} \lesssim R^{-2} \| \overrightarrow h(0) \|_{\mathcal H(r \geq \eta)}.
\end{align*}
Similarly, using \eqref{s515}, \eqref{s517} and the Strauss estimate valid for all radial $f \in C^\infty_0(\mathbb R^d_*)$
\begin{align*}
|f(r)| \lesssim r^{\frac{2-d}{2}} \| \nabla f \|_{L^2(\mathbb R^d_*)},
\end{align*}
we conclude that $\| F_{e,R}(\cdot, \cdot,h) \|_{L^1_t L^2_x(\mathbb R \times \mathbb R^d_*)} \lesssim R^{-d/2} \| \overrightarrow h(0) \|^2_{\mathcal H(r \geq \eta)}$, which proves \eqref{s518}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l55}]
We first prove Lemma \ref{l55} for $t = 0$. For $R > \eta$, define the truncated initial data $\overrightarrow u_R(0) = (u_{0,R},
u_{1,R}) \in \mathcal H_0(r \geq \eta)$ via
\begin{align}
u_{0,R}(r) &=
\begin{cases}\label{s519}
u_e(0,r) \quad &\mbox{if } r \geq R, \\
\frac{r - \eta}{R-\eta} u_e(0,R) \quad &\mbox{if } r < R,
\end{cases} \\
u_{1,R}(r) &=
\begin{cases}\label{s520}
\partial_t u_e(0,r) \quad &\mbox{if } r \geq R, \\
0 \quad &\mbox{if } r < R.
\end{cases}
\end{align}
Note that for $R$ large,
\begin{align}\label{s521}
\| \overrightarrow u_R (0) \|_{\mathcal H(r \geq \eta)} \lesssim \| \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)}.
\end{align}
In particular, by \eqref{s56} there exists $R_0 \geq 1$ such that for all $R \geq R_0$, $\| \overrightarrow u_R(0) \|_{\mathcal H(r \geq \eta)} \leq \delta_0$
where $\delta_0$ is from Lemma \ref{l57}. Let $u_R(t)$ be the solution to \eqref{s513} with initial data $(u_{0,R}, u_{1,R})$, and let
$\overrightarrow u_{R,L}(t) \in \mathcal H_0(r \geq \eta)$ be the solution to the free wave equation $\partial_t^2 u_{R,L} - \Delta u_{R,L}
= 0,$ $(t,x) \in \mathbb R \times \mathbb R^d_*$, $\overrightarrow u_{R,L}(0) = (u_{0,R}, u_{1,R})$. By finite speed of propagation
\begin{align*}
r \geq R + |t| \implies \overrightarrow u_R(t,r) = \overrightarrow u_e(t,r).
\end{align*}
By Proposition \ref{p54}, for all $t \geq 0$ or for all $t \leq 0$,
\begin{align*}
\| \pi_R^{\perp} \overrightarrow u_{R,L}(0) \|_{\mathcal H(r \geq R)} \lesssim \| \overrightarrow u_{R,L}(t) \|_{\mathcal H(r \geq R + |t|)}.
\end{align*}
Suppose, without loss of generality, that the above bound holds for all $t \geq 0$. By \eqref{s518} we conclude that for all $t \geq 0$
\begin{align*}
\| \overrightarrow u_e(t) \|_{\mathcal H(r \geq R + |t|)} &\geq \| \overrightarrow u_{R,L}(t) \|_{\mathcal H(r \geq R + |t|)} - \| \overrightarrow u_R(t) - \overrightarrow u_{R,L}(t) \|_{\mathcal H(r \geq \eta)} \\
&\geq c \| \pi^{\perp}_R \overrightarrow u_{R,L}(0) \|_{\mathcal H(r \geq R)} - C \Bigl [ R^{-2} \| \overrightarrow u_R(0) \|_{\mathcal H(r \geq \eta)} +
R^{-d/2} \| \overrightarrow u_R(0) \|_{\mathcal H(r \geq \eta)}^2 + R^{-1} \| \overrightarrow u_R(0) \|^3_{\mathcal H(r \geq \eta)} \Bigr ].
\end{align*}
Letting $t \rightarrow \infty$ and using the decay property \eqref{s56} and the definition of $(u_{0,R}, u_{1,R})$, we conclude that
\begin{align*}
\| \pi^{\perp}_R \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)} \lesssim
R^{-2} \| \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)} +
R^{-d/2} \| \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)}^2 + R^{-1} \| \overrightarrow u_e(0) \|^3_{\mathcal H(r \geq R)}.
\end{align*}
Note that $\| \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)}^2 = \| \pi^{\perp}_R \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)}^2 +
\| \pi_R \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)}^2$. Thus, if we take $R_0$ large enough to absorb terms involving
$\| \pi^{\perp}_R \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)}$ into the left hand side in the previous estimate, we obtain for all
$R \geq R_0$
\begin{align*}
\| \pi^{\perp}_R \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)} \lesssim
R^{-2} \| \pi_R \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)} +
R^{-d/2} \| \pi_R \overrightarrow u_e(0) \|_{\mathcal H(r \geq R)}^2 + R^{-1} \| \pi_R \overrightarrow u_e(0) \|^3_{\mathcal H(r \geq R)},
\end{align*}
as desired. This proves Lemma \ref{l55} for $t = 0$.
For general $t = t_0$ in \eqref{s513a}, we first set
\begin{align*}
u_{0,R,t_0} &=
\begin{cases}
u_e(t_0,r) \quad &\mbox{if } r \geq R, \\
\frac{r - \eta}{R-\eta} u_e(t_0,R) \quad &\mbox{if } r < R,
\end{cases} \\
u_{1,R,t_0} &=
\begin{cases}
\partial_t u_e(t_0,r) \quad &\mbox{if } r \geq R, \\
0 \quad &\mbox{if } r < R.
\end{cases}
\end{align*}
By \eqref{s56}
we can find $R_0 = R_0(\delta_0)$ independent of $t_0$ such that for all
$R \geq R_0$
\begin{align*}
\| (u_{0,R,t_0}, u_{1,R,t_0}) \|_{\mathcal H(r \geq \eta)} \lesssim \| \overrightarrow u_e(t_0) \|_{\mathcal H(r \geq R)} \lesssim \delta_0.
\end{align*}
The previous argument for $t_0 = 0$, repeated with obvious modifications, yields \eqref{s513a} for $t = t_0$.
\end{proof}
Before proceeding to the next step, we reformulate the conclusion of Lemma \ref{l55} using the projection coefficients
$\lambda_i(t,R), \mu_j(t,R)$ for $\overrightarrow u_e(t)$.
The following is an immediate consequence of Lemma \ref{l54} and Lemma \ref{l55}.
\begin{lem}\label{l58}
Let $\lambda_i(t,R),\mu_j(t,R)$, $1 \leq i \leq \tilde k$, $1 \leq j \leq k$ be the projection coefficients as in \eqref{s522}. Then there exists $R_0 \geq 1$ such that uniformly in $R > R_0$ and $t \in \mathbb R$
\begin{align*}
\int_R^\infty
\sum_{i = 1}^{\tilde k} \Bigl ( \partial_r \lambda_i(t,r) r^{2i - \frac{d+1}{2}} \Bigr )^2
&+ \sum_{j = 1}^{k} \Bigl ( \partial_r \mu_j(t,r) r^{2j - \frac{d-1}{2}} \Bigr )^2 dr \\
&\lesssim
\sum_{i = 1}^{\tilde k} R^{4i - d - 6} |\lambda_i(t,R)|^2 + R^{8i - 3d - 4} |\lambda_i(t,R)|^4
+ R^{12i - 3d - 8} |\lambda_i(t,R)|^6 \\
&\:+ \sum_{i = 1}^{k} R^{4i - d - 4} |\mu_i(t,R)|^2 + R^{8i - 3d} |\mu_i(t,R)|^4
+ R^{12i - 3d-2} |\mu_i(t,R)|^6.
\end{align*}
\end{lem}
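For the reader's convenience, we indicate how this follows: squaring \eqref{s513a} and using $(a+b+c)^2 \lesssim a^2 + b^2 + c^2$ gives
\begin{align*}
\| \pi^\perp_R \overrightarrow u_e(t) \|^2_{\mathcal H(r \geq R)} \lesssim
R^{-4} \| \pi_R \overrightarrow u_e(t) \|^2_{\mathcal H(r \geq R)} +
R^{-d} \| \pi_R \overrightarrow u_e(t) \|^4_{\mathcal H(r \geq R)} +
R^{-2} \| \pi_R \overrightarrow u_e(t) \|^6_{\mathcal H(r \geq R)},
\end{align*}
and expressing both sides of this inequality via the characterizations of $\| \pi_R \overrightarrow u_e(t) \|_{\mathcal H(r \geq R)}$ and $\| \pi^\perp_R \overrightarrow u_e(t) \|_{\mathcal H(r \geq R)}$ from Lemma \ref{l54} yields the stated bound.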
\subsubsection*{Step 2: Asymptotics for $\overrightarrow u_e(0)$}
In this step, we prove that $\overrightarrow u_e(0)$ has the asymptotic expansions \eqref{s510}, \eqref{s511} which we now formulate as
a proposition.
\begin{ppn}\label{p58}
Let $u_e$ be a solution to \eqref{s52e} which satisfies \eqref{s56}. Let $\overrightarrow u_e(0)
= (u_{e,0},u_{e,1})$. Then there exists $\alpha \in \mathbb R$ such that
\begin{align*}
r^{d-2} u_{e,0}(r) &= \alpha + O(r^{-2}), \\
\int_r^\infty u_{e,1}(\rho) \rho^{2j-1} d\rho &= O(r^{2j - d -1} ), \quad j = 1, \ldots, k,
\end{align*}
as $r \rightarrow \infty$.
\end{ppn}
The proof of Proposition \ref{p58} is split into several lemmas. First, we use Lemma \ref{l58} to prove the following difference
estimate for the projection coefficients.
\begin{lem}\label{l59}
Let $\delta_1 \leq \delta_0$ where $\delta_0$ is from Lemma \ref{l57}. Let $R_1 \geq R_0 > 1$ be large enough so that
for all $R \geq R_1$ and for all $t \in \mathbb R$
\begin{align*}
\| \overrightarrow u_e(t) \|_{\mathcal H(r \geq R)} &\leq \delta_1, \\
R^{-2} &\leq \delta_1.
\end{align*}
Then for all $r, r'$ with $R_1 \leq r \leq r' \leq 2r$ and uniformly in $t$
\begin{align}
\begin{split} \label{s528}
|\lambda_j(t,r) - \lambda_j(t,r')|
&\lesssim
r^{-2j + 1} \sum_{i = 1}^{\tilde k} r^{2i - 3} |\lambda_i(t,r)| + r^{4i - d - 2} |\lambda_i(t,r)|^2
+ r^{6i - d - 4} |\lambda_i(t,r)|^3 \\
&\:+ r^{-2j+1} \sum_{i = 1}^{k} r^{2i - 2} |\mu_i(t,r)| + r^{4i - d} |\mu_i(t,r)|^2
+ r^{6i - d-1} |\mu_i(t,r)|^3,
\end{split}
\end{align}
and
\begin{align}
\begin{split} \label{s529}
|\mu_j(t,r) - \mu_j(t,r')| &\lesssim
r^{-2j} \sum_{i = 1}^{\tilde k} r^{2i - 3} |\lambda_i(t,r)| + r^{4i - d - 2} |\lambda_i(t,r)|^2
+ r^{6i - d - 4} |\lambda_i(t,r)|^3 \\
&\:+ r^{-2j} \sum_{i = 1}^{k} r^{2i - 2} |\mu_i(t,r)|^2 + r^{4i - d} |\mu_i(t,r)|^2
+ r^{6i - d-1} |\mu_i(t,r)|^3.
\end{split}
\end{align}
\end{lem}
\begin{proof}
By the fundamental theorem of calculus and Lemma \ref{l58} we have, for all $r, r'$ such that $R_1 \leq r \leq r' \leq 2r$,
\begin{align*}
|\lambda_j(t,r) - \lambda_j(t,r')|^2 &=
\left ( \int_r^{r'} \partial_\rho \lambda_j(t,\rho) d\rho \right )^2 \\
&\leq \left ( \int_r^{r'} \rho^{-4j + d +1} d\rho \right ) \left ( \int_r^{r'} \left ( \rho^{2j-\frac{d+1}{2}} \partial_{\rho} \lambda_j(t,\rho) \right )^2 d\rho \right ) \\
&\lesssim r^{-4j+d+2} \sum_{i = 1}^{\tilde k} r^{4i - d - 6} |\lambda_i(t,r)|^2 + r^{8i - 3d - 4} |\lambda_i(t,r)|^4
+ r^{12i - 3d - 8} |\lambda_i(t,r)|^6 \\
&\:+ r^{-4j+d+2} \sum_{i = 1}^{k} r^{4i - d - 4} |\mu_i(t,r)|^2 + r^{8i - 3d} |\mu_i(t,r)|^4
+ r^{12i - 3d-2} |\mu_i(t,r)|^6
\end{align*}
which proves \eqref{s528}.
Similarly, we have
\begin{align*}
|\mu_j(t,r) - \mu_j(t,r')|^2
&\leq r^{-4j+d} \left ( \int_r^{r'} ( \rho^{2j - \frac{d-1}{2}} \partial_{\rho} \mu_j(t,\rho) )^2 d\rho \right ) \\
&\lesssim r^{-4j+d} \sum_{i = 1}^{\tilde k} r^{4i - d - 6} |\lambda_i(t,r)|^2 + r^{8i - 3d - 4} |\lambda_i(t,r)|^4
+ r^{12i - 3d - 8} |\lambda_i(t,r)|^6 \\
&\:+ r^{-4j+d} \sum_{i = 1}^{k} r^{4i - d - 4} |\mu_i(t,r)|^2 + r^{8i - 3d} |\mu_i(t,r)|^4
+ r^{12i - 3d-2} |\mu_i(t,r)|^6
\end{align*}
which proves \eqref{s529}.
\end{proof}
We note that with $\delta_1$ and $R_1$ fixed as in Lemma \ref{l59}, we have by Lemma \ref{l54} for all $r \geq R_1$
and uniformly in time
\begin{align}
\begin{split}\label{s527}
|\lambda_i(t,r)| &\lesssim \delta_1 r^{\frac{d+2}{2}-2i}, \quad \forall 1 \leq i \leq \tilde k, \\
|\mu_j(t,r)| &\lesssim \delta_1 r^{\frac{d}{2} - 2j}, \quad \forall 1 \leq j \leq k.
\end{split}
\end{align}
Combining this observation with Lemma \ref{l59} yields the following.
\begin{cor}\label{c510}
Let $\delta_1$ and $R_1$ be as in Lemma \ref{l59}. Then for all $r,r'$ with $R_1 \leq r \leq r' \leq 2r$ and for all $t \in \mathbb R$
\begin{align}
|\lambda_j(t,r) - \lambda_j(t,r')| &\lesssim \delta_1 \left ( \sum_{i = 1}^{\tilde k} r^{2i-2j} |\lambda_i(t,r)| +
\sum_{i = 1}^k r^{2i-2j+1} |\mu_i(t,r)| \right ), \label{s530} \\
|\mu_j(t,r) - \mu_j(t,r')| &\lesssim r^{-1} \delta_1 \left ( \sum_{i = 1}^{\tilde k} r^{2i-2j} |\lambda_i(t,r)| +
\sum_{i = 1}^k r^{2i-2j+1} |\mu_i(t,r)| \right ). \label{s531}
\end{align}
\end{cor}
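For the reader's convenience, we indicate how \eqref{s530} follows from \eqref{s528}; the proof of \eqref{s531} from \eqref{s529} is identical. By \eqref{s527} and the assumption $r^{-2} \leq R_1^{-2} \leq \delta_1$, each $\lambda$--term on the right-hand side of \eqref{s528} satisfies
\begin{align*}
r^{-2j+1} \cdot r^{2i - 3} |\lambda_i(t,r)| &= r^{-2} \cdot r^{2i-2j} |\lambda_i(t,r)| \leq \delta_1 r^{2i-2j} |\lambda_i(t,r)|, \\
r^{-2j+1} \cdot r^{4i - d - 2} |\lambda_i(t,r)|^2 &\lesssim \delta_1 r^{2i-2j-\frac{d}{2}} |\lambda_i(t,r)| \leq \delta_1 r^{2i-2j} |\lambda_i(t,r)|, \\
r^{-2j+1} \cdot r^{6i - d - 4} |\lambda_i(t,r)|^3 &\lesssim \delta_1^2 r^{2i-2j-1} |\lambda_i(t,r)| \leq \delta_1 r^{2i-2j} |\lambda_i(t,r)|,
\end{align*}
and the $\mu$--terms are handled in the same way.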
Before proceeding further, we state pointwise and time-averaged difference estimates for the projection coefficients that will
be used in the sequel.
\begin{lem}[Lemma 5.10, Lemma 5.12 \cite{klls2}]\label{l511a}
For each $R > 0$, $r \geq R$, and $t_1 \neq t_2$ with $(t_j,r) \in \{ r \geq R + |t| \}$ we have for
any $1 \leq j' \leq j \leq \tilde k$
\begin{align}
\begin{split}\label{s532a}
|\lambda_j(t_1,r) &- \lambda_j(t_2,r)| \\
&\lesssim r^{2j' - 2j} |\lambda_{j'}(t_1,r) - \lambda_{j'}(t_2,r)| + \sum_{m = 1}^k \left | r^{2m-2j} \int_{t_2}^{t_1}
\mu_m(t,r) dt \right |,
\end{split}
\end{align}
as well as for any $1 \leq j \leq k$
\begin{align}\label{s532b}
\frac{1}{R} \int_R^{2R} [ \mu_j(t_1,r) - \mu_j(t_2,r) ] dr = \sum_{i = 1}^{\tilde k} \frac{c_ic_j}{d-2i-2j} \int_{t_1}^{t_2}
I(i,j) + II(i,j) dt,
\end{align}
with
\begin{align}
\begin{split}\label{s532c}
I(i,j) &= -\frac{1}{R} (u_e(t,r) r^{d-2j-1}) \big |_{r = R}^{r = 2R} + (2i - 2j - 1) \frac{1}{R} \int_R^{2R} u_e(t,r) r^{d- 2j -2} dr \\
&\:- \frac{(2\ell - 2i + 3)(2i-2)}{R} \int_R^{2R} r^{d-2i-2j} \int_r^\infty u_e(t,\rho) \rho^{2i-3}d\rho dr, \\
II(i,j) &= \frac{1}{R} \int_R^{2R} r^{d-2i-2j} \int_r^\infty \bigl[ -V_e(\rho) u_e(t,\rho) + N_e(\rho, u_e(t,\rho))
\bigr ]\rho^{2i-1} d\rho
dr.
\end{split}
\end{align}
\end{lem}
Subtleties arise depending on whether $d = 7, 11, 15,\ldots$ ($\ell$ even) or
$d = 5, 9, 13, \ldots$ ($\ell$ odd), due to the relationship between $\tilde k$ and $k$. We first prove Proposition
\ref{p58} in the case that \textbf{$\ell$ is even}. In this case, we have the relations
\begin{align*}
d = 4 \tilde k - 1, \quad \tilde k = k+1.
\end{align*}
We now establish a growth estimate which improves \eqref{s527}.
\begin{lem}\label{l511}
Let $\epsilon > 0$ be fixed and sufficiently small. Then, provided $\delta_1$ from Lemma
\ref{l59} is sufficiently small, we have uniformly in $t$,
\begin{align}
\begin{split}\label{s533}
|\lambda_{\tilde k}(t,r)| &\lesssim r^{\epsilon}, \\
|\mu_k(t,r)| &\lesssim r^{\epsilon}, \\
|\lambda_i(t,r)| &\lesssim r^{2\tilde k - 2i - 2 + 3\epsilon}, \quad \forall 1 \leq i < \tilde k, \\
|\mu_i(t,r)| &\lesssim r^{2\tilde k - 2i - 3 + 3\epsilon}, \quad \forall 1 \leq i < k.
\end{split}
\end{align}
\end{lem}
\begin{proof}
If $r > R_1$, by Corollary
\ref{c510} we have,
\begin{align}
|\lambda_j(t,2r)| &\leq (1 + C \delta_1) |\lambda_j(t,r)| +
C \delta_1 \left ( \sum_{i = 1}^{\tilde k} r^{2i-2j} |\lambda_i(t,r)| +
\sum_{i = 1}^k r^{2i-2j+1} |\mu_i(t,r)| \right ), \label{s534} \\
|\mu_j(t,2r)| &\leq (1 + C \delta_1) |\mu_j(t,r)| +
r^{-1} C \delta_1 \left ( \sum_{i = 1}^{\tilde k} r^{2i-2j} |\lambda_i(t,r)| +
\sum_{i = 1}^k r^{2i-2j+1} |\mu_i(t,r)| \right ). \label{s535}
\end{align}
Fix $r_0 > R_1$ and define
\begin{align*}
a_n := \sum_{i = 1}^{\tilde k} (2^n r_0)^{2i-2\tilde k} |\lambda_i(t,2^n r_0)| +
\sum_{i = 1}^k (2^n r_0)^{2i-2\tilde k+1} |\mu_i(t,2^n r_0)|
\end{align*}
then \eqref{s534} and \eqref{s535} imply
\begin{align*}
a_{n+1} \leq ( 1 + C(k + \tilde k) \delta_1 ) a_n.
\end{align*}
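To see this, set $r_n := 2^n r_0$ and note that for each $j$,
\begin{align*}
\sum_{i = 1}^{\tilde k} r_n^{2i-2j} |\lambda_i(t,r_n)| +
\sum_{i = 1}^k r_n^{2i-2j+1} |\mu_i(t,r_n)| = r_n^{2\tilde k - 2j} a_n.
\end{align*}
Multiplying \eqref{s534} by $r_{n+1}^{2j - 2\tilde k}$ and using $r_{n+1}^{2j - 2\tilde k} \leq r_n^{2j - 2\tilde k}$ (the exponent is nonpositive), we obtain
\begin{align*}
r_{n+1}^{2j - 2\tilde k} |\lambda_j(t,r_{n+1})| \leq (1 + C \delta_1) r_n^{2j - 2\tilde k} |\lambda_j(t,r_n)| + C \delta_1 a_n,
\end{align*}
and the same argument applied to \eqref{s535} bounds $r_{n+1}^{2j - 2\tilde k + 1} |\mu_j(t,r_{n+1})|$ analogously. Summing over $j$ and enlarging $C$ yields the claim.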
By induction,
\begin{align*}
a_n \leq ( 1 + C(k + \tilde k) \delta_1 )^n a_0.
\end{align*}
Choose $\delta_1$ so small that $1 + C(k + \tilde k) \delta_1 < 2^{\epsilon}$. We conclude (using the compactness of
$\overrightarrow u_e$) that
\begin{align*}
a_n \leq 2^{n \epsilon} a_0 \lesssim 2^{n \epsilon},
\end{align*}
whence by our definition of $a_n$
\begin{align}\label{s536a}
|\lambda_i(t,2^n r_0)| \lesssim (2^n r_0)^{2\tilde k - 2i + \epsilon}, \quad |\mu_i(t,2^n r_0)| \lesssim (2^n r_0)^{2\tilde k - 2i -1 + \epsilon},
\end{align}
which is an improvement of \eqref{s527}.
We now insert \eqref{s536a} back into our difference estimates \eqref{s528} and \eqref{s529}. We first note that by \eqref{s536a} and
the relation $d = 4 \tilde k - 1 \geq 7$, we have the
estimates
\begin{align}
\begin{split}\label{537}
(2^n r_0)^{2i - 3} |\lambda_i(t,2^n r_0)| &\lesssim (2^n r_0)^{2\tilde k - 3 + \epsilon}, \\
(2^n r_0)^{4i - d - 2} |\lambda_i(t,2^n r_0)|^2 &\lesssim (2^n r_0)^{-1 + 2\epsilon}
\lesssim (2^n r_0)^{2\tilde k - 3 + 3\epsilon}, \\
(2^n r_0)^{6i - d - 4} |\lambda_i(t,2^n r_0)|^3 &\lesssim (2^n r_0)^{2\tilde k - 3 + 3\epsilon},
\end{split}
\end{align}
as well as
\begin{align}
\begin{split}\label{538}
(2^n r_0)^{2i - 2} |\mu_i(t,2^n r_0)| &\lesssim (2^n r_0)^{2\tilde k - 3 + \epsilon}, \\
(2^n r_0)^{4i - d} |\mu_i(t,2^n r_0)|^2 &\lesssim (2^n r_0)^{-1 + 2\epsilon}
\lesssim (2^n r_0)^{2\tilde k - 3 + 3\epsilon}, \\
(2^n r_0)^{6i - d - 1} |\mu_i(t,2^n r_0)|^3 &\lesssim (2^n r_0)^{2\tilde k - 3 + 3\epsilon}.
\end{split}
\end{align}
Thus, by \eqref{s528} and \eqref{s529}, we deduce that
\begin{align}
\begin{split}\label{s540}
|\lambda_j(t,2^{n+1} r_0) - \lambda_j(t,2^n r_0)| \leq C \delta_1 |\lambda_j(t,2^n r_0)| + C (2^n r_0)^{2 \tilde k - 2j - 2 + 3 \epsilon}, \\
|\mu_j(t,2^{n+1} r_0)- \mu_j(t,2^n r_0)| \leq C \delta_1 |\mu_j(t,2^n r_0)| + C (2^n r_0)^{2 \tilde k - 2j - 3 + 3 \epsilon}.
\end{split}
\end{align}
From this we obtain
\begin{align*}
|\lambda_j(t,2^{n+1} r_0)| \leq (1+ C \delta_1) |\lambda_j(t,2^n r_0)| + C (2^n r_0)^{2 \tilde k - 2j - 2 + 3 \epsilon}.
\end{align*}
Using that we have chosen $\delta_1$ so that $(1 + C \delta_1) < 2^{\epsilon}$ and iterating we obtain
\begin{align*}
|\lambda_j(t,2^n r_0)| \leq (2^\epsilon)^n |\lambda_j(t, r_0)| + C \sum_{m = 1}^n
(2^m r_0)^{2\tilde k - 2j - 2 + 3 \epsilon} (2^\epsilon)^{n-m}.
\end{align*}
In the case $j = \tilde k$, the right-hand side is easily seen to be $O\left (2^{\epsilon n} \right )$ since the first term dominates, while if $j < \tilde k$, the right-hand side
is $O\left ((2^n r_0)^{2\tilde k - 2j -2 + 3 \epsilon} \right )$ since the second term dominates. A similar
argument applies to the $\mu_j$'s, and we conclude that
\begin{align}
\begin{split}\label{s539}
|\lambda_{\tilde k}(t, 2^n r_0)| &\lesssim (2^n r_0)^{\epsilon}, \\
|\mu_{k}(t, 2^n r_0)| &\lesssim (2^n r_0)^{\epsilon}, \\
|\lambda_{i}(t, 2^n r_0)| &\lesssim (2^n r_0)^{2\tilde k - 2i - 2 + 3\epsilon}, \quad \forall 1 \leq i < \tilde k, \\
|\mu_{i}(t, 2^n r_0)| &\lesssim (2^n r_0)^{2\tilde k - 2i -3 + 3\epsilon}, \quad \forall 1 \leq i < k.
\end{split}
\end{align}
The estimate \eqref{s539} is uniform in time and an improvement of \eqref{s536a}. Let $r \geq r_0$ with $2^n r_0 \leq r \leq 2^{n+1}r_0$. We plug \eqref{s539} into the difference estimate \eqref{s528} and obtain
\begin{align*}
|\lambda_{\tilde k}(t,r)| \leq (1 + C \delta_1) |\lambda_{\tilde k}(t,2^n r_0)| + C (2^n r_0)^{-2 + 3 \epsilon}
\lesssim (2^n r_0)^{\epsilon} \lesssim r^{\epsilon}.
\end{align*}
The other estimates in \eqref{s533} are obtained by similar reasoning. This concludes the proof.
\end{proof}
The following corollary is a consequence of the proof of Lemma \ref{l511}.
\begin{cor}\label{cor511}
Let $\epsilon$ and $\delta_1$ be as in Lemma \ref{l511}, let $r_0 > R_1$ be fixed, and let $j \in \{ 1, \ldots, \tilde k\}$. If there exists $a \geq \epsilon$ such that for all $n \in \mathbb N$,
\begin{align*}
|\lambda_j(t,2^{n+1} r_0)| \leq (1 + C \delta_1) |\lambda_j(t,2^n r_0)| + (2^n r_0)^a,
\end{align*}
then for all $r \geq r_0$
\begin{align*}
|\lambda_j(t,r)| \lesssim r^{a},
\end{align*}
uniformly in time. A similar statement holds for the $\mu_j$'s as well.
\end{cor}
We now use the previous lemma as the base case for an induction argument. The main goal is to prove
the following decay estimates for the projection coefficients.
\begin{ppn}\label{p512}
Suppose $d = 7,11,15,\ldots$ and $\epsilon$, $\delta_1$,$r_0$ are as in Lemma \ref{l511}. Then uniformly in time, the following estimates hold:
\begin{align}
\begin{split}\label{s541}
|\lambda_j(t,r)| &\lesssim r^{-2j + 3 \epsilon}, \quad \forall 1 < j \leq \tilde k, \\
|\lambda_1(t,r)| &\lesssim r^{\epsilon}, \\
|\mu_j(t,r)| &\lesssim r^{-2j -1 + 3 \epsilon}, \quad \forall 1 \leq j \leq k.
\end{split}
\end{align}
\end{ppn}
Proposition \ref{p512} is a consequence of the following proposition with $P = k$.
\begin{ppn}\label{p513}
With the same hypotheses as in Proposition \ref{p512}, for $P = 0, 1, \ldots, k$ the following estimates hold uniformly in time:
\begin{align}
\begin{split}\label{s542}
|\lambda_j(t,r)| &\lesssim r^{2(\tilde k - P -j) - 2 + 3 \epsilon}, \quad \forall 1 \leq j \leq \tilde k \mbox{ with } j \neq \tilde k - P, \\
|\lambda_{\tilde k - P}(t,r)| &\lesssim r^{\epsilon},\\
|\mu_j(t,r)| &\lesssim r^{2(k - P -j) - 1 + 3 \epsilon}, \quad \forall 1 \leq j \leq k \mbox{ with } j \neq k - P, \\
|\mu_{k - P}(t,r)| &\lesssim r^{\epsilon}.
\end{split}
\end{align}
\end{ppn}
\begin{proof}[Proof of Proposition \ref{p513}]
As was mentioned before, we prove Proposition \ref{p513} by induction. The base case $P = 0$ is contained in Lemma
\ref{l511}. We now assume that the estimates \eqref{s542} hold for $P$ with $0 \leq P \leq k - 1$ and wish to show that
the estimates \eqref{s542} also hold for $P+1$. The proof is divided into several lemmas. The bulk of the argument is devoted to proving that
the coefficients $\lambda_{\tilde k - P}$ and $\mu_{k - P}$ satisfy certain decay estimates. We first show that they have spatial limits.
\begin{lem}\label{l514}
There exist bounded functions $\alpha_{\tilde k - P}(t)$ and $\beta_{k - P}(t)$ such that
\begin{align}
|\lambda_{\tilde k - P}(t,r) - \alpha_{\tilde k - P}(t) | = O(r^{-2}), \label{s543} \\
|\mu_{k - P}(t,r) - \beta_{k - P}(t) | = O(r^{-1}), \label{s544}
\end{align}
where the $O(\cdot)$ terms are uniform in time.
\end{lem}
\begin{proof}
Fix $r_0 > R_1$. We insert the estimates \eqref{s542} furnished by our induction hypothesis into the difference estimate \eqref{s528}. We first note
that based on \eqref{s542}, we can estimate the sum excluding the coefficients $\lambda_{\tilde k - P}$ and $\mu_{k - P}$:
\begin{align*}
\sum_{i \neq \tilde k - P}^{\tilde k}& (2^n r_0)^{2i - 3} |\lambda_i(t,2^n r_0)| + (2^n r_0)^{4i - d - 2} |\lambda_i(t,2^n r_0)|^2
+ (2^n r_0)^{6i - d - 4} |\lambda_i(t,2^n r_0)|^3 \\
&+ \sum_{i \neq k - P}^{k} (2^n r_0)^{2i - 2} |\mu_i(t,2^n r_0)| + (2^n r_0)^{4i - d} |\mu_i(t,2^n r_0)|^2
+ (2^n r_0)^{6i - d-1} |\mu_i(t,2^n r_0)|^3 \\
&\lesssim
(2^n r_0)^{2(\tilde k - P - 1) - 3 + 3\epsilon} + (2^n r_0)^{-4P - 5 + 6 \epsilon}
+ (2^n r_0)^{2(\tilde k - 3P - 1) - 7 + 9 \epsilon}.
\end{align*}
In particular, we have the following estimate which will be used repeatedly,
\begin{align}
\begin{split}\label{s545}
\sum_{i \neq \tilde k - P}^{\tilde k}& (2^n r_0)^{2i - 3} |\lambda_i(t,2^n r_0)| + (2^n r_0)^{4i - d - 2} |\lambda_i(t,2^n r_0)|^2
+ (2^n r_0)^{6i - d - 4} |\lambda_i(t,2^n r_0)|^3 \\
&+ \sum_{i \neq k - P}^{k} (2^n r_0)^{2i - 2} |\mu_i(t,2^n r_0)| + (2^n r_0)^{4i - d} |\mu_i(t,2^n r_0)|^2
+ (2^n r_0)^{6i - d -1} |\mu_i(t,2^n r_0)|^3 \\
&\lesssim (2^n r_0)^{2(\tilde k - P - 1) - 3 + 3 \epsilon}.
\end{split}
\end{align}
Using \eqref{s542} and the relation $d = 4 \tilde k - 1$, $k = \tilde k - 1$, we estimate
\begin{align}
\begin{split}\label{s546}
(2^n r_0)&^{2(\tilde k - P) - 3}|\lambda_{\tilde k - P}(t, 2^n r_0)| + (2^n r_0)^{4(\tilde k - P) - d - 2}
|\lambda_{\tilde k - P}(t, 2^n r_0)|^2
+ (2^n r_0)^{6(\tilde k - P) - d - 4}|\lambda_{\tilde k - P}(t, 2^n r_0)|^3 \\
&+ (2^n r_0)^{2(k - P) - 2}|\mu_{k - P}(t, 2^n r_0)| + (2^n r_0)^{4(k - P) - d}
|\mu_{k - P}(t, 2^n r_0)|^2
+ (2^n r_0)^{6(k - P) - d - 1}|\mu_{k - P}(t, 2^n r_0)|^3 \\
&\lesssim (2^n r_0)^{2(\tilde k - P) - 3 + 3 \epsilon}.
\end{split}
\end{align}
Inserting \eqref{s545} and \eqref{s546} into our difference estimates \eqref{s528} and \eqref{s529}, we deduce for each $n \in \mathbb N$
\begin{align}
\begin{split}\label{s547}
|\lambda_{\tilde k - P}(t,2^{n+1}r_0) - \lambda_{\tilde k - P}(t,2^n r_0)| \lesssim
(2^n r_0)^{-2 + 3 \epsilon}, \\
|\mu_{k - P}(t, 2^{n+1} r_0) - \mu_{k - P}(t,2^n r_0)| \lesssim (2^n r_0)^{-1 + 3 \epsilon}.
\end{split}
\end{align}
From \eqref{s547}, we deduce that
\begin{align*}
&\sum_{n = 0}^\infty |\lambda_{\tilde k - P}(t,2^{n+1}r_0) - \lambda_{\tilde k - P}(t,2^n r_0)| \lesssim \sum_{n = 0}^\infty
2^{(-2 + 3 \epsilon)n} \lesssim 1, \\
&\sum_{n = 0}^\infty |\mu_{ k - P}(t,2^{n+1}r_0) - \mu_{k - P}(t,2^n r_0)| \lesssim \sum_{n = 0}^\infty 2^{(-1 + 3 \epsilon)n}
\lesssim 1,
\end{align*}
uniformly in $t$.
In particular, for all $t \in \mathbb R$ there exist $\alpha_{\tilde k - P}(t), \beta_{k - P}(t) \in \mathbb R$ such that
\begin{align*}
\lim_{n \rightarrow \infty} \lambda_{\tilde k - P}(t,2^n r_0) &= \alpha_{\tilde k - P}(t), \\
\lim_{n \rightarrow \infty} \mu_{k - P}(t,2^n r_0) &= \beta_{k - P}(t),
\end{align*}
with the estimates
\begin{align}
\begin{split}\label{s549}
&\left |\alpha_{\tilde k - P}(t) - \lambda_{\tilde k - P}(t, 2^n r_0)\right | \lesssim (2^n r_0)^{-2 + 3 \epsilon}, \\
&\left |\beta_{k - P}(t) - \mu_{k - P}(t, 2^n r_0) \right | \lesssim (2^n r_0)^{-1 + 3 \epsilon}.
\end{split}
\end{align}
Since the compactness of $\overrightarrow u_e(t)$ implies $|\lambda_{\tilde k - P}(t,r_0)|$ is uniformly bounded in $t$, we have
via \eqref{s549}
\begin{align*}
|\alpha_{\tilde k - P}(t)| &\leq |\alpha_{\tilde k - P}(t) - \lambda_{\tilde k - P}(t,r_0) | + |\lambda_{\tilde k - P}(t,r_0)|
\lesssim 1
\end{align*}
uniformly in $t$. Thus,
\begin{align*}
|\lambda_{\tilde k - P}(t,2^n r_0) | \lesssim 1,
\end{align*}
uniformly in $t$ and $n$. Similarly, $\beta_{k - P}(t)$ and $|\mu_{k - P}(t,2^n r_0)|$ are bounded uniformly in $t$ and $n$. In conclusion,
we have
\begin{align}\label{s548}
|\lambda_{\tilde k - P}(t,2^n r_0)| + |\mu_{k - P}(t,2^n r_0)| \lesssim 1
\end{align}
uniformly in $t$ and $n$.
Let $r \geq r_0$ with $2^n r_0 \leq r \leq 2^{n+1} r_0$. If we insert \eqref{s548} back into the difference estimates
\eqref{s528} and \eqref{s529},
we deduce that
\begin{align}
\begin{split}\label{s551}
|\lambda_{\tilde k - P}(t,r) - \lambda_{\tilde k - P}(t,2^n r_0)| \lesssim (2^n r_0)^{-2} \lesssim r^{-2}, \\
|\mu_{k - P}(t,r) - \mu_{k -P}(t,2^n r_0)| \lesssim (2^n r_0)^{-1} \lesssim r^{-1},
\end{split}
\end{align}
which imply the following improvements of \eqref{s549}
\begin{align}
\begin{split}\label{s550}
&\left |\alpha_{\tilde k - P}(t) - \lambda_{\tilde k - P}(t, 2^n r_0)\right | \lesssim (2^n r_0)^{-2}, \\
&\left |\beta_{k - P}(t) - \mu_{k - P}(t, 2^n r_0) \right | \lesssim (2^n r_0)^{-1}.
\end{split}
\end{align}
Finally, using \eqref{s551} and \eqref{s550} we conclude that
\begin{align*}
|\alpha_{\tilde k - P}(t) - \lambda_{\tilde k - P}(t,r) |
\lesssim |\alpha_{\tilde k - P}(t) - \lambda_{\tilde k - P}(t,2^n r_0) | + |\lambda_{\tilde k - P}(t,r) - \lambda_{\tilde k - P}(t,2^n r_0)|
\lesssim (2^n r_0)^{-2} \lesssim r^{-2},
\end{align*}
and
\begin{align*}
|\beta_{k - P}(t) - \mu_{k - P}(t,r)| \lesssim
|\beta_{k - P}(t) - \mu_{k - P}(t,2^n r_0)| +
|\mu_{k - P}(t,r) - \mu_{k -P}(t,2^n r_0)| \lesssim (2^n r_0)^{-1} \lesssim r^{-1}.
\end{align*}
This concludes the proof.
\end{proof}
A corollary of Lemma \ref{l514} is the following preliminary asymptotics for $u_e$.
\begin{cor}\label{c516}
We have
\begin{align}\label{s552}
r^{-2(\tilde k - P) + d} u_e (t,r) = \alpha_{\tilde k - P}(t) + O(r^{-2+3\epsilon}).
\end{align}
The $O(\cdot)$ term is uniform in time.
\end{cor}
\begin{proof}
By Lemma \ref{l54}, \eqref{s543}, and our induction hypotheses \eqref{s542}, we have
\begin{align*}
r^{-2(\tilde k - P) + d} u_e(t,r) &=
\sum_{j = 1}^{\tilde k } \lambda_j(t,r) r^{2j - 2(\tilde k - P)} \\
&= \alpha_{\tilde k - P}(t) + \sum_{j \neq \tilde k - P}^{\tilde k} \lambda_j(t,r) r^{2j - 2(\tilde k - P)} + O(r^{-2}) \\
&= \alpha_{\tilde k - P}(t) + O(r^{-2+3\epsilon})
\end{align*}
uniformly in time.
\end{proof}
A corollary of the proof of Lemma \ref{l514} is the following.
\begin{cor}\label{c517}
Suppose that for all $r, r'$ with $R_1 \leq r \leq r' \leq 2r$, we have
\begin{align*}
|\lambda_j(t,r') - \lambda_j(t,r)| \lesssim r^{-a},
\end{align*}
with $a > 0$. Then $\lambda_j(t,r)$ has a limit, $\alpha_j(t)$, as $r \rightarrow \infty$. Moreover, $\alpha_j(t)$ is bounded in time
and
\begin{align*}
|\lambda_j(t,r) - \alpha_j(t) | \lesssim r^{-a}
\end{align*}
uniformly in time. A similar statement holds for the $\mu_j$'s.
\end{cor}
We will now show that
\begin{align*}
\alpha_{\tilde k - P}(t) \equiv 0, \quad \beta_{k - P}(t) \equiv 0.
\end{align*}
We first show that $\alpha_{\tilde k - P}(t)$ is constant in time.
\begin{lem}\label{l517}
The function $\alpha_{\tilde k - P}(t)$ is constant in time. From now on, we will write $\alpha_{\tilde k - P}$ in place of $\alpha_{\tilde k - P}(t)$.
\end{lem}
\begin{proof}
Let $t_2 \neq t_1$. By \eqref{s532a} with $j = \tilde k - P$ and $j' = \tilde k - P - 1$, \eqref{s543}, and our induction hypotheses \eqref{s542}
we have
\begin{align*}
|\alpha_{\tilde k - P}(t_2) - \alpha_{\tilde k - P}(t_1) | &\lesssim
|\lambda_{\tilde k - P}(t_2,r) - \lambda_{\tilde k - P}(t_1,r)| + O(r^{-2}) \\
&\lesssim r^{-2} |\lambda_{\tilde k - P -1}(t_2,r) - \lambda_{\tilde k - P-1}(t_1,r)|
+ \sum_{m =1}^k \int_{t_1}^{t_2} r^{2m - 2(\tilde k - P)} |\mu_m (t,r)| dt
+ O(r^{-2}) \\
&\lesssim r^{-2+3\epsilon}(1 + |t_2 - t_1|).
\end{align*}
We let $r \rightarrow \infty$ and deduce that $\alpha_{\tilde k - P}(t_2) = \alpha_{\tilde k - P}(t_1)$ as desired.
\end{proof}
We now show that $\alpha_{\tilde k - P} = 0$. As a consequence, we will also obtain the fact that $\beta_{k - P}(t)$ is constant in time.
\begin{lem}\label{l518}
We have $\alpha_{\tilde k - P} = 0$ and $\beta_{k - P}(t)$ is constant in time.
From now on, we will write $\beta_{k - P}$ in place of $\beta_{k - P}(t)$.
\end{lem}
\begin{proof}
The key tool for proving both assertions is Lemma \ref{l511a}. By \eqref{s532b} and \eqref{s544} we have
\begin{align*}
\beta_{k - P}(t_2) - \beta_{k - P}(t_1) &=
\frac{1}{R} \int_R^{2R} [\beta_{k - P}(t_2) - \beta_{k - P}(t_1)] \, dr \\
&= \frac{1}{R} \int_R^{2R}[\mu_{k - P}(t_2,r) - \mu_{k - P}(t_1,r) ] dr + O(R^{-1}) \\
&= \sum_{i = 1}^k \frac{c_i c_{k - P}}{d - 2i - 2(k - P)}
\int_{t_1}^{t_2} [I(i, k - P) + II(i, k - P)] dt + O(R^{-1})
\end{align*}
where $I(i,k-P)$ and $II(i,k-P)$ are defined as in \eqref{s532c}. The estimates for the potential $V_e$ and nonlinearity
$N_e$, \eqref{s57}--\eqref{s59}, along with \eqref{s552} imply
\begin{align*}
\Bigl | -V_e(r) u_e + N_e(r, u_e) \Bigr | &\lesssim r^{-2 \tilde k - 2P - 3} + r^{-4\tilde k - 4P - 1} + r^{-2\tilde k - 6P - 3} \\
&\lesssim r^{-2\tilde k - 2P - 3}.
\end{align*}
Hence, using that $d = 4 \tilde k - 1$ and $k = \tilde k - 1$, we have
\begin{align}
|II(i, k - P)| = \Bigl | \frac{1}{R} \int_R^{2R} r^{d - 2i - 2(k - P)} \int_r^\infty [ -V_e(\rho) u_e(t,\rho)
+ N_e(\rho,u_e(t,\rho)) ] \rho^{2i-1} d\rho dr \Bigr | \lesssim R^{-2}. \label{s553}
\end{align}
We now estimate the remaining term,
\begin{align*}
I(i,k-P) &= -\frac{1}{R} (u_e(t,r) r^{d-2(k-P)-1}) \big |_{r = R}^{r = 2R} + (2i - 2(k-P) - 1) \frac{1}{R}
\int_R^{2R} u_e(t,r) r^{d- 2(k-P) -2} dr \\
&\:- \frac{(2\ell - 2i + 3)(2i-2)}{R} \int_R^{2R} r^{d-2i-2(k-P)} \int_r^\infty u_e(t,\rho) \rho^{2i-3}d\rho dr.
\end{align*}
By \eqref{s552}, we have
\begin{align*}
r^{d- 2(k - P) - 2} u_e(t,r) &= \alpha_{\tilde k - P} + O(r^{-2 + 3 \epsilon}), \\
r^{d - 2(k - P) - 2i} \int_r^\infty u_e(t,\rho) \rho^{2i - 3} d\rho &= \frac{\alpha_{\tilde k - P}}{d - 2i - 2(k - P)} + O(r^{-2 + 3\epsilon}),
\end{align*}
so that
\begin{align*}
I(i,k-P) = -\frac{2(k-P)(d - 2(k-P) - 2)}{d - 2i - 2(k - P)} \alpha_{\tilde k - P} + O(R^{-2 + 3 \epsilon}).
\end{align*}
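For completeness, we record the elementary algebra behind the coefficient in the last display. Writing $m = k - P$ and recalling that $d = 2\ell + 3$, so that $2\ell - 2i + 3 = d - 2i$, the three terms combine (up to the $O(R^{-2+3\epsilon})$ error) into
\begin{align*}
\Bigl [ -1 + (2i - 2m - 1) - \frac{(d - 2i)(2i - 2)}{d - 2i - 2m} \Bigr ] \alpha_{\tilde k - P}
&= \frac{(2i - 2m - 2)(d - 2i - 2m) - (d - 2i)(2i - 2)}{d - 2i - 2m} \, \alpha_{\tilde k - P} \\
&= -\frac{2m(d - 2m - 2)}{d - 2i - 2m} \, \alpha_{\tilde k - P},
\end{align*}
since expanding the numerator yields $-2m(d - 2m - 2)$.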
Thus,
\begin{align}\label{s554}
\sum_{i = 1}^k \frac{c_i c_{k - P}}{d - 2i - 2(k - P)}
\int_{t_1}^{t_2} I(i, k - P) dt = C_0(t_2 - t_1) \alpha_{\tilde k - P} + O(R^{-2 + 3\epsilon}(t_2 - t_1))
\end{align}
where
\begin{align*}
C_0 := - \sum_{i = 1}^k \frac{2c_i c_{k - P} (k - P) (d - 2(k-P) - 2)}{(d - 2i - 2(k - P))^2}.
\end{align*}
It can be shown using contour integration that $C_0 \neq 0$ (see Remark 5.29 in \cite{klls2} for the explicit value for $C_0$). We let $R \rightarrow \infty$
in \eqref{s554} and deduce that
\begin{align}\label{s555}
C_0(t_2 - t_1) \alpha_{\tilde k - P} = \beta_{k - P}(t_2) - \beta_{k - P}(t_1).
\end{align}
Since $|\beta_{k - P}(t)| \lesssim 1$ by Lemma \ref{l514} and $C_0 \neq 0$, we obtain
\begin{align*}
\alpha_{\tilde k - P} = \frac{1}{C_0} \lim_{t_2 \rightarrow \infty} \frac{\beta_{k - P}(t_2) - \beta_{k - P}(t_1)}{t_2 - t_1} = 0.
\end{align*}
Thus, $\alpha_{\tilde k - P} = 0$ which by \eqref{s555} implies that $\beta_{k - P}(t)$ is constant in time.
\end{proof}
We now conclude that $\beta_{k - P} = 0$.
\begin{lem}\label{l519}
We have $\beta_{k - P} = 0$.
\end{lem}
\begin{proof}
By Lemma \ref{l514}, $\beta_{k -P} = \mu_{k - P}(t,R) + O(R^{-1})$ uniformly in time so that
\begin{align*}
\beta_{k - P} = \frac{1}{T} \int_0^T \mu_{k - P}(t,R) dt + O(R^{-1}).
\end{align*}
Since $\alpha_{\tilde k - P} = 0$, we have by \eqref{s552}
\begin{align*}
u_e(t,r) = O(r^{-d + 2(\tilde k - P) - 2 + 3\epsilon}),
\end{align*}
uniformly in time.
Thus, by Lemma \ref{l54} and the relations $d = 4 \tilde k - 1$, $\tilde k = k + 1$, we have
\begin{align*}
\Bigl | \int_0^T \mu_{k - P}(t,R) dt \Bigr | &\lesssim
\sum_{i = 1}^k R^{d - 2i - 2(k - P)} \Bigl | \int_R^\infty \int_0^T \partial_t u_e(t,\rho) dt \rho^{2i - 1} d\rho \Bigr | \\
&\lesssim
\sum_{i = 1}^k R^{d - 2i - 2(k - P)} \int_R^\infty |u_e(T,\rho) - u_e(0,\rho)| \rho^{2i - 1} d\rho \\
&\lesssim R^{3\epsilon}.
\end{align*}
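The final power of $R$ can be checked by elementary bookkeeping: since $u_e(t,\rho) = O(\rho^{-d + 2(\tilde k - P) - 2 + 3\epsilon})$ uniformly in time and $d = 4\tilde k - 1$, $k = \tilde k - 1$, each summand satisfies
\begin{align*}
R^{d - 2i - 2(k - P)} \int_R^\infty \rho^{-d + 2(\tilde k - P) - 2 + 3\epsilon} \rho^{2i - 1} d\rho
\simeq R^{2\tilde k + 2P - 2i + 1} \cdot R^{2i - 2\tilde k - 2P - 1 + 3\epsilon} = R^{3\epsilon}.
\end{align*}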
It follows that
\begin{align*}
\beta_{k - P} = O ( R^{3\epsilon} / T) + O(R^{-1}).
\end{align*}
We set $R = T$ and let $T \rightarrow \infty$ to conclude that $\beta_{k - P} = 0$ as desired.
\end{proof}
In summary, we have now shown that if \eqref{s542} holds, then
\begin{align}
\begin{split}\label{s557}
\lambda_{\tilde k - P}(t,r) &= O(r^{-2}), \\
\mu_{k - P}(t,r) &= O(r^{-1}),
\end{split}
\end{align}
uniformly in time. We will now insert \eqref{s557} back into the difference estimates \eqref{s528} and \eqref{s529}
to obtain \eqref{s542} for $P + 1$.
\begin{lem}\label{l520}
Assume \eqref{s542} holds for some $P$ with $0 \leq P < k - 1$. Then \eqref{s542} holds for $P + 1$.
\end{lem}
\begin{proof}
We recall that by \eqref{s545}, we have for all $r > R_1$
\begin{align}
\begin{split}\label{s558}
\sum_{i \neq \tilde k - P}^{\tilde k}& r^{2i - 3} |\lambda_i(t,r)| + r^{4i - d - 2} |\lambda_i(t,r)|^2
+ r^{6i - d - 4} |\lambda_i(t,r)|^3 \\
&+ \sum_{i \neq k - P}^{k} r^{2i - 2} |\mu_i(t,r)| + r^{4i - d} |\mu_i(t,r)|^2
+ r^{6i - d -1} |\mu_i(t,r)|^3 \\
&\lesssim r^{2(\tilde k - P - 1) - 3 + 3 \epsilon}
\end{split}
\end{align}
with the main contribution coming from the linear terms. By \eqref{s557}, we have for all $r > R_1$
\begin{align}
\begin{split}\label{s558b}
r&^{2(\tilde k - P) - 3}|\lambda_{\tilde k - P}(t, r)| + r^{4(\tilde k - P) - d - 2}
|\lambda_{\tilde k - P}(t, r)|^2
+ r^{6(\tilde k - P) - d - 4}|\lambda_{\tilde k - P}(t, r)|^3 \\
&+ r^{2(k - P) - 2}|\mu_{k - P}(t, r)| + r^{4(k - P) - d}
|\mu_{k - P}(t, r)|^2
+ r^{6(k - P) - d - 1}|\mu_{k - P}(t, r)|^3 \\
&\lesssim r^{2(\tilde k - P-1) - 3 + 3 \epsilon}
\end{split}
\end{align}
with the main contribution coming from the linear terms. Thus, inserting \eqref{s558} and \eqref{s558b} into our difference estimate \eqref{s528}, we have for all $R_1 \leq r \leq r' \leq 2r$,
\begin{align}\label{s559}
|\lambda_j(t,r') - \lambda_j(t,r) | \lesssim r^{2(\tilde k - (P + 1) - j) - 2 + 3\epsilon}.
\end{align}
By our induction hypotheses \eqref{s542}, if $\tilde k - P < j \leq \tilde k$, we have $\lambda_j(t,r) \rightarrow 0$ as $r \rightarrow \infty$; by \eqref{s557}, the same holds for $j = \tilde k - P$. By
Corollary \ref{c517} we then deduce that
\begin{align*}
|\lambda_j(t,r) | \lesssim r^{2(\tilde k - (P +1) - j) -2 + 3 \epsilon}
\end{align*}
uniformly in time. If $j = \tilde k - (P +1)$, by \eqref{s559} and Corollary \ref{c517} we also deduce that
$$|\lambda_{\tilde k - (P + 1)}(t,r)| \lesssim 1 \lesssim r^{\epsilon}$$ uniformly in time. Finally, if $j < \tilde k - (P + 1)$,
we have $2(\tilde k - P - 1 - j) - 2 + 3\epsilon > \epsilon$ so that by \eqref{s559} and Corollary \ref{cor511}
\begin{align*}
|\lambda_j(t,r)| \lesssim r^{2(\tilde k - (P + 1) - j) - 2 + 3\epsilon}.
\end{align*}
In conclusion, we have shown that
\begin{align*}
|\lambda_j(t,r)| &\lesssim r^{2(\tilde k - (P+1) -j) - 2 + 3 \epsilon}, \quad \forall 1 \leq j \leq \tilde k \mbox{ with }
j \neq \tilde k - (P+1), \\
|\lambda_{\tilde k - (P+1)}(t,r)| &\lesssim r^{\epsilon},
\end{align*}
uniformly in time. A similar argument establishes
\begin{align*}
|\mu_j(t,r)| &\lesssim r^{2(k - (P+1) -j) - 1 + 3 \epsilon}, \quad \forall 1 \leq j \leq k \mbox{ with } j \neq k - (P+1), \\
|\mu_{k - (P+1)}(t,r)| &\lesssim r^{\epsilon}.
\end{align*}
This proves Lemma \ref{l520}.
\end{proof}
By Lemma \ref{l520} and induction, we have proved Proposition \ref{p513}.
\end{proof}
The final step in proving Proposition \ref{p58} is to establish that $\lambda_1(0,r)$ has a limit as $r \rightarrow \infty$. In what follows
we denote $\lambda_j(r) = \lambda_j(0,r)$ and $\mu_j(r) = \mu_j(0,r)$.
\begin{lem}\label{l521}
There exists $\alpha \in \mathbb R$ such that
\begin{align}\label{s562}
|\lambda_1(r) - \alpha| = O(r^{-2}).
\end{align}
Moreover, we have the slightly improved decay rates
\begin{align}
\begin{split}\label{s563}
|\lambda_j(r)| &\lesssim r^{-2j}, \quad 1 < j \leq \tilde k, \\
|\mu_j(r)| &\lesssim r^{-2j - 1}, \quad 1 \leq j \leq k.
\end{split}
\end{align}
\end{lem}
\begin{proof}
By \eqref{s541},
\begin{align}
\begin{split}\label{s564}
\sum_{i = 2}^{\tilde k}& r^{2i - 3} |\lambda_i(t,r)| + r^{4i - d - 2} |\lambda_i(t,r)|^2
+ r^{6i - d - 4} |\lambda_i(t,r)|^3 \\
&+ \sum_{i = 1}^{k} r^{2i - 2} |\mu_i(t,r)| + r^{4i - d} |\mu_i(t,r)|^2
+ r^{6i - d -1} |\mu_i(t,r)|^3 \\
&\lesssim r^{-3 + 3\epsilon}
\end{split}
\end{align}
with the main contribution coming from the linear terms, and
\begin{align}
\begin{split}\label{s565}
r&^{2 - 3}|\lambda_{1}(t, r)| + r^{4 - d - 2}
|\lambda_{1}(t, r)|^2
+ r^{6 - d - 4}|\lambda_{1}(t, r)|^3
\lesssim r^{-1 + \epsilon}.
\end{split}
\end{align}
We insert \eqref{s564} and \eqref{s565} into our difference estimate \eqref{s528} and conclude that for all $r,r'$
with $R_1 < r \leq r' \leq 2r$
\begin{align*}
|\lambda_1(r') - \lambda_1(r) | \lesssim r^{-2 + \epsilon}.
\end{align*}
By Corollary \ref{c517}, we deduce that there exists $\alpha \in \mathbb R$ such that
\begin{align*}
|\lambda_1(r) - \alpha | \lesssim r^{-2+\epsilon}.
\end{align*}
In particular, $|\lambda_1(r)| \lesssim 1$. This improves the estimate \eqref{s565} to
\begin{align}
\begin{split}\label{s566}
r&^{2 - 3}|\lambda_{1}(t, r)| + r^{4 - d - 2}
|\lambda_{1}(t, r)|^2
+ r^{6 - d - 4}|\lambda_{1}(t, r)|^3
\lesssim r^{-1}.
\end{split}
\end{align}
We plug \eqref{s564} and \eqref{s566} back into our difference estimate \eqref{s528} and conclude that for all $r,r'$
with $R_1 < r \leq r' \leq 2r$
\begin{align*}
|\lambda_1(r') - \lambda_1(r) | \lesssim r^{-2}.
\end{align*}
Thus,
\begin{align*}
|\lambda_1(r) - \alpha| \lesssim r^{-2}.
\end{align*}
By \eqref{s564} and \eqref{s566} and the difference estimate \eqref{s528} we conclude that for the other coefficients,
for all $r,r'$ with $R_1 < r \leq r' \leq 2r$
\begin{align*}
|\lambda_j(r') - \lambda_j(r) | &\lesssim r^{-2j}, \\
|\mu_j(r') - \mu_j(r) | &\lesssim r^{-2j-1}.
\end{align*}
By \eqref{s541}, these coefficients go to 0 as $r \rightarrow \infty$ so that by Corollary \ref{c517} we conclude that
\begin{align*}
|\lambda_j(r) | &\lesssim r^{-2j}, \\
|\mu_j(r) | &\lesssim r^{-2j-1}.
\end{align*}
This completes the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{p58}]
By \eqref{s562}, \eqref{s563} and Lemma \ref{l54}
\begin{align*}
r^{d-2} u_e(0,r) &= \sum_{j = 1}^{\tilde k} \lambda_j(r) r^{2j - 2} \\
&= \lambda_1(r) + \sum_{j = 2}^{\tilde k} \lambda_j(r) r^{2j - 2} \\
&= \alpha + O(r^{-2})
\end{align*}
as well as
\begin{align*}
\int_r^{\infty} \partial_t u_e(0, \rho) \rho^{2i -1} d \rho &= \sum_{j = 1}^k \mu_j(r) \frac{r^{2i + 2j - d}}{d - 2i - 2j} \\
&= O(r^{2i - d - 1})
\end{align*}
as desired.
\end{proof}
We now establish Proposition \ref{p58} in the case that $d = 5, 9, 13,\ldots,$ i.e. when $d = 2\ell + 3$ with
$\ell$ \textbf{odd}. The case $\ell = 1$, $d = 5$, is contained in \cite{cpr}. When $\ell$ is odd, we have the identities
\begin{align*}
k = \tilde k = \frac{\ell + 1}{2}, \quad d = 4k + 1.
\end{align*}
The proof of Proposition \ref{p58} for when $\ell$ is odd is very similar to the case when $\ell$ is even but contains
subtleties because of the above identities. In particular, there is an extra $\mu$ coefficient,
$\mu_k$, which must be dealt with before we can proceed to showing $\lambda_j, \mu_{j-1}$ tend to 0 by induction.
We first establish an $\epsilon$--growth estimate for the coefficients.
\begin{lem}\label{l522}
Let $\epsilon > 0$ be fixed and sufficiently small. Then as long as $\delta_1$ as in Lemma
\ref{l59} is sufficiently small, we have uniformly in $t$,
\begin{align}
\begin{split}\label{s567a}
|\lambda_{k}(t,r)| &\lesssim r^{\epsilon}, \\
|\mu_k(t,r)| &\lesssim r^{\epsilon}, \\
|\lambda_j(t,r)| &\lesssim r^{2 k - 2j - 1 + 3\epsilon}, \quad \forall 1 \leq j < k, \\
|\mu_j(t,r)| &\lesssim r^{2 k - 2j - 2 + 3\epsilon}, \quad \forall 1 \leq j < k.
\end{split}
\end{align}
\end{lem}
\begin{proof}
Let $r > R_1$. By Corollary
\ref{c510} we have,
\begin{align}
\begin{split}\label{s567}
|\lambda_j(t,2r)| &\leq (1 + C \delta_1) |\lambda_j(t,r)| +
C \delta_1 \left ( \sum_{i = 1}^{k} r^{2i-2j} |\lambda_i(t,r)| +
\sum_{i = 1}^k r^{2i-2j+1} |\mu_i(t,r)| \right ), \\
|\mu_j(t,2r)| &\leq (1 + C \delta_1) |\mu_j(t,r)| +
r^{-1} C \delta_1 \left ( \sum_{i = 1}^{k} r^{2i-2j} |\lambda_i(t,r)| +
\sum_{i = 1}^k r^{2i-2j+1} |\mu_i(t,r)| \right ).
\end{split}
\end{align}
Fix $r_0 > R_1$ and define
\begin{align*}
b_n := \sum_{i = 1}^{k} (2^n r_0)^{2i-2k-1} |\lambda_i(t,2^n r_0)| +
\sum_{i = 1}^k (2^n r_0)^{2i-2k} |\mu_i(t,2^n r_0)|.
\end{align*}
Then by \eqref{s567}
\begin{align*}
b_{n+1} \leq ( 1 + 2Ck \delta_1 ) b_n.
\end{align*}
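To see this, multiply the first inequality in \eqref{s567} by $(2r)^{2j - 2k - 1}$ and the second by $(2r)^{2j - 2k}$ with $r = 2^n r_0$. Since these exponents are nonpositive for $1 \leq j \leq k$, we have $(2r)^{2j - 2k - 1} \leq r^{2j - 2k - 1}$ and $(2r)^{2j - 2k} \leq r^{2j - 2k}$, so that, for example,
\begin{align*}
(2^{n+1} r_0)^{2j - 2k - 1} |\lambda_j(t,2^{n+1} r_0)| \leq (1 + C\delta_1)(2^n r_0)^{2j - 2k - 1}|\lambda_j(t,2^n r_0)| + C \delta_1 b_n.
\end{align*}
Summing the resulting $2k$ inequalities over $j$ yields the claim after enlarging $C$.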
By iterating we obtain
\begin{align*}
b_n \leq ( 1 + 2Ck \delta_1 )^n b_0.
\end{align*}
Choose $\delta_1$ so small so that $1 + 2Ck \delta_1 < 2^{\epsilon}$. By the compactness of $\overrightarrow u_e(t)$,
$b_0 \lesssim 1$ uniformly in $t$, and we conclude that
\begin{align*}
b_n \leq 2^{n \epsilon} b_0 \lesssim 2^{n \epsilon}.
\end{align*}
By our definition of $b_n$ it follows that
\begin{align}\label{s568}
|\lambda_i(t,2^n r_0)| \lesssim (2^n r_0)^{2k - 2i + 1 + \epsilon}, \quad |\mu_i(t,2^n r_0)| \lesssim (2^n r_0)^{2k - 2i + \epsilon},
\end{align}
which is an improvement of \eqref{s527}.
As in the proof of Lemma \ref{l511}, we insert \eqref{s568} back into our difference estimates \eqref{s528} and \eqref{s529}
and conclude that
\begin{align}\label{s540}
|\lambda_j(t,2^{n+1} r_0) - \lambda_j(t,2^n r_0)| \leq C \delta_1 |\lambda_j(t,2^n r_0)| + C (2^n r_0)^{2 k - 2j - 1 + 3\epsilon}, \\
|\mu_j(t,2^{n+1} r_0)- \mu_j(t,2^n r_0)| \leq C \delta_1 |\mu_j(t,2^n r_0)| + C (2^n r_0)^{2 k - 2j - 2 + 3\epsilon},
\end{align}
with the dominant contribution coming from the cubic terms. By Corollary \ref{cor511} we conclude that uniformly in $t$
\begin{align*}
|\lambda_{k}(t,r)| &\lesssim r^{\epsilon}, \\
|\mu_k(t,r)| &\lesssim r^{\epsilon}, \\
|\lambda_j(t,r)| &\lesssim r^{2k - 2j - 1 + 3\epsilon}, \quad \forall 1 \leq j < k, \\
|\mu_j(t,r)| &\lesssim r^{2k - 2j - 2 + 3\epsilon}, \quad \forall 1 \leq j < k,
\end{align*}
as desired.
\end{proof}
We now turn to showing that the extra term $\mu_k$ goes to 0 as $r \rightarrow \infty$.
\begin{lem}\label{l523}
There exists a bounded function $\beta_k(t)$ on $\mathbb R$ such that
\begin{align}\label{s569}
|\mu_k(t,r) - \beta_k(t) | \lesssim r^{-2},
\end{align}
uniformly in $t$.
\end{lem}
\begin{proof}
We insert \eqref{s567a} into the difference estimate \eqref{s529} with $j=k$ and obtain for all $R_1 < r \leq r' \leq 2r$
\begin{align*}
|\mu_k(t,r') - \mu_k(t,r)| \lesssim r^{-2+3\epsilon}.
\end{align*}
The dominant contribution in the difference estimate \eqref{s529} comes from the cubic term $|\mu_k|^3$. By Corollary
\ref{c517}, we conclude that there exists a bounded function $\beta_k(t)$ such that
\begin{align*}
|\mu_k(t,r) - \beta_k(t)| \lesssim r^{-2+3\epsilon}.
\end{align*}
uniformly in $t$.
In particular, $|\mu_k(t,r)|\lesssim 1$ uniformly in $t$ and $r$. Using this information, we can improve the difference
estimate \eqref{s529} with $j = k$ to
\begin{align*}
|\mu_k(t,r') - \mu_k(t,r)| \lesssim r^{-2}
\end{align*}
and conclude that
\begin{align*}
|\mu_k(t,r) - \beta_k(t)| \lesssim r^{-2}
\end{align*}
uniformly in $t$ as desired.
\end{proof}
\begin{lem}\label{l524}
We have $\beta_k(t) \equiv 0$.
\end{lem}
\begin{proof}
The proof is similar in spirit to the proofs of Lemmas \ref{l518} and \ref{l519}. We first note that by Lemma
\ref{l54}, \eqref{s567a}, and the relation $d = 4k + 1$, we have
\begin{align}\label{s570}
|u_e(t,r)| = \Bigl | \sum_{j = 1}^k \lambda_j(t,r) r^{2j-d} \Bigr | \lesssim r^{2k - d+\epsilon} = r^{-2k -1 + \epsilon}.
\end{align}
By \eqref{s569} and \eqref{s532b}
\begin{align*}
\beta_{k}(t_2) - \beta_{k }(t_1)
&= \frac{1}{R} \int_R^{2R}[\mu_{k}(t_2,r) - \mu_{k}(t_1,r) ] dr + O(R^{-2}) \\
&= \sum_{i = 1}^k \frac{c_i c_{k}}{d - 2i - 2k}
\int_{t_1}^{t_2} [I(i, k) + II(i, k)] dt + O(R^{-2})
\end{align*}
where $I(i,k)$ and $II(i,k)$ are defined as in \eqref{s532c}. The estimates for the potential $V_e$ and nonlinearity
$N_e$, \eqref{s57}--\eqref{s59}, along with \eqref{s570} imply
\begin{align*}
\Bigl | -V_e(r) u_e + N_e(r, u_e) \Bigr |
\lesssim r^{-2k - 5 + \epsilon }.
\end{align*}
Using that $d = 4 k + 1$ we have
\begin{align*}
|II(i, k)| = \Bigl | \frac{1}{R} \int_R^{2R} r^{d - 2i - 2k} \int_r^\infty [ -V_e(\rho) u_e(t,\rho)
+ N_e(\rho,u_e(t,\rho)) ] \rho^{2i-1} d\rho dr \Bigr | \lesssim R^{-4+\epsilon}.
\end{align*}
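Here the power $R^{-4+\epsilon}$ follows from the relation $d = 4k + 1$: the inner integrand is $O(\rho^{2i - 2k - 6 + \epsilon})$, so
\begin{align*}
R^{d - 2i - 2k} \int_R^\infty \rho^{-2k - 5 + \epsilon} \rho^{2i - 1} d\rho
\simeq R^{2k + 1 - 2i} \cdot R^{2i - 2k - 5 + \epsilon} = R^{-4 + \epsilon}.
\end{align*}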
We now estimate the remaining term,
\begin{align*}
I(i,k) &= -\frac{1}{R} (u_e(t,r) r^{d-2k-1}) \big |_{r = R}^{r = 2R} + (2i - 2k - 1) \frac{1}{R}
\int_R^{2R} u_e(t,r) r^{d- 2k -2} dr \\
&\:- \frac{(2\ell - 2i + 3)(2i-2)}{R} \int_R^{2R} r^{d-2i-2k} \int_r^\infty u_e(t,\rho) \rho^{2i-3}d\rho dr.
\end{align*}
Using \eqref{s570}, it is simple to conclude that
\begin{align*}
|I(i,k)| \lesssim R^{-2 + \epsilon}.
\end{align*}
Thus,
\begin{align*}
\beta_k(t_2) - \beta_k(t_1) = O(R^{-2+ \epsilon})(1 + |t_2 - t_1|).
\end{align*}
We let $R \rightarrow \infty$ and conclude that $\beta_k(t_2) = \beta_k(t_1)$ as desired.
We now write $\beta_k$ in place of $\beta_k(t)$. By the previous paragraph, we have that
\begin{align*}
\beta_k = \mu_k(t,R) + O(R^{-2})
\end{align*}
where the $O(\cdot)$ term is uniform in time. Integrating the previous expression from 0 to $T$ and dividing by $T$ yields
\begin{align*}
\beta_{k} = \frac{1}{T} \int_0^T \mu_{k}(t,R) dt + O(R^{-2}).
\end{align*}
By Lemma \ref{l54}, \eqref{s570}, and the relations $d = 4 k + 1$, we have
\begin{align*}
\Bigl | \int_0^T \mu_{k}(t,R) dt \Bigr | &\lesssim
\sum_{i = 1}^k R^{d - 2i - 2k} \Bigl | \int_R^\infty \int_0^T \partial_t u_e(t,\rho) dt \rho^{2i - 1} d\rho \Bigr | \\
&\lesssim
\sum_{i = 1}^k R^{d - 2i - 2k} \int_R^\infty |u_e(T,\rho) - u_e(0,\rho)| \rho^{2i - 1} d\rho \\
&\lesssim R^{\epsilon}.
\end{align*}
It follows that
\begin{align*}
\beta_{k} = O ( R^{\epsilon} / T) + O(R^{-2}).
\end{align*}
We set $R = T$ and let $T \rightarrow \infty$ to conclude that $\beta_{k} = 0$ as desired.
\end{proof}
\begin{lem}\label{l525}
Let $\epsilon > 0$ be fixed and sufficiently small. Then as long as $\delta_1$ as in Lemma
\ref{l59} is sufficiently small, we have uniformly in $t$,
\begin{align}
\begin{split}\label{s571}
|\lambda_{k}(t,r)| &\lesssim r^{\epsilon}, \\
|\mu_{k-1}(t,r)| &\lesssim r^{\epsilon}, \\
|\lambda_j(t,r)| &\lesssim r^{2k - 2j - 2 + \epsilon}, \quad \forall 1 \leq j < k, \\
|\mu_j(t,r)| &\lesssim r^{2k - 2j - 3 + \epsilon}, \quad \forall 1 \leq j \leq k \mbox{ with } j \neq k - 1.
\end{split}
\end{align}
\end{lem}
\begin{proof}
We first establish
\begin{align*}
|\mu_k(t,r)| \lesssim r^{-3 + \epsilon},
\end{align*}
uniformly in time.
By \eqref{s567a}, we have uniformly in time
\begin{align}
\begin{split}\label{s572}
\sum_{i = 1}^k & r^{2i - 3} |\lambda_i(t,r)| + r^{4i - d - 2}|\lambda_i(t,r)|^2 + r^{6i - d -4} |\lambda_i(t,r)|^3 \\
&+ \sum_{i = 1}^{k-1} r^{2i - 2} |\mu_i(t,r)| + r^{4i - d} |\mu_i(t,r)|^2 + r^{6i - d -1} |\mu_i(t,r)|^3
\lesssim r^{2k - 3 + \epsilon},
\end{split}
\end{align}
where the dominant contribution comes from the linear term involving $\lambda_k$. By \eqref{s569} and the fact that $\beta_k = 0$,
we have
\begin{align}\label{s573}
r^{2k - 2} |\mu_k(t,r)| + r^{4k - d} |\mu_k(t,r)|^2 + r^{6k - d -1} |\mu_k(t,r)|^3 \lesssim r^{2k - 4},
\end{align}
uniformly in time. Inserting \eqref{s572} and \eqref{s573} into the difference estimate \eqref{s529} with $j = k$ implies that for all
$r,r'$ with $R_1 \leq r \leq r' \leq 2r$, we have
\begin{align*}
|\mu_k(t,r') - \mu_k(t,r)| \lesssim r^{-3 + \epsilon}
\end{align*}
uniformly in time. Since $\lim_{r \rightarrow \infty} \mu_k(t,r) = 0$, we conclude by Corollary \ref{c517} that
\begin{align}\label{s574}
|\mu_k(t,r)| \lesssim r^{-3 + \epsilon}
\end{align}
uniformly in time.
We now establish the other estimates in \eqref{s571}. Fix $r_0 > R_1$. By \eqref{s574}
\begin{align*}
(2^n r_0)^{2k - 2} |\mu_k(t,2^n r_0)| + (2^n r_0)^{4k - d} |\mu_k(t,2^n r_0)|^2 + (2^n r_0)^{6k - d -1}
|\mu_k(t,2^n r_0)|^3
\lesssim (2^n r_0)^{2k - 5 + \epsilon}
\end{align*}
uniformly in time. This estimate along with \eqref{s572} and the difference estimate \eqref{s528} imply for all $1 \leq j \leq k$
\begin{align*}
|\lambda_j(t,2^{n+1} r_0)| \leq (1 + C \delta_1) |\lambda_j(t,2^n r_0)| + C (2^n r_0)^{2k - 2j - 2 + \epsilon},
\end{align*}
uniformly in time. By Corollary \ref{cor511}, we conclude that
\begin{align*}
|\lambda_j(t,r)| &\lesssim r^{2k - 2j - 2 + \epsilon}, \quad \forall 1 \leq j < k, \\
|\lambda_k(t,r)| &\lesssim r^{\epsilon},
\end{align*}
uniformly in time. A similar argument establishes the remaining bounds in \eqref{s571} involving the $\mu_j$'s.
\end{proof}
As in the case that $\ell$ is even, we use Lemma \ref{l525} as the base case for an induction argument. In particular, we
will prove the following.
\begin{ppn}\label{p526}
Suppose $d = 5,9,13,\ldots$ and $\epsilon$, $\delta_1$,$r_0$ are as in Lemma \ref{l525}. For $P = 0, 1, \ldots, k-1$ the following estimates hold uniformly in time:
\begin{align}
\begin{split}\label{s575
|\lambda_j(t,r)| &\lesssim r^{2(k - P -j) - 2 + \epsilon}, \quad \forall 1 \leq j \leq k \mbox{ with } j \neq k - P, \\
|\lambda_{k - P}(t,r)| &\lesssim r^{\epsilon},\\
|\mu_j(t,r)| &\lesssim r^{2(k - P -j) - 3 + \epsilon}, \quad \forall 1 \leq j \leq k \mbox{ with } j \neq k - P-1, \\
|\mu_{k - P-1}(t,r)| &\lesssim r^{\epsilon}.
\end{split}
\end{align}
\end{ppn}
If we take $P = k-1$ in Proposition \ref{p526}, then we obtain the following.
\begin{ppn}\label{p527}
With the same hypotheses as in Proposition \ref{p526}, the following estimates hold uniformly in time:
\begin{align}
\begin{split}\label{s576}
|\lambda_j(t,r)| &\lesssim r^{-2j + \epsilon}, \quad \forall 1 < j \leq k, \\
|\lambda_1(t,r)| &\lesssim r^{\epsilon}, \\
|\mu_j(t,r)| &\lesssim r^{-2j -1 + \epsilon}, \quad \forall 1 \leq j \leq k.
\end{split}
\end{align}
\end{ppn}
\begin{proof}[Proof of Proposition \ref{p526}]
The proof of Proposition \ref{p526} is nearly identical to the proof of Proposition \ref{p513}. Therefore, we will only
outline the main steps of the proof and refer the reader to the proofs given for the case that $\ell$ is even for the details.
The proof is by induction on $P$. The case $P = 0$ is covered in Lemma \ref{l525}.
We now assume that \eqref{s575} holds for some $P$ with $0 \leq P < k-1$.
\begin{description}
\item[Step 1] There exist bounded functions $\alpha_{k-P}(t)$ and $\beta_{k-P-1}(t)$ defined on $\mathbb R$ such that
\begin{align*}
|\lambda_{k - P}(t,r) - \alpha_{k-P}(t)| &\lesssim r^{-2}, \\
|\mu_{k - P - 1}(t,r) - \beta_{k-P-1}(t)| &\lesssim r^{-1},
\end{align*}
uniformly in $t$. For details, see the proof of Lemma \ref{l514}.
\item[Step 2] We have
\begin{align*}
r^{d - 2(k-P)} u_e(t,r) = \alpha_{k-P}(t) + O ( r^{-2 + \epsilon} ),
\end{align*}
where the $O(\cdot)$ term is uniform in time. For details, see the proof of Corollary \ref{c516}.
\item[Step 3] The function $\alpha_{k-P}(t)$ is constant in time and from now on we write $\alpha_{k-P}$ in place of
$\alpha_{k - P}(t)$. For details, see the proof of Lemma \ref{l517}.
\item[Step 4] We have $\alpha_{k-P} = 0$ and $\beta_{k - P - 1}(t)$ is constant in time. From now on we write $\beta_{k - P - 1}$
in place of $\beta_{k - P - 1}(t)$. For details, see the proof of Lemma \ref{l518}.
\item[Step 5] We have $\beta_{k - P - 1} = 0$. For details, see the proof of Lemma \ref{l519}.
From Steps 1--5, we conclude that
\begin{align}
\begin{split}\label{s577}
\lambda_{k - P}(t,r) &= O(r^{-2}), \\
\mu_{k - P - 1}(t,r) &= O(r^{-1}),
\end{split}
\end{align}
uniformly in time. Inserting \eqref{s575} and \eqref{s577} into the difference estimates \eqref{s528} and \eqref{s529},
we conclude
that the following holds.
\item[Step 6] If \eqref{s575} holds for all $0 \leq P < k-1$, then \eqref{s575} holds for $P + 1$. For details,
see the proof of Lemma \ref{l520}.
\end{description}
By induction and Step 6, we have proved Proposition \ref{p526}.
\end{proof}
As in the case that $\ell$ is even, from Proposition \ref{p527} we deduce the following behavior for $\lambda_1$.
\begin{lem}\label{l528}
There exists $\alpha \in \mathbb R$ such that
\begin{align}\label{s578}
|\lambda_1(r) - \alpha| = O(r^{-2}).
\end{align}
Moreover, we have the slightly improved decay rates
\begin{align}
\begin{split}\label{s579}
|\lambda_j(r)| &\lesssim r^{-2j}, \quad \forall 1 < j \leq k, \\
|\mu_j(r)| &\lesssim r^{-2j - 1}, \quad \forall 1 \leq j \leq k.
\end{split}
\end{align}
\end{lem}
\begin{proof}
The proof is identical to the proof of Lemma \ref{l521}.
\end{proof}
The proof of Proposition \ref{p58} for the case that $\ell$ is odd is identical to the case that $\ell$ is even, and we omit it.
\subsubsection*{Step 3: Conclusion of the Proof of Proposition \ref{p53}}
Let $\alpha$ be as in Proposition \ref{p58}. We now show that there exists a unique static solution $U_+$ to \eqref{s52e} such that
$\overrightarrow u(0) = (U_+, 0)$ on $r \geq \eta$ (where $U_+$ does not depend on $\eta$). We distinguish two cases: $\alpha = 0$ and $\alpha \neq 0$. For the case $
\alpha = 0$, we will show that $\overrightarrow u(0) = (0,0)$ on $r \geq \eta$. We first show that if $\alpha = 0$, then
$\overrightarrow u(0,r)$ is compactly supported.
\begin{lem}\label{l529}
Let $\overrightarrow u_e$ be as in Proposition \ref{p53}, and let $\alpha$ be as in Proposition \ref{p58}. If $\alpha = 0$, then
$\overrightarrow u(0,r)$ is compactly supported in $r \in (\eta,\infty)$.
\end{lem}
\begin{proof}
If $\alpha = 0$, then by Lemma \ref{l521} and Lemma \ref{l528}, we have
\begin{align}
\begin{split}\label{s580}
|\lambda_j(r)| &\lesssim r^{-2j}, \quad \forall 1 \leq j \leq \tilde k, \\
|\mu_j(r)| &\lesssim r^{-2j - 1}, \quad \forall 1 \leq j \leq k.
\end{split}
\end{align}
Thus, there exists $C_1$ such that for all $r \geq \eta$
\begin{align}\label{s581}
\sum_{j = 1}^{\tilde k} r^{2j} |\lambda_j(r)| + \sum_{j = 1}^k r^{2j + 1} |\mu_j(r)| \leq C_1.
\end{align}
Fix $r_0 > R_1$. By Corollary \ref{c510}, we have
\begin{align*}
|\lambda_j(2^{n+1} r_0)| \geq ( 1 &- C \delta_1 ) |\lambda_j(2^n r_0)| \\&- C \delta_1 (2^n r_0)^{-2j}
\left [
\sum_{i = 1}^{\tilde k} (2^n r_0)^{2i} |\lambda_i(2^n r_0)| + \sum_{i = 1}^k (2^n r_0)^{2i + 1} |\mu_i(2^n r_0)|
\right ],
\end{align*}
and
\begin{align*}
|\mu_j(2^{n+1} r_0)| \geq ( 1 &- C \delta_1 ) |\mu_j(2^n r_0)| \\&- C \delta_1 (2^n r_0)^{-2j-1}
\left [
\sum_{i = 1}^{\tilde k} (2^n r_0)^{2i} |\lambda_i(2^n r_0)| + \sum_{i = 1}^k (2^n r_0)^{2i + 1} |\mu_i(2^n r_0)|
\right ].
\end{align*}
We conclude that
\begin{align*}
\sum_{i = 1}^{\tilde k} (2^{n+1} r_0)^{2i} |\lambda_i(2^{n+1} r_0)|&+ \sum_{i = 1}^k (2^{n+1} r_0)^{2i + 1} |\mu_i(2^{n+1} r_0)|\\
&\geq 4 \Bigl ( 1 - C \delta_1 (\tilde k + k + 1) 2^{2k + 1} \Bigr ) \left [
\sum_{i = 1}^{\tilde k} (2^n r_0)^{2i} |\lambda_i(2^n r_0)| + \sum_{i = 1}^k (2^n r_0)^{2i + 1} |\mu_i(2^n r_0)| \right ].
\end{align*}
If we fix $\delta_1$ so small so that $C \delta_1 (\tilde k + k + 1) 2^{2k + 1} < \frac{1}{2}$, then we conclude that
\begin{align*}
\sum_{i = 1}^{\tilde k} (2^{n+1} r_0)^{2i} |\lambda_i(2^{n+1} r_0)|&+ \sum_{i = 1}^k (2^{n+1} r_0)^{2i + 1} |\mu_i(2^{n+1} r_0)|\\
&\geq 2 \left [
\sum_{i = 1}^{\tilde k} (2^n r_0)^{2i} |\lambda_i(2^n r_0)| + \sum_{i = 1}^k (2^n r_0)^{2i + 1} |\mu_i(2^n r_0)| \right ].
\end{align*}
Iterating, we conclude that for all $n \geq 0$,
\begin{align*}
\sum_{i = 1}^{\tilde k} (2^n r_0)^{2i} |\lambda_i(2^n r_0)| + \sum_{i = 1}^k (2^n r_0)^{2i + 1} |\mu_i(2^n r_0)|
\geq 2^n \left [
\sum_{i = 1}^{\tilde k} (r_0)^{2i} |\lambda_i(r_0)| + \sum_{i = 1}^k (r_0)^{2i + 1} |\mu_i(r_0)| \right ].
\end{align*}
By \eqref{s581}, we obtain for all $n \geq 0$,
\begin{align*}
\sum_{i = 1}^{\tilde k} (r_0)^{2i} |\lambda_i(r_0)| + \sum_{i = 1}^k (r_0)^{2i + 1} |\mu_i(r_0)| \leq 2^{-n} C_1.
\end{align*}
Letting $n \rightarrow \infty$ implies that
\begin{align*}
\lambda_j(r_0) = \mu_i(r_0) = 0, \quad \forall 1 \leq j \leq \tilde k, 1 \leq i \leq k.
\end{align*}
By Lemma \ref{l54} and Lemma \ref{l55}, it follows that $\| \overrightarrow u_e(0) \|_{\mathcal H(r \geq r_0)} = 0$. Thus,
$(\partial_r u_{e,0}, u_{e,1})$ is compactly supported in $(\eta,\infty)$. Since
\begin{align*}
\lim_{r \rightarrow \infty} u_{e,0}(r) = 0,
\end{align*}
we conclude that $(u_{e,0},u_{e,1})$ is compactly supported as well.
\end{proof}
\begin{lem}\label{l530}
Let $\overrightarrow u_e$ be as in Proposition \ref{p53}, and let $\alpha$ be as in Proposition \ref{p58}. If $\alpha = 0$, then
$\overrightarrow u(0,r) = (0,0)$ on $r \geq \eta$.
\end{lem}
\begin{proof}
If $\alpha = 0$, then by Lemma \ref{l529}, $\overrightarrow u_e(0) =
(u_{e,0}(r), u_{e,1}(r))$ is compactly supported in $(\eta,\infty)$. Thus, we may define
\begin{align*}
\rho_0 := \inf \left \{ \rho : \| \overrightarrow u_e(0) \|_{\mathcal H(r \geq \rho)} = 0 \right \} < \infty.
\end{align*}
We now argue by contradiction and assume that $\rho_0 > \eta$. Let $\epsilon > 0$ be a small constant to be fixed later, and choose $\rho_1 \in (\eta,\rho_0)$
close to $\rho_0$ so that
\begin{align}\label{s582}
0 < \| \overrightarrow u_e(0) \|_{\mathcal H(r \geq \rho_1)} < \epsilon.
\end{align}
By Lemma \ref{l54}, we have
\begin{align*}
0 &= \| \overrightarrow u_e(0) \|_{\mathcal H(r \geq \rho_0)}^2 \\
&\simeq
\sum_{i = 1}^{\tilde k} \Bigl ( \lambda_i(\rho_0) \rho_0^{2i - \frac{d+2}{2}} \Bigr )^2
+ \sum_{j = 1}^{k} \Bigl ( \mu_j(\rho_0) \rho_0^{2j - \frac{d}{2}} \Bigr )^2 \\
&\:+ \int_{\rho_0}^\infty
\sum_{i = 1}^{\tilde k} \Bigl ( \partial_r \lambda_i(r) r^{2i - \frac{d+1}{2}} \Bigr )^2
+ \sum_{j = 1}^{k} \Bigl ( \partial_r \mu_j(r) r^{2j - \frac{d-1}{2}} \Bigr )^2 dr.
\end{align*}
Thus, $\lambda_j(\rho_0) = \mu_i(\rho_0) = 0$ for all $1 \leq j \leq \tilde k, 1 \leq i \leq k$.
A simple reworking of the proofs of Lemma \ref{l57} and Lemma \ref{l55} shows that as long as $\epsilon$ and
$|\rho_0 - \rho_1|$ are sufficiently small, we have for all $\rho$ with
$1 < \rho_1 \leq \rho \leq \rho_0$,
\begin{align}
\begin{split}\label{s583}
\| \pi_\rho^{\perp} \overrightarrow u_e(t) \|_{\mathcal H(r \geq \rho)} \lesssim (\rho_0 - \rho)^{1/3} \| \pi_\rho \overrightarrow u_e(t) \|_{\mathcal H(r \geq \rho)} +
\| \pi_\rho \overrightarrow u_e(t) \|_{\mathcal H(r \geq \rho)}^2 + \| \pi_\rho \overrightarrow u_e(t) \|_{\mathcal H(r \geq \rho)}^3,
\end{split}
\end{align}
where the implied constant is independent of $\rho$. In the argument, smallness is achieved by taking $\epsilon$ and
$|\rho_0 - \rho_1|$ sufficiently small, cutting off the potential term to the exterior region $\{ \rho + t \leq r \leq \rho_0 + t \}$,
and using the compact support of $\overrightarrow u_e$ along with finite speed of propagation. By taking $\rho_1$ even closer to $\rho_0$ so that
$|\rho_0 - \rho_1| < \epsilon^3$, we conclude as in Corollary \ref{c510} that
\begin{align*}
|\lambda_j(\rho_0) - \lambda_j(\rho_1)| &\leq C\epsilon \left ( \sum_{i = 1}^{\tilde k} |\lambda_i(\rho_1)| +
\sum_{i = 1}^k |\mu_i(\rho_1)| \right ), \\
|\mu_j(\rho_0) - \mu_j(\rho_1)| &\leq C\epsilon \left ( \sum_{i = 1}^{\tilde k} |\lambda_i(\rho_1)| +
\sum_{i = 1}^k |\mu_i(\rho_1)| \right ).
\end{align*}
Since $\lambda_j(\rho_0) = \mu_j(\rho_0) = 0$ we conclude by summing the previous expressions that
\begin{align*}
\sum_{i = 1}^{\tilde k} |\lambda_i(\rho_1)| +
\sum_{i = 1}^k |\mu_i(\rho_1)| \leq C(k + \tilde k)\epsilon \left ( \sum_{i = 1}^{\tilde k} |\lambda_i(\rho_1)| +
\sum_{i = 1}^k |\mu_i(\rho_1)| \right ).
\end{align*}
By fixing $\epsilon$ sufficiently small, it follows that
\begin{align*}
\sum_{i = 1}^{\tilde k} |\lambda_i(\rho_1)| +
\sum_{i = 1}^k |\mu_i(\rho_1)| = 0.
\end{align*}
Thus, $\lambda_j(\rho_1) = \mu_j(\rho_1) = 0$. By Lemma \ref{l54} and \eqref{s583}, we conclude that
\begin{align*}
\| \overrightarrow u_e(0) \|_{\mathcal H(r \geq \rho_1)} = 0
\end{align*}
which contradicts \eqref{s582}. Thus, we must have $\rho_0 = \eta$ and $\overrightarrow u_e(0,r) = (0,0)$ for $r \geq \eta$ as desired.
\end{proof}
From the previous argument, we conclude even more for the case $\alpha = 0$.
\begin{lem}\label{allt lem}
Let $\alpha$ be as in Lemma \ref{l514}. If $\alpha = 0$, then
\begin{align*}
\overrightarrow u(t,r) = (0,0), \quad \forall (t,r) \in \mathbb R \times (0,\infty).
\end{align*}
\end{lem}
\begin{proof}
By Lemma \ref{l530} we know that if $\alpha = 0$ then $\overrightarrow u(0,r) = (0,0)$
on $\{ r \geq \eta\}$.
By finite speed of propagation, we conclude that
\begin{align}\label{finite speed}
\overrightarrow u(t,r) = (0,0) \quad \mbox{ on } \{ r \geq |t| + \eta \}.
\end{align}
Let $t_0 \in \mathbb R$ be arbitrary and define $u_{t_0}(t,r) = u(t+t_0,r)$. Then $\overrightarrow u_{t_0}$ inherits the following compactness property from $\overrightarrow u$:
\begin{align*}
\forall R \geq 0, \quad \lim_{|t| \rightarrow \infty} \| \overrightarrow u_{t_0}(t) \|_{\mathcal H(r \geq R + |t|; \langle r \rangle^{d-1} dr )} &= 0, \\
\lim_{R \rightarrow \infty} \left [ \sup_{t \in \mathbb R} \| \overrightarrow u_{t_0}(t) \|_{\mathcal H(r \geq R + |t|; \langle r \rangle^{d-1} dr)} \right ] &= 0,
\end{align*}
and by \eqref{finite speed} $\overrightarrow u_{t_0}(0,r)$ is supported in $\{ 0 < r
\leq \eta + |t_0| \}$. By the proof of Lemma \ref{l530} applied to $\overrightarrow u_{t_0}$ we conclude that $\overrightarrow u_{t_0}(0,r) = (0,0)$ on $r \geq \eta$. Since $t_0$ was arbitrary, we conclude that
\begin{align*}
\overrightarrow u(t_0,r) = (0,0) \quad \mbox{on } \{ r \geq \eta \},
\end{align*}
for any $t_0 \in \mathbb R$. Since $\eta > 0$ was arbitrarily fixed in the beginning of this subsection, we conclude that
\begin{align*}
\overrightarrow u(t,r) = (0,0), \quad \forall (t,r) \in \mathbb R \times (0,\infty).
\end{align*}
\end{proof}
We now consider the general case for $\alpha$.
\begin{lem}\label{l531}
Let $\alpha$ be as in Lemma \ref{l514}. As before, we denote the unique $\ell$--equivariant finite
energy harmonic map of degree $n$ by $Q$ and recall that there exists a unique
$\alpha_{\ell,n} > 0$ such that
\begin{align*}
Q(r) = n\pi - \alpha_{\ell,n} r^{-\ell-1} + O(r^{-\ell-3}) \quad \mbox{ as } r \rightarrow \infty.
\end{align*}
Let $Q_{\alpha - \alpha_{\ell,n}}$ denote the unique solution to \eqref{sa21} with the property that
\begin{align}\label{s91}
Q_{\alpha - \alpha_{\ell,n}}(r) = n\pi + (\alpha - \alpha_{\ell,n}) r^{-\ell-1} + O(r^{-\ell-3}) \quad \mbox{ as } r \rightarrow \infty.
\end{align}
Note that $Q_{\alpha - \alpha_{\ell,n}}$ exists and is unique by
Proposition \ref{pa22}. Define a static solution $U_+$ to \eqref{s31} via
\begin{align*}
U_+(r) = \langle r \rangle^{-\ell} \bigl ( Q_{\alpha - \alpha_{\ell,n}}(r) - Q(r) \bigr ).
\end{align*}
Then
\begin{align*}
\overrightarrow u(t,r) = (U_+(r),0), \quad \forall (t,r) \in \mathbb R \times (0,\infty).
\end{align*}
\end{lem}
\begin{proof}
Lemma \ref{l531} follows from the proof for the $\alpha = 0$ case and a change of variables. Let $Q_{\alpha - \alpha_{\ell,n}}$ be as in the statement of the lemma. We define
\begin{align}
\begin{split}\label{s92}
u_{\alpha}(t,r) &:= u(t,r) - \langle r \rangle^{-\ell} \left ( Q_{\alpha - \alpha_{\ell,n}}(r) - Q(r) \right ) \\
&= u(t,r) - U_+(r)
\end{split}
\end{align}
and observe that $u_{\alpha}$ solves
\begin{align*}
\partial_t^2 u_{\alpha} - \partial_r^2 u_\alpha - \frac{(d-1)r}{r^2 + 1} \partial_r u_\alpha + V_\alpha(r) u_\alpha = N_\alpha(r,u_\alpha),
\end{align*}
where the potential $V_\alpha$ is given by
\begin{align}\label{s93}
V_\alpha(r) = \ell^2 \langle r \rangle^{-4} + 2 \langle r \rangle^{-2} ( \cos 2 Q_{\alpha - \alpha_{\ell,n}} - 1 ),
\end{align}
and $N_\alpha(r,u) = F_\alpha(r,u) + G_\alpha(r,u)$ with
\begin{align}
\begin{split}\label{s94}
F_\alpha(r,u) &= \ell(\ell+1) \langle r \rangle^{-\ell-2} \sin^2 (\langle r \rangle^\ell u) \sin 2 Q_{\alpha - \alpha_{\ell,n}} , \\
G_\alpha(r,u) &= \frac{\ell(\ell+1)}{2} \langle r \rangle^{-\ell-2} \left [ 2 \langle r \rangle^\ell u - \sin (2 \langle r \rangle^\ell u) \right ] \cos 2 Q_{\alpha - \alpha_{\ell,n}} .
\end{split}
\end{align}
By \eqref{s91}, the potential $V_\alpha$ is smooth and satisfies
\begin{align*}
V_\alpha(r) = \ell^2 \langle r \rangle^{-4} + O ( \langle r \rangle^{-2\ell-4} ),
\end{align*}
as $r \rightarrow \infty$ and the nonlinearities $F_\alpha$ and $G_\alpha$ satisfy
\begin{align*}
|F_\alpha(r,u)| &\lesssim \langle r \rangle^{-3} |u|^2, \\
|G_\alpha(r,u)| &\lesssim \langle r \rangle^{d-5}|u|^3,
\end{align*}
for $r \geq 0$. Moreover, by \eqref{s92} we see that $\overrightarrow u_{\alpha}$ inherits the compactness property from $\overrightarrow u$:
\begin{align}
\begin{split}\label{comp prop al}
\forall R \geq 0, \quad \lim_{|t| \rightarrow \infty} \| \overrightarrow u_{\alpha}(t) \|_{\mathcal H( r \geq R + |t|; \langle r \rangle^{d-1} dr)} = 0, \\
\lim_{R \rightarrow \infty} \left [ \sup_{t \in \mathbb R} \| \overrightarrow u_{\alpha}(t) \|_{\mathcal H( r \geq R + |t|; \langle r \rangle^{d-1} dr)} \right ] = 0.
\end{split}
\end{align}
Let $\eta > 0$. We now define for $r \geq \eta$,
\begin{align}\label{s96}
u_{\alpha,e}(t,r) := \frac{\langle r \rangle^{(d-1)/2}}{r^{(d-1)/2}} u_{\alpha}(t,r)
\end{align}
and note that $u_{\alpha,e}$ satisfies an equation analogous to $u_e$:
\begin{align}\label{s97}
\partial_t^2 u_{\alpha,e} - \partial^2_r u_{\alpha,e} - \frac{d-1}{r} \partial_r u_{\alpha,e} + V_{\alpha,e}(r) u_{\alpha,e} = N_{\alpha,e} (r,u_{\alpha,e}),
\quad t \in \mathbb R, r \geq \eta,
\end{align}
where
\begin{align*}
V_{\alpha,e}(r) = V_\alpha(r) - \frac{(d-1)(d-4)}{2 r^2 \langle r \rangle^2}
+ \frac{(d-1)(d-5)}{4 r^{2} \langle r \rangle^{4}},
\end{align*}
and $N_{\alpha,e}(r,u_e) = F_{\alpha,e}(r,u_e) + G_{\alpha,e}(r,u_e)$ where
\begin{align*}
F_{\alpha,e}(r,u_{\alpha,e}) &= \frac{\langle r \rangle^{(d-1)/2}}{r^{(d-1)/2}} F_\alpha \left (r, \frac{r^{(d-1)/2}}{\langle r \rangle^{(d-1)/2}} u_{\alpha,e} \right ), \\
G_{\alpha,e}(r,u_{\alpha,e}) &= \frac{\langle r \rangle^{(d-1)/2}}{r^{(d-1)/2}} G_\alpha \left (r, \frac{r^{(d-1)/2}}{\langle r \rangle^{(d-1)/2}} u_{\alpha,e} \right ).
\end{align*}
In particular, we have the analogues of \eqref{s57}, \eqref{s58}, and \eqref{s59}: for all $r > 0$,
\begin{align}
| V_{\alpha,e}(r) | &\lesssim r^{-4}, \label{s98} \\
|F_{\alpha,e}(r,u)| &\lesssim r^{-3} |u|^{2}, \label{s99} \\
|G_{\alpha,e}(r,u)| &\lesssim r^{d-5}|u|^3. \label{s910}
\end{align}
Moreover, $u_{\alpha,e}$ inherits the following compactness properties from $u_\alpha$:
\begin{align}
\begin{split}\label{s911}
\forall R \geq \eta, \quad \lim_{|t| \rightarrow \infty} \| \overrightarrow u_{\alpha,e}(t) \|_{\mathcal H( r \geq R + |t|; r^{d-1} dr)} = 0, \\
\lim_{R \rightarrow \infty} \left [ \sup_{t \in \mathbb R} \| \overrightarrow u_{\alpha,e}(t) \|_{\mathcal H( r \geq R + |t|; r^{d-1} dr)} \right ] = 0.
\end{split}
\end{align}
Finally, by construction we see that
\begin{align}
\begin{split}\label{e100}
r^{2-d} u_{\alpha, e,0}(r) &= O(r^{-2}), \\
\int_r^\infty u_{\alpha,e,1}(\rho) \rho^{2j-1} d\rho &= O(r^{2j - d - 1}), \quad j = 1, \ldots,k.
\end{split}
\end{align}
Using \eqref{s97}--\eqref{e100}, we may repeat the previous arguments with $u_{\alpha,e}$ in place
of $u_e$ to conclude the following analog of Lemma \ref{l530}:
\begin{lem}\label{l530 al}
$\overrightarrow u_{\alpha}(0,r) = (0,0)$ for $r \geq \eta$.
\end{lem}
Finally, we obtain the following analog of Lemma \ref{allt lem}:
\begin{lem}\label{allt lem al}
We have
\begin{align*}
\overrightarrow u_{\alpha}(t,r) = (0,0)
\end{align*}
for all $t \in \mathbb R$ and $r > 0$.
\end{lem}
Equivalently, Lemma \ref{allt lem al} states that
\begin{align*}
\overrightarrow u(t,r) = (U_+(r),0)
\end{align*}
for all $t \in \mathbb R$ and $r > 0$. This concludes the proof of Lemma \ref{l531} and Proposition \ref{p53}.
\end{proof}
\subsection{Proof of Proposition \ref{static soln}}
Using Proposition \ref{p53} and its analog for $r < 0$, we quickly conclude the proof of Proposition \ref{static soln}. Indeed, we know that there exist static solutions $U_{\pm}$ to \eqref{s31} such that
\begin{align}\label{allt pmr}
\overrightarrow u(t,r) = (U_{\pm}(r), 0)
\end{align}
for all $\pm r > 0$ and $t \in \mathbb R$. In particular, $\partial_t u(t,r) = 0$,
$\partial_r u(t,r) = \partial_r u(0,r)$ and $u(t,r) = u(0,r)$ for all $t$ and almost every $r$. Let $\psi \in C^\infty_0(\mathbb R)$ with
$\int \psi dt = 1$ and let $\varphi \in C^\infty_0(\mathbb R)$. Then since
$u$ solves \eqref{s31} in the weak sense, we conclude that
\begin{align*}
0 &= \int \int \bigl [ \psi'(t) \varphi(r) \partial_t u(t,r) + \psi (t)\varphi'(r) \partial_r u(t,r) + V(r) \psi(t) \varphi(r) u(t,r) \\&\hspace{1.2 cm} - \psi(t) \varphi(r) N(r,u(t,r)) \bigr ] \langle r \rangle^{d-1} dr dt \\
&= \int \int \psi(t) \bigl [ \varphi'(r) \partial_r u(0,r) + V(r) \varphi(r) u(0,r) - \varphi(r) N(r,u(0,r)) \bigr ] \langle r \rangle^{d-1} dr dt \\
&= \int \bigl [ \varphi'(r) \partial_r u(0,r) + V(r) \varphi(r) u(0,r) - \varphi(r) N(r,u(0,r)) \bigr ] \langle r \rangle^{d-1} dr.
\end{align*}
Since $\varphi$ was arbitrary, we see that $u(0,r)$ is a weak solution
in $H^1(\mathbb R)$ to the static equation $-\partial_r^2 u - \frac{(d-1)r}{r^2+1} \partial_r u + V(r) u = N(r,u)$ on $\mathbb R$. By simple elliptic arguments we conclude that $u(0,r)$ is a classical solution. Thus, $\overrightarrow u(t,r) = (U(r),0) := (u(0,r),0)$ for all $t,r \in \mathbb R$ as desired.
\qed
\subsection{Proofs of Proposition \ref{p51} and Theorem \ref{t21}}
We now briefly summarize the proofs of Proposition \ref{p51} and Theorem \ref{t21}.
\begin{proof}[Proof of Proposition \ref{p51}]
By Proposition \ref{static soln}, we have that $\overrightarrow u = (U, 0)$ for some finite energy static solution to \eqref{s31}. Thus,
$\psi = Q_{\ell,n} + \langle r \rangle^\ell u$ is a finite energy solution to \eqref{s21}, i.e. a harmonic map.
By the uniqueness part of Proposition \ref{pa21}, we conclude that
$\overrightarrow u = (0,0)$ as desired.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t21}]
Suppose that Theorem \ref{t21} fails. Then by Proposition \ref{p34}, there exists a nonzero solution $u_*$ to \eqref{s31} such that
the trajectory
\begin{align*}
K := \left \{ \overrightarrow u_*(t) : t \in \mathbb R \right \},
\end{align*}
is precompact in $\mathcal H(\mathbb R; \langle r \rangle^{d-1} dr)$. By Proposition \ref{p51} we conclude that $\overrightarrow u_* = (0,0)$, which contradicts the fact
that $u_*$ is nonzero. Thus, Theorem \ref{t21} must hold.
\end{proof}
\section{Impact on Astrophysical Analyses}
\label{sec:astrophysics}
Systematic errors in Advanced LIGO’s calibrated data, such as those introduced by not compensating for the time-dependence of some calibration model parameters, have the potential to impact astrophysical results that flow from the reconstructed strain data.
Calibration errors have a complex frequency structure, especially when frequency-dependent temporal variations are ignored.
Fig.~\ref{fig:fc_miscal} shows examples of calibration errors with different intentional offsets of the time-dependent cavity pole frequency parameter $f_{\rm cc}$.
Fig.~\ref{fig:all_miscal} shows examples of calibration errors with different intentional offsets of TDCF multipliers.
With the exception of the nominal calibration error in each plot, these figures represent simulations of potential calibration errors rather than realized calibration errors from previous observing runs.
A full discussion of the realized calibration errors in previous observing runs, including both the calibration systematic error and its associated uncertainty, can be found in previous publications~\cite{Craig,Sun:2020wke,Sun:2021qcg}.
Since systematic errors in LIGO’s calibrated data impact the reconstructed strain and astrophysical source parameters impact the physical strain, potentially in similar ways, we sought to address the question of how much the systematic error caused by not compensating for changes in the TDCF filters could impact astrophysical results.
We investigated a specific scenario related to this question by studying the impact of the additional systematic error caused by not compensating for temporal changes in the coupled cavity pole frequency $f_{\rm cc}$ on the source parameter estimation of a binary neutron star (BNS) system.
In addition, we separately studied how the systematic error caused by not compensating for TDCF multipliers would impact the source parameter estimation of the same BNS system.
Previous studies have developed sophisticated frameworks to fully incorporate the calibration systematic error and its associated uncertainty into the parameter estimation procedure.
We have used a much simpler framework than those developed in these previous studies~\cite{Payne:2020myg, Vitale:2020gvb}.
We apply systematic calibration errors, both including and not including compensation for TDCFs, to a simulated signal in order to study biases in the parameter estimation results that are a consequence of the systematic error only.
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/fc_cal_errors_H1.pdf}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/fc_cal_errors_L1.pdf}
\label{fig:sub2}
\end{subfigure}
\caption{\label{fig:fc_miscal} Magnitude (top plots) and phase (bottom plots) of calibration errors at H1 (left) and L1 (right) when purposely offsetting the cavity pole frequency $f_{\rm cc}$ by $5\%$, $10\%$, and $20\%$ from its nominal value of 410.6 Hz (H1) or 454.0 Hz (L1). For the H1 results, $f_{\rm cc}$ was increased by $5\%$, $10\%$, and $20\%$, and for the L1 results, $f_{\rm cc}$ was decreased by these percentages. The solid line in each figure indicates the calibration errors present for nominal choices of all TDCFs.}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/all_cal_errors_H1.pdf}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/all_cal_errors_L1.pdf}
\label{fig:sub2}
\end{subfigure}
\caption{\label{fig:all_miscal} Magnitude (top plots) and phase (bottom plots) of calibration errors at H1 (left) and L1 (right) when purposely offsetting the TDCF multipliers ($\kappa_{\rm c}$, $\kappa_{\rm T}$, $\kappa_{\rm P}$, and $\kappa_{\rm U}$) by $5\%$, $10\%$, and $20\%$. For the H1 results, the TDCF multipliers were increased by $5\%$, $10\%$, and $20\%$, and for the L1 results, the TDCF multipliers were decreased by these percentages. The solid line in each figure indicates the calibration errors present for nominal choices for all TDCFs. The total systematic error induced by adjusting all of the TDCF multipliers can be quite large, relative to the individual adjustment of each TDCF multiplier. For example, a 20\% decrease in all TDCF multipliers leads to errors as large as 80\% in magnitude and $>60$ degrees in phase in certain frequency ranges.}
\end{figure*}
In general, during O2 and O3, the systematic error present in the calibrated strain data is estimated numerically by producing many iterations of possible response functions.
The set of possible response functions are produced by using a combination of a Markov Chain Monte Carlo (MCMC) method and Gaussian process regression (GPR).
The MCMC is used to estimate the DARM model parameters.
The maximum likelihood values obtained from the MCMC are then used to construct a nominal DARM model.
With the nominal DARM model in hand, a GPR is used to estimate any remaining deviations of the DARM model from the full interferometer response using measurements obtained with the Pcal.
Possible response functions are generated by sampling from the distribution of the MCMC and GPR results while also accounting for the TDCF uncertainty at a given time and the Pcal uncertainty.
For more detail on estimating Advanced LIGO calibration errors and uncertainties in O2 and O3 see Refs.~\cite{Craig,Sun:2020wke,Sun:2021qcg}.
To study the potential impact of systematic calibration errors on the estimation of astrophysical parameters from a binary coalescence event~\cite{PhysRevD.49.2658}, we developed a range of calibration systematic error estimates that included different deviations of TDCFs from their nominal values.
The nominal calibration error estimate that each manipulation was based on was chosen from June 11, 2019 (GPS time 1244307456) for H1 and March 27, 2019 (GPS time 1237745764) for L1.
We produced six modifications to this nominal calibration error estimate for each detector.
One set of modifications focused on manipulating only the TDCF for the coupled cavity pole frequency $f_{\rm cc}$, and the other set of modifications manipulated all of the TDCF multipliers ($\kappa_{\rm c}$, $\kappa_{\rm T}$, $\kappa_{\rm P}$, and~$\kappa_{\rm U}$).
Fig.~\ref{fig:fc_miscal} shows the nominal systematic calibration errors in H1 and L1 as well as the calibration errors resulting from three manipulations of $f_{\rm cc}$.
The cavity pole frequency parameter was intentionally offset by $5\%$, $10\%$ and $20\%$ from its nominal value and then the calibration systematic error was computed for the resulting response functions.
The cavity pole frequency was increased by these percentages in H1 and decreased by these percentages in L1 in order to mimic a maximal relative calibration error between the two detectors' strain data, one that manifests in a similar way to a relative timing error.
Since the relative timing of the signal between H1 and L1 is the primary contributor to the sky localization of a detected signal, we wanted to study how offsetting these parameters in opposite ways at H1 and L1 would impact the sky localization.
In Ref.~\cite{maddiethesis}, which this work builds on, results are shown for an offset of the TDCFs in the same direction for H1 and L1, which found calibration errors to have a minimal effect on the measured source parameter distributions.
Here we only highlight the more interesting results with offsets in opposite directions at H1 and L1.
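For intuition about the frequency structure of these errors, one can use a simplified model in which the sensing function is dominated by the coupled-cavity pole, $C(f) \propto 1/(1 + i f/f_{\rm cc})$; the relative calibration error incurred by assuming the nominal pole frequency when the true pole has shifted is then the ratio of the two sensing responses. The sketch below is illustrative only (the function name and the single-pole approximation are our assumptions; the full response also contains actuation and digital-filter terms that are ignored here):

```python
import numpy as np

def cavity_pole_error(f, fcc_true, fcc_assumed):
    """Relative sensing error C_true(f) / C_assumed(f) for a simplified
    single-pole sensing function C(f) = 1 / (1 + 1j * f / fcc)."""
    C_true = 1.0 / (1.0 + 1j * f / fcc_true)
    C_assumed = 1.0 / (1.0 + 1j * f / fcc_assumed)
    return C_true / C_assumed

# 20% offset of the H1 nominal cavity pole (410.6 Hz, from the text)
f = np.logspace(1, 3.5, 200)                   # 10 Hz to ~3 kHz
err = cavity_pole_error(f, 1.20 * 410.6, 410.6)
mag_error_pct = 100.0 * (np.abs(err) - 1.0)    # small at low f, grows at high f
phase_error_deg = np.degrees(np.angle(err))    # peaks at intermediate f
```

In this toy model the magnitude error approaches the fractional pole offset well above the pole frequency, while the phase error is largest near the pole, qualitatively matching the shapes in Fig.~\ref{fig:fc_miscal}.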
Fig.~\ref{fig:all_miscal} similarly shows the nominal systematic calibration error in H1 and L1 as well as the calibration errors resulting from three manipulations of the TDCF multipliers.
Similarly to above, the manipulations involved intentionally offsetting each TDCF multiplier from its nominal value by $5\%$, $10\%$, and $20\%$ in opposite directions for H1 and L1.
For comparison to expected physical deviations, it is rare for the coupled cavity pole to deviate from its nominal value by more than $5\%$.
The TDCF multipliers, however, are known to deviate by as much as $\sim10\%$ from their nominal values.
The set of simulated calibration errors used in this study should therefore encompass both realistic and extreme calibration error situations.
In order to test the impact of the above calibration systematic errors on the estimation of source parameters of a binary system, we applied the calibration systematic errors to a simulated BNS gravitational-wave signal as well as the simulated noise floor (power spectral density) before performing parameter estimation using the \texttt{lalinference} software package.
We specifically used \texttt{lalinference}'s MCMC sampler for all simulations presented here, the details of which can be found in Ref.~\cite{PhysRevD.91.042003}.
The MCMC sampler produces posterior probability distributions (or just posteriors for short) for the signal's source parameters.
Calibration errors were not marginalized over to produce the posteriors.
We chose to add no synthetic noise to the simulated BNS signal (sometimes referred to as ``injecting into zero-noise" \cite{Rodriguez_2014}) in order to isolate biases in the source parameter estimation caused by the calibration systematic error from the varying effects of a given instance of synthetic noise \cite{Nissanke_2010}.
Since the likelihood calculation involves a division by the power spectral density (PSD) of the synthetic noise, this approach does still incorporate the general properties of the noise in the analysis.
We used the simulated Advanced LIGO sensitivity PSD within the likelihood calculation.
The waveform approximant used both for the simulated signal and for the template was \texttt{TaylorF2} carried out to 3.5 post-Newtonian order \cite{PhysRevD.80.084043,PhysRevLett.112.101101,Wade:2014vqa}.
The parameters of the simulated BNS source were chosen from a random distribution to have masses $m_1=1.523194 \ M_\odot$ and $m_2 = 1.522147 \ M_\odot$, zero spin, and tidal deformability parameters $\Lambda_1 = 311.368$ and $\Lambda_2 = 312.666$ \cite{PhysRevD.77.021502,PhysRevD.81.123016}.
We performed parameter estimation on this simulated signal placed at two different distances, resulting in two different signal-to-noise ratios (SNRs).
The high-SNR signal had a simulated distance of 58~Mpc, which resulted in a network SNR of 56.
The low-SNR signal had a simulated distance of 200~Mpc, which resulted in a network SNR of 16.
For this study we used a two-detector network consisting of H1 and L1, since we were focused on calibration errors specific to the LIGO detectors.
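Concretely, the PSD enters through the noise-weighted inner product that defines the Gaussian likelihood used in \texttt{lalinference}-style analyses; in a zero-noise injection the "data" is just the simulated signal (with calibration errors applied), so the log-likelihood measures the residual between data and template alone. The sketch below illustrates this with a toy flat PSD and an arbitrary frequency-domain waveform (the grid, PSD, and waveform are our assumptions, not those used in the study):

```python
import numpy as np

def inner_product(a, b, psd, df):
    # Standard noise-weighted inner product of GW data analysis:
    # (a|b) = 4 * Re[ sum_f a(f) * conj(b(f)) / S_n(f) ] * df
    return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))

def log_likelihood(data, template, psd, df):
    # ln L = -(d - h | d - h) / 2 (additive constants dropped)
    r = data - template
    return -0.5 * inner_product(r, r, psd, df)

# Toy example: flat PSD and an arbitrary complex frequency-domain "signal"
f = np.arange(20.0, 1024.0, 0.25)
df = 0.25
psd = np.ones_like(f)                        # toy flat PSD (assumption)
h = (1e-2 / f) * np.exp(-2j * np.pi * f)     # toy waveform (assumption)
snr = np.sqrt(inner_product(h, h, psd, df))  # optimal SNR of the template
```

The likelihood is maximized (zero residual) when the template matches the calibration-scaled injection exactly, which is why applying calibration errors to the injection alone isolates systematic parameter biases from statistical noise fluctuations.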
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/fcc_chirp_mass_posteriors.pdf}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/low_snr_fcc_chirp_mass_posteriors.pdf}
\label{fig:sub2}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/fcc_lambda_tilde_posteriors.pdf}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/low_snr_fcc_lambda_tilde_posteriors.pdf}
\label{fig:sub2}
\end{subfigure}
\caption{\label{fig:fcc_results} Posteriors for the chirp mass $\mathcal{M}$ (top) and tidal deformability $\tilde \Lambda$ (bottom) from the simulated BNS signal with an SNR of 56 (left) and the same signal with an SNR of 16 (right). These plots show the posteriors for the situation with nominal calibration errors applied to the signal and with calibration errors computed from a 5\%, 10\% and 20\% offset of $f_{\rm cc}$ from its nominal value. The black dashed line shows the true parameter value. The results indicate no significant difference between the situation with nominal calibration errors and the situation with a 5\%, 10\% or 20\% offset in the $f_{\rm cc}$ value, which suggests that errors induced in the calibration by a lack of compensation for the TDCF $f_{\rm cc}(t)$ would not significantly impact the results of source parameter estimation for this type of BNS signal. The slight apparent measurement bias in $\mathcal{M}$ is due to marginalization over mass ratio $m_2 / m_1$ to obtain the one-dimensional posterior distribution.}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/fcc_errors_map.pdf}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/low_snr_fcc_errors_map.pdf}
\label{fig:sub2}
\end{subfigure}
\caption{\label{fig:skymaps_fcc} Sky maps generated using the \texttt{ligo.skymap} software package~\cite{PhysRevD.93.024013} for the simulated BNS signal with an SNR of 56 (left) and the same signal with an SNR of 16 (right). Each contour represents the 90\% credible interval for the sky location for different offsets of the cavity pole frequency TDCF $f_{\rm cc}$. The star represents the true sky position of the simulated signal. The $+$ indicates the maximum posterior sky position value for each corresponding data set.}
\end{figure*}
\begin{table*}
\centering
\begin{tabular}{c | c c | c c | c c }
\hline
& \multicolumn{2}{c |}{90\% area (sq. deg.)} & \multicolumn{2}{c |}{Area w/ true loc. (sq. deg.)} & \multicolumn{2}{c}{Prob. w/ true loc. (\%)} \\ [0.5ex]
\hline
& High SNR & Low SNR & High SNR & Low SNR & High SNR & Low SNR \\
\hline\hline
Nominal calib. errors & 29.2 & 435 & 1.72 & 9.51 & 17 & 7 \\
Offset $f_{\rm cc}$ 5\% & 31.1 & 359 & 4.53 & 3.57 & 32 & 4 \\
Offset $f_{\rm cc}$ 10\% & 31.4 & 408 & 12.3 & 19.6 & 61 & 13 \\
Offset $f_{\rm cc}$ 20\% & 31.9 & 381 & 27.9 & 1.60 & 87 & 2 \\
\hline
\end{tabular}
\caption{\label{tab:fcc_results} Summary of sky map statistics generated by the \texttt{ligo.skymap} software package~\cite{PhysRevD.93.024013} for the simulated BNS signal with nominal calibration errors applied as well as calibration errors including offsets of the $f_{\rm cc}$ parameter. The first two columns show the area of the sky map in units of square degrees enclosed by the measured 90\% credible interval for both the higher-SNR and lower-SNR signal. The third and fourth columns show the area of the smallest credible region that includes the true sky location for the higher- and lower-SNR signals. The fifth and sixth columns show the smallest credible region percentage that includes the true sky location. The higher-SNR signal shows an increase in the area of the 90\% credible region as well as an increase in the area and probability of the smallest credible region containing the true sky location as the calibration error increases. There is not a clear pattern that emerges for the lower-SNR signal.}
\end{table*}
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/all_tdcf_chirp_mass_posteriors.pdf}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/low_snr_all_tdcf_chirp_mass_posteriors.pdf}
\label{fig:sub2}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/all_tdcf_lambda_tilde_posteriors.pdf}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/low_snr_all_tdcf_lambda_tilde_posteriors.pdf}
\label{fig:sub2}
\end{subfigure}
\caption{\label{fig:alltdcf_results} Posteriors for the chirp mass $\mathcal{M}$ (top) and tidal deformability $\tilde \Lambda$ (bottom) from the simulated BNS signal with an SNR of 56 (left) and the same signal with an SNR of 16 (right). These plots show the posteriors for the situation with nominal calibration errors applied to the signal and with calibration errors computed from a 5\%, 10\% and a 20\% offset of the TDCF multipliers ($\kappa_{\rm c}$, $\kappa_{\rm T}$, $\kappa_{\rm P}$, and $\kappa_{\rm U}$) from their nominal values. The black dashed line indicates the true parameter value. The results show a measurable difference in both $\mathcal{M}$ and $\tilde \Lambda$ for both the high and low SNR signals when the TDCF multipliers are offset by 20\%. When the TDCF multipliers are offset by 10\% there is a measurable bias in $\mathcal{M}$ and $\tilde \Lambda$ for the higher-SNR signal, but there is no significant bias introduced for the lower-SNR signal. The slight apparent measurement bias in $\mathcal{M}$ for the nominal calibration errors and 5\% offset in TDCF multipliers is due to marginalization over mass ratio $m_2 / m_1$ to obtain the one-dimensional posterior distribution.}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/all_tdcf_errors_map.pdf}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/low_snr_all_tdcf_errors_map.pdf}
\label{fig:sub2}
\end{subfigure}
\caption{\label{fig:skymaps_alltdcf} Sky maps generated using the \texttt{ligo.skymap} software package \cite{PhysRevD.93.024013} for the simulated BNS signal with an SNR of 56 (left) and the same signal with an SNR of 16 (right). Each contour represents the 90\% credible interval for sky location for different offsets of the TDCF multipliers ($\kappa_{\rm c}$, $\kappa_{\rm T}$, $\kappa_{\rm P}$, and $\kappa_{\rm U}$). The star represents the true sky position of the simulated signal. The $+$ indicates the maximum posterior sky position value for each corresponding data set.}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figures/mc_q.pdf}
\caption{\label{fig:mc_q} Corner plot for the chirp mass $\mathcal{M}$ and mass ratio $q$ parameters for the data set where nominal calibration errors were applied to a BNS injection with an SNR of 56. This corner plot illustrates how the one-dimensional $\mathcal{M}$ posterior distribution contains an apparent bias due to the marginalization over $q$.}
\end{figure}
We have highlighted the estimation of two source parameters: chirp mass $\mathcal{M}$ and the combined tidal deformability parameter $\tilde \Lambda$.
These parameters are both given by linear combinations of the binary component parameters,
\begin{eqnarray}
\mathcal{M} &=& \frac{(m_1 m_2)^{3/5}}{(m_1+m_2)^{1/5}} \\
\nonumber
\tilde \Lambda &=& \frac{8}{13}\left[ \left(1+7\eta - 31 \eta^2\right)\left(\Lambda_1 + \Lambda_2\right) \right .+ \\
&& \left . \sqrt{1-4\eta} \left(1+9\eta-11\eta^2\right) \left(\Lambda_1 - \Lambda_2\right)\right] \ ,
\end{eqnarray}
where $\eta = m_1 m_2 / (m_1+m_2)^2$ is the symmetric mass ratio.
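As a concrete check, these combinations can be evaluated for the injected source parameters quoted above (a minimal sketch; for this near-equal-mass injection, $\eta$ is very close to its maximum of $1/4$ and $\tilde \Lambda$ is close to the average of $\Lambda_1$ and $\Lambda_2$):

```python
import numpy as np

# Injected source parameters from the text (solar masses; dimensionless Lambdas)
m1, m2 = 1.523194, 1.522147
Lam1, Lam2 = 311.368, 312.666

chirp_mass = (m1 * m2)**0.6 / (m1 + m2)**0.2  # M_chirp = (m1 m2)^(3/5)/(m1+m2)^(1/5)
eta = m1 * m2 / (m1 + m2)**2                  # symmetric mass ratio

lam_tilde = (8.0 / 13.0) * (
    (1 + 7*eta - 31*eta**2) * (Lam1 + Lam2)
    + np.sqrt(1 - 4*eta) * (1 + 9*eta - 11*eta**2) * (Lam1 - Lam2)
)
```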
Chirp mass is one of the most precisely measured parameters encoded in the gravitational waves of binary coalescence events.
The tidal deformability parameter $\tilde \Lambda$ is the most precisely measured parameter related to neutron star matter deformability and is known to be an effect that appears at high frequencies~\cite{PhysRevD.77.021502,PhysRevD.81.123016,PhysRevLett.112.101101,Wade:2014vqa}.
For this reason, we investigated whether systematic errors introduced by not compensating for temporal variations of $f_{\rm cc}$, which is also a parameter that will impact the data at higher frequencies (around a few hundred Hz and above), would impact the estimation of $\tilde \Lambda$.
In addition to highlighting the posteriors for $\mathcal{M}$ and $\tilde \Lambda$, we produced sky maps from the posterior samples using the \texttt{ligo.skymap} software package \cite{PhysRevD.93.024013} showing the 90\% credible intervals for the sky localization of each signal.
As mentioned above, we investigated whether relatively maximal offsets between the H1 and L1 detector would impact sky localization, since this is largely determined from the relative timing of the signal between two or more detectors.
The sky maps are also accompanied by two relevant statistics.
The first statistic is related to the precision of sky localization and is a measure of the area in square degrees of the 90\% credible region.
The second statistic is related to the accuracy of the sky localization and is constructed by searching for the smallest credible region that would contain the true sky location.
Each of these statistics is discussed in more detail in Ref.~\cite{Singer:2015ema}.
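The second statistic can be computed from any pixelated posterior sky map by ranking pixels from most to least probable and accumulating until the pixel containing the true location is reached. A minimal sketch with a toy three-pixel map (the function name, uniform pixel area, and toy probabilities are our assumptions):

```python
import numpy as np

def searched_stats(prob, true_idx, pix_area):
    """Smallest credible region containing the true sky location:
    returns (enclosed probability, enclosed area)."""
    order = np.argsort(prob)[::-1]                 # most probable pixels first
    rank = int(np.where(order == true_idx)[0][0])  # rank of the true pixel
    searched_prob = float(prob[order][:rank + 1].sum())
    searched_area = (rank + 1) * pix_area
    return searched_prob, searched_area

# Toy map: the true pixel (index 1) is the second most probable one
prob = np.array([0.5, 0.3, 0.2])
p, a = searched_stats(prob, true_idx=1, pix_area=1.0)
```

A well-calibrated, unbiased analysis places the true location in a small searched probability on average; large searched probabilities (such as the 87\% entry in Tab.~\ref{tab:fcc_results}) indicate a localization bias.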
Results are shown in Figs.~\ref{fig:fcc_results} -- \ref{fig:skymaps_alltdcf} and Tabs.~\ref{tab:fcc_results} and~\ref{tab:alltdcf_results}.
The results shown in Fig.~\ref{fig:fcc_results} indicate that systematic calibration errors introduced by not compensating for temporal variations in the coupled cavity pole frequency parameter $f_{\rm cc}$ have no measurable impact on the estimation of $\mathcal{M}$ and $\tilde \Lambda$.
This is true for both the lower-SNR signal (SNR of 16) and the higher-SNR signal (SNR of 56).
The slight apparent measurement bias in $\mathcal{M}$ is due to marginalization over mass ratio $q = m_2 / m_1$ to obtain the one-dimensional posterior distribution.
Since we assume $m_1 > m_2$, the injected value of $q$ is very close to its maximum possible value of 1.
Fig.~\ref{fig:mc_q} is a corner plot in $\mathcal{M}$ and $q$, illustrating how the marginalization over $q$ will skew the peak of a one-dimensional posterior distribution for $\mathcal{M}$.
Sky localization of the higher-SNR signal is impacted slightly by reasonably low uncompensated variations in $f_{\rm cc}$, as shown in Fig.~\ref{fig:skymaps_fcc} and Tab.~\ref{tab:fcc_results}.
Increasing calibration errors do lead to an increase in the area enclosed by the 90\% credible interval for the higher-SNR signal.
Additionally, there is a bias introduced into the sky localization for the higher-SNR signal that increases as the size of the calibration error increases.
Tab.~\ref{tab:fcc_results} shows the area enclosed by the smallest credible interval containing the true sky location as well as the probability that corresponds to this credible interval.
For the lower-SNR signal, however, neither the uncertainty of the sky localization (quantified by the area enclosed by the 90\% credible interval) nor the bias introduced into the sky localization (quantified by the smallest credible interval containing the true sky location) shows a clear trend as the calibration error is increased.
When we focused on systematic calibration errors induced by not compensating for the TDCF multipliers, we did see biases enter the source parameter estimation results.
We observed noticeable changes in the measurability of source parameters when the TDCF multipliers were offset from their nominal values by 20\% for both the higher and lower-SNR signals, as shown in Fig.~\ref{fig:alltdcf_results}.
When the TDCF multipliers were offset by 10\% from their nominal values there was a bias in the recovered source parameters for the higher-SNR signal only.
For the 10\% offset, the lower-SNR signal was still dominated by statistical error.
No noticeable change in measurable parameters was seen for a 5\% offset in the TDCF multipliers.
The most impactful consequence of calibration errors induced from the lack of compensation for the TDCF multipliers comes through in the sky localization of each signal, as shown in Fig.~\ref{fig:skymaps_alltdcf} and Tab.~\ref{tab:alltdcf_results}.
For both signals, the area enclosed by the 90\% credible region increases as the calibration error increases, with a noticeable increase for even the 5\% offset of scalar TDCFs.
The lower-SNR signal also shows a steady increase in the bias of the sky localization as the calibration error increases.
Once a 10\% offset to all scalar TDCFs is introduced, the true location of the higher-SNR signal is only contained in a credible region larger than 99\%. This demonstrates that even a 10\% offset to the scalar TDCFs, if left uncompensated, would be devastating to our ability to locate the signal on the sky.
It is critical to note, however, that the released calibrated data has always compensated for scalar TDCFs and therefore has not included such errors.
\begin{table*}
\centering
\begin{tabular}{c | c c | c c | c c }
\hline
& \multicolumn{2}{c |}{90\% area (sq. deg.)} & \multicolumn{2}{c |}{Area w/ true loc. (sq. deg.)} & \multicolumn{2}{c}{Prob. w/ true loc. (\%)} \\ [0.5ex]
\hline
& High SNR & Low SNR & High SNR & Low SNR & High SNR & Low SNR \\
\hline\hline
Nominal calib. errors & 29.2 & 435 & 1.72 & 9.51 & 17 & 7 \\
Offset scalar TDCFs 5\% & 68.2 & 364 & 69.7 & 21.3 & 90 & 16 \\
Offset scalar TDCFs 10\% & 56.2 & 389 & $>1000$ & 79.3 & $>99$ & 36 \\
Offset scalar TDCFs 20\% & 114.7 & 587 & $>1000$ & 857 & $>99$ & 95 \\
\hline
\end{tabular}
\caption{\label{tab:alltdcf_results} Summary of sky map statistics generated by the \texttt{ligo.skymap} software package~\cite{PhysRevD.93.024013} for the simulated BNS signal with nominal calibration errors applied as well as calibration errors including offsets of the TDCF multipliers ($\kappa_{\rm c}$, $\kappa_{\rm T}$, $\kappa_{\rm P}$, and $\kappa_{\rm U}$). The first two columns show the area of the sky map in units of square degrees enclosed by the measured 90\% credible region for both the higher-SNR and lower-SNR signal. The third and fourth columns show the area of the smallest credible region that includes the true sky location for the higher and lower-SNR signals. The fifth and sixth columns show the smallest credible region percentage that includes the true sky location. The higher and lower-SNR signals both show an increase in the area of the 90\% credible region as the calibration error increases. The lower-SNR signal also shows a steady increase in the area and probability of the region containing the true signal as the calibration error increases. However, for the higher-SNR signal the credible region containing the true sky location already exceeds 99\% at a 10\% offset in the scalar TDCFs, so the change between the 10\% and 20\% scalar TDCF offset results is negligible. By the time an uncompensated 10\% offset in the scalar TDCFs is reached, the sky localization is severely impacted.}
\end{table*}
In summary, we found that calibration errors induced by not correcting for the TDCF $f_{\rm cc}$ are only impactful through the sky localization.
In particular, for louder signals we would be able to detect a broadening of the sky location credible regions and a bias introduced in the sky localization if variations of the TDCF $f_{\rm cc}$ were left uncompensated.
Not compensating for the TDCF multipliers ($\kappa_{\rm c}$, $\kappa_{\rm T}$, $\kappa_{\rm P}$, $\kappa_{\rm U}$) can lead to significant biases in the estimation of source parameters, especially for louder signals when offsets of the scalar TDCFs reach 10\% of their nominal values.
Other studies \cite{Payne:2020myg, Vitale:2020gvb, Huang:2022rdg} have investigated these questions in more detail, including in their studies the uncertainty on the calibration systematic error, and work is ongoing to better understand the impact of calibration errors outside of source parameter estimation for compact binary events.
Our limited study here aimed to investigate specifically how not compensating for TDCF filters, such as $f_{\rm cc}$, in the calibration procedure could impact a subset of astrophysical results, since we knew that not compensating for TDCF filters leaves a measurable systematic error in the calibrated strain data (see Sec.~\ref{sec:calibrationAccuracy}).
We found that compensation for the TDCF $f_{\rm cc}$ is important when considering sky localization.
We also demonstrated that correcting for more general time dependence of the calibration model through the TDCF multipliers can have a significant impact on source parameter estimation including the sky localization of a signal.
\section{Impact on Calibration Accuracy}
\label{sec:calibrationAccuracy}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\textwidth]{figures/H1_deltal_over_pcal_1269114695-21353_cropped.png}
\end{center}
\caption{The ratio $\Delta L_{\rm free}(f) / x_{\rm pc}(f)$ at three Pcal line frequencies for four versions of calibrated data for H1. 150 seconds of $\Delta L_{\rm free}$ data and $x_{\rm pc}$ data were demodulated before taking the ratio to produce each point. The red points (labeled ``Multipliers") represent calibrated data that was corrected for the time dependence of $\kappa_{\rm T}$, $\kappa_{\rm P}$, $\kappa_{\rm U}$, and $\kappa_{\rm C}$, requiring no filter updates. The green points (labeled ``$+ f_{\rm cc}$") show improved accuracy resulting from additionally compensating for time dependence in $f_{\rm cc}$. The blue points (labeled ``$+ f_{\rm s} + Q$") indicate additional compensation for time dependence in $f_{\rm s}$ and $Q$. The yellow points (labeled ``$+ \tau_i$") indicate compensation for all known time dependence. Some colors are hidden beneath others in certain panels where the corresponding results overlap.
\label{fig:pcal2darm}}
\end{figure*}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figures/H1_DCS-CALIB_STRAIN_tau_i_over_CAL-PCALY_RX_PD_OUT_DQ_1269021699-178_cropped.png}
\caption{The transfer function $\Delta L_{\rm free}(f) / x_{\rm pc}(f)$ computed for four versions of calibrated data for H1. Each transfer function was produced from 178 seconds of data during a Pcal broadband injection on 2020-03-23. This shows an instance in which compensation for the $\tau_i$, although it improves accuracy at the Pcal lines, results in increased systematic errors at most frequencies.
\label{fig:pcalBroadbandManual}}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{figures/H1_actuation_timing_1185753945-20937.pdf}
\caption{A time series of the $\tau_i$ (bottom plot) and ratio $\Delta L_{\rm free}(f^{\rm pc}_1) / x_{\rm pc}(f^{\rm pc}_1)$ (top two plots) at the Pcal line frequency $f^{\rm pc}_1$ = 36.7 Hz. The sudden simultaneous shift in the $\tau_i$ and the phase of $\Delta L_{\rm free}(f^{\rm pc}_1) / x_{\rm pc}(f^{\rm pc}_1)$ indicates a change to a computational time delay, whose negative impact on calibration accuracy is corrected by compensating for time dependence in the $\tau_i$.
\label{fig:actTiming}}
\end{figure}
To assess the impact of compensating for frequency-dependent temporal variations, we compared calibrated $h(t)$ data to the calibration Pcal signal at the Pcal line frequencies.
Since the calibration lines run continuously during observation, this comparison can be tracked for long periods of time to test the accuracy and stability of the calibration at the Pcal line frequencies.
The impact of TDCF filters in $C$ and $A$ was significantly larger at H1 than at L1 during O3. Therefore, the data used for this analysis was taken from H1.
Fig.~\ref{fig:pcal2darm} shows a time series of the magnitude and phase of the ratio $\Delta L_{\rm free}(f) / x_{\rm pc}(f)$ at three Pcal lines.
As expected, correcting for all known time dependence yields the most accurate result at the Pcal lines, with systematic errors generally less than 1\% in magnitude and 1$^{\circ}$ in phase.
This result, however, only indicates that the solution and application of the TDCFs correctly enforces agreement at the calibration lines used to compute the TDCFs and does not necessarily imply correctness in the time-dependent calibration model.
The time-dependence of the coupled cavity pole $f_{\rm cc}$ is the most significant source of systematic error at the $f^{\rm pc}_2=410.3$~Hz Pcal line, where a systematic error of 1\% in magnitude and 1$^{\circ}$ in phase is reduced to a negligible level by compensating for the time dependence of $f_{\rm cc}$.
The time dependence of $\tau_{\rm T}$, $\tau_{\rm P}$, $\tau_{\rm U}$, $f_{\rm s}$, and $Q$ is primarily impactful at lower frequencies.
The systematic error at $f^{\rm pc}_1 = 17.1$~Hz seen in Fig.~\ref{fig:pcal2darm} is reduced to negligible levels by compensating for time-dependence in these parameters.
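The line-tracking comparison described above amounts to demodulating both $\Delta L_{\rm free}$ and $x_{\rm pc}$ at a Pcal line frequency, averaging over the window (150 seconds in Fig.~\ref{fig:pcal2darm}), and taking the complex ratio. A minimal numerical sketch, using an assumed sample rate and toy signals rather than detector data, is:

```python
import numpy as np

def demodulated_ratio(dL, x_pc, f_line, fs):
    """Demodulate two time series at a calibration-line frequency and
    return the complex ratio dL / x_pc at that line.

    dL, x_pc : arrays sampled at rate fs (Hz) covering the averaging window
    f_line   : Pcal line frequency in Hz
    """
    t = np.arange(len(dL)) / fs
    lo = np.exp(-2j * np.pi * f_line * t)  # local oscillator
    # Averaging the demodulated product isolates each line's complex amplitude.
    num = np.mean(dL * lo)
    den = np.mean(x_pc * lo)
    return num / den

# Toy check: identical lines with a known gain and phase offset.
fs, f_line, T = 1024.0, 410.3, 150.0
t = np.arange(int(fs * T)) / fs
x_pc = np.cos(2 * np.pi * f_line * t)
dL = 1.02 * np.cos(2 * np.pi * f_line * t - 0.01)  # 2% gain error, 0.01 rad lag
r = demodulated_ratio(dL, x_pc, f_line, fs)
```

A magnitude of $|r| \approx 1.02$ and phase of $\approx -0.01$ rad would then be read as a 2\% magnitude and $\sim$0.6$^{\circ}$ phase systematic error at that line.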
In order to assess the impact of compensating for frequency-dependent temporal variations on calibration accuracy across a broad range of frequencies, we used broadband Pcal injections to compute the transfer function $\Delta L_{\rm free}(f) / x_{\rm pc}(f)$ from 10 Hz to 400 Hz.
Fig.~\ref{fig:pcalBroadbandManual} shows broadband injection results from March 23, 2020.
Because driving DARM motion becomes more difficult with increasing frequency, the transfer function estimate becomes noisier at higher frequencies.
At the higher frequencies, a small improvement in systematic error is seen, as expected, due to compensation for time dependence in $f_{\rm cc}$. Since $f_{\rm cc}$ was fairly close to its nominal value at the time of the injection ($f_{\rm cc}^{\rm static} = 411$~Hz, $f_{\rm cc}(t) = 420$~Hz), the improvement is small.
The impact is expected to be larger at times when $f_{\rm cc}$ undergoes more noticeable change, such as in the hours following the interferometer acquiring lock, which encompasses the time during which the interferometer optics are thermalizing.
A more obvious impact is the increase in systematic error caused by compensating for time dependence in the $\tau_i$.
This seems to indicate that the deviations of the $\tau_i$ from zero were dominated by systematic errors in their estimates at this time, as was the case throughout much of O3.
For this reason, for most of O3, compensation for time dependence in the $\tau_i$ was omitted.
Further discussion of why estimates of the $\tau_i$ are known to deviate from zero can be found in Appendix~\ref{app:tau}.
In the future, this problem can be mitigated through use of the exact solution for the TDCFs developed in Ref.~\cite{VietsDissertation}.
As opposed to what was observed in O3, evidence from O2 data generally indicates an improvement in calibration accuracy due to compensation for time dependence in the $\tau_i$ parameters, as suggested by Fig.~\ref{fig:actTiming}.
This figure shows a time series of the $\tau_i$ and the ratio $\Delta L_{\rm free}(f^{\rm pc}_1) / x_{\rm pc}(f^{\rm pc}_1)$, including a moment at which a computational time-delay in the DARM loop suddenly increased by $\sim$\SI{30}{\micro \second}.
The data that is compensated for time dependence in the $\tau_i$ is immune to this sudden shift.
By employing a calculation for $\tau_i$ that is free of many of the systematic errors that plagued this parameter in O3~\cite{VietsDissertation}, we would be able to ensure improved calibration accuracy when compensating for temporal variations in $\tau_i$.
\section{Calibration Models}
\label{sec:CalibrationModels}
As seen in Fig.~\ref{fig:DARM_loop} and Eq.~\eqref{eq:DeltaL}, the sensing and actuation functions are the key components of the interferometer response needed to reconstruct the dimensionless strain from the digital control loop outputs, $d_{\rm err}$ and $d_{\rm ctrl}$.
The sensing function models the interferometer's response to residual differential arm motion in the ETMs. The actuation function models the active suppression of external DARM motion \cite{calcompanion}.
The full model for the sensing function is
\begin{eqnarray}\label{eq:C}
\tilde{C}(f; t) &=& \kappa_{\rm C}(t) \left(\frac{H_{\rm C}}{1 + if/f_{\rm cc}(t)}\right) \\
\nonumber
&& \times \left(\frac{f^2}{f^2 + f_{\rm s}(t)^2 - i f f_{\rm s}(t) / Q(t)}\right) \\
\nonumber
&& \times \ C_{\rm R}(f) \, \exp\left[-2\pi i f \tau_{\rm C}\right] \ .
\end{eqnarray}
The gain $H_{\rm C}$ represents the conversion from meters of DARM displacement to counts as measured in the reference model.
The dimensionless scalar $\kappa_{\rm C}(t)$ has a nominal value of one and encodes the time-dependence of the gain $H_{\rm C}$, observed to fluctuate by $\sim$10\%.
The coupled cavity pole frequency $f_{\rm cc}$ is the characteristic frequency at which the detector response is significantly attenuated due to finite average photon storage time in the Fabry-P\'erot cavities.
During the second part of the third observing run (O3b), the coupled cavity pole frequency had a nominal value of 411 Hz at H1 and 461 Hz at L1, and it was observed to fluctuate as much as $\sim$20 Hz.
$\tau_{\rm C}$ is a constant time delay due to light-travel time across the length of each arm and an additional time delay in acquiring the digital signal.
The factor $C_{\rm R}(f)$ encodes the remaining frequency dependence above $\sim$1 kHz due to photodiode electronics and signal-processing filters.
The second term in parentheses represents the impact of the slightly detuned signal recycling cavity (SRC) on the sensing function, impactful mainly below $\sim$50 Hz. $f_{\rm s}(t)$ and $Q(t)$ are, respectively, the resonant frequency and quality factor of the optical spring of the SRC~\cite{Craig, PhysRevD.74.022001, hall2017long}.
An optical spring (or anti-spring) exists in an optomechanical cavity if there is a linear relationship between the length of the cavity and the radiation pressure on the mirrors.
By design, the SRC should have no impact on the frequency-dependence of the sensing function.
During O2 however, the SRC at H1 was found to be slightly detuned from antiresonance, leading to an optical anti-spring with a time-varying resonant frequency $f_{\rm s} \lessapprox 10$ Hz.
During O3, measurements at H1 and L1 were indicative of both optical spring and anti-spring behavior in the SRC.
Moreover, at H1, there is evidence of a two-way cross-coupling of the DARM feedback control loop with feedback used to control angular motion of the test masses (L2A2L cross-coupling)~\cite{Sun:2020wke, Brooks2021}.
Although a mathematical model is not yet available for L2A2L cross-coupling, it also impacts the sensing function at low frequencies.
It is therefore unclear how much impact SRC detuning and L2A2L cross-coupling have individually on the low-frequency sensing function, and whether or not each impact is time-dependent.
It is clear that at least one of these low-frequency effects is time-dependent, since the sensing function undergoes a significant, measurable change at low frequencies in the first few hours of low-noise operation \cite{Sun:2020wke}.
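As a numerical illustration of the sensing model in Eq.~\eqref{eq:C}, the function below evaluates $\tilde{C}(f;t)$ for a given set of TDCF values. The parameter values are placeholders rather than reference-model measurements, and the residual $C_{\rm R}(f)$ is set to unity:

```python
import numpy as np

def sensing_model(f, kappa_C=1.0, f_cc=411.0, f_s=1.0, Q=10.0,
                  H_C=1.0, tau_C=0.0):
    """Evaluate the sensing-function model at frequencies f (Hz),
    taking the high-frequency residual C_R(f) to be unity.
    Default parameter values are illustrative placeholders."""
    f = np.asarray(f, dtype=float)
    cavity = H_C / (1.0 + 1j * f / f_cc)                       # coupled cavity pole
    src = f**2 / (f**2 + f_s**2 - 1j * f * f_s / Q)            # SRC optical spring
    delay = np.exp(-2j * np.pi * f * tau_C)                    # light-travel + digital delay
    return kappa_C * cavity * src * delay
```

At $f = f_{\rm cc}$ the cavity term attenuates the response by a factor of $1/\sqrt{2}$, and the SRC term suppresses the response below $f_{\rm s}$, consistent with the description above.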
The full actuation model we will use for this analysis is given by
\begin{eqnarray}\label{eq:A}
\tilde{A}(f;t) &=& \Big[ \kappa_{\rm T}(t) e^{2\pi i f \tau_{\rm T}(t)} \tilde{A}_{\rm T}(f) \\
&& + \kappa_{\rm P}(t) e^{2\pi i f \tau_{\rm P}(t)} \tilde{A}_{\rm P}(f) \nonumber \\
&& + \kappa_{\rm U}(t) e^{2\pi i f \tau_{\rm U}(t)} \tilde{A}_{\rm U}(f) \Big] \exp\left[-2\pi if\tau_{\rm A}\right] \nonumber ,
\end{eqnarray}
where $\tilde{A}_i(f) = \tilde{A}_{i,0}(f) {\displaystyle \prod_{j \leq i}} \tilde{F}_{j}(f)$ represents the frequency response of the $i$-th actuator.
Lower-frequency content of $d_{\rm ctrl}$ is directed to higher stages of actuation, and higher-frequency content is directed to lower stages.
$\tau_{\rm A}$ is a constant computational time delay.
$\kappa_{\rm T}(t)$, $\kappa_{\rm P}(t)$, and $\kappa_{\rm U}(t)$, all nominally equal to one, represent the time dependence of the strength of each stage of actuation.
$\tau_{\rm T}$, $\tau_{\rm P}$, and $\tau_{\rm U}$, all nominally zero, represent time-dependent time advances relative to the timing of the reference model for each $\tilde{A}_i$.
Before O3, the time dependence of the penultimate and upper-intermediate stages of actuation were tracked together using the factors $\kappa_{\rm PU}(t)$ and $\tau_{\rm PU}(t)$.
This was tracked using a calibration line injected via $x_{\rm ctrl}$ into $d_{\rm ctrl}$ instead of the penultimate and upper-intermediate stages of actuation (see Fig.~\ref{fig:DARM_loop}).
$\kappa_{\rm T}$ has been observed to fluctuate by $\sim$10\%, and $\kappa_{\rm P}$ and $\kappa_{\rm U}$ have been observed to fluctuate by $\sim$5\%.
The variable time delays $\tau_i$ have nominal values of zero and are generally expected to remain small, but as suggested in Sec.~\ref{sec:calibrationAccuracy}, may drift as much as $\sim$\SI{100}{\micro\second}\footnote{Previous publications treat $\kappa_i$ as complex parameters whose imaginary parts are expected to be close to zero instead of defining $\tau_i$.}.
The most likely source of true changes in these time advances is variation in computational time delays in the digital portion of the actuation function, observed as occasional sudden changes in the values of the $\tau_i$.
However, the breakdown of approximations used to estimate the $\kappa_i$ can also lead to erroneous changes in the estimates of the $\tau_i$, making it impractical to compensate for their time dependence (see Appendix~\ref{app:tau}).
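The actuation model of Eq.~\eqref{eq:A} can be evaluated in the same spirit. The single-pole stage responses below are illustrative stand-ins, not the measured $\tilde{A}_i(f)$:

```python
import numpy as np

def actuation_model(f, A_T, A_P, A_U,
                    kappa=(1.0, 1.0, 1.0), tau=(0.0, 0.0, 0.0), tau_A=0.0):
    """Evaluate the actuation model at frequencies f (Hz).
    A_T, A_P, A_U are callables returning each stage's complex
    frequency response; kappa and tau hold the per-stage TDCFs."""
    f = np.asarray(f, dtype=float)
    stages = (A_T(f), A_P(f), A_U(f))
    total = sum(k * np.exp(2j * np.pi * f * t) * A
                for k, t, A in zip(kappa, tau, stages))
    return total * np.exp(-2j * np.pi * f * tau_A)

# Toy single-pole stage responses (assumed shapes, chosen so that lower
# stages roll off at lower frequencies, mirroring the signal routing):
A_T = lambda f: 1.0 / (1.0 + 1j * f / 100.0)
A_P = lambda f: 1.0 / (1.0 + 1j * f / 30.0)
A_U = lambda f: 1.0 / (1.0 + 1j * f / 10.0)
A = actuation_model(20.0, A_T, A_P, A_U, kappa=(1.1, 1.0, 0.95))
```

Each $\kappa_i$ simply scales its stage, and each $\tau_i$ multiplies in a frequency-linear phase, which is why a sudden computational time-delay change appears as a step in the $\tau_i$ (Fig.~\ref{fig:actTiming}).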
Before the end of O2, the reconstruction of the calibrated strain $h(t)$ included only compensation for the time dependence of $\kappa_{\rm T}$, $\kappa_{\rm PU}$, and $\kappa_{\rm C}$.
These corrections can be applied to the sensing and actuation functions as multiplicative factors, and are therefore referred to as TDCF multipliers.
It is now possible to compensate for all modeled time dependence, including time dependence requiring updates to time-domain filters constructed from the calibration models, using the adaptive filtering techniques described in Sec.~\ref{sec:filterupdates}.
\section{Computing Time-dependent Correction Factors}
\label{sec:kappas}
In order to measure changes in the time-dependent correction factors (TDCFs) associated with the calibration models, calibration lines are injected through one of the Pcals, the stages of the actuation, and, during O2, the control signal $d_{\rm ctrl}$. The location of each injection is shown in Fig.~\ref{fig:DARM_loop}.
The injections made in the actuation system, as well as one Pcal injection, are generally all placed together within a narrow ($\sim$2 Hz) frequency window, in order to achieve sufficient accuracy in the approximation of the TDCFs of the actuation.
The approximations, however, can still lead to significant systematic errors at times \cite{Sun:2020wke, VietsDissertation}.
Table~\ref{tab:callines} shows the purpose and approximate frequency of each calibration line.
\begin{table}[h!]
\centering
\caption{\label{tab:callines} Summary of the purpose of each calibration line. $^\ast$~denotes a specific parameter computation for a given line only applicable in O2. $^\dag$~denotes a specific parameter computation for a given line only applicable in O3.}
\bigskip
\begin{tabular}{|| c | l | c ||}
\hline
\textbf{Line} & \textbf{Purpose} & \textbf{Frequency} \\
\hline
\hline
$f_{\rm ctrl} \rule{0pt}{2.3ex} $ & Computation of $\kappa_{\rm PU}^\ast$ & 10 - 40 Hz \\
$f_{\rm T}$ \rule{0pt}{2.3ex} & Computation of $\kappa_{\rm T}$ & 10 - 40 Hz \\
$f_{\rm P}$ \rule{0pt}{2.3ex} & Computation of $\kappa_{\rm P}$ & 10 - 40 Hz \\
$f_{\rm U}$ \rule{0pt}{2.3ex} & Computation of $\kappa_{\rm U}$ & 10 - 40 Hz \\
$f^{\rm pc}_1$ \rule{0pt}{2.3ex} & Computation of $\kappa_{\rm T}$, $\kappa_{\rm PU}^\ast$ or $\kappa_{\rm P}^\dag$, $\kappa_{\rm U}^\dag$; $f_{\rm s}^\dag$ and $Q^\dag$ & 10 - 40 Hz \\
$f^{\rm pc}_2$ \rule{0pt}{2.3ex} & Computation of $\kappa_{\rm C}$ and $f_{\rm cc}$ & $\sim$400 Hz \\
$f^{\rm pc}_3$ \rule{0pt}{2.3ex} & Check on high-frequency calibration & $\sim$ 1 kHz \\
$f^{\rm pc}_4$ \rule{0pt}{2.3ex} & Computation of $f_{\rm s}^\ast$ and $Q^\ast$ & $\sim$ 8 Hz \\
\hline
\end{tabular}
\end{table}
To compute the TDCFs, the amplitude and phase of each calibration line is measured in the error signal $d_{\rm err}$ and in the injection channels $x_{\rm pc}$, $x_{\rm T}$, $x_{\rm P}$, $x_{\rm U}$, and $x_{\rm ctrl}$ (O2 only) by demodulating each signal at the appropriate frequency.
Ratios are then taken in order to compare the signal in $d_{\rm err}$ to the expected signal based on the injection channel and the calibration model at that frequency.
The TDCFs are then computed using the methods described in Refs.~\cite{Darkhan, hoft}, and their computed values are averaged over $\sim$2 minutes before being applied as described in Sec.~\ref{sec:filterupdates}.
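As a concrete, deliberately simplified example of how a TDCF follows from a line ratio: if the SRC and high-frequency terms are taken to be negligible at $f^{\rm pc}_2$, the measured complex sensing response there reduces to $V = \kappa_{\rm C} H_{\rm C}/(1 + i f_2/f_{\rm cc})$, from which $\kappa_{\rm C}$ and $f_{\rm cc}$ can be read off. This sketch is a simplification of the full solutions in Refs.~\cite{Darkhan, hoft}:

```python
import numpy as np

def kappa_C_and_fcc(V, f2, H_C=1.0):
    """Recover kappa_C and f_cc from a measured complex sensing response
    V = kappa_C * H_C / (1 + i f2 / f_cc) at the Pcal line f2, assuming
    the SRC and high-frequency terms are negligible there (a simplifying
    assumption; the references give the full computations)."""
    w = 1.0 / V                       # = (1 + i f2/f_cc) / (kappa_C H_C)
    kappa_C = 1.0 / (H_C * w.real)    # real part isolates the gain
    f_cc = f2 * w.real / w.imag       # imaginary part isolates the pole
    return kappa_C, f_cc

# Round trip with known values:
f2, kC_true, fcc_true = 410.3, 1.05, 420.0
V = kC_true / (1.0 + 1j * f2 / fcc_true)
kC, fcc = kappa_C_and_fcc(V, f2)
```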
During O2, the values of $f_{\rm s}$ and $Q$ at H1 estimated using the Pcal line at $f^{\rm pc}_4$ were subject to both large noisy fluctuations and systematic errors, evidenced by the fact that the average value of the quality factor $Q$ estimated using that line was negative.
However, we found that the calibration line at $f^{\rm pc}_1$ measures $f_{\rm s}$ and $Q$ with better precision and produces results that are generally consistent with reference-model measurements.
The improvement in precision is not surprising given the reduced seismic noise at the higher frequency, allowing for a calibration line with a higher signal-to-noise ratio.
The apparent improvement in accuracy may be due to the increased impact of L2A2L cross-coupling at lower frequencies and the failure of the current sensing function model to correctly capture the frequency dependence of the low-frequency sensing function.
For these reasons, calculations of $f_{\rm s}$ and $Q$ were based on the Pcal line at $f^{\rm pc}_1$ during O3.
However, compensation for time dependence in $f_{\rm s}$ and $Q$ in the reconstruction of $h(t)$ was omitted during O3 due to insufficient evidence of improvement in calibration accuracy by including time-dependence in $f_{\rm s}$ and $Q$.
Again, this may have been due in part to the impact of L2A2L cross-coupling, but the approximations used to estimate the actuation TDCFs are also known to cause errors in the estimates of $f_{\rm s}$ and $Q$.
\section{Conclusion}
\label{sec:conclusion}
Temporally varying filters in the sensing and actuation functions have been a source of systematic error in Advanced LIGO's calibrated data when left uncompensated, as shown in Sec.~\ref{sec:calibrationAccuracy}.
In order to correct these systematic errors, we developed adaptive filtering algorithms in the \texttt{gstlal} calibration pipeline.
Compensating for the time dependence of the coupled cavity pole $f_{\rm cc}$, as was done in an offline version of the O2 calibration and throughout O3, removes most of the systematic error above $\sim$100 Hz, with negligible errors remaining at the $f^{\rm pc}_2 \sim 400$~Hz Pcal line.
In order to remove the remaining systematic error at the $f^{\rm pc}_1 \sim 10-40$~Hz Pcal line, the calibration needs to additionally compensate for temporal variations in the low-frequency regime of the sensing function.
The low-frequency variations of the sensing function were not sufficiently well modeled during previous observing runs to compensate appropriately, which left a lingering systematic error in this frequency region.
While compensating for the calibration model parameters $f_{\rm s}$, $Q$, and time delays $\tau_i$ can improve the calibration accuracy exactly at the Pcal lines used to compute these corrections, these compensations did not lead to improved broadband calibration accuracy in most situations, especially during O3 (Fig.~\ref{fig:pcalBroadbandManual}).
Further improvement in the calibration accuracy may be possible by improving the estimates of the TDCFs.
Decreasing the separation of the calibration line frequencies $f_{\rm ctrl}$, $f_{\rm T}$, and $f^{\rm pc}_1$ has been shown to improve the estimates of the TDCFs.
An exact algebraic solution for all the TDCFs, although challenging, is likely the next best improvement to the calibration accuracy related to time dependence of the calibration models~\cite{VietsDissertation}.
We demonstrated that compensating for both TDCF multipliers and filters improves the overall accuracy of the calibrated strain data.
An open question from here is how much this improved accuracy will lead to improved results in astrophysical analyses that flow from the calibrated strain data.
We investigated a narrow version of this question by studying the impact of simulated systematic calibration errors that would be caused by not compensating for different levels of temporal variations in the calibration model, isolating TDCF filters from TDCF multipliers.
For this study, we looked at a simulated BNS signal with two different SNRs.
Systematic calibration errors caused by not compensating for TDCF filters could be impactful when determining the sky localization of louder events.
Additionally, systematic calibration errors caused by not compensating for TDCF multipliers can lead to significant biases in the source parameter estimation, including sky localization.
It is important to note that all released LIGO strain data contains the appropriate compensation for TDCF multipliers and filters, using methods such as those described in this work.
While this work was ongoing, other researchers~\cite{Payne:2020myg, Vitale:2020gvb} developed a more sophisticated framework to build the calibration systematic error and its associated uncertainty into source parameter estimation algorithms directly.
In the future, the type of infrastructure developed by Payne et al. and Vitale et al. can be used to investigate specific calibration error scenarios to help inform future development related to compensating for temporal variations in the calibration models.
From this work alone we can conclude that ongoing compensation for the scalar TDCFs is critical in the final calibrated strain data products.
Additionally, compensation for TDCF filters should be included whenever possible to ensure accurate sky localization results are obtained, especially as louder signals become more commonplace with improving sensitivity of the LIGO detectors.
\section{Compensating for TDCF Filters}
\label{sec:filterupdates}
Historically, the calibration of LIGO strain data has compensated for the TDCF multipliers, $\kappa_{\rm C}$, $\kappa_{\rm T}$, $\kappa_{\rm P}$, $\kappa_{\rm U}$, and $\kappa_{\rm PU}$ (O2 only)~\cite{Darkhan,hoft}.
Here, we outline the methods developed in the calibration procedure to compensate for temporal variations in the interferometer response that require updating time-domain filters, known as TDCF filters, such as temporal variations in $f_{\rm cc}$, $f_{\rm s}$, and $Q$.
The portion of the calibration procedure that applies the techniques described below uses finite impulse response (FIR) filters for all filtering processes, including the application of reference-model-based filters for $A$ and $C^{-1}$.
\subsection{Compensating for Temporal Variations in the Coupled Cavity Pole Frequency}
During O1 and O2, real-valued corrections for $\kappa_{\rm C}$, $\kappa_{\rm T}$, and $\kappa_{\rm PU}$ were applied to $h(t)$ by simply multiplying the components of $\Delta L_{\rm free}$ before summing:
\begin{eqnarray}
h(t) = \frac{1}{L} \Big( \kappa_{\rm T}(t) A_{\rm T} \ast d_{\rm ctrl}(t) &+& \kappa_{\rm PU}(t) A_{\rm PU} \ast d_{\rm ctrl}(t) \\
\nonumber
&+& \frac{1}{\kappa_{\rm C}(t)} C^{-1} \ast d_{\rm err}(t) \Big).
\end{eqnarray}
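A schematic time-domain implementation of this multiplier-only correction, with stand-in FIR filters and an assumed arm length, might look like the following (the actual pipeline's filtering and latency handling differ):

```python
import numpy as np

def reconstruct_h(d_ctrl, d_err, A_T_fir, A_PU_fir, Cinv_fir,
                  kappa_T, kappa_PU, kappa_C, L=3994.5):
    """Apply the O1/O2-style TDCF multipliers while summing the filtered
    control and error signals into strain. The FIR filters stand in for
    the reference-model actuation and inverse-sensing filters; the
    default arm length L (meters) is assumed for illustration."""
    dL = (kappa_T * np.convolve(d_ctrl, A_T_fir, mode="same")
          + kappa_PU * np.convolve(d_ctrl, A_PU_fir, mode="same")
          + np.convolve(d_err, Cinv_fir, mode="same") / kappa_C)
    return dL / L

# Toy check with delta-function filters and unit multipliers:
h = reconstruct_h(np.ones(8), np.ones(8), [1.0], [0.0], [1.0],
                  kappa_T=1.0, kappa_PU=1.0, kappa_C=1.0, L=2.0)
```

Because the multipliers sit outside the convolutions, they can be updated sample-by-sample without regenerating any filters, which is what distinguishes TDCF multipliers from TDCF filters.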
Shortly after O2, an improved calibration was produced that additionally compensated for the time-dependence of the cavity pole frequency $f_{\rm cc}$ by applying and periodically updating a short correction filter just before the inverse sensing filter:
\begin{equation}
\label{eq:O2fccCorrection}
\Delta L_{\rm res}(t) = \frac{1}{\kappa_{\rm C}(t)} C^{-1}_{\rm static} \ast \left( \frac{1 + i f / f_{\rm cc}(t)}{1 + i f / f_{\rm cc}^{\rm static}} \right) \ast d_{\rm err}(t),
\end{equation}
where $f_{\rm cc}^{\rm static}$ refers to the cavity pole frequency of the static reference model for $C^{-1}$, and $f_{\rm cc}(t)$ is the cavity pole frequency as measured by the calibration pipeline using the methods discussed in Sec.~\ref{sec:kappas}.
As can be inferred from Eq.~\eqref{eq:C}, the cavity pole frequency enters the inverse sensing function in the form $1 + i f / f_{\rm cc}(t)$.
The term in parentheses in Eq.~\eqref{eq:O2fccCorrection} is a short FIR filter used to compensate for time dependence in $f_{\rm cc}$; it both divides out the cavity pole component of $C^{-1}_{\rm static}$ and multiplies in the updated cavity pole component of the sensing function.
After measuring $f_{\rm cc}(t)$ and averaging its value over a specified time as described in Ref.~\cite{hoft}, correction FIR filters are created in regularly spaced intervals and smoothly transitioned into application for the $\Delta L_{\rm res}$ calculation by tapering out the previous correction filter and tapering in the current correction filter.
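This update step can be sketched as follows, with a freely chosen filter length, sample rate, and raised-cosine taper (the actual pipeline's choices may differ):

```python
import numpy as np

def fcc_correction_fir(f_cc_now, f_cc_static, n_taps, fs):
    """Build a short FIR filter whose frequency response is
    (1 + i f/f_cc_now) / (1 + i f/f_cc_static), i.e. the correction
    term of Eq. (O2fccCorrection). Length and rate are free choices."""
    freqs = np.fft.rfftfreq(n_taps, d=1.0 / fs)
    resp = (1.0 + 1j * freqs / f_cc_now) / (1.0 + 1j * freqs / f_cc_static)
    # Center the impulse response in time before inverting.
    resp *= np.exp(-2j * np.pi * freqs * (n_taps // 2) / fs)
    return np.fft.irfft(resp, n=n_taps)

def crossfade(old_fir, new_fir, n_steps):
    """Taper out the previous correction filter while tapering in the
    current one, as done at each regularly spaced filter update."""
    w = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_steps) / (n_steps - 1)))
    return [(1.0 - wi) * old_fir + wi * new_fir for wi in w]

# Example: update from the static 411 Hz pole to a measured 420 Hz pole.
fir_new = fcc_correction_fir(420.0, 411.0, n_taps=256, fs=2048.0)
fir_old = fcc_correction_fir(411.0, 411.0, n_taps=256, fs=2048.0)
fades = crossfade(fir_old, fir_new, n_steps=5)
```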
\subsection{Compensating for General TDCF Filters}
Since O2, this method has been further generalized to correct for the time dependence of $f_{\rm s}$ and $Q$ as well.
For this purpose, we developed a filter-generation algorithm in the \texttt{gstlal-calibration} software package~\cite{gstlalcalibration} that takes as inputs an arbitrary number of zeros and poles, a gain, and a phase factor.
The resulting filter is of the form
\begin{equation}
\tilde{\mathcal{F}}_{\rm corr}(f) = \frac{\prod_m \left(1 + i f / z_m\right)}{\prod_n \left(1 + i f / p_n\right)} K e^{2 \pi i f\tau},
\end{equation}
where the $z_m$ are the zero frequencies, the $p_n$ are the pole frequencies, $K$ is the gain of the filter, and $\tau$ is a time advance.
The zeros and poles can be either read in as a time series or passed to the algorithm as constants, which is useful when dividing out zeros and poles from the static reference model.
Additionally, the static reference-model filter or frequency-domain model can be multiplied by the correction filter in the frequency domain, so that the final product is the circular convolution $\mathcal{F} = \mathcal{F}_{\rm corr} \ast \mathcal{F}_{\rm static}$.
This feature allows FIR filters for $C^{-1}$ and $A$ to be replaced, eliminating the need to have two filters in series and allowing for reduced filter latency and higher quality application of frequency-dependent corrections.
The procedure used to update the inverse sensing filter is as follows:
\begin{enumerate}
\item In the frequency domain, compute the correction filter
\begin{eqnarray}
\tilde{C}^{-1}_{\rm corr}(f) &=& \frac{1}{\kappa_{\rm C}(t)} \frac{1 + i f / f_{\rm cc}(t)}{1 + i f / f^{\rm static}_{\rm cc}} \\
\nonumber
&\times& \frac{f^2 + f_{\rm s}^2(t) - i f f_{\rm s}(t) / Q(t)}{f^2 + f^{\rm static,2}_{\rm s} - i f f^{\rm static}_{\rm s} / Q^{\rm static}}.
\end{eqnarray}
In the frequency domain, the number of samples required to produce the correction filter is one more than half the length of the static time-domain filter $C^{-1}_{\rm static}$.
The transformation into the time domain via an inverse discrete Fourier transform (step 5) then produces a filter of the desired length.
\item In the frequency domain, multiply the correction filter by the frequency-domain static model. If this is not provided to the algorithm, it is computed from the static filter using a discrete Fourier transform.\footnote{The static filter contains an added delay which must be removed before the multiplication.}
\item Add a delay to the filter of half the length of the filter to ensure that the resulting time-domain filter is centered in time. Assuming the filter has an even length, this can be done simply by negating every other value in the frequency-domain filter, starting after the DC component and ending before the Nyquist component. This is equivalent to multiplying each frequency-domain value by $e^{-\pi i f \tau_{\rm filt}}$, where $\tau_{\rm filt}$ is the temporal duration of the time-domain filter.
\item Take the inverse real discrete Fourier transform of the frequency-domain filter to produce a time-domain filter equal in length to the static filter $C^{-1}_{\rm static}$. The algorithm used assumes that the output is to be real, and that the input is only the first half of a conjugate-symmetric array, containing the frequency-domain filter only from the DC component to the Nyquist component.
\item Apply a window function to the time-domain FIR filter so that it falls off smoothly at the edges. A Tukey window was used for this purpose during O3. In the future, a Slepian window will be used instead to maximize energy concentration in the main lobe of the filter's frequency response, resulting in a significant improvement in filtering quality.
\item Pass the updated filter to an algorithm which applies FIR filters and smoothly handles filter updates by using half of a Hann window to taper out the old filter and taper in the new filter \cite{Leo}. During a transition of duration $t_{\rm trans}$ beginning at $t_0$, the output is therefore
\begin{eqnarray}
\Delta L_{\rm res}(t) &=& \cos^2 \left( \frac{\pi}{2} \cdot \frac{t - t_0}{t_{\rm trans}} \right) C^{-1}_{\rm old} \ast d_{\rm err}(t) \label{eq:tdwhiten} \\
\nonumber
&+& \sin^2 \left( \frac{\pi}{2} \cdot \frac{t - t_0}{t_{\rm trans}} \right) C^{-1}_{\rm new} \ast d_{\rm err}(t).
\end{eqnarray}
During O3, the transition time $t_{\rm trans}$ was 2 seconds for the inverse sensing filter.
\end{enumerate}
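Steps 1--5 above can be sketched as follows. This is a simplified illustration using \texttt{numpy}, not the actual \texttt{gstlal-calibration} implementation: the function name and signature are invented for this sketch, the static model is passed in the frequency domain with its built-in delay already removed (cf. the footnote in step 2), the filter length is assumed even, and a simple inline Tukey window stands in for the pipeline's windowing.

```python
import numpy as np

def update_inverse_sensing_filter(Cinv_static_fd, N, f_samp,
                                  kappa_C, f_cc, f_cc_0,
                                  f_s, f_s_0, Q, Q_0, alpha=0.5):
    """Sketch of steps 1-5. Cinv_static_fd is the frequency-domain
    static model (N // 2 + 1 samples, built-in delay removed); N is the
    even length of the time-domain filter. Subscript-0 arguments are
    static reference-model values; assumes f_s_0 > 0 so the DC bin
    is finite."""
    f = np.fft.rfftfreq(N, d=1.0 / f_samp)   # N // 2 + 1 samples (step 1)

    # Step 1: frequency-domain correction filter
    corr = (1.0 / kappa_C) * (1.0 + 1j * f / f_cc) / (1.0 + 1j * f / f_cc_0)
    corr *= (f**2 + f_s**2 - 1j * f * f_s / Q) \
          / (f**2 + f_s_0**2 - 1j * f * f_s_0 / Q_0)

    # Step 2: multiply by the frequency-domain static model
    Cinv_fd = corr * Cinv_static_fd

    # Step 3: delay by half the filter length to center the impulse
    # response; for even N this is a sign flip of every other bin
    Cinv_fd[1::2] *= -1.0

    # Step 4: inverse real DFT back to the time domain
    Cinv_td = np.fft.irfft(Cinv_fd, n=N)

    # Step 5: taper the edges with a Tukey window (alpha = taper fraction)
    edge = int(alpha * N / 2)
    w = np.ones(N)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(edge) / edge))
    w[:edge], w[N - edge:] = ramp, ramp[::-1]
    return Cinv_td * w
```

As a quick sanity check, a unit static model with the TDCFs equal to their reference values reduces to a windowed impulse centered at sample $N/2$, confirming the half-length delay introduced by the sign flip in step 3.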
A similar procedure can also be used to apply corrections that include both magnitude and phase to the actuation filters $A_{\rm T}$, $A_{\rm P}$, and $A_{\rm U}$. It is possible to compensate for temporal variations in the variable time advances $\tau_{\rm T}$, $\tau_{\rm P}$, and $\tau_{\rm U}$, which have shown occasional changes, using a linear-phase FIR filter. The time-varying actuation filters are therefore
\begin{equation}
\tilde{A}_i(f) = \kappa_i e^{-2 \pi i f \tau_i} \tilde{A}^{\rm static}_i(f),
\end{equation}
where $i \in$ \{T, P, U\}.
\subsection{Two-Tap Filters for Time-Varying Zeros}
The TDCFs of the inverse sensing function consist of only a gain, $\kappa_{\rm C}$, and three zeros: $f_{\rm cc}$ and the two zeros associated with $f_{\rm s}$ and $Q$.
We developed an alternative method to compensate for time-dependent gains and zeros, since it is possible to model a zero (and a gain) using an FIR filter with only two taps.
For a gain $K$ and a single zero at frequency $f_{\rm z}$, the filter coefficients $a_i$ are
\begin{equation}
\begin{array}{ll}
a_0 &= K \left(\frac{1}{2} + \frac{f_{\rm samp}}{2 \pi f_{\rm z}}\right) \\
a_1 &= K \left(\frac{1}{2} - \frac{f_{\rm samp}}{2 \pi f_{\rm z}}\right) ,
\end{array}
\end{equation}
where $f_{\rm samp}$ is the sampling frequency.
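A quick numerical check (with illustrative values, using $f_{\rm cc}$ as the zero) confirms that these two taps reproduce $K(1 + i f / f_{\rm z})$ apart from a residual half-sample delay, which dominates the remaining error:

```python
import numpy as np

# Illustrative values; f_z plays the role of f_cc at the 16 kHz rate
K, f_z, f_samp = 1.0, 400.0, 16384.0
a0 = K * (0.5 + f_samp / (2.0 * np.pi * f_z))
a1 = K * (0.5 - f_samp / (2.0 * np.pi * f_z))

f = np.linspace(10.0, 1000.0, 200)
# Frequency response of the two-tap FIR filter
H = a0 + a1 * np.exp(-2j * np.pi * f / f_samp)

# Ideal zero, without and with a half-sample-period delay
ideal = K * (1.0 + 1j * f / f_z)
delayed = ideal * np.exp(-1j * np.pi * f / f_samp)

# The two-tap filter tracks the delayed ideal response to within ~1%
# below 1 kHz, so the residual error is dominated by the delay
err_delayed = np.max(np.abs(H / delayed - 1.0))
err_plain = np.max(np.abs(H / ideal - 1.0))
assert err_delayed < 0.02
assert err_delayed < err_plain
```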
A time-varying filter modeling the impact of $\kappa_{\rm C}$ and $f_{\rm cc}$ can be applied in series with a static filter associated with the time-independent portion of the inverse sensing function.
The primary advantage of using this method is a significant reduction in computational cost, as the adaptive filtering used to implement Eq.~\eqref{eq:tdwhiten} is the most computationally expensive single-threaded process in the calibration procedure.
Replacing this $\sim$10,000-tap adaptive filter, which is applied at a sample rate of 16 kHz, with a static filter and a very short adaptive filter reduces computational time by almost 50\%.
For this reason, it is likely that this method will be implemented during Advanced LIGO's fourth observing run.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=\columnwidth]{figures/two_tap_errors.pdf}
\end{center}
\caption{Errors induced in the inverse sensing function by using a two-tap filter to compensate for temporal variations in $f_{\rm cc}$, assuming a static error compensation with a value of $f_{\rm cc} = 400$~Hz has already been applied to the inverse sensing function. The errors shown for different values of $f_{\rm cc}$ therefore show the remaining error caused by the fact that $f_{\rm cc}$ does not remain statically at a value of 400~Hz. Fluctuations in $f_{\rm cc}$ larger than 20~Hz are uncommon, so these results show the maximal errors that would be induced even in this extreme situation, which remain below 0.04\% in magnitude and 0.2 degrees in phase.
\label{fig:twoTap}}
\end{figure}
There is a small error introduced in magnitude and phase when using the two-tap filter compared to the full adaptive filter.
The error is mostly due to an uncompensated time-delay equal to half of a sample period.
This can be corrected by compensating for the error in the static inverse sensing filter.
To compute the correction, we compute a static inverse sensing filter based on the full static inverse sensing function model, and then divide out the exact frequency response of a two-tap filter corresponding to the static reference model value of $f_{\rm cc}$.
The static filter exactly compensates for the error of the two-tap filter if $f_{\rm cc}$ is equal to its nominal value.
Although this error changes slightly as $f_{\rm cc}$ changes, the magnitude of this change is so small that compensating for the error as though it were static results in negligible errors in the inverse sensing function, as shown in Fig.~\ref{fig:twoTap}.
Compensation for additional zeros using this method can be achieved simply by convolving additional two-tap filters to produce a longer filter.
Therefore, compensation for the gain and three time-dependent zeros of the inverse sensing function requires a four-tap filter.
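This length arithmetic, and the fact that the combined response is the product of the individual responses, can be illustrated as follows (the zero frequencies are real-valued placeholders chosen only for the sketch; the actual zeros associated with $f_{\rm s}$ and $Q$ form a complex pair):

```python
import numpy as np

def two_tap(K, f_z, f_samp):
    # Two-tap FIR filter modeling a gain K and a zero at f_z (as above)
    return np.array([K * (0.5 + f_samp / (2.0 * np.pi * f_z)),
                     K * (0.5 - f_samp / (2.0 * np.pi * f_z))])

f_samp = 16384.0
filters = [two_tap(1.02, 410.0, f_samp),  # kappa_C and f_cc (placeholders)
           two_tap(1.0, 5.0, f_samp),     # placeholder second zero
           two_tap(1.0, 9.0, f_samp)]     # placeholder third zero

# Convolving the three two-tap filters yields a single four-tap filter
h = filters[0]
for taps in filters[1:]:
    h = np.convolve(h, taps)
assert len(h) == 4

# Convolution theorem: the combined response is the product of responses
f0 = 123.0
resp = lambda taps: sum(c * np.exp(-2j * np.pi * f0 * n / f_samp)
                        for n, c in enumerate(taps))
prod = 1.0
for taps in filters:
    prod = prod * resp(taps)
assert np.isclose(resp(h), prod)
```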
\section{Introduction}
\label{sec:intro}
The Advanced Laser Interferometer Gravitational-wave Observatory (Advanced LIGO) \cite{TheLIGOScientific:2014jea} is a network of two ground-based gravitational-wave detectors, located in Hanford, WA (H1), and in Livingston, LA (L1).
To date, Advanced LIGO has completed three observing runs (O1, O2, and O3) and, together with the Advanced Virgo detector \cite{VIRGO:2014yos}, made numerous detections of gravitational waves (GWs) originating from transient astrophysical sources \cite{LIGOScientific:2018mvr, PhysRevX.11.021053,LIGOScientific:2021usb,LIGOScientific:2021djp}.
The first step in analyzing the data from these ground-based gravitational-wave interferometers is the reconstruction of the interferometer strain time series.
This process, known as calibration, involves developing physically motivated models of the interferometer optics and feedback systems in order to compute the incident detector strain from the interferometer's digital feedback loops.
These developed models, referred to below as the calibration models, involve parameters that vary in time in ways that reflect the evolution of physical systems within the interferometer.
Historically, the Advanced LIGO calibration process has compensated for temporal variations in the calibration model that can be applied as multiplicative correction factors, which we will refer to as time-dependent correction factor (TDCF) multipliers~\cite{Darkhan, hoft}, but it has left uncompensated those parametric updates that require the generation of new calibration filters, which we will refer to as TDCF filters.
In this work, we describe updated methodology, used during Advanced LIGO's third observing run (O3), to additionally compensate for TDCF filters as part of the calibration procedure.
Each Advanced LIGO detector consists of two orthogonal 4-km arms called the X arm and the Y arm.
Near-infrared (1064-nm) laser light is passed through a beamsplitter into each arm, before being reflected back to the beamsplitter by mirrors on the end test masses (ETMs).
In an unperturbed state, the length of each arm is held such that destructive interference will prevent light from exiting the detector at the GW readout port (see Fig.~\ref{fig:aLIGO}).
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{figures/aLIGO.pdf}
\caption{Simplified diagram of an Advanced LIGO detector. Laser light is sent through a power recycling mirror (PRM) and is split by a beamsplitter (BS) to enter a pair of Fabry-P\'erot cavities. Light is held inside the cavities by mirrors on the input test masses (ITMX and ITMY) and the end test masses (ETMX and ETMY). After exiting the cavities, the light recombines nearly 180$^{\circ}$ out of phase at the beamsplitter. Light that passes through the signal recycling mirror (SRM) is sent to a photodetector (PD). One of the dual-chain quadruple pendulum suspension systems with actuators is shown on the right. Differential arm motion is actively suppressed in the lowest three stages, called the test mass (T) stage, the penultimate (P) stage, and the upper-intermediate (U) stage.
\label{fig:aLIGO}}
\end{figure}
Gravitational waves incident on the LIGO interferometers induce changes in the differential arm length (DARM) degree of freedom in the detectors:
\begin{equation}
\Delta L_{\rm free}(t) = \Delta L_{\rm X}(t) - \Delta L_{\rm Y}(t).
\end{equation}
Changes in DARM cause fluctuations in the intensity of the laser light at the GW readout port, which is recorded as a 16384~Hz digital error signal $d_{\rm err}$ in arbitrary units called counts.
In order to improve sensitivity by increasing the power stored in the arms, additional mirrors are placed in the arms near the beamsplitter, forming a 4-km Fabry-P\'erot cavity in each arm.
A power-recycling mirror is also included before the beamsplitter to increase the power entering the detector, further improving the sensitivity.
Lastly, in order to enhance sensitivity in LIGO's most sensitive frequency band [20 Hz, 2 kHz], a signal-recycling mirror is placed just before the GW readout port.
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{figures/DARM_loop_verbose.pdf}
\caption{Block diagram of the Advanced LIGO differential arm (DARM) length feedback control loop during the third observing run (O3).
The sensing function $C$ represents the detector's response to residual DARM motion in the test masses.
The digital filter $D$ converts the error signal $d_{\rm err}$ to the control signal $d_{\rm ctrl}$.
The actuation function $A$ is split into three stages, represented by $A_{\rm U}$, $A_{\rm P}$, and $A_{\rm T}$, corresponding to the three suspension stages at which DARM motion is actively suppressed.
External DARM motion $\Delta L_{\rm free}$ and the controlled length differential $\Delta L_{\rm ctrl}$ enter the diagram in the upper-left corner to produce the residual DARM motion $\Delta L_{\rm res}$ sensed by the detector.
Known excitations are intentionally injected into the DARM loop using both the actuation system (through the injections $x_{\rm T}$, $x_{\rm P}$, and $x_{\rm U}$) and a radiation pressure actuator known as a photon calibrator ($x_{\rm pc}$). The transfer functions $F_{\rm T}$, $F_{\rm P}$, and $F_{\rm U}$ are filter functions that occur before the injections are added. During Advanced LIGO's second observing run (O2), the injection $x_{\rm ctrl}$ was also added to $d_{\rm ctrl}$. \label{fig:DARM_loop}}
\end{figure*}
Despite the use of seismic isolation and the quadruple suspension systems to attenuate excess noise at low frequencies, the detectors cannot achieve a resonant low-noise state without additional mitigation of noise.
Therefore, the error signal is filtered with a digital filter $D$ to produce a control signal $d_{\rm ctrl} = D \ast d_{\rm err}$, where $\ast$ denotes a time-domain convolution, equivalent to a frequency-domain multiplication.
The control signal is fed into a set of actuators that form a quadruple pendulum similar to that of the test masses, separated by 0.5 cm from the test masses (Fig.~\ref{fig:aLIGO}) \cite{Robertson:2002jf, Aston:2012ona}.
Although the actuators on both the X and Y arms are used to control the common arm length degree of freedom, the DARM feedback is sent to only one set of actuators.
The digital control signal $d_{\rm ctrl}$ is related to the analog controlled length differential $\Delta L_{\rm ctrl}$ through the actuation function $A$: $\Delta L_{\rm ctrl} = A \ast d_{\rm ctrl}$.
The actuation system removes the controlled length differential $\Delta L_{\rm ctrl}$ from $\Delta L_{\rm free}$ to form the residual length differential sensed by the detector,
\begin{equation}
\Delta L_{\rm res} = \Delta L_{\rm free} - \Delta L_{\rm ctrl} \ .
\end{equation}
The digital error signal $d_{\rm err}$ is related to the residual length differential by a sensing function $C$ defined by $d_{\rm err} = C \ast \Delta L_{\rm res}$.
This feedback loop, known as the DARM loop, is diagrammed in Fig.~\ref{fig:DARM_loop}.
The DARM loop can be used to solve for $\Delta L_{\rm free}$, the result being
\begin{equation}\label{eq:DeltaL}
\Delta L_{\rm free} = C^{-1} \ast d_{\rm err} + A \ast d_{\rm ctrl},
\end{equation}
or expressed alternatively,
\begin{equation}\label{eq:DeltaLwithR}
\Delta L_{\rm free} = R \ast d_{\rm err},
\end{equation}
where $\tilde{R}(f) = [1 + \tilde{A}(f) \tilde{D}(f) \tilde{C}(f)] / \tilde{C}(f)$ is the response function of the interferometer, represented here in the frequency domain for simplicity.
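This form of $\tilde{R}(f)$ follows directly from the loop relations: substituting $\tilde{d}_{\rm ctrl} = \tilde{D}\,\tilde{d}_{\rm err}$ into Eq.~\eqref{eq:DeltaL} in the frequency domain gives
\begin{equation*}
\Delta \tilde{L}_{\rm free} = \left[ \tilde{C}^{-1} + \tilde{A}\tilde{D} \right] \tilde{d}_{\rm err} = \frac{1 + \tilde{A}\tilde{D}\tilde{C}}{\tilde{C}}\, \tilde{d}_{\rm err} = \tilde{R}\, \tilde{d}_{\rm err}.
\end{equation*}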
The final product of Advanced LIGO's calibration process is the dimensionless strain,
\begin{equation}
h(t) = \frac{\Delta L_{\rm free}(t)}{L},
\end{equation}
where $L = (L_{\rm X} + L_{\rm Y}) / 2$ is the average arm length.
Static reference models for $A$ and $C$ are produced in the frequency domain at the beginning of observing runs~\cite{calcompanion}, and periodically throughout observing runs, to check calibration accuracy and inform a calibration uncertainty estimate, described in other works~\cite{Sun:2021qcg,Sun:2020wke,Craig}.
These models are used to produce time-domain filters for $A$ and $C^{-1}$ which are then implemented into LIGO's calibration pipelines~\cite{hoft}.
For more detailed information on the methods used to compute the calibrated strain $h(t)$, see Refs.~\cite{hoft, VietsDissertation}.
Measurements used to determine the parameters in the reference models for the actuation and sensing functions (see Sec.~\ref{sec:CalibrationModels}) are taken using an auxiliary laser, known as a photon calibrator (Pcal) \cite{pcal}.
There are two Pcals, one located at each ETM.
Radiation pressure exerted by a Pcal on an ETM produces a fiducial displacement that acts as the primary reference for absolute displacement calibration.
The overall 1-$\sigma$ uncertainty in the displacements induced by the Pcals is 0.41\% during O3 \cite{O3pcal}.
Swept-sine injections, which are sinusoidal excitations whose frequency is swept across the band at set intervals, are made using a Pcal and the actuation system, and are used to find best-fit values for the parameters of $A$ and $C$.
In addition, sinusoidal excitations at select frequencies, called calibration lines, are continuously injected using a Pcal and the actuation system, in order to track parameters that are known to show slow temporal variations \cite{Darkhan}.
Occasionally, broadband injections, which are excitations that span a broad range of frequencies, are also made using a Pcal to check calibration accuracy across all frequencies.
Broadband injections and swept-sine injections only occur when the interferometer is not being actively used for astrophysical observation.
The calibration lines are run constantly throughout the observing run in order to track calibration parameters throughout data being used for astrophysical observation.
The rest of this paper is structured as follows. In Sec.~\ref{sec:CalibrationModels} we give a brief introduction to the fundamental models used in the calibration procedure.
In Sec.~\ref{sec:kappas}, we outline the methods used to compute time-dependent correction factors (TDCFs) for the calibration models.
Sec.~\ref{sec:filterupdates} describes the methods developed for computing and applying TDCF filters.
Sec.~\ref{sec:calibrationAccuracy} discusses the impact of applying TDCF filters on calibration accuracy.
Sec.~\ref{sec:astrophysics} discusses the impact of some example systematic calibration errors arising from not compensating for temporal variations in the calibration models on an astrophysical analysis.
Finally, we conclude in Sec.~\ref{sec:conclusion}.
\section{Time-dependence of Calibration Model Time Delay Parameters}
\label{app:tau}
As mentioned in Secs.~\ref{sec:CalibrationModels} and~\ref{sec:calibrationAccuracy}, compensation for time dependence in the $\tau_i$ does not always lead to improvements in the calibration accuracy.
The calculation of $\tau_i$ can include systematic errors that result in declining accuracy of the calibration when these corrections are applied.
Estimates of the $\tau_i$ are known to deviate from zero for several reasons:
\begin{itemize}
\item {\it Variable computational time delays in the actuation function.} Computational delays in the digital portion of $A$ appear to shift on occasion by multiples of the 65-kHz sampling intervals at which operations are done by the computers that run the interferometer control models. Such shifts manifest themselves as sudden changes in the $\tau_i$, such as those seen in Fig.~\ref{fig:actTiming}.
\item {\it Breakdown of approximations used to estimate the actuation TDCFs.}
The accuracy of these approximations decreases with increasing separation of the calibration line frequencies used to measure the actuation function (see Table~\ref{tab:callines}), and with increasing deviation of the true response function from the nominal response function of the static reference model at those frequencies. The true response function may deviate from that of the reference model due to changes in the TDCFs or other systematic errors present in the reference model, whether time-dependent or not. A study reported in \cite{VietsDissertation} showed that systematic errors as large as \SI{40}{\micro\second} could be caused by variations in a single TDCF that are typical during observation. The other TDCFs also suffered from similar effects. Additionally, the lack of a complete model for the low-frequency sensing function at H1 can contribute to these systematic errors.
An exact solution for the TDCFs developed in \cite{VietsDissertation} appears to mitigate these problems, allowing for better calibration accuracy achieved by compensating for time dependence in all the TDCFs. This method may be used during Advanced LIGO's next observing run, O4.
\item {\it Systematic errors in the static reference model.} In addition to causing the breakdown of approximations noted above, systematic errors in the static reference model can also directly cause erroneous deviations from zero in the $\tau_i$. This is because the parameters of the reference models are chosen based on all the data collected in the measurement process to produce a model that best matches the data at all frequencies. Thus, by design, the model may not exactly match the measurements at the calibration line frequencies. Such errors are generally expected to be small, except in cases where the model does not fit the measurements well, such as can occur at H1 given the incomplete low-frequency sensing function model.
\item {\it Other unknown changes in the frequency dependence of the actuation function.} It is quite probable that any other changes to the frequency dependence of $A$ are less impactful than the aforementioned effects. Compensating for such changes would require a new reference model, since we cannot correct for time dependence that has unknown frequency dependence. Measurements are taken periodically during observing runs to ensure that deviations from the current model are sufficiently small.
\end{itemize}
People often become involved in discussions about
issues that do not arise directly from their own daily
experience but rather concern
the social group to which they belong, such as discussions
about their country's macroeconomy, regional elections, etc.
Due to the complexity and variety of those issues, and
in many cases their remoteness from the situation,
people resort to the Mass Media in order to get informed
about them and to learn the opinion
of specialists on these topics.
People become interested in these issues because
the Media is supposed to reflect the interests and
concerns of their social environment.\par
Following Giddens (\cite{Giddens}), we find several
theoretical approaches to the role of a Mass Media
in the field of sociology. A Media seen as
social stabilizer, which keeps and reflects the dominant
culture, is the basis of the functionalism theory.
As Giddens says, several reasons lead sociologists to
move away from this approach: One of them
is that the functions mentioned above appear wholly
positive. In contrast to functionalism, the conflict theory
sees the Media as a less benign force within society:
It is a powerful agent whose ideology
justifies or legitimizes the interests of the
owner group of the Media.
The ideology of a Media can be explicit, as
for instance, in the editorial line of many newspapers,
but in most cases it's implicit in the TV time
or newspaper's pages that the Media spends to discuss
a particular issue.
The imposition of a topic on public opinion is what
is called ``setting the agenda'', widely analyzed
by McCombs (\cite{Mccombs}, \cite{Mccombs1972}). As it can be
read in \cite{Mccombs1972},
\emph{``(the press) may not be successful much of the time
in telling people what to think, but it is stunningly
successful in telling its readers what to think about''}.
However, during the coverage of a given issue,
the Media can suggest its point of view to the audience.\par
We analyze this idea in an agent-based model of cultural
dissemination (the Axelrod's model \cite{Axelrod}, see section \ref{sec:TheModel}),
where each individual is characterized by a set of
features representing
its cultural profile, who interact proportionally to their
degree of similarity
(Homophily).
Specifically, in this work, we analyze the case where a Mass Media
has a given purpose:
It is interested in ``setting the agenda'', i.e.,
making the largest number of agents discuss a given topic,
such as a particular policy issue, and imposing its point of view.
To pursue this goal, in our model
the {\it MM} is able to modify the topic of discussion in
each feature following different
strategies. This acts as a
feedback mechanism in order to be more appealing
to the majority of the agents and increase
the probability of interaction
with them, in line with the findings reported in \cite{Wood},
where individuals sharing common attributes tend to
be more similar.\par
In this work, we interpret each agent's value of
a given feature
as the main interest in this particular topic, as for instance,
its favorite sport or its opinion about a policy issue.
The Axelrod's model is very well suited to study
the influence
of a {\it MM} over a given population because each feature
could be naturally
interpreted as a section of a given newspaper. For instance,
the New York Times
presents the following sections: World, U.S., Politics, N.Y., Business,
Opinion,
Technology, Sports, Health, Science, Arts, Fashion and Style, and
Food.\par
\subsection{Previous works}
Previous works on this topic basically follow two approaches:
a fixed Mass Media, whose cultural state is constant in
time and represents a Media who
has no feedback with the population,
and a fully adaptive Mass Media,
which varies its cultural state adopting the
most popular trait in each feature.\par
From a social point of view, a constant Mass Media
represents a Media that imposes the topic of discussion in all features
regardless of the society it addresses.
From the physical point of view, it acts as an external constant
vector field that drives the states of the agents.
\cite{GonzalezAvella2005}
and \cite{Mazzitello2007}. In the first one, the authors
studied the combined dependence of the stationary states
with the number
of traits per feature ($Q$) and the probability of interaction
with the {\it MM} ($B$).
They counter-intuitively found that the
Mass Media induces cultural diversity when the interaction
parameter $B$ is above a certain threshold.
In the second work,
the combined effects of a fixed {\it MM} and a cultural drift
(modeled as random perturbations) was analyzed.
They also
included an extra feature which makes the interaction
between the {\it MM} and the agents always possible.
An interesting twist was followed in \cite{ARodriguez2009}
where the Mass Media is characterized by two parameters:
a non-null overlap with all agents and a confidence value
of its information. The first parameter is related to the
concept of ``propaganda", by which the {\it MM} can
interact with all agents, included those cases where there
is no cultural similarity. The second
parameter is intended to model the level of credibility
of a $MM$ which, according to the authors,
is directly related to its level of influence.
A similar approach was followed in \cite{ARodriguez2010},
where the authors incorporate the influence of the
Mass Media as a non-pairwise interactions among agents,
following the proposal of \cite{FlacheMacy2011}
for the Axelrod's Model.\par
The other approach includes
feedback processes between the Mass Media
and the social community.
In all the cases, the Media adopts the
most popular point of view in each
feature. From a social point of view,
this modeling approach is
closer to the functionalism theory
described in \cite{Giddens}, where
the Media is supposed to reflect the dominant
culture of a society.
On the other hand, from the physical point of view,
the Media only catalyzes the dynamics toward
consensus of the population, i.e., the Media
doesn't induce any particular state.
This problem was initially faced in \cite{Shibanai2001}
following an
Axelrod's model, where two different
variants were proposed: A global field
where each feature of the {\it MM} adopted
the most popular point
of view of the population and a filter model, where
the feedback
is indirectly modeled in the interaction between agents.
In \cite{GonzalezAvella2006}, the authors proposed three
different
ways in which the Mass Media could be modeled: as an
external field (as a fixed Mass Media),
as a global field (where the {\it MM} adopts the most
popular point
of view of each feature for all of them, making it time dependent
but uniformly distributed in space) and as a local field
(where the field adopts the most popular point of
view among
an agent's neighbors, i.e., it is non-constant in space
and time).
In \cite{GonzalezAvella2007}, the authors also systematically
investigated the indirect feedback mechanism as proposed
in \cite{Shibanai2001}. It is important to remark that,
in all cases, the feedback between the {\it MM}
and the population was present in all the features.\par
Many other studies have been made in the
context of the Axelrod's model.
The role of the social contact network in the dynamics
with a fixed {\it MM} was also
investigated in \cite{Candia2008}, where the effects
of intra- and inter-community links of a social network with
community structure were analyzed.
In \cite{LatoraMoreno2010}, the microscopic
dynamics toward equilibrium was analyzed
when the underlying network is scale-free
in its degree distribution.
In the same modeling scenario, a model of
cross-cultural interaction through Mass Media
interactions was investigated in \cite{GonzalezAvella2012},
where two (fully adaptive) Mass Media act over two different
interconnected populations. In this model,
one of the Mass Media reflects the dominant
cultural state of a given population and
influence the other one.
The study of social interactions and the presence
of a Mass Media was also explored in the context
of other models, as for instance,
the Deffuant's model (\cite{Pineda2015}),
the voter model (\cite{Masuda2015}),
and the Sznadj's model
(\cite{Zhao2015},\cite{Crokidakis2012}).\par
\subsection{Our contribution}
As was mentioned above, we consider the Axelrod's
model (see section \ref{sec:TheModel}) as the
best candidate to study the social influence of a Mass Media,
because of the natural interpretation of Media's cultural state
as the sections of a given newspaper.
We also mentioned that the previous works
follow basically two approaches: a fixed
and a fully-adaptive Mass Media.
While a fixed Mass Media is an
oversimplification of its actual role in a society,
mainly because of the absence of a feedback mechanism
between the Media and the population,
a fully adaptive implementation suits
the functionalism theory of Media influence very well;
but, as Giddens says (\cite{Giddens}), this theory has fallen
into decline in recent decades, because
it presents the Media in a very naive way, as
an external agent without ideology or purposes, which
only reflects the dominant culture.
In this work, we model the Mass Media as an external agent
with some features fixed and the rest adaptive.
Despite the apparently small difference
between our model and the approaches mentioned
above, we consider that our interpretation and representation
of the Media fits better within the conflict
theory of Media influence (\cite{Giddens}) and within
the works of McCombs (\cite{Mccombs}, \cite{Mccombs1972}).
Here, the Media influences
the population with a given purpose:
To put a special topic up for discussion
by public opinion, i.e., to set the ``agenda''
on a particular feature, and impose its point of view.
From now on we will refer to this peculiar
value of the selected feature as the Mass Media's topic
({\it MMT}).
Simultaneously, it will try to adapt the rest
of its features in order to attract a great number
of consumers.
We will explore two different strategies in order to do that:
A conservative one, where
it looks for increasing the number of followers,
from a well established group of them, and an aggressive one,
in which the Mass Media targets all those individuals which
have not attached yet.
From now we call {\it Followers} to those agents
who adopt the {\it MMT}.
We will explore the different collective dynamics which emerge
with these strategies, and compare the results with the
case where the {\it MM} does not follow any strategy, i.e.,
it is constant in time.\par
The work is organized as follows:
In section \ref{sec:TheModel} we describe the model
that we implemented for our numerical simulations,
describing the different strategies
that the Mass Media can adopt,
and the definition of the observables analyzed.
In section \ref{sec:Results}
we show the main results concerning both the
equilibrium properties and the dynamics
towards equilibrium for the different
Mass Media strategies.
In particular, we will be interested in
the total number of followers as a
function of time and their self-similarity and
similarity with the Mass Media.
In section \ref{sec:Conclusions} we present the
conclusions of the work.\par
\section{The Model} \label{sec:TheModel}
In this work there are two main actors, both
described within the Axelrod model: on the one hand
we have a population of agents which interact
among themselves and, on the other, the Mass Media,
which interacts with all the members of the
population.
\subsection{The Axelrod Model}
The Axelrod model \cite{Axelrod}
is an agent-based model
which assumes that the cultural state of
each individual can be described in
terms of a set of attributes such as
political orientation, religion,
sports preferences, etc.
The interaction mechanism between
agents is pairwise and
rests on two fundamental hypotheses:
\begin{itemize}
\item Homophily: the probability
of interaction between two
individuals is proportional
to their cultural similarity.
\item Social Influence: after each interaction,
the agents become more similar (see section \ref{sec:Dynamics}).
\end{itemize}
The success of the original model is due
to the emergence of a non-intuitive
stationary collective behavior: a
transition between a monocultural global state,
in which all the agents are identical,
and a state of cultural diversity,
characterized by the coexistence of
regions with different cultural states.\par
\subsubsection{The Population}
We implement the Axelrod model
with $N$ agents placed at the nodes of a
two-dimensional grid
with rigid walls, i.e.,
the system is finite.
Following \cite{Axelrod},
the cultural state of each agent
can be represented by a vector
$v=(v_1,v_2,....,v_F)$ where $F$
stands for the number of features.
Each component $v_i$ is a
nominal variable corresponding
to a certain cultural feature
and can adopt $Q$ different values,
representing
the different traits in a specific feature.
We interpret the value of
a given feature
as the main interest in this particular topic;
this interpretation is
analogous to the one we give to the Mass Media's
state, as we describe below.
\subsubsection{The Mass Media}
The Mass Media is modeled as an
external agent,
with the same number of features
and traits as the agents,
which, in principle,
can interact with all of them with probability $B$.
In this work, the Mass Media's state
represents the sections of a newspaper,
and each feature's value, the main theme
covered in each section.
The {\it MM}'s state
has a fixed value in the first component, i.e.,
$v_1^{MM}=1$, which represents the {\it MMT}
defined above.
The other features
fluctuate in time according
to different strategies,
as we detail below. In what follows,
we will call {\it Followers} those agents
in the population who share the {\it MMT},
i.e., agents with $v_1 = 1$,
independently of the other
features' values.
On the other hand, the {\it Non-Followers}
are those agents with $v_1 \neq 1$.
In order to increase the interaction
probability with the majority of
the population and potentially
increase the amount of {\it Followers},
the Mass Media can change the other
features according to one of
the following strategies:
\begin{itemize}
\item The Followers Strategy ({\it FS}):
In the non-fixed components of its cultural vector
($v_2-v_F$), the Mass Media adopts,
at each time step, the most abundant
value among those agents who share
the {\it MMT}, i.e.,
{\it Followers} agents.
This is a conservative strategy and
its main goal is to increase
the number of {\it Followers}
starting from a well-consolidated group.
\item The Non-Followers Strategy ({\it NFS}):
In the non-fixed components,
the Mass Media adopts the most
abundant value among those agents
who don't share the
{\it MMT},
i.e., {\it Non-Followers} agents,
in order to maximize the probability of
interaction with them and convince
them rapidly. In contrast to the {\it FS},
this is an aggressive or conqueror strategy.
\end{itemize}
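As an illustration of the two update rules above, the following is a minimal sketch in Python; it is our own illustration, not code from the original study, and the array layout (one row per agent, one column per feature) and function names are assumptions:

```python
import numpy as np

def update_mass_media(pop, mm, mmt=1, strategy="NFS"):
    """Update the non-fixed features of the Mass Media vector.

    pop      : (N, F) integer array of the agents' cultural vectors.
    mm       : length-F Mass Media vector; mm[0] is the fixed MMT.
    strategy : "FS" adapts to the Followers, "NFS" to the Non-Followers.
    Returns a new Mass Media vector (the first component never changes).
    """
    followers = pop[:, 0] == mmt
    group = pop[followers] if strategy == "FS" else pop[~followers]
    new_mm = mm.copy()
    if group.shape[0] == 0:           # nobody left in the target group
        return new_mm
    for k in range(1, pop.shape[1]):  # features 2..F are adaptive
        values, counts = np.unique(group[:, k], return_counts=True)
        new_mm[k] = values[np.argmax(counts)]  # most abundant trait
    return new_mm
```

For instance, with a population where the Followers mostly hold trait 2 in the second feature while the only Non-Follower holds trait 5, the {\it FS} would set that feature to 2 and the {\it NFS} to 5.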
In all cases, we compare our results
with the case of a Fixed Mass Media
({\it FMM}), where all the features of
the Mass Media remain constant in time,
as analyzed
in \cite{GonzalezAvella2005}. \par
\subsubsection{Dynamics} \label{sec:Dynamics}
The dynamics of the model is as follows:
\begin{itemize}
\item Select one element $i$ from the lattice.
\item Select a partner $j$: with probability $B$,
$j$ is the Mass Media ($j=MM$),
and with probability $(1-B)$, $j$
is one of the nearest neighbors of $i$,
selected at random.
\item The probability of interaction between
agents $i$ and $j$, $P_{i,j}$,
is given by the fraction of shared features,
$P_{i,j}=\frac{1}{F}\sum_{k=1}^F \delta_{v_{k}^{i},v_{k}^{j}}$.
We will refer to this probability of
interaction as the {\it overlap}
between agents $i$ and $j$.
\item If $P_{i,j} \ne 0$ and $P_{i,j} \ne 1$,
then agent $i$ picks a feature
$v_{k}^{i}$ at random and adopts the corresponding
trait of agent $j$, $v_{k}^{j}$
(the change is not applied immediately; see the next step).
\item We repeat this task for all
the agents in the system,
updating the changes synchronously.
This is what we call a time step.
\item After a time step, the Mass Media's state
is updated according to the current strategy.
\end{itemize}
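The update loop above can be sketched as follows; this is a hedged illustration in which the lattice indexing (row-major $L\times L$ grid) and the random-number handling are our own choices, not taken from the original implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def time_step(pop, mm, L, B):
    """One synchronous time step (pop is an L*L x F integer array).

    Each agent i is paired with the Mass Media with probability B,
    otherwise with a random nearest neighbour on the rigid-wall grid.
    The interaction occurs with probability equal to the overlap;
    adopted traits are written to a copy and applied at the end,
    i.e., the update is synchronous, as described in the text.
    """
    new_pop = pop.copy()
    F = pop.shape[1]
    for i in range(pop.shape[0]):
        if rng.random() < B:
            partner = mm
        else:
            x, y = divmod(i, L)
            nbrs = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
            nbrs = [(a, b) for a, b in nbrs if 0 <= a < L and 0 <= b < L]
            a, b = nbrs[rng.integers(len(nbrs))]
            partner = pop[a * L + b]
        overlap = np.mean(pop[i] == partner)  # fraction of shared features
        if 0 < overlap < 1 and rng.random() < overlap:
            k = rng.integers(F)               # random feature, as in the text
            new_pop[i, k] = partner[k]
    return new_pop
```

After such a step, the Mass Media's state would be updated with the strategy in force (e.g., the `update_mass_media` sketch above, if both sketches are used together).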
\subsection{Observables}
In order to study the behavior of the system
according to the different strategies
quoted above,
we define the following observables:
\begin{itemize}
\item \underline{Fraction of {\it Followers} ($F/N$)}:
the fraction of agents who
share the first feature's value
with the Mass Media. The fraction of
{\it Followers} is the main observable
used to evaluate
the effectiveness of each strategy.
Note that this quantity
can only be defined in this modeling approach
(i.e., when exactly one feature is fixed).
\item \underline{The normalized size
of the biggest fragment ($S_{max}/N$)}:
the relative size of the largest group of
connected agents who share
the first feature's value. $S_{max}/N$
is a standard quantity for studying
collective properties in the Axelrod model.
The two classical stationary solutions,
consensus and cultural diversity,
can be easily identified by studying
the behavior of $S_{max}/N$
as a function of the system's parameters.
\end{itemize}
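Both observables can be computed directly from the lattice. A possible sketch, assuming a flattened row-major $L\times L$ array of cultural vectors (the function names are our own):

```python
import numpy as np
from collections import deque

def fraction_of_followers(pop, mmt=1):
    """F/N: fraction of agents sharing the MMT (first feature)."""
    return np.mean(pop[:, 0] == mmt)

def largest_fragment(pop, L):
    """S_max/N: relative size of the largest 4-connected cluster of
    agents sharing the first feature's value."""
    first = pop[:, 0].reshape(L, L)
    seen = np.zeros((L, L), dtype=bool)
    best = 0
    for sx in range(L):
        for sy in range(L):
            if seen[sx, sy]:
                continue
            # breadth-first search over the cluster containing (sx, sy)
            size, queue = 0, deque([(sx, sy)])
            seen[sx, sy] = True
            while queue:
                x, y = queue.popleft()
                size += 1
                for a, b in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                    if (0 <= a < L and 0 <= b < L and not seen[a, b]
                            and first[a, b] == first[sx, sy]):
                        seen[a, b] = True
                        queue.append((a, b))
            best = max(best, size)
    return best / (L * L)
```

On a $2\times 2$ grid with first-feature values $[[1,1],[0,1]]$, for example, both observables equal $0.75$.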
It is important to remark
that an agent being a {\it Follower}
only implies that it shares
the first feature's value
with the Media, independently
of the others. We
interpret this as the agent adopting the {\it MMT},
while not necessarily being interested
in the other Media sections.
It is also important to stress at this point
that, given an ensemble of realizations,
the values of the features other than the one
corresponding to the {\it MMT} will be
homogeneously distributed over
the ensemble. In contrast,
depending on the values
of $B$ and $Q$, the feature corresponding
to the {\it MMT} will end up attaining
the value ``pushed"
by the Media.\par
In order to have a map of all
stationary solutions of the model,
we will plot a Mass Phase Diagram
({\it MPD}), where we calculate $F/N$
as a function of $B$ and $Q$.
With this, we can explore how
effective the Mass Media is at convincing
as many agents as it can.
In addition,
we will plot a Maximum Cluster Phase Diagram
({\it MCPD}), where we calculate
$S_{max}/N$ as a function
of $B$ and $Q$ (\cite{Cosenza2010}).
The latter takes cluster properties into account, and
the largest cluster is not necessarily composed of
{\it Followers}. Both phase diagrams were computed
for all the strategies defined above.\par
We are also interested in studying
the average similarity among the {\it Followers},
so we define the following quantities:
\begin{itemize}
\item The mean homophily between
the Mass Media and the {\it Followers}:
\begin{equation}
H_{MM}(t)=\frac{1}{N'F}\sum_{i=1}^{N'}
\sum_{k=1}^{F}\delta_{v_{k}^{i},v_{k}^{MM}}
\end{equation}
where the outer sum runs over the $N'$
{\it Followers} and the inner one
over the $F$ features, so that $H_{MM}=1$
at full similarity.
This quantity measures the average
similarity between the
{\it Followers} and the Mass Media.
\item The mean homophily among the {\it Followers}:
\begin{equation}
H_F(t)=\frac{1}{M'F}\sum_{i<j}
\sum_{k=1}^{F}\delta_{v_{k}^{i},v_{k}^{j}}
\end{equation}
where the sum runs over all the $M'$ pairs
of {\it Followers} $(i,j)$ that can be formed.
This quantity expresses
the average similarity among {\it Followers}.
\end{itemize}
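Both homophily measures, normalized to lie in $[0,1]$, can be sketched as follows; this is an illustrative implementation, and the vectorization choices are our own:

```python
import numpy as np
from itertools import combinations

def homophily_mm(pop, mm, mmt=1):
    """<H_MM>: mean fraction of features the Followers share with
    the Mass Media (normalized by F so that full similarity gives 1)."""
    followers = pop[pop[:, 0] == mmt]
    if len(followers) == 0:
        return 0.0
    # broadcasting compares every Follower row against the MM vector
    return float(np.mean(followers == mm))

def homophily_followers(pop, mmt=1):
    """<H_F>: mean pairwise fraction of shared features among Followers."""
    followers = pop[pop[:, 0] == mmt]
    pairs = list(combinations(range(len(followers)), 2))
    if not pairs:
        return 0.0
    return float(np.mean([np.mean(followers[i] == followers[j])
                          for i, j in pairs]))
```

With $F=3$, two Followers $(1,2,3)$ and $(1,2,4)$, and $v^{MM}=(1,2,3)$, one gets $H_{MM}=5/6$ and $H_F=2/3$.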
As the states of the agents vary with time,
these observables are time-dependent
and will bring useful information about
the dynamical behavior of the system.\par
\section{Results} \label{sec:Results}
We performed numerical simulations using
a two-dimensional finite grid of $50 \times 50$
nodes (total number of nodes $N=2500$).
In each node $i$ ($i=1,\ldots,N$), an agent
with a given cultural vector $(v_1,\ldots,v_F)$
is placed. The number of features is
$F=10$ and represents the typical
number of sections of a newspaper
(for instance, in the web edition of
the New York Times, the main
sections are thirteen: World,
U.S., Politics, N.Y., Business, Opinion,
Tech, Science, Health, Sports, Arts,
Fashion and Style, and Food;
in the international edition
of ``El Pa\'is" from Spain there
are ten sections).\par
\subsection{Equilibrium Properties}
In this section we study the characteristics
of the stationary states. In these states
the overlap between any pair of agents
(including the Mass Media) is zero or one.
This implies that a {\it Follower} agent
ends up sharing all
the features' values with the {\it MM}.
In this model, the system always reaches a
stationary state.\par
In Fig.\ref{Fig1}, we plot the Mass Phase
Diagram ({\it MPD}) and the
Maximum Cluster Phase
Diagram ({\it MCPD}) for the Followers
Strategy and Fixed Mass Media ({\it FS}
and {\it FMM}), respectively.
Three regions can be identified in the {\it MPD}
corresponding to
different kind of stationary solutions:
\begin{enumerate}
\item[I] \underline{Consensus identical to the {\it MM}}:
more than $90\%$ of the agents have
the same cultural state as the {\it MM}.
This region reflects the hegemony of the {\it MM}.
\item[II] \underline{Absolute Dominance of the {\it MM}}:
this region is characterized by a dominant
mass, identical to the {\it MM},
whose size is above $50\%$ and below
$90\%$ of the population.
\item[III] \underline{Relative Dominance of the {\it MM}}:
this region is characterized by a dominant
mass, identical to the {\it MM},
whose size is above $10\%$ and
below $50\%$ of the population.
\end{enumerate}
These regions can also be found in the {\it MCPD}
if we replace the term {\it mass} by {\it cluster}, i.e.,
we find a maximum cluster whose relative
size is above
$90\%$ (region {\it I}), between $90\%$ and $50\%$
(region {\it II}), and between $50\%$ and $10\%$ of the population
(region {\it III}).
In all these cases, the maximum cluster corresponds to the Mass
Media's state.
However, in the {\it MCPD}, two more regions can be
identified:
\begin{enumerate}
\item[IV] \underline{Fragmentation}:
there are no dominant clusters
of agents. The size of the biggest
cluster is smaller than
$10\%$ of the population.
\item[V] \underline{Local Relative Dominance}:
this region is characterized by a dominant
cluster whose state is different from
the Mass Media's one, and
whose size is above $10\%$ and
below $25\%$ of the population.
\end{enumerate}
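The five regions can be summarized as a simple threshold classification. The sketch below is our own, and the conventions at the exact boundaries of $10\%$, $50\%$, and $90\%$ are assumptions, since the text does not specify them:

```python
def classify_region(s_rel, matches_mm=True):
    """Region label from the relative size of the dominant mass/cluster.

    s_rel      : relative size of the dominant mass (MPD) or cluster (MCPD).
    matches_mm : whether that dominant group shares the Mass Media's state.
    Thresholds follow the region definitions given in the text.
    """
    if matches_mm:
        if s_rel > 0.90:
            return "I"    # consensus identical to the MM
        if s_rel > 0.50:
            return "II"   # absolute dominance of the MM
        if s_rel > 0.10:
            return "III"  # relative dominance of the MM
        return "IV"       # fragmentation
    if 0.10 < s_rel < 0.25:
        return "V"        # local relative dominance (non-MM cluster)
    return "IV" if s_rel <= 0.10 else "unclassified"
```

This makes explicit why regions {\it III} (in the {\it MPD}) and {\it IV} (in the {\it MCPD}) can coexist: the first classification uses the total mass of Followers, the second the size of the largest connected cluster.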
For the values of $B$ and $Q$ explored in the phase
diagrams, the fraction of {\it Followers} always
exceeded $10\%$ of the population,
although they do not necessarily form a unique cluster:
this is why region {\it IV} in the {\it MCPD}
and region {\it III}
in the {\it MPD} can coexist.
On the other hand,
it is important to note the presence
of a region dominated by a cluster
whose state is different from the
Mass Media's one, which we call region {\it V}.
This region was
reported in \cite{Cosenza2010} for a
Fixed Mass Media,
and it acquires more relevance
in networks
with long-range interactions.
We also find this region in
the Followers Strategy's {\it MCPD}.\par
An important observation of Fig.\ref{Fig1}
is the absence
of a phase diagram corresponding to the
Non-Followers Strategy ({\it NFS}).
We have not plotted it because, with this strategy,
there is only one stationary state:
consensus with the Mass Media.
This is a fingerprint of this strategy:
it is able to produce consensus
for any values of $B$ and $Q$.
In this strategy, the Mass Media adapts its
non-fixed features in order to maximize
the interaction probability with
those agents who don't share
the {\it MMT}.
Therefore, the Media is able to make
all agents become {\it Followers}.
Once this task is completed, the Mass Media
produces no further changes in its state.
The remaining dynamics corresponds to
interactions between agents leading to
total consensus according to the Axelrod
model's dynamics. Even though this strategy
shows only one equilibrium solution,
its dynamical behavior
depends on the parameters of the system,
as we will show in the next section. \par
On the other hand, for a Fixed Mass Media
({\it FMM}) and the Followers Strategy ({\it FS}),
both phase diagrams are qualitatively similar:
the dominance of the Mass Media state is
absolute for low $Q$ and $B$ (bottom-left corner)
and it loses preponderance as $Q$ and
$B$ increase. In the top-right corner of
the plots ($Q\simeq 60$ and $B \simeq 0.9$),
between $10\%$ and $50\%$ of the agents
share the {\it MMT}, but
there is no cluster in the
system bigger than $10\%$
of the lattice's population.
Also, for both the Fixed Mass
Media and the Followers Strategy's {\it MCPD},
region {\it V} is present, i.e., the
maximum cluster is orthogonal to the Mass Media,
but its size does not exceed
the total number of {\it Followers} present in the system.\par
In what follows we analyze the
main characteristics and differences of
the collective dynamical behavior of the
population for the different strategies
followed by the Mass Media.\par
\begin{figure}
\centering
\includegraphics[width = \textwidth]{Fig1.eps}
\caption{\textbf{Phase Diagrams}.
Mass Phase Diagram
(Left Panels, (a) and (c))
and
Maximum Cluster Phase Diagram
(Right Panels, (b) and (d))
for a Fixed Mass Media ({\it FMM},
top panels, (a) and (b)) and
the Followers Strategy ({\it FS},
bottom panels, (c) and (d)).
Five regions can be identified
according to the degree of
dominance of a given state,
which are detailed in the main text.
The phase diagrams corresponding
to the Non-Followers Strategy ({\it NFS})
are not shown because there is
only one solution in the range
of analyzed parameters:
consensus with the {\it MM} (Region {\it I}).}
\label{Fig1}
\end{figure}
\subsection{Dynamical properties of collective states
for different strategies}
In the analysis of equilibrium states, we have seen that a
Fixed Mass Media and the Followers Strategy show
similar phase diagrams, while for the Non-Followers
Strategy the system evolves to a consensus with
the Mass Media for all values of $B$ and $Q$.
Given these known equilibrium properties,
we address two questions in this section:
\begin{enumerate}
\item What is the dynamics toward equilibrium of
the system and the Mass Media for each strategy?
\item Do the Mass Media's followers form a
homogeneous or a heterogeneous cultural group?
\end{enumerate}
With this in mind, we analyze the temporal evolution
of the system for a case of low interaction probability
with the Mass Media ($B=0.01$) and two different
values of $Q$: $Q=20$ and $Q=60$.
For $Q=20$, all the strategies reach consensus of
the whole population, while, for $Q=60$,
only the {\it NFS} does.
We analyze the collective behavior of
the population in both cases in terms
of $F/N$, $H_{MM}$, and $H_{F}$.\par
When we analyze the fraction of {\it Followers}
as a function of time (Fig.\ref{Fig2}, panel (a)),
we can observe that all strategies behave
quite similarly in the low-$Q$ regime ($Q=20$),
with the Non-Followers Strategy ({\it NFS})
being the fastest and the Fixed Mass Media ({\it FMM})
the slowest to reach $100\%$ of
{\it Followers}, as can be seen in Table \ref{table1},
where we define $\tau$ as the time at which $(F/N)$
reaches the value of 1.
However, the strategies
produce differences among the {\it Followers}
in terms of self-similarity and similarity
with the Mass Media. If we look at the behavior
of $H_{MM}$ in
panel (b) of Fig.\ref{Fig2} and Table \ref{table1},
we can see that at the time when $<F/N> = 1$
the {\it Followers} in the {\it FS} and {\it FMM} are
closer to the Mass Media than when
we implement the {\it NFS}.
A similar behavior is found for $<H_F>$
(Fig.\ref{Fig2}, panel (c)), showing
that at the time of reaching consensus,
a Mass Media that adapts to its
followers produces a more homogeneous
group of them than
one that adapts to
the {\it Non-Followers} agents.
The {\it Followers} attained by the latter strategy
form a more heterogeneous group
until they become completely similar.\par
\begin{table}[h]
\centering
\begin{tabular}{c c c c}
Strategy & $<\tau>$ & $<H_{MM}>$ & $<H_{F}>$\\
\hline\hline
NFS & 2200 & 0.35 & 0.40\\
FS & 2400 & 0.80 & 0.70\\
FMM & 2800 & 0.80 & 0.70\\
\end{tabular}
\caption{Approximate values of $\tau$, $H_{MM}$, and $H_{F}$
at the time of reaching consensus
in the first feature, for each strategy and $Q = 20$.
Angle brackets denote averages over 1000 events.}
\label{table1}
\end{table}
In the region of large $Q$ ($Q = 60$),
we find an unexpected non-monotonic behavior
of $<H_{MM}>$ (Fig.\ref{Fig2}, panel (e))
for the Non-Followers Strategy ({\it NFS}):
at the time when $<F/N>\simeq 0.75$
(Fig.\ref{Fig2}, panel (d)), it starts to decrease
until the fraction of {\it Followers} becomes $1$,
when it starts to increase again.
This means that in this regime
(when the number of {\it Non-Followers} is
less than $25\%$ of the population,
but greater than zero),
the similarity between the {\it Followers}
and the Mass Media is very low.
In addition, in order to convince
this last $25\%$ of the agents,
the Mass Media takes a time interval
(about 4000 time steps) similar
to the one it took to convince
the first $75\%$ of the population.
What is happening in this region?
The Mass Media tries to increase the
probability of interaction with the
{\it Non-Followers} by changing
its state. The {\it Non-Followers}
can be distributed throughout the
lattice and have very different
cultural states among themselves.
At the same time, when the Mass Media
adapts to them, it departs from the
{\it Followers},
who constitute the majority of the system.
In addition, the high degree of similarity
between the Mass Media and a small
group of {\it Non-Followers}
does not favor the homogenization
of the {\it Followers} group, as can
be seen in Fig.\ref{Fig2}, panel (f),
where $<H_F>$ remains constant
during this time lapse ($<H_F>\simeq0.35$).
Once all the agents become {\it Followers},
both $<H_{MM}>$ and $<H_F>$ grow monotonically
until they reach the value of 1
(i.e., agents share all the features' values with
the Media).
In appendix \ref{sec:appendix}, we show a more detailed
description of this behavior,
analyzing a single event.\par
Concerning the case of a Fixed Mass Media
and the Followers Strategy for $Q = 60$,
both $<H_{MM}>$ and $<H_{F}>$
increase monotonically, as was
observed for $Q=20$, but in this case
these strategies are unable to
reach consensus. The {\it FS} only reaches
a little more than $25\%$
of {\it Followers}, and the {\it FMM} gets
a percentage slightly below that.
The similarity among {\it Followers}
and between the {\it Followers}
and the Mass Media is identical for
both strategies: they reach
$<H_{MM}>=1$ and $<H_F>=1$ at
almost the same time as
they reach the largest
number of {\it Followers}
that they can get.\par
\begin{figure}
\includegraphics[width = \textwidth]{Fig2.eps}
\caption{\textbf{Dynamical
behavior of the strategies}.
$B=0.01$ and $Q=20$ (left panels)
and $Q=60$ (right panels).
Fraction of {\it Followers}
($<F/N>$, panels (a) and (d)),
mean homophily with respect to the
Mass Media, $<H_{MM}>$
(panels (b) and (e)), and mean
homophily among {\it Followers},
$<H_{F}>$ (panels (c) and (f)),
as a function of time.
Angle brackets denote
averages over 1000 events.
Squares denote {\it NFS},
triangles {\it FS}, and circles {\it FMM}.}
\label{Fig2}
\end{figure}
\subsection{Optimal combination of strategies}
The analysis performed in the previous sections tells
us that even though the Non-Followers Strategy
is the best one in terms of reaching consensus,
it takes a long time to convince the last fraction of agents.
This happens because the Mass Media can change its
state very sharply in order to maximize
the overlap with the {\it Non-Followers},
who are just a few and very different among themselves.
But what would happen if the Mass Media changed
its strategy when the homophily among {\it Followers}
stops growing (i.e., when $<F/N> \simeq 0.75$ for $Q=60$)?
Is it possible to reach consensus
when the second strategy is not the {\it NFS}?
Is there an optimal balance between maximizing
the number of {\it Followers} and minimizing
the time needed to do so? \par
\subsubsection{Temporal combinations}
In this section we analyze how the system behaves
when the Mass Media changes its strategy at a given time.
In Fig. \ref{Fig3}, panel (a) and (b),
the Mass Media starts
with the Non-Followers Strategy ({\it NFS}) until it reaches
the $75\%$ of {\it Followers} and then,
it remains as a Fixed Mass Media ({\it FMM}),
or implements the Followers Strategy ({\it FS}).
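This switching rule can be sketched as a small stateful helper; the class name and interface below are our own and purely illustrative:

```python
class CombinedStrategy:
    """NFS until the fraction of Followers first reaches `threshold`,
    then switch permanently to `second` ("FS" or "FMM")."""

    def __init__(self, threshold=0.75, second="FMM"):
        self.threshold = threshold
        self.second = second
        self.switched = False

    def current(self, f_frac):
        """Return the strategy in force given the current F/N."""
        if not self.switched and f_frac >= self.threshold:
            self.switched = True  # the change is irreversible
        return self.second if self.switched else "NFS"
```

The internal flag makes the switch permanent, so a later drop of $F/N$ below the threshold does not revert the Media to the {\it NFS}.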
In panel (a) we can observe that,
when the combination of
strategies is implemented,
the Media is not able to reach $100\%$ of
{\it Followers}. Instead,
the asymptotic fraction of {\it Followers} reaches a
value close to $0.90$, being slightly larger
when the second strategy is the {\it FMM}; but
neither of these cases improves on the {\it NFS},
which reaches that number of {\it Followers} in
less time. However, we can observe that the combination of strategies
makes $<H_{F}>$ increase monotonically
once the change is made,
in contrast to what is observed when the Media
applies a {\it pure NFS}, where it remains
practically constant.\par
In Fig. \ref{Fig3}, panels (c) and (d),
we explore different values of $(F/N)$ at which
the Media changes its strategy.
We plot $<F/N>$ and
$<H_F>$, respectively, at the time $\tau$, which is the time needed
to reach the asymptotic value of $<F/N>$ when
the Media applies a combination of strategies,
and we compare them with the transient results
obtained when only the {\it NFS} is applied.
It is important to remark that the asymptotic fraction
of {\it Followers} is always 1 for the {\it NFS}.
In panel (c), we can observe
that the {\it NFS} is always the
fastest strategy to reach a given number of
{\it Followers}, but the combination of strategies
produces a better homogenization by the time that
value is attained (see panel (d)).
This implies that if the Media adopts a combination of
strategies, it relaxes the condition
of full consensus, but the system reaches a stationary state
(when $<H_F> = 1$ and $<F/N>$ is maximal) faster
than when a pure {\it NFS} is applied.
However, if it wants to reach a given number
of {\it Followers} regardless of their
homogenization, the {\it NFS} is the best strategy.\par
\begin{figure}
\includegraphics[width = \textwidth]{Fig3.eps}
\caption{\textbf{Combination of Strategies}.
Panel (a), Fraction of {\it Followers} $<F/N>$,
panel (b) homophily among {\it Followers}
$<H_{F}>$, both as functions of time for
Non-Followers Strategy (diamonds)
and a combination of two strategies
starting with {\it NFS} until $(F/N)=0.75$:
Followers Strategy ({\it FS}, empty circles)
and Fixed Mass Media ({\it FMM}, full circles).
In all the cases, $Q=60$ and $B=0.01$.
Panel (c), $<F/N>$, and panel (d), $<H_F>$,
as functions of $\tau$, the time needed to
reach the asymptotic value of $<F/N>$.
The diamond symbol stands
for {\it NFS}, empty symbols for {\it NFS}
followed by {\it FS}, and full symbols
(except diamonds) for {\it NFS} followed
by {\it FMM}. The change of strategy is
done at different values of $(F/N)$
for the {\it NFS}: $0.75$ (triangles down),
$0.85$ (squares)
and $0.95$ (triangles up).}
\label{Fig3}
\end{figure}
\subsubsection{Structural combinations}\label{sec:Structural}
In the previous sections, the Mass Media always kept
the first feature fixed while it was able to change the values
of the others according to the different strategies
defined above. However, when we analyze the mean
number of changes that the Mass Media makes per time step,
we find that, on average, the {\it NFS} changes just one
feature per time step, while the {\it FS} changes even fewer
(being more similar to a Fixed Mass Media, as we
have seen in their respective phase diagrams), as we
can see in Fig.\ref{Fig4}.
This suggests
that similar results can be found
if we let the Mass Media
change just one of the features at a time.
This can be seen as a combination of strategies in
feature space, where one feature adapts to the
population while the others remain fixed.
We analyze variants of the {\it Non-Followers} Strategy
in two different cases: when the adaptive feature is
always the same (fixed) and when it is chosen at random
at every time step. In all cases, the first feature
remains constant.\par
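The single-adaptive-feature variant can be sketched as follows; again, this is an illustrative implementation with assumed names, not the original code:

```python
import numpy as np

def update_one_feature(pop, mm, mmt=1, feature=None, rng=None):
    """NFS variant with a single adaptive feature.

    Only one non-fixed feature is updated per time step: either a
    fixed index `feature`, or one chosen at random among 1..F-1 when
    `feature` is None (the two cases compared in the text).  The
    updated trait is the most abundant value among the Non-Followers.
    """
    rng = rng or np.random.default_rng()
    F = pop.shape[1]
    k = feature if feature is not None else int(rng.integers(1, F))
    non_followers = pop[pop[:, 0] != mmt]
    new_mm = mm.copy()
    if len(non_followers):
        values, counts = np.unique(non_followers[:, k], return_counts=True)
        new_mm[k] = values[np.argmax(counts)]
    return new_mm
```

Compared with the full {\it NFS} sketch above, only one column of the cultural vector is touched per step, which is what makes this a more economical adaptive scheme.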
In Figure \ref{Fig5} we plot $<F/N>$, $<H_{MM}>$, and $<H_F>$ as
functions of time for the two cases. We analyze the system for $Q=60$
and $B=0.01$ and compare it with the {\it NFS}
and {\it FMM} cases, respectively.
We can observe that one adaptive feature is a sufficient
condition to reach consensus for $Q=60$ and $B=0.01$,
which is impossible if all features are fixed ({\it FMM}),
as we have seen in Fig.\ref{Fig1}. In particular,
if the adaptive feature is randomly chosen,
the dynamics of the system is almost the same as in
the Non-Followers Strategy ({\it NFS}) case. If the
adaptive feature is fixed, the system is also able to
reach consensus in regions of the parameter space
where a Fixed Mass Media is unable to do so, but the
convergence time is larger than the one expected for a
full Non-Followers Strategy. On the other hand, this strategy
favors the homogenization of the {\it Followers} group,
as can be observed in the behavior of $<H_{MM}>$
and $<H_{F}>$ in Fig.\ref{Fig5}.\par
\begin{figure}
\centering
\includegraphics[width = \textwidth]{Fig4.eps}
\caption{\textbf{Average number of changes}.
Average number of features that the Mass Media changes.
Each point represents the mean value of changes
over 100 time steps and this quantity is averaged over
50 events. Panel (a) stands for NFS and panel (b)
for FS, both with $Q=60$ and $B = 0.01$.}
\label{Fig4}
\end{figure}
\begin{figure}
\includegraphics[width = \textwidth]{Fig5.eps}
\caption{\textbf{Combination of Strategies 2}.
Dynamical behavior when the {\it MM}
has only one adaptive feature for $B=0.01$ and $Q=60$.
Fraction of {\it Followers} ($<F/N>$, panels (a) and (d)),
mean homophily with respect to the {\it MM}
($<H_{MM}>$, panels (b) and (e)),
and mean homophily among {\it Followers}
($<H_{F}>$, panels (c) and (f)), as a function of time.
Left panels: the adaptive feature is randomly chosen
at each time step.
Right panels: the
adaptive feature is always the same.
Squares denote {\it NFS}, circles {\it FMM}, and triangles
a Mass Media with one adaptive feature.
Angle brackets denote averages over 1000 events.}
\label{Fig5}
\end{figure}
\section{Conclusions} \label{sec:Conclusions}
In this work we have proposed a new way to model the influence of
a Mass Media onto a system of social agents.
Here, the Media has
a specific purpose: to put a particular topic
on the agenda, i.e., to make people discuss a given
topic and impose its point of view, represented by a fixed feature's value.
This way of modeling the Media fits better within the conflict
theory of the Media's influence (\cite{Giddens}) and within
the works of McCombs (\cite{Mccombs},\cite{Mccombs1972}),
which we consider
to better describe its actual role in a society.
In order to achieve this goal, the Media takes advantage of
the other features, which are adaptive, in order
to increase the probability of interaction with potential
consumers, according to different strategies. In one of them,
the Mass Media takes the most popular value of each
feature among the {\it Non-Followers}; this was named the
NFS (Non-Followers Strategy). In the other one, the Mass Media
takes the most popular value of each feature among the
{\it Followers}; we called it the FS (Followers Strategy).
We compare both with the standard case where the Mass Media
is fixed in time and thus does not follow any strategy
at all ({\it FMM}).\par
When the {\it MM} applies the Non-Followers Strategy,
it is able to reach consensus for all values in parameter space,
which is not the case for the Followers Strategy or
when the {\it MM} is fixed.
The problem with this strategy is that it takes too
much time to reach that consensus, due to the fact that
the Mass Media ends up adopting particular agents' states
in order to convince the last {\it Non-Follower} agents.
These sharp changes cause the similarity between
the {\it Followers} and the {\it MM} to decrease during this time,
while in the other strategies it always shows an increasing
behavior. They also cause the {\it Followers} to form
a heterogeneous group until the last agent is convinced.\par
In order to improve on the {\it NFS}, we explored different
combinations of strategies. We found that if the
Mass Media combines strategies in a temporal manner,
it can reach a large number of {\it Followers} (close to $90\%$)
with a monotonic increase
in their homogenization, but a pure {\it NFS}
is still the fastest way to reach a given number of {\it Followers}.
We also found that,
when the combination is in feature space
(i.e., when some features are fixed and others are adaptive),
changing only one feature per step is a sufficient condition
to reach consensus (100\% of {\it Followers}).
Moreover, if the adaptive feature is selected at random,
the system behaves quite similarly to the case when the {\it MM}
adopts the Non-Followers Strategy. On the other hand,
if the adaptive feature is fixed, it takes almost twice
as long to reach the total number of {\it Followers},
but it produces a homogeneous group during the dynamics.
The structural combination of strategies can be seen as a
more economical way to have an adaptive {\it MM} that can
reach consensus in the whole parameter space.\par
This work is a first step towards understanding the
formation of collective states when a Mass Media
wants to set the agenda and impose its point of
view on a given feature.
Future extensions of this work should include the
consideration of complex interaction networks and
the presence of two or more Media in a competitive
context.\par
\section{Introduction}
\label{sec:intro}
The ability of biological systems to learn and adapt to their environment is key for survival.
This learning ability is expressed mainly as the change in strength of the synapses that connect neurons, to adapt the structure and function of the underlying network.
The neural substrate of this ability has been studied and modeled intensively, and many brain-inspired learning rules have been proposed~\cite{McNaughton_etal78,Gerstner_etal93,Stuart_Sakmann94,Markram_etal95}.
The vast majority, if not all, of these biologically plausible learning models rely on local plasticity mechanisms, where locality is a fundamental computational principle, naturally emerging from the physical constraints of the system.
The principle of locality in synaptic plasticity presupposes that all the information a synapse needs to update its state (e.g., its synaptic weight) is directly accessible in space and immediately accessible in time. This information is based on the activity of the pre- and post-synaptic neurons to which the synapse is connected, but not on the activity of other neurons to which the synapse is not physically connected~\cite{Zenke_Neftci21}.
From a biological perspective, locality is a key paradigm of cortical plasticity that supports self-organization, which in turn enables the emergence of consistent representations of the world~\cite{Varela_etal91}.
From the hardware development perspective, the principle of locality is a key paradigm for the design of spike-based plasticity circuits integrated in embedded systems, in order to enable them to learn online, efficiently and without supervision.
This is particularly important in recent times, as the rapid growth of wearable and specialized autonomous sensory-processing devices brings new challenges in analysis and classification of sensory signals and streamed data at the edge.
Consequently, there is an increasing need for online learning circuits that have low latency, are low power, and do not need to be trained in a supervised way with large labeled data-sets.
As standard von Neumann computing architectures have separated processing and memory elements, they are not well suited for simulating parallel neural networks, they are incompatible with the locality principle, and they require a large amount of power compared to in-memory computing architectures. In contrast, neuromorphic architectures typically comprise parallel and distributed arrays of synapses and neurons that can perform computation using only local variables, and can achieve extremely low-energy consumption figures.
In particular, analog neuromorphic circuits operate the transistors in the weak inversion regime using extremely low currents (ranging from pico-Amperes to micro-Amperes), small voltages (in the range of a few hundreds of milli-Volts), and use the physics of their devices to directly emulate neural dynamics~\cite{Mead90}.
The spike-based learning circuits implemented in these architectures can exploit the precise timing of spikes and consequently take advantage of the high temporal resolutions of event-based sensors. Furthermore, the sparse nature of the spike patterns produced by neuromorphic sensors and processors can give these devices even higher gains in terms of energy efficiency.
Given the requirements to implement learning mechanisms using limited resources and local signals, animal brains still remain one of our best sources of inspiration, as they have evolved to solve similar problems under similar constraints, adapting to changes in the environment and improving their survival chances~\cite{Hofman15}.
Bottom-up, brain-inspired approaches to implement learning with local plasticity can be very challenging for solving real-world problems, because of the lack of a clear methodology for choosing specific plasticity rules, and the inability to perform global function optimization (as in gradient back-propagation)~\cite{Eshraghian_etal21}.
However, these approaches have the potential to support massively parallel and distributed computations and can be used for adaptive online systems at a minimum energy cost~\cite{Neftci_etal19}.
Recent work has explored the potential of brain-inspired self-organizing neural networks with local plasticity mechanisms for spatio-temporal feature extraction~\cite{Bichler_etal12}, unsupervised learning~\cite{Diehl_Cook15,Iyer_Basu17,Hazan_etal18,Kheradpisheh_etal18,Khacef_etal20b}, multi-modal association~\cite{Khacef_etal20,Rathi_Roy21}, adaptive control~\cite{DeWolf_etal20}, and sensory-motor interaction~\cite{Lallee_Dominey13,Zahra_Navarro-Alarcon19}.
Some of the recently proposed models of plasticity have introduced the notion of a ``third factor'', in addition to the two factors used in learning rules, derived from local information present at the pre- and post-synaptic site.
In these three-factor learning rules, the local variables are used to determine the potential change in the weight (e.g., by using a local eligibility trace), but the change in the weight is applied only when the additional third factor is presented. This third factor represents a feedback signal (e.g., reward, punishment, or novelty) which could be implemented in the brain for example by diffusion of neuromodulators, such as dopamine~\cite{Kusmierz_etal17,Gerstner_etal18}.
While this feedback signal is locally accessible to the synapse, it is not produced directly at the pre- or post-synaptic site. Therefore, these three-factor learning rules violate the principle of locality that we consider in this review.
In the next section, we provide an overview of synaptic plasticity from a historical, experimental, and theoretical perspective, with a focus on compatibility with physical emulation on \ac{CMOS} systems.
We then present a selection of representative spike-based synaptic plasticity models that adhere to the principle of locality and that can therefore be implemented in neuromorphic hardware.
We then present analog \ac{CMOS} circuits that implement the basic mechanisms present in the rules discussed. As different implementations have different characteristics that impact the type and number of elements that use local signals, for each target implementation we assess the principle of locality taking into account the circuits' physical constraints. We conclude by proposing steps toward a unified plasticity framework and by presenting the challenges that still remain open in the field.
\section{Synaptic plasticity overview}
\subsection{A brief history of plasticity}
\label{sec:history}
The quest for understanding learning in human beings is a very old one, as the process of acquiring new skills and knowledge was already a subject of debate among philosophers back in Ancient Greece, where Aristotle introduced the notion of the brain as a blank slate (or \emph{tabula rasa}) at birth that was then developed through education~\cite{Markram_etal11}. This was in contrast to the idea of Plato, his teacher, who believed the brain was pre-formed in the ``heavens'' and then sent to earth to join the body. In modern times, the question of nature versus nurture is still being debated, with the view that we are born without preconceptions and our brain is molded by experience proposed by modern philosophers such as~\citeasnoun{Locke89}, and the studies that emphasize the importance of pre-defined structure in the nervous system and in neural networks, to guide and facilitate the learning process~\cite{Binas_etal15,Hawkins_etal17,Suarez_etal21}.
In the latter half of the nineteenth century, learning and memory were linked for the first time to ``junctions between cells'' by~\citeasnoun{Bain73}, even before the discovery of the synapse. In 1890, the psychologist William James postulated a mechanism for associative learning in the brain: ``When two elementary brain-processes have been active together or in immediate succession, one of them, on reoccurring, tends to propagate its excitement into the other''~\cite{James90}. In the same period, neuroanatomists discovered the two main components of the brain: neurons and synapses. They postulated that the brain is composed of separate neurons~\cite{Waldeyer91}, and that long-term memory requires the growth of new connections between existing neurons~\cite{Ramon-y-Cajal94}. These connections became known then as ``synapses''~\cite{Sherrington97}. At the end of the nineteenth century, synapses were already thought to control and change the flow of information in the brain, thus being the substrate of learning and memory~\cite{Markram_etal11}.
The first half of the twentieth century confirmed this hypothesis by various studies on the chemical synapses and the direction of information flow among neurons, going from the pre-synaptic axons to the post-synaptic dendrites. Neural processing was associated to the integration of synaptic inputs in the soma, and the emission of an output spike once a certain threshold was reached, propagating along the axon. Donald Hebb combined earlier ideas and recent discoveries on learning and memory in his book ``The Organization of Behavior''. Similarly to the ideas of James 60 years earlier, Hebb published, in 1949, his formal postulates for the neural mechanisms of learning and memory: ``When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased''~\cite{Hebb49}. Although Hebb stated that this idea is old, strengthening synapses (that is, increasing synaptic efficacy or weight) connecting co-active neurons has since been called ``Hebbian plasticity''. It is also called \ac{LTP}.
Even though Hebb wrote that ``less strongly established memories would gradually disappear unless reinforced through a slow `synaptic decay'\,''~\cite{Hebb49}, he did not provide an active mechanism for weakening synapses. Hence, the synaptic strengths or ``weights'' are unbounded and it is not possible to forget previously learned patterns to learn new ones. The first solution, proposed a few years later, was to keep the sum of synaptic weights in a neuron constant~\cite{Rochester_etal56}.
In 1982, Oja proposed a Hebbian-like rule~\cite{Oja82} that adds a ``forgetting'' parameter and solves the stability problem with a form of local multiplicative normalization for synaptic weights. In the same year,~\citeasnoun{Bienenstock_etal82} proposed the \acf{BCM} learning rule where during pre-synaptic stimulation, low-frequency activity of the post-synaptic neuron leads to \ac{LTD} while high-frequency activity would lead to \ac{LTP}. This model was an important shift as it introduced the so-called homo-synaptic \ac{LTD}, where the plasticity was determined by the post-synaptic spike rate with no requirement on the temporal order of spikes. The importance of the post-synaptic neuron in synaptic plasticity was further demonstrated by showing how post-synaptic sub-threshold depolarization can determine whether \ac{LTP} or \ac{LTD} is applied~\cite{Artola_etal90,Sjostrom_etal01}.
Time is inherently present in any associative learning since it only relies on co-occurring events. \citeasnoun{McNaughton_etal78} were the first to experimentally explore the importance of the pre- and post-synaptic spike timing in plasticity. Fifteen years later,~\citeasnoun{Gerstner_etal93} hypothesized that these pre/post spike times contain more information for plasticity compared to spike rates. Their hypothesis would be confirmed by experiments conducted by~\citeasnoun{Stuart_Sakmann94} who discovered that the post-synaptic spike is back-propagating into the dendrites, as well as by~\citeasnoun{Markram_etal95} who showed that a single spike leaves behind a Calcium trace of about \SI{100}{\ms} which is propagated back into the dendrites. These findings were highly influential in the field because they provided evidence that synapses have local access to the timings of pre-synaptic and postsynaptic neurons spikes. In their subsequent experiments,~\citeasnoun{Markram_etal95} provided additional evidence that precise timing is important in neocortical neurons: They showed that using a pre/post pairing with a time difference of \SI{10}{\ms} led to \ac{LTP}, while using the same time difference of \SI{10}{\ms} in an inverted post/pre pairing led to \ac{LTD}~\cite{Markram_etal97}. Larger time differences of \SI{100}{\ms} did not lead to any change in the synaptic weights. Almost concurrently,~\citeasnoun{Bi_Poo98} performed similar experiments and found a \SI{40}{\ms} coincidence time window using paired recordings. These experiments proved that in addition to mean rates, also spike-timing matters.
This phenomenon was later formulated in a learning rule named \ac{STDP}~\cite{Song_etal00}.
In this respect, the Hebbian learning formula proposed by~\citeasnoun{Shatz92} that ``cells that fire together wire together'' could be misleading, as \possessivecite{Hebb49} postulate is directional: ``axon of cell A is near enough to excite a cell B'', which may be interpreted as implicitly time-dependent since cell A has to fire before cell B. On the other hand, \ac{STDP} was later found to only partially explain more elaborate learning protocols, which showed that while both \ac{LTP} and \ac{LTD} are compatible with \ac{STDP} at low frequencies, only \ac{LTP} occurs at high frequencies regardless of the temporal order of spikes~\cite{Sjostrom_etal01}.
As pair-based \ac{STDP} models do not reproduce the frequency dependence of synaptic plasticity, \citeasnoun{Pfister_Gerstner06} proposed \acf{TSTDP} rule where \ac{LTP} and \ac{LTD} depend on a combination of three pre- and post-synaptic spikes (either two pre- and one post or one pre- and two post). Both pair-based and triplet-based \ac{STDP} were then shown to be able to reproduce \ac{BCM} like behavior~\cite{Gjorgjieva_etal11}.
Furthermore, the same frequency dependent experiments~\cite{Sjostrom_etal01} showed that the state of the post-synaptic membrane voltage is important for driving \ac{LTP} or \ac{LTD} under the same pre/post timing conditions, confirming previous studies on the role of the neuron membrane voltage in plasticity~\cite{Artola_etal90}.
Therefore, these recent findings supported the computational plasticity models that depend on the arrival of the pre-synaptic spike and the voltage of the postsynaptic membrane~\cite{Fusi_etal00,Brader_etal07,Clopath_etal10}, and which were also compatible with the \ac{STDP} model.
The more recent three-factor learning rules aim at bridging the gap between the different time scales of learning, specifically from pre-post spike timings (milliseconds) to behavioral time scales (seconds)~\cite{Gerstner_etal18}.
Today, after more than two millennia of questioning, experimenting and more recently modeling, synaptic plasticity is still not fully understood and many questions remain unanswered.
Nevertheless, it is clear that multiple forms of plasticity and time-scales co-exist in the synapse and in the whole brain~\cite{Nelson_etal02}.
They link to each other by sharing locality as a fundamental computational principle.
\subsection{Experimental perspective}
Synaptic weights are correlated with various elements in biological synapses~\cite{Bartol_etal15b} such as the number of docked vesicles in the pre-synaptic terminal~\cite{Harris_Sultan95}, the area of the pre-synaptic active zone~\cite{Schikorski_etal97}, the dendritic spine head size~\cite{Harris_Stevens89,Hering_Sheng01}, the amount of released transmitters~\cite{Murthy_etal01,Branco_etal08,Ho_etal11}, the area of the post-synaptic density~\cite{Lisman_Harris94}, and the number of AMPA receptors~\cite{Bourne_etal13,Biology20}.
Synaptic plasticity is known to be heterogeneous across different types of synapses~\cite{Abbott_Nelson00,Bi_Poo01}, and there is no unified experimental protocol to confront the different observations.
Here we present the experimental results that led to the bottom-up definition of multiple plasticity rules.
\paragraph{Spike-timing dependence.}
Multiple experiments have been performed to demonstrate the dependence of plasticity on the exact pre- and post-synaptic neurons spike times~\cite{Markram_etal97,Bi_Poo98,Sjostrom_etal01}. From a computational point of view, these experiments led to the proposal of the \ac{STDP} learning rule~\cite{Abbott_Nelson00,Markram_etal11}, and its variants, such as \ac{TSTDP}~\cite{Pfister_Gerstner06}. Typically in these experiments, a pre-synaptic neuron is driven to fire shortly before or shortly after a postsynaptic one, by injecting a current pulse to the specific soma at the desired time. Specifically, these pre-post and post-pre pairings are repeated for \numrange{50}{100} times at a relatively low frequency of about \SIrange{1}{10}{\hertz}~\cite{Sjostrom_Gerstner10}. Experimental results reveal synaptic plasticity mechanisms that are sensitive to the difference in spike times at the time scale of milliseconds~\cite{Gerstner_etal93}. \ac{LTP} is observed when the pre-synaptic spike occurs within \SI{10}{\ms} before the post-synaptic spike is produced, while \ac{LTD} is observed when the order is reversed~\cite{Markram_etal97,Bi_Poo98}. In biology, this precise spike timing dependence could be supported by local processes in the synapses that have access to both the timing information of pre-synaptic spikes and to the postsynaptic spike times, either by sensing their local membrane voltage changes or by receiving large depolarizations caused by output spikes that are back-propagated into the dendrite~\cite{Stuart_Sakmann94}.
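As a rough illustration of the timing dependence described above, the sketch below implements a pair-based \ac{STDP} window (pre-before-post leads to \ac{LTP}, post-before-pre to \ac{LTD}); the amplitudes and time constants are illustrative placeholders, not values fitted to any of the cited experiments.

```python
import math

# Illustrative pair-based STDP window. dt_ms = t_post - t_pre.
# All parameter values below are hypothetical, chosen only for the sketch.
A_PLUS, A_MINUS = 0.01, 0.012     # LTP / LTD amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_window(dt_ms):
    """Weight change induced by a single pre/post spike pair."""
    if dt_ms > 0:    # pre fires before post -> potentiation (LTP)
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    elif dt_ms < 0:  # post fires before pre -> depression (LTD)
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0

# A 10 ms pre-post pairing gives LTP, the inverted pairing gives LTD,
# and a 100 ms difference gives a negligible change, qualitatively as in
# the pairing experiments discussed above.
```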
\paragraph{Post-synaptic membrane voltage dependence.}
Another feature of synaptic plasticity is its dependence on the post-synaptic neuron membrane voltage~\cite{Artola_etal90}. To study this dependence, the pre-synaptic neuron is driven to fire while the post-synaptic neuron is clamped to a fixed voltage. The clamped voltage level will determine the outcome of the synaptic changes: If the voltage is only slightly above the resting potential of the neuron, then \ac{LTD} is observed while if it is higher, then \ac{LTP} is observed~\cite{Artola_etal90,Ngezahayo_etal00}. These experiments show that post-synaptic spikes are not strictly necessary to induce long-term plasticity~\cite{Lisman_Spruston05,Lisman_Spruston10}. Moreover, even in the presence of a constant pre/post timing (\SI{10}{\ms}) at low frequencies (\SI{0.1}{\hertz}), the post-synaptic membrane voltage determines whether \ac{LTP} or \ac{LTD} can be induced~\cite{Sjostrom_etal01,Sjostrom_Gerstner10}.
These findings suggest that the post-synaptic membrane voltage might be more important than the pre/post spike timing for synaptic plasticity.
\paragraph{Frequency dependence.}
While both spike-timing and post-synaptic membrane voltage dependence are observed in experimental protocols when relatively low spike frequencies are used, at high frequencies \ac{LTP} tends to dominate over \ac{LTD} regardless of precise spike timing~\cite{Sjostrom_etal01}. This spike-rate dependence, which is correlated with the Calcium concentration of the postsynaptic neuron~\cite{Sjostrom_etal01}, is captured by multiple learning rules such as \ac{BCM}~\cite{Bienenstock_etal82} or the \ac{TSTDP}~\cite{Pfister_Gerstner06} rule. In these rules, high spike rates produce a strong / rapid increase in Calcium concentration that leads to \ac{LTP}, while low spike rates produce a modest / slow increase in Calcium concentration that decays over time and leads to \ac{LTD}~\cite{Bliss_Collingridge93}.
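The qualitative rate dependence can be captured by a \ac{BCM}-style update, sketched below. Note that this is a simplified placeholder with a fixed threshold and an arbitrary learning rate, not the original \ac{BCM} formulation with its sliding threshold.

```python
# Illustrative BCM-like weight update: the sign of the change depends on
# the post-synaptic rate relative to a threshold theta. The fixed theta
# and the learning rate eta are hypothetical simplifications.
def bcm_dw(pre_rate, post_rate, theta, eta=1e-4):
    """Weight change for given pre-/post-synaptic firing rates (Hz)."""
    return eta * pre_rate * post_rate * (post_rate - theta)

# Post-synaptic rates below theta yield LTD, rates above theta yield LTP,
# mirroring the low-/high-frequency behavior described above.
```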
\subsection{Theoretical perspective}
Theoretical investigations of plasticity have yielded crucial insights in computational neuroscience. Here, we summarize the fundamental theoretical and practical requirements for long-term synaptic plasticity.
\paragraph{Sensitivity to pre-post spikes correlations.}
Synaptic plasticity has to adjust the synaptic weights depending on the correlation between the pre- and post-synaptic neurons~\cite{Hebb49}.
Depending on how information is encoded, this can be achieved using spike times, spike rates or both~\cite{Brette15}.
It is important to note that the objective behind the detection of correlation is to detect causality which would ensure a better prediction~\cite{Vigneron_Martinet20}. Even if correlation does not imply causality~\cite{Brette15}, correlation can be considered as a tangible trace for causality in learning.
\paragraph{Selectivity to different patterns.}
In supervised, semi-supervised and reinforcement learning, post-synaptic neurons are driven by a specific teacher signal that forces target neurons to spike and other neurons to remain silent, allowing them to become selective to the pattern applied in input~\cite{Brader_etal07}. In unsupervised learning, the selectivity emerges from competition among neurons~\cite{Kohonen90,Olshausen_Field96} like in \ac{WTA} networks~\cite{Chen17}.
By associating local plasticity with a \ac{WTA} network, it is possible to create internal models of the probability distributions of the input patterns. This can be interpreted as an approximate Expectation-Maximization algorithm for modeling the input data~\cite{Nessler_etal09}. Recently, the combination of \ac{STDP} with \ac{WTA} networks has been successfully used for solving a variety of pattern recognition problems in both supervised \cite{Chang_etal18} and unsupervised scenarios~\cite{Bichler_etal12,Diehl_Cook15,Iyer_Basu17,Rathi_Roy21}.
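A minimal sketch of how \ac{WTA} competition combines with a local, Hebbian-style update is given below: only the winning neuron (the one receiving the largest input) updates its weights, moving them toward the current input pattern. The network, update rule, and learning rate are illustrative and not taken from the cited works.

```python
# Toy winner-take-all layer with a local plasticity update.
# weights: one weight vector per neuron; x: input vector.
# The learning rate and the "move toward the input" rule are hypothetical.
def wta_step(weights, x, lr=0.1):
    """Run one WTA competition step and update the winner's weights."""
    currents = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]
    winner = max(range(len(weights)), key=lambda i: currents[i])
    # Local update: only the winner's synapses see a post-synaptic spike,
    # so only its weights change; losers remain untouched.
    weights[winner] = [w_i + lr * (x_i - w_i)
                       for w_i, x_i in zip(weights[winner], x)]
    return winner
```

Repeated presentations of distinct input patterns make each neuron's weight vector converge toward one pattern, which is the emergent selectivity discussed above.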
\paragraph{Stability of synaptic memory.}
\label{sec:stability}
Long-term plasticity requires continuous adaptation to new patterns but it also requires the retention of previously learned patterns.
As any physical system has a limited storage capacity, the presentation of new experiences will continuously generate new memories that would eventually lead to saturation of the capacity. When presenting new experiences, the stability (and retrieval) of old memories is a major problem in \acp{ANN}.
When learning of new patterns leads to the complete corruption or destruction of previously learned ones, then the network undergoes \textit{catastrophic forgetting}~\cite{Nadal_etal86,French99}.
Both catastrophic forgetting and continual learning are critical problems that need to be addressed for always-on neural processing systems, including artificial embedded processors applied to solving edge-computing tasks.
The main challenge in always-on learning is not its resilience against time, but its resilience against ongoing activity~\cite{Fusi_etal05}.
Different strategies can be used to find a good balance between plasticity and stability. A first solution is to introduce stochasticity in the learning process, for example by using Poisson distributed spike trains to represent input signals to promote plasticity, while promoting stability using a bi-stable internal variable that slowly drives the weight between one of two possible stable states~\cite{Brader_etal07}. As a result, only a few synapses will undergo a \ac{LTP} or \ac{LTD} transition for a given input, to progressively learn new patterns without forgetting previously learned patterns.
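The bistable drift can be sketched as follows, loosely after the idea in~\citeasnoun{Brader_etal07}: between plasticity events, an internal variable drifts toward one of two stable states depending on which side of a threshold it sits, and the binary synaptic efficacy is read out by thresholding. All numerical values are hypothetical.

```python
# Sketch of a bistable internal synaptic variable x in [0, 1].
# Between update events, x drifts toward the nearer stable state (0 or 1);
# theta, alpha, beta and the drift form are illustrative placeholders.
def drift(x, dt, theta=0.5, alpha=0.1, beta=0.1):
    """One drift step of duration dt for the internal variable x."""
    if x > theta:
        return min(1.0, x + alpha * dt)  # drift up toward the high state
    return max(0.0, x - beta * dt)       # drift down toward the low state

def efficacy(x, theta=0.5):
    """Binary synaptic efficacy read out from the internal variable."""
    return 1 if x > theta else 0

# Small perturbations that do not cross theta are erased by the drift,
# protecting stored memories against ongoing spontaneous activity.
```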
A second solution is to have an intrinsic stop-learning mechanism to modulate learning and not change synaptic weights if there is enough evidence that the current input pattern has already been learned.
Depending on the particular pattern recognition problem to be solved and the learning paradigm (offline/online), specific properties can be more or less important.
\section{Computational primitives of synaptic plasticity}
In this work, we refer to ``computational primitives of synaptic plasticity'' as those basic plasticity mechanisms that make use of local variables.
\subsection{Local variables}
\label{sec:local-var}
\begin{center}
\begin{figure}[H]
\includegraphics[width=\textwidth, angle = 0 ]{figures/Plasticity_Plot.pdf}
\centering
\caption{The local variables involved in the local synaptic plasticity models we review in this survey: Pre- and/or post-synaptic spike traces (capped or integrative) and post-synaptic membrane (dendritic or somatic) voltage.}
\label{fig:local_var}
\end{figure}
\end{center}
The following are the local variables that we consider:
\begin{description}
\item[Pre- and post-synaptic spike traces:]
These are the traces generated at the pre- and post-synaptic site triggered by the spikes of the corresponding pre- or post-synaptic neurons.
They can be computed by either integrating the spikes using a linear operator in models and a low-pass filter in circuits, or by using non-linear operators/circuits. Figure~\ref{fig:local_var} shows examples of both linear (denoted as ``integrative'') and non-linear (denoted as ``capped'') spike traces.
In general, these traces represent the recent level of activation of the pre- and post-synaptic neurons.
Depending on the learning rule, there might be one or more spike traces per neuron with different decay rates.
The biophysical substrates of these traces can be diverse~\cite{Pfister_Gerstner06,Graupner_Brunel10}, for example reflecting the amount of bound glutamate~\cite{Karmarkar_Buonomano02} or the number of \ac{NMDA} receptors in an activated state~\cite{Senn_etal01}. The post-synaptic spike traces could reflect the Calcium concentration mediated through voltage-gated Calcium channels and \ac{NMDA} channels~\cite{Karmarkar_Buonomano02}, the number of secondary messengers in a deactivated state of the \ac{NMDA} receptor~\cite{Senn_etal01} or the voltage trace of a back-propagating action potential~\cite{Shouval_etal02}.
\item[Post-synaptic membrane voltage:]
The post-synaptic neuron's membrane potential is also a local variable, as it is accessible to all of the neuron's synapses.
\end{description}
These local variables are the basic elements that can be used to induce a change in the synaptic weight, which is reflected in the change of the post-synaptic membrane voltage that a pre-synaptic spike induces.
\subsection{Spike interaction}
\label{sec:spike-interaction}
We use the term spike interaction to refer to the number of spikes from the past activity of the neurons that are taken into account for the weight update. In particular, we distinguish two spike interaction schemes:
\begin{description}
\item[All-to-all:] In this scheme, the spike trace is ``integrative'' and is influenced, asymptotically, by the whole previous spiking history of the pre-synaptic neuron. Each spike contributes in the form of a Dirac delta which should be integrated. Treating the spikes as point processes whose width is zero in the limit, the trace dynamics can be written as in Eq.~\eqref{eq:trace-rate}:
\begin{equation}
\centering
\label{eq:trace-rate}
\frac{dX(t)}{dt} = - \frac{X(t)}{\tau} + \sum _{i} A \: \delta \left ( t - t_i \right )
\end{equation}
where $\delta \left ( t - t_i \right )$ is a spike occurring at time $t_i$, $\tau$ is the exponential decay time constant and $A$ is the jump value such that at the moment of a spike event, \textit{the spike trace jumps by $A$}. In addition to being a good first-order model of synaptic transmission, this transfer function can be easily implemented in electronic hardware using low-pass filters. Indeed, the trace $X(t)$ represents the online estimate of the neuron's mean firing rate~\cite{Dayan_Abbott01}.
\item[Nearest spike:] This is a non-linear mode in which the spike trace is only influenced by the most recent pre-synaptic spike. It is implemented by means of a hard bound that limits the maximum value of the trace, such that if a jump reaches it, the trace is ``capped'' at that bound value, as expressed in Eq.~\eqref{eq:trace-time}:
\begin{equation}
\centering
\label{eq:trace-time}
\frac{dX(t)}{dt} = - \frac{X(t)}{\tau} + \sum _{i} (A - X(t)) \: \delta \left ( t - t_i \right )
\end{equation}
where $A$ is both the jump value and the hard bound, such that at the moment of a spike event, \textit{the spike trace jumps to $A$}. It means that the spike trace gives an online estimate of the time since the last spike.
\end{description}
Therefore, the jump and bound parameters control the sensitivity of the learning rule to the spike timing and rate combined (all-to-all) or to the spike timing alone (nearest spike), while the decay time constant controls how fast the synapse forgets about these activities.
Further spike interaction schemes are possible, for example by adapting the nearest spike interaction so that spike interactions producing \ac{LTP} would dominate over those producing \ac{LTD}.
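The two trace dynamics of Eqs.~\eqref{eq:trace-rate} and~\eqref{eq:trace-time} can be sketched in discrete time as follows, using exact exponential decay between time steps; the time step, time constant, and jump value are arbitrary.

```python
import math

# Discrete-time sketch of the two spike-trace dynamics.
# spikes: sequence of 0/1 per time step; dt and tau in the same time unit;
# A is the jump value. Parameter values in any call are illustrative.
def run_trace(spikes, dt, tau, A, mode="all-to-all"):
    """Return the trace value after each time step."""
    x, out = 0.0, []
    for s in spikes:
        x *= math.exp(-dt / tau)  # exponential decay between events
        if s:
            # all-to-all: jump BY A (integrative); nearest: jump TO A (capped)
            x = x + A if mode == "all-to-all" else A
        out.append(x)
    return out
```

With a slow decay, the all-to-all trace accumulates across spikes (tracking the mean rate), while the nearest-spike trace resets to $A$ at every spike (tracking the time since the last spike).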
\subsection{Update trigger}
In most synaptic plasticity rules, the weight update is event-based and happens at the moment of a pre-synaptic spike~\citeaffixed{Brader_etal07}{e.g.}, post-synaptic spike~\citeaffixed{Diehl_Cook15}{e.g.} or both pre- and post-synaptic spikes~\citeaffixed{Song_etal00}{e.g.}. This event-based paradigm is particularly interesting for hardware implementations, as it exploits the spatio-temporal sparsity of the spiking activity to reduce the energy consumption with fewer updates. On the other hand, some rules use a continuous update~\citeaffixed{Graupner_Brunel12}{e.g.} arguing for more biological plausibility, or a mixture of both, with depression at the moment of a pre-synaptic spike and continuous potentiation~\citeaffixed{Clopath_etal10}{e.g.}.
\subsection{Synaptic weights}
The synaptic weight represents the strength of a connection between two neurons.
Synaptic weights have three main characteristics:
\begin{enumerate}
\item Type: Synaptic weights can be continuous, with full floating-point resolution in software, or with fixed/limited resolution (binary in the extreme case). Both cases can be combined by using fixed resolution synapses (e.g., binary synapses), which however have a continuous internal variable that determines if and when the synapse undergoes a low-to-high (\ac{LTP}) or high-to-low (\ac{LTD}) transition, depending on the learning rule.
\item Bistability: In parallel to the plastic changes that update the weights, on their weight update trigger conditions, synaptic weights can be continuously driven to one of two stable states, depending on additional conditions on the weight itself and on its recent history. These bistability mechanisms have been shown to protect memories against unwanted modifications induced by ongoing spontaneous activity~\cite{Brader_etal07} and provide a way to implement stochastic selection mechanisms.
\item Bounds: In any physical neural processing system, whether biological or artificial, synaptic weights have bounds: they cannot grow to infinity. Two types of bounds can be imposed on the weights: (1) hard bounds, in rules with additive updates independent of weight, or (2) soft bounds, in weight-dependent updates (for example, multiplicative) rules that drive the weights toward the bounds asymptotically~\cite{Morrison_etal08}.
\end{enumerate}
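The difference between hard and soft bounds can be sketched with the two potentiation updates below (illustrative amplitudes; depression is analogous with the opposite sign):

```python
# Illustrative additive (hard-bounded) vs multiplicative (soft-bounded)
# potentiation for a weight w in [0, W_MAX]; amplitudes are arbitrary.
W_MAX = 1.0

def additive_ltp(w, a=0.1):
    """Weight-independent update, clipped at the hard bound."""
    return min(W_MAX, w + a)

def soft_ltp(w, a=0.1):
    """Weight-dependent update: the step shrinks as w approaches W_MAX."""
    return w + a * (W_MAX - w)
```

The additive rule reaches and then sits at the bound, while the soft-bounded rule only approaches it asymptotically, as noted above for weight-dependent updates.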
\subsection{Stop-learning}
\label{sec:stop-learning}
An intrinsic mechanism to modulate learning and automatically switch from the training mode to the inference mode is important, especially in an online learning context.
This ``stop-learning'' mechanism can be either implemented with a global signal related to the performance of the system, as in reinforcement learning, or with a local signal produced in the synapses or in the soma.
For example, a local variable that can be used to implement stop-learning could be derived from the post-synaptic neuron's membrane voltage~\cite{Clopath_etal10,Albers_etal16} or spiking activity~\cite{Brader_etal07,Graupner_Brunel12}.
\section{Models of synaptic plasticity}
\label{sec:models}
We present a representative set of spike-based synaptic plasticity models, summarize their main features, and explain their working principles. Table~\ref{tab:models} shows a direct comparison of the computational principles used by the relevant models, and Tables~\ref{tab:models-variablesI} and~\ref{tab:models-variablesII} show the main variables common to the different models.
\begin{center}
\begin{table}[H]
\caption{Spike-based local synaptic plasticity rules: comparative table}
\label{tab:models}
\resizebox{\textwidth}{!}{%
\begin{tabular}{>{\hspace{0pt}}m{0.1\linewidth}>{\centering\hspace{0pt}}m{0.27\linewidth}>{\centering\hspace{0pt}}m{0.09\linewidth}>{\centering\hspace{0pt}}m{0.08\linewidth}>{\centering\hspace{0pt}}m{0.09\linewidth}>{\centering\hspace{0pt}}m{0.1\linewidth}>{\centering\hspace{0pt}}m{0.08\linewidth}>{\centering\hspace{0pt}}m{0.09\linewidth}>{\centering\arraybackslash\hspace{0pt}}m{0.1\linewidth}}
\hline
\multirow{2}{\linewidth}{\hspace{0pt}\textbf{Plasticity rule}} & \multirow{2}{\linewidth}{\hspace{0pt}\Centering{}\textbf{Local variables}} & \multirow{2}{\linewidth}{\hspace{0pt}\Centering{}\textbf{Spikes interaction}} & \multicolumn{2}{>{\Centering\hspace{0pt}}m{0.2\linewidth}}{\textbf{Update trigger (spike)}} & \multicolumn{3}{>{\Centering\hspace{0pt}}m{0.27\linewidth}}{\textbf{Synaptic weights}} & \multirow{2}{\linewidth}{\hspace{0pt}\Centering{}\textbf{Stop-learning}} \cr
\cline{4-8}
& & & \textbf{\acs{LTD}} & \textbf{\acs{LTP}} & \textbf{Type} & \textbf{Bistability} & \textbf{Bounds} & \cr
\hline
\textbf{\acs{STDP}} & Pre- and post-synaptic spike traces & Nearest spike & Pre & Post & Analog & No & Hard & No \cr
\hline
\textbf{\acs{TSTDP}} & Pre-synaptic spike trace + 2 post-synaptic spike traces (different time constants) & Nearest spike / all-to-all & Pre & Post & Analog & No & Hard & No \cr
\hline
\textbf{\acs{SDSP}} & Post-synaptic membrane voltage + post-synaptic spike trace & All-to-all & \multicolumn{2}{>{\Centering\hspace{0pt}}m{0.2\linewidth}}{Pre} & Binary$^*$ & Yes & Hard & Yes$^1$ \cr
\hline
\textbf{\acs{VSTDP}} & Pre-synaptic spike trace + post-synaptic membrane voltage + 2 post-synaptic membrane voltage traces & All-to-all & Pre & Continuous & Analog & No & Hard & Yes$^2$ \cr
\hline
\textbf{\acs{CSTDP}} & One synaptic spike trace updated by both pre- and post-synaptic spikes & All-to-all & \multicolumn{2}{>{\Centering\hspace{0pt}}m{0.2\linewidth}}{Continuous} & Analog & Yes & Soft & Yes$^3$ \cr
\hline
\textbf{\acs{SBCM}} & Pre- and post-synaptic spike traces & All-to-all & \multicolumn{2}{>{\Centering\hspace{0pt}}m{0.2\linewidth}}{Continuous} & Analog & No & Hard & No \cr
\hline
\textbf{\acs{MPDP}} & Pre-synaptic spike trace + post-synaptic membrane voltage & All-to-all & \multicolumn{2}{>{\Centering\hspace{0pt}}m{0.2\linewidth}}{Continuous} & Analog & No & Hard & Yes$^4$ \cr
\hline
\textbf{\acs{DPSS}} & Pre-synaptic spike trace + post-synaptic dendritic voltage + post-synaptic somatic spike & All-to-all & \multicolumn{2}{>{\Centering\hspace{0pt}}m{0.2\linewidth}}{Continuous} & Analog & No & Hard & No \cr
\hline
\textbf{\acs{RDSP}} & Pre-synaptic spike trace & All-to-all & \multicolumn{2}{>{\Centering\hspace{0pt}}m{0.2\linewidth}}{Post} & Analog & No & Soft & No \cr
\hline
\textbf{\acs{HMPDP}} & Pre-synaptic spike trace + post-synaptic membrane voltage & All-to-all & \multicolumn{2}{>{\Centering\hspace{0pt}}m{0.2\linewidth}}{Continuous} & Analog & No & Hard & Yes$^5$ \cr
\hline
\textbf{\acs{CMPDP}} & Post-synaptic membrane voltage + post-synaptic spike trace & All-to-all & \multicolumn{2}{>{\Centering\hspace{0pt}}m{0.2\linewidth}}{Pre} & Analog & No & Hard & No \cr
\hline
\textbf{\acs{BDSP}} & Pre-synaptic spike trace + post-synaptic event trace + post-synaptic burst trace & All-to-all & Post (event) & Post (burst) & Analog & No & Hard & No \cr
\hline
\end{tabular}%
}
\footnotesize{$^*$ Binary with analog internal variable. \newline
$^1$ At low and high activities of post-neuron (post-synaptic spike trace). \newline
$^2$ At low low-pass filtered post-synaptic membrane voltage (post-synaptic membrane voltage trace). \newline
$^3$ At low activity of pre- and post-neurons merged (synaptic spike trace). \newline
$^4$ At medium (between two thresholds) internal update trace. \newline
$^5$ At medium (between two thresholds) post-synaptic membrane voltage.}
\end{table}
\end{center}
\subsection{Song et al. (2000): \acf{STDP}}
\acf{STDP}~\cite{Song_etal00} was proposed to model how pairs of pre-post spikes interact based solely on their timing. It is one of the most widely used synaptic plasticity algorithms in the literature.
\begin{equation}
\label{eq:stdp}
\Delta w =
\begin{cases}
\mathrm{A}_{+} \exp\left(\frac{\Delta t}{\tau_{+}}\right) & \text{if } \Delta t < 0\\
-\mathrm{A}_{-} \exp\left(\frac{-\Delta t}{\tau_{-}}\right) & \text{if } \Delta t \geq 0
\end{cases}
\end{equation}
The synaptic weight is updated according to Eq.~\eqref{eq:stdp}, whose variables are described in Tab.~\ref{tab:stdp}. If a post-synaptic spike occurs after a pre-synaptic one ($\Delta t<0$), potentiation is induced (triggered by the post-synaptic spike). In contrast, if a pre-synaptic spike occurs after a post-synaptic spike ($\Delta t\geq 0$), depression occurs (triggered by the pre-synaptic spike). The time constants $\tau_{+}$ and $\tau_{-}$ determine the time window in which the spike interaction leads to changes in synaptic weight.
As shown in Tab.~\ref{tab:models}, \ac{STDP} is based on local pre- and post-spike traces with nearest spike interaction, meaning that the spike traces are capped. Fig.~\ref{fig:stdp_traces} illustrates how \ac{STDP} is implemented using these spike traces for online learning.
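The trace-based online implementation described above can be made concrete with a minimal Python sketch; all parameter values, function names and spike times below are illustrative assumptions, not values from~\citeasnoun{Song_etal00}:

```python
import math

# Minimal online STDP sketch (all parameters are assumed toy values).
# Capped traces: each spike resets its trace to the full amplitude
# (nearest-spike interaction); the trace then decays exponentially.
A_PLUS, A_MINUS = 0.01, 0.012     # weight-change amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # trace time constants (ms)

def decay(trace, dt, tau):
    """Exponential decay of a spike trace over dt milliseconds."""
    return trace * math.exp(-dt / tau)

def on_post_spike(w, pre_trace, w_max=1.0):
    """Potentiation, triggered by a post-synaptic spike and
    proportional to the decayed pre-synaptic trace."""
    return min(w_max, w + pre_trace)

def on_pre_spike(w, post_trace):
    """Depression, triggered by a pre-synaptic spike and
    proportional to the decayed post-synaptic trace."""
    return max(0.0, w - post_trace)

w = 0.5
# Pre spike at t = 0, post spike at t = 10 ms (delta_t = -10 < 0 -> LTP)
pre_trace = decay(A_PLUS, 10.0, TAU_PLUS)
w = on_post_spike(w, pre_trace)
# Post spike at t = 10, next pre spike at t = 25 ms (delta_t > 0 -> LTD)
post_trace = decay(A_MINUS, 15.0, TAU_MINUS)
w = on_pre_spike(w, post_trace)
```

Because each spike resets (caps) its trace instead of adding to it, only the nearest pre-post pair contributes to each update, matching the nearest-spike interaction listed in Tab.~\ref{tab:models}.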
\begin{center}
\begin{figure}[H]
\includegraphics[width=\textwidth, angle = 0 ]{figures/STDP_Plot.pdf}
\centering
\caption{Online implementation principle of STDP using local pre- and post-synaptic capped spike traces which provide an online estimate of the time since the last spike. For example, at the moment of post-synaptic spike, potentiation is induced with a weight change that is proportional to the value of the pre-synaptic spike trace, and the post-synaptic spike trace is updated with a jump to $A_{-}$.}
\label{fig:stdp_traces}
\end{figure}
\end{center}
\begin{table}[H]
\centering
\caption{Variables of the \ac{STDP} rule.}
\label{tab:stdp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$w$& Synaptic weight\cr
$\mathrm{A}_{+}$ / $\mathrm{A}_{-} $& Maximum amount of synaptic change\cr
$\Delta t$& Time difference between pre- and post-synaptic spikes: $t_{pre} - t_{post}$\cr
$\tau_{+}$ / $\tau_{-}$& Time constants of synaptic traces\cr
\bottomrule
\end{tabular}
\end{table}
\subsection{Pfister and Gerstner (2006): \acf{TSTDP}}
The main limitation of the original \ac{STDP} model is that it is purely timing-based; thus, it cannot reproduce frequency effects or the outcomes of triplet and quadruplet experiments. \citeasnoun{Pfister_Gerstner06} introduce additional terms in the learning rule to expand the classical pair-based \ac{STDP} to a \acf{TSTDP}.
Specifically, the authors introduce a triplet depression term (i.e.\ 2-pre and 1-post spikes) and a triplet potentiation term (i.e.\ 1-pre and 2-post spikes). They do this by adding
four additional variables that they call detectors: $r_{1}$, $r_{2}$, $o_{1}$ and $o_{2}$. The $r_{1}$ and $r_{2}$ detectors are pre-synaptic spike traces which increase whenever there is a pre-synaptic spike and decay back to zero with their individual intrinsic time constants. Similarly, the $o_{1}$ and $o_{2}$ detectors increase on post-synaptic spikes and decay back to zero with their individual intrinsic time constants. The weight changes are defined in Eqs.~\eqref{eq:tstdp}, whose variables are described in Tab.~\ref{tab:tstdp}.
\begin{equation}
\label{eq:tstdp}
\begin{array}{l}
w(t)\rightarrow w(t)+r_{1}(t)\left[A_{2}^{+}+A_{3}^{+} o_{2}(t-\epsilon)\right] \text { if } t=t^{\mathrm{{post}}} \\
w(t)\rightarrow w(t)-o_{1}(t)\left[A_{2}^{-}+A_{3}^{-} r_{2}(t-\epsilon)\right] \text { if } t=t^{\mathrm{pre}}
\end{array}
\end{equation}
While in classical \ac{STDP} potentiation takes place when a post-synaptic spike occurs shortly after a pre-synaptic spike, in the triplet framework several conditions need to be considered. Potentiation is triggered at every post-synaptic spike, where the weight change is gated by the $r_{1}$ detector and modulated by the $o_{2}$ detector. If there are no post-synaptic spikes shortly before the current one ($o_{2}$ is zero), the degree of potentiation is determined by $A_{2}^{+}$ only, just like in pair-based \ac{STDP}. If, however, a triplet of spikes occurs (in this case 1-pre and 2-post), $o_{2}$ is non-zero and an additional potentiation term $A_{3}^{+} o_{2}(t-\epsilon)$ contributes to the weight change. Analogously, $r_{2}$, $o_{1}$, $A_{2}^{-}$ and $A_{3}^{-}$ operate for synaptic depression, which is triggered at every pre-synaptic spike.
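The two event-driven updates of Eqs.~\eqref{eq:tstdp} can be sketched as follows; the amplitudes and detector values are arbitrary toy numbers chosen only for illustration:

```python
# Triplet-STDP update sketch (assumed toy amplitudes).
A2P, A3P = 0.005, 0.006  # pair / triplet potentiation amplitudes
A2M, A3M = 0.007, 0.002  # pair / triplet depression amplitudes

def on_post(w, r1, o2_before):
    """Potentiation at a post-synaptic spike: gated by the pre-trace r1
    and modulated by the post-trace o2 evaluated just before the
    current spike (the 't - epsilon' in the triplet equations)."""
    return w + r1 * (A2P + A3P * o2_before)

def on_pre(w, o1, r2_before):
    """Depression at a pre-synaptic spike: gated by the post-trace o1
    and modulated by the pre-trace r2."""
    return w - o1 * (A2M + A3M * r2_before)

w = 0.5
# Pair (pre then post): o2 is still zero -> pure pair-based LTP
w = on_post(w, r1=0.8, o2_before=0.0)
# Triplet (post, pre, post): o2 is non-zero at the second post spike,
# so the additional A3P term also contributes
w = on_post(w, r1=0.5, o2_before=0.4)
```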
\begin{table}[H]
\centering
\caption{Variables of the \ac{TSTDP} rule.}
\label{tab:tstdp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$w$& Synaptic weight\cr
$r_{1}$ / $r_{2}$& Pre-synaptic spike traces - integrative \cr
$o_{1}$ / $o_{2}$ & Post-synaptic spike traces - integrative\cr
$\mathrm{A}_{2}^{+}$ / $\mathrm{A}_{2}^{-}$ &Weight change amplitude whenever there is a pair event \cr
$\mathrm{A}_{3}^{+}$ / $\mathrm{A}_{3}^{-}$& Weight change amplitude whenever there is triplet event\cr
$ \epsilon $ & Small positive constant\cr
$t^{\mathrm{pre}}$ / $t^{\mathrm{post}}$& Time of pre- and post-synaptic spikes\cr
\bottomrule
\end{tabular}
\end{table}
\subsection{Brader et al. (2007): \acf{SDSP}}
The \acf{SDSP} learning rule addresses in particular the problem of memory maintenance and catastrophic forgetting: the presentation of new experiences continuously generates new memories that will eventually saturate the limited storage capacity and hence cause forgetting.
As discussed in Sec.~\ref{sec:stability}, this problem concerns all learning rules in an online context.
\ac{SDSP} attempts to solve it by slowing the learning process in an unbiased way. The model randomly selects the synaptic changes that will be consolidated among those triggered by the input, therefore learning to represent the statistics of the incoming stimuli.
The \ac{SDSP} model proposed by~\citeasnoun{Brader_etal07} is demonstrated in a feed-forward neural network used for supervised learning in the context of pattern classification. Nevertheless, the model is also well suited for unsupervised learning of patterns of activation in attractor neural networks~\cite{Del-Giudice_etal03,Brader_etal07}.
It does not rely on the precise timing difference between pre- and post-synaptic spikes; instead, the weight update is triggered by single pre-synaptic spikes. The sign of the weight update is determined by the post-synaptic neuron's membrane voltage $V(t^{\mathrm{pre}})$. The post-synaptic neuron's Calcium variable $C(t^{\mathrm{pre}})$ represents a trace of the recent low-pass filtered post-synaptic activity and is used to determine whether synaptic updates should occur (stop-learning mechanism). The synaptic dynamics is described in Eq.~\eqref{eq:trace-rate}.
The internal variable $X$ is updated according to Eq.~\eqref{eq:sdsp} with the variables described in Tab.~\ref{tab:sdsp}.
\begin{equation}
\label{eq:sdsp}
\begin{array}{l}
X \rightarrow X + a
\text{ if } V(t^{\mathrm{pre}}) > \theta_{V} \text { and } \theta_{\mathrm{up}}^{\mathrm{l}} < C(t^{\mathrm{pre}}) <\theta_{\mathrm{up}}^{\mathrm{h}}\\ \\
X \rightarrow X - b
\text{ if } V(t^{\mathrm{pre}}) \leq \theta_{V} \text { and } \theta_{\mathrm{down}}^{\mathrm{l}} < C(t^{\mathrm{pre}}) <\theta_{\mathrm{down}}^{\mathrm{h}}\\
\end{array}
\end{equation}
The weight update depends on the instantaneous values of $V(t^{\mathrm{pre}})$ and $C(t^{\mathrm{pre}})$ at the arrival of a pre-synaptic spike. An increase of the synaptic weight is triggered by the pre-synaptic spike if $V(t^{\mathrm{pre}})$ is above the threshold $\theta_{V}$, provided that the post-synaptic Calcium trace $C(t^{\mathrm{pre}})$ is between the potentiation thresholds $\theta_{\mathrm{up}}^{\mathrm{l}}$ and $\theta_{\mathrm{up}}^{\mathrm{h}}$. An analogous but flipped mechanism induces a decrease in the weights.
The synaptic weight is restricted to the interval $0 \leq X \leq X_{max}$. The bistability of the synaptic weight implies that the internal variable $X$ drifts (and is bounded) to either a low state or a high state, depending on whether $X$ is below or above a threshold $\theta_{X}$, respectively. This is shown in Eq.~\eqref{eq:sdsp-bistability}.
\begin{equation}
\label{eq:sdsp-bistability}
\frac{dX}{dt} =
\begin{cases}
\alpha & \text {if $X > \theta_{X}$}\\
-\beta & \text {if $X \leq \theta_{X}$}
\end{cases}
\end{equation}
\begin{table}[H]
\centering
\caption{Variables of the \ac{SDSP} rule.}
\label{tab:sdsp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$X$& Synaptic weight\cr
$a,b$ & Jump sizes\cr
$V(t)$& Post synaptic membrane potential\cr
$\theta_{V}$ & Membrane potential threshold \cr
$C(t)$& Post-synaptic spike trace (Calcium) - integrative\cr
$\theta_{\mathrm{up}}^{\mathrm{l}}$ / $\theta_{\mathrm{up}}^{\mathrm{h}}$ / $\theta_{\mathrm{down}}^{\mathrm{l}}$ / $\theta_{\mathrm{down}}^{\mathrm{h}}$ &Thresholds on the Calcium variable\cr
$X_{max}$& Maximum synaptic weight\cr
$\alpha$ / $\beta$ & Bistability rates, $\in\mathbb{R}^+$ \cr
$\theta_{X}$& Bistability threshold on the synaptic weight\cr
\bottomrule
\end{tabular}
\end{table}
\subsection{Clopath et al. (2010): \acf{VSTDP}}
The \acf{VSTDP} rule has been introduced to unify several experimental observations such as post-synaptic membrane voltage dependence, pre-post spike timing dependence and post-synaptic rate dependence~\cite{Clopath_Gerstner10}, but also to explain the emergence of some connectivity patterns in the cerebral cortex~\cite{Clopath_etal10}.
In this model, depression and potentiation are two independent mechanisms whose sum produces the total synaptic change. Variables of the equations are described in Tab.~\ref{tab:vstdp}.
Depression is triggered by the arrival of a pre-synaptic spike ($X(t)=1$) and is induced if the voltage trace $\overline{u}_{-}(t)$ of the post-synaptic membrane voltage $u(t)$ is above the threshold $\theta_{-}$ (see Eq.~\eqref{eq:vstdp_ltd}).
\begin{equation}
\label{eq:vstdp_ltd}
\frac{dw^{-}}{dt} = -A_{\mathrm{LTD}} X(t)[\overline{u}_{-}(t) - \theta_{-}]_{+}
\end{equation}
On the other hand, potentiation is continuous and occurs following Eq.~\eqref{eq:vstdp_ltp} if the following conditions are met at the same time:
\begin{itemize}
\item The instantaneous post-synaptic membrane voltage $u(t)$ is above the threshold $\theta_{+}$, with $\theta_{+} > \theta_{-}$;
\item The low-pass filtered post-synaptic membrane voltage $\overline{u}_{+}$ is above $\theta_{-}$;
\item A pre-synaptic spike occurred a few milliseconds earlier and has left a trace $\overline{x}$.
\end{itemize}
\begin{equation}
\label{eq:vstdp_ltp}
\frac{dw^+}{dt} = +A_{\mathrm{LTP}}\: \overline{x}(t)\: \left [ u(t) - \theta_{+} \right ]_{+}\: \left [ \overline{u}_{+}(t) - \theta_{-} \right ]_{+}
\end{equation}
The total synaptic change is the sum of depression and potentiation expressed in Eqs.~\eqref{eq:vstdp_ltd} and \eqref{eq:vstdp_ltp} respectively, within the weights' hard bounds $0$ and $w_{\mathrm{max}}$.
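One discrete-time step of the combined rule can be sketched as follows; the thresholds, amplitudes and voltage values are illustrative assumptions:

```python
# One Euler step of the voltage-based STDP rule (toy parameters).
A_LTD, A_LTP = 1e-4, 1e-4
THETA_MINUS, THETA_PLUS = -70.0, -45.0  # mV, with theta_+ > theta_-
W_MAX = 1.0

def relu(x):
    """The rectifying bracket [x]_+ of the rule."""
    return x if x > 0.0 else 0.0

def vstdp_step(w, pre_spike, x_trace, u, u_minus, u_plus, dt):
    # Depression: event-driven, applied once per pre-synaptic spike
    # (the delta function in the depression term integrates to 1)
    dw = -A_LTD * relu(u_minus - THETA_MINUS) if pre_spike else 0.0
    # Potentiation: continuous, integrated over the time step dt
    dw += A_LTP * x_trace * relu(u - THETA_PLUS) * relu(u_plus - THETA_MINUS) * dt
    return min(W_MAX, max(0.0, w + dw))   # hard bounds 0 and w_max

# Depolarized neuron shortly after a pre-synaptic spike -> potentiation
w = vstdp_step(0.5, pre_spike=False, x_trace=0.6,
               u=-40.0, u_minus=-68.0, u_plus=-60.0, dt=1.0)
```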
\begin{table}[H]
\centering
\caption{Variables of the \ac{VSTDP} rule.}
\label{tab:vstdp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$w$ & Synaptic weight\cr
$X(t)$& Pre-synaptic spike train\cr
& $X(t) = \sum _{n} \delta \left ( t - t^{n} \right )$\cr
$\delta(.)$ & Delta-Dirac function\cr
$t^{n}$& Time of the n-th pre-synaptic spike\cr
$u(t)$& Post-synaptic membrane voltage\cr
$\overline{u}_{-}(t)$ / $\overline{u}_{+}(t)$& Post-synaptic membrane voltage traces\cr
$A_{\mathrm{LTD}}$ / $A_{\mathrm{LTP}}$ & Amplitudes for depression and potentiation\cr
$\theta_{-}$ / $\theta_{+}$& Thresholds\cr
$[.]_{+}$& Rectifying bracket $[x]_+ = x$ if $x>0$, $[x]_+ =0$ otherwise\cr
$\overline{x}(t)$& Pre-synaptic spike trace - integrative\cr
$w_{\mathrm{max}}$& Weight max hard bound\cr
\bottomrule
\end{tabular}
\end{table}
\subsection{Graupner and Brunel (2012): \acf{CSTDP}}
Building on molecular studies, \citeasnoun{Graupner_Brunel12} proposed a plasticity model (\ac{CSTDP}) based on a transient Calcium signal. They model a single Calcium trace variable $c(t)$ which represents the linear sum of individual Calcium transients elicited by pre- and post-synaptic spikes at times $t_i$ and $t_j$, respectively. The amplitudes of the transients elicited by pre- and post-synaptic spikes are given by $C_{\mathrm{pre}}$ and $C_{\mathrm{post}}$, respectively, and $c(t)$ decays exponentially towards $0$.
In the proposed model, the synaptic strength is described by the synaptic efficacy $\rho\in[0,1]$, which is constantly updated according to Eq.~\eqref{eq:cstdp}, whose variables are described in Tab.~\ref{tab:cstdp}.
Changes in synaptic efficacy are continuous and depend on the relative times in which the Calcium trace $c(t)$ is above the potentiation ($\theta_p$) and depression ($\theta_d$) thresholds~\cite{Graupner_Brunel12}.
\begin{equation}
\begin{split}
\tau \frac{d\rho}{dt} = -\rho(1 - \rho)(\rho_{\star} - \rho) + \gamma_{p}(1 - \rho)\Theta[c(t) - \theta_p] - \gamma_d \rho \Theta[c(t) - \theta_d] + \mathrm{Noise}(t)
\end{split}
\label{eq:cstdp}
\end{equation}
If the Calcium variable is above the threshold for potentiation ($\Theta[c(t) - \theta_p] = 1$) the synaptic efficacy is continuously increased by $\frac{\gamma_p(1 - \rho)}{\tau}$ and as long as the Calcium variable is above the threshold for depression ($\Theta[c(t) - \theta_d] = 1$) the synaptic efficacy is continuously decreased by $-\frac{\gamma_d\rho}{\tau}$.
Eventually, the efficacy updates induced by the Calcium concentration are in direct competition with each other as long as $c(t)$ is above both thresholds~\cite{Graupner_Brunel12}.
In addition to constant potentiation or depression updates, the bistability mechanism $-\rho(1 - \rho)(\rho_{\star} - \rho)$ drives the synaptic strength toward $0$ or $1$, depending on whether the instantaneous value of $\rho$ is below or above the bistability threshold $\rho_{\star}$.
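A forward-Euler integration of Eq.~\eqref{eq:cstdp}, with the noise term omitted and assumed parameter values, illustrates how the potentiation and depression terms compete while the Calcium trace sits above both thresholds:

```python
# Euler integration sketch of the Calcium-based efficacy dynamics
# (noise omitted; all parameter values are toy assumptions).
TAU = 150.0
RHO_STAR = 0.5
GAMMA_P, GAMMA_D = 1.0, 0.5
THETA_P, THETA_D = 1.3, 1.0

def drho(rho, c):
    """Right-hand side of the efficacy dynamics divided by tau."""
    bistable = -rho * (1.0 - rho) * (RHO_STAR - rho)
    pot = GAMMA_P * (1.0 - rho) * (1.0 if c > THETA_P else 0.0)
    dep = -GAMMA_D * rho * (1.0 if c > THETA_D else 0.0)
    return (bistable + pot + dep) / TAU

# While c is above both thresholds, potentiation and depression compete
rho, c, dt = 0.2, 1.5, 0.1
for _ in range(1000):
    rho += drho(rho, c) * dt
# rho rises toward the intermediate fixed point set by gamma_p / gamma_d
```

Once $c(t)$ falls below both thresholds, only the bistability term remains and $\rho$ relaxes to $0$ or $1$ depending on its position relative to $\rho_{\star}$.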
\citeasnoun{Graupner_Brunel12} show that their rule replicates a plethora of dynamics found in numerous experiments, including pair-based \ac{STDP} behavior with different \ac{STDP} curves, synaptic dynamics found in CA3-CA1 slices for post-synaptic neuron spikes, and dynamics based on spike triplets or quadruplets.
However, the rule contains only a single Calcium trace variable $c(t)$ per synapse, which is updated by both pre- and post-synaptic spikes. Since the synaptic efficacy update only depends on this variable and not on the individual or paired spike events of the pre- and post-synaptic neuron, the system can get into a state in which isolated pre-synaptic or isolated post-synaptic activity can lead to synaptic efficacy changes. In extreme cases, isolated pre(post)-synaptic spikes could drive a highly depressed ($\rho = 0$) synapse into the potentiated state ($\rho = 1$), without the occurrence of any post(pre)-synaptic action potential.
In a recent work, \citeasnoun{Chindemi_etal22} use a modified version of the \ac{CSTDP} rule with post-synaptic Calcium dynamics constrained by experimental data. They show that the rule is able to replicate the connectivity of pyramidal cells in the neocortex by adapting the probabilistic and limited release of $Ca^{2+}$ during pre- and post-synaptic activity.
\begin{table}[H]
\centering
\caption{Variables of the \ac{CSTDP} rule.}
\label{tab:cstdp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$c(t)$& Pre- and post-synaptic spike trace (Calcium) - integrative\cr
$C_{\mathrm{pre}}$ / $C_{\mathrm{post}}$& Amplitudes of pre- and post-synaptic Calcium jumps \cr
$\theta_p$ / $\theta_d$& Thresholds for potentiation and depression\cr
$\tau$& Time constant of synaptic efficacy changes\cr
$\rho$& Synaptic efficacy\cr
$\rho_{\star}$ & Bistability threshold on the synaptic efficacy\cr
$\gamma_{p}$ / $\gamma_{d}$ & Rates of synaptic potentiation and depression\cr
$\Theta[.]$ & Heaviside function $\Theta[x] = 1$ if $x>0$, $\Theta[x] =0$ otherwise\cr
$\mathrm{Noise}(t)$ & Activity-dependent noise\cr
\bottomrule
\end{tabular}
\end{table}
\subsection{Bekolay et al. (2013): \acf{SBCM}}
The \acf{SBCM} learning rule~\cite{Bekolay_etal13} has been proposed as another spike-based formulation of the abstract learning rule \ac{BCM}, after the \ac{TSTDP} rule. The weight update of the \ac{SBCM} learning rule is continuous and is expressed in Eq.~\eqref{eq:sbcm}, whose variables are described in Tab.~\ref{tab:sbcm}.
\begin{equation}
\label{eq:sbcm}
\Delta w_{ij} = \kappa \alpha_j a_i a_j (a_j - \theta(t))
\end{equation}
The mechanistic properties of \ac{SBCM} are closer to the formal \ac{BCM} rule, with the activities of the neurons expressed as spike activity traces and a filtered modification threshold. Nevertheless, the \ac{SBCM} exhibits both the timing dependence of \ac{STDP} and the frequency dependence of the \ac{TSTDP} rule.
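The sign change around the sliding modification threshold in Eq.~\eqref{eq:sbcm} can be checked with a short Python sketch (learning rate and gain are assumed values):

```python
# One-step sketch of the spike-based BCM update (toy values).
KAPPA = 1e-3      # learning rate
ALPHA_J = 1.0     # post-synaptic gain

def sbcm_dw(a_i, a_j, theta):
    """BCM-style update: the sign flips when the post-synaptic
    activity trace a_j crosses the sliding threshold theta."""
    return KAPPA * ALPHA_J * a_i * a_j * (a_j - theta)

assert sbcm_dw(1.0, 2.0, theta=1.5) > 0   # a_j above threshold -> LTP
assert sbcm_dw(1.0, 1.0, theta=1.5) < 0   # a_j below threshold -> LTD
```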
\begin{table}[H]
\centering
\caption{Variables of the \ac{SBCM} rule.}
\label{tab:sbcm}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$w_{ij}$ & Synaptic weight between pre- and post-synaptic neurons $i$ and $j$, respectively \cr
$\kappa$ & Learning rate \cr
$\alpha_j$ & Scaling factor (gain) associated with the neuron \cr
$a_i$ / $a_j$ & Pre- and post-synaptic spike traces \cr
$\theta(t)$ & Modification threshold: $\theta(t) = e^{-t / \tau}\, \theta(t-1) + (1 - e^{-t / \tau})\, a_j(t)$ \cr
$\tau$& Time constant of modification threshold\cr
\bottomrule
\end{tabular}
\end{table}
\subsection{Yger and Harris (2013): \acf{MPDP}}
The \acf{MPDP} rule, also called the ``Convallis'' rule~\cite{Yger_Harris13}, aims to approximate a fundamental computational principle of the neocortex and is derived from principles of unsupervised learning algorithms. The main assumption of the rule is that projections with non-Gaussian distributions are more likely to extract useful information from real-world patterns~\cite{Hyvarinen_Oja00}. Therefore, synaptic changes should tend to increase the skewness of a neuron's sub-threshold membrane potential distribution. The rule is thus derived from an objective function that measures how non-Gaussian the membrane potential distribution is, such that the post-synaptic neuron is often close to either its resting potential or its spiking threshold (and not in between).
The resulting plasticity rule reinforces synapses that are active during post-synaptic depolarization and weakens those active during hyper-polarization. It is expressed in Eq.~\eqref{eq:mpdp-trace}, where changes are continuously made on an internal update trace $\Psi$, and are then applied on the synaptic weight $w$ as expressed in Eq.~\eqref{eq:mpdp-update}. The variables of the equations are explained in Tab.~\ref{tab:mpdp}.
The rule was used for unsupervised learning of speech data, where an additional mechanism was implemented to maintain a constant average firing rate.
\begin{equation}
\label{eq:mpdp-trace}
\Psi(t) = \int_{-\infty}^{t} e^{-(t - \tau)/T} F'(V(\tau)) \sum_{i=1}^{N_s} K(\tau - t_i^s) d\tau
\end{equation}
\begin{equation}
\label{eq:mpdp-update}
\frac{dw}{dt} = \begin{cases}
\Psi - \theta_{\mathrm{pot}} & \text{if } \theta_{\mathrm{pot}} < \Psi \\
0 & \text{if } \theta_{\mathrm{dep}} < \Psi \leq \theta_{\mathrm{pot}} \\
\Psi - \theta_{\mathrm{dep}} & \text{if } \Psi \leq \theta_{\mathrm{dep}}
\end{cases}
\end{equation}
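Eq.~\eqref{eq:mpdp-update} defines a dead zone in which no update is applied; a Python sketch with assumed thresholds makes this explicit:

```python
# Dead-zone weight update of the Convallis rule: no change while the
# internal update trace Psi sits between the two thresholds
# (threshold values are toy assumptions).
THETA_POT, THETA_DEP = 0.2, -0.2

def mpdp_dw(psi):
    if psi > THETA_POT:
        return psi - THETA_POT    # potentiate
    if psi <= THETA_DEP:
        return psi - THETA_DEP    # depress (negative change)
    return 0.0                    # stop-learning zone

assert abs(mpdp_dw(0.5) - 0.3) < 1e-12
assert mpdp_dw(0.0) == 0.0
assert abs(mpdp_dw(-0.5) + 0.3) < 1e-12
```

The middle branch is what implements the stop-learning behavior listed for this rule in Tab.~\ref{tab:models}.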
\begin{table}[H]
\centering
\caption{Variables of the \ac{MPDP} rule.}
\label{tab:mpdp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$\Psi$ & Internal update trace \cr
$T$& Decay time constant\cr
$F'(V(\tau))$ & Function of the post-synaptic membrane voltage \cr
$V(\tau)$& Post-synaptic membrane voltage\cr
$N_{s}$& Pre-synaptic spike indices\cr
$\sum_{i=1}^{N_s} K(\tau - t_i^s)$ & Pre-synaptic spike trace - integrative \cr
$K(\tau - t_i^s)$& Kernel for pre-synaptic spikes\cr
$w$ & Synaptic weight \cr
$\theta_{\mathrm{pot}}$ / $\theta_{\mathrm{dep}}$ & Thresholds for potentiation and depression\cr
\bottomrule
\end{tabular}
\end{table}
\subsection{Urbanczik and Senn (2014): \acf{DPSS}}
\citeasnoun{Urbanczik_Senn14} proposed a new learning model based on the \acf{DPSS}, which aims to implement a biologically plausible non-Hebbian learning rule. In their rule, they rely on the pre-synaptic spike trace, the post-synaptic spike event and the post-synaptic dendritic voltage of a multi-compartment neuron model.
Plasticity in dendritic synapses realizes a predictive coding scheme that matches the dendritic potential to the somatic potential. This minimizes the error of the dendritic prediction of somatic spiking activity in a conductance-based neuron model that exhibits probabilistic spiking~\cite{Urbanczik_Senn14}.
The neuron membrane potential $U$ is influenced by both a scaled version of the dendritic compartment potential $V^{*}_{w}$ and the teaching inputs from excitatory or inhibitory proximal synapses $I_{U}^{\mathrm{som}}$.
In their proposed learning rule (see Eq.~\eqref{eq:dpss-PI}), the aim is to minimize the error between the predicted somatic spiking activity based on the dendritic potential $\phi(V_{w}^*(t))$ and the real somatic spiking activity represented by back-propagated spikes $S(t)$. The equation's variables are described in Tab.~\ref{tab:dpss}. The error $S(t)-\phi(V_{w}^*(t))$ is assigned to individual dendritic synapses based on their recent activation, similar to~\citeasnoun{Yger_Harris13} and~\citeasnoun{Albers_etal16}.
\begin{equation}
PI_{i}(t) = [S(t) - \phi(V_{w}^*(t))]h(V_{w}^*(t))PSP_i(t)
\label{eq:dpss-PI}
\end{equation}
Since the back-propagated spikes $S(t)$ are only $0$ or $1$, while the predicted rate $\phi(V_{w}^*)$ based on a sigmoidal function never reaches exactly $0$ or $1$, $PI$ is never $0$ and the weight change never vanishes completely~\cite{Urbanczik_Senn14}.
The plasticity induction variable $PI_{i}$ is continuously updated and used as an intermediate variable,
before it is applied to induce a scaled
persistent synaptic change, as expressed in Eq.~\eqref{eq:dpss-PI-lp}.
\begin{equation}
\begin{split}
\tau_{\Delta}\frac{d\Delta_{i}}{dt} &= PI_{i}(t) - \Delta_{i} \\
\frac{dw_{i}}{dt} &= \eta\Delta_{i}
\end{split}
\label{eq:dpss-PI-lp}
\end{equation}
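The prediction error of Eq.~\eqref{eq:dpss-PI} and its low-pass filtered application in Eq.~\eqref{eq:dpss-PI-lp} can be sketched as follows; the logistic transfer function and all parameter values are assumptions for illustration:

```python
import math

# Sketch of the dendritic prediction error and its low-pass filtered
# application to the weight (toy values; a logistic function is
# assumed here for the rate function phi).
TAU_DELTA, ETA = 100.0, 1e-3

def phi(v):
    """Predicted somatic firing probability from the dendritic potential."""
    return 1.0 / (1.0 + math.exp(-v))

def plasticity_induction(spike, v_dend, psp, h=1.0):
    """Error between actual somatic spiking (0/1) and the rate predicted
    from the dendrite, assigned to a synapse via its recent activation psp."""
    return (spike - phi(v_dend)) * h * psp

def step(delta_i, w_i, pi, dt):
    """One Euler step: low-pass filter PI, then apply it to the weight."""
    delta_i += (pi - delta_i) / TAU_DELTA * dt
    w_i += ETA * delta_i * dt
    return delta_i, w_i

# A somatic spike the dendrite did not predict -> positive error
pi = plasticity_induction(spike=1.0, v_dend=-2.0, psp=0.5)
delta, w = step(0.0, 0.5, pi, dt=1.0)
```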
\citeasnoun{Sacramento_etal18} later showed analytically that the \ac{DPSS} learning rule, combined with similar dendritic predictive plasticity mechanisms, approximates the error back-propagation algorithm, and demonstrated the capabilities of such a learning framework to solve regression and classification tasks.
\begin{table}[H]
\centering
\caption{Variables of the \ac{DPSS} rule.}
\label{tab:dpss}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$U$ & Somatic potential \cr
$V^{*}_{w}$ & Scaled dendritic potential \cr
$I_{U}^{\mathrm{som}}$ & Proximal input current \cr
$\phi(.)$ & Sigmoid function \cr
$S(t)$ & Back-propagated somatic spiking activity \cr
$PI_i(t)$ & Plasticity induction variable \cr
$h(.)$ & Positive weighting function \cr
$PSP_i(t)$ & Pre-synaptic spike trace - integrative \cr
& $PSP_i(t)=\sum_{s\in X_{i}^{\mathrm{dnd}}} \kappa(t-s)$\cr
$\kappa(t-s)$ & Kernel for pre-synaptic spikes \cr
$X^{\mathrm{dnd}}_i$ & Pre-synaptic spike train \cr
$w_{i}$ & Synaptic strength of synapse $i$ \cr
$\tau_{\Delta}$ & Plasticity induction variable time constant \cr
$\eta$ & Learning rate \cr
\bottomrule
\end{tabular}
\end{table}
\subsection{Diehl and Cook (2015): \acf{RDSP}}
\citeasnoun{Diehl_Cook15} proposed the~\acf{RDSP} rule as a local credit assignment mechanism for unsupervised learning in self-organizing \acp{SNN}. The idea is to potentiate or depress the synapses for which the pre-synaptic neuron activity was high or low at the moment of a post-synaptic spike, respectively.
The \ac{RDSP} learning rule relies solely on the pre-synaptic information and is triggered when a post-synaptic spike arrives. The weight update is shown in Eq.~\eqref{eq:rdsp}, whose variables are described in Tab.~\ref{tab:rdsp}.
\begin{equation}
\label{eq:rdsp}
\Delta w = \eta (x_{\mathrm{pre}} - x_{\mathrm{tar}}) \: (w_{\mathrm{max}} - w)^{u}
\end{equation}
The exponent $u$ determines the weight dependence of the update, implementing a soft bound, while the target value of the pre-synaptic spike trace $x_{\mathrm{tar}}$ is crucial in this learning rule because it acts as a threshold between depression and potentiation. If it is set to $0$, then only potentiation is observed. It is hence important to set it to a non-zero value to ensure that pre-synaptic neurons that rarely lead to the firing of the post-synaptic neuron will become more and more disconnected. More generally, the higher the value of $x_{\mathrm{tar}}$, the more depression occurs and the lower the synaptic weights will be~\cite{Diehl_Cook15}.
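The role of the target trace $x_{\mathrm{tar}}$ and of the soft bound is easy to verify in a short sketch (parameter values assumed):

```python
# RDSP update, triggered at a post-synaptic spike (toy parameters).
ETA, W_MAX, U = 0.01, 1.0, 1.0   # u = 1 -> linear soft bound

def rdsp_dw(w, x_pre, x_tar):
    """Weight change at a post spike: sign set by the pre-synaptic
    trace relative to its target, scaled by the soft bound."""
    return ETA * (x_pre - x_tar) * (W_MAX - w) ** U

# Active pre-synaptic input (trace above target) -> potentiation
assert rdsp_dw(0.5, x_pre=0.8, x_tar=0.2) > 0
# Silent pre-synaptic input (trace below target) -> depression
assert rdsp_dw(0.5, x_pre=0.0, x_tar=0.2) < 0
# Soft bound: updates vanish as w approaches w_max
assert rdsp_dw(1.0, x_pre=0.8, x_tar=0.2) == 0.0
```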
This rule was first proposed as a more biologically plausible version of a previously proposed rule for memristive implementations by~\citeasnoun{Querlioz_etal13}.
The main difference between the two models is that the \ac{RDSP} rule uses an exponential time dependence for the weight change which is more biologically plausible~\cite{Abbott_Song99} than a time-independent weight change. This can also be more useful for pattern recognition depending on the temporal dynamics of the learning task.
\begin{table}[H]
\centering
\caption{Variables of the \ac{RDSP} rule.}
\label{tab:rdsp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$w$& Synaptic weight\cr
$\eta$& Learning rate\cr
$x_{\mathrm{pre}}$& Pre-synaptic spike trace - integrative\cr
$x_{\mathrm{tar}}$& Target value of the pre-synaptic spike trace\cr
$w_{\mathrm{max}}$& Maximum weight\cr
$u$& Weight dependence - soft bound\cr
\bottomrule
\end{tabular}
\end{table}
\subsection{Albers et al. (2016): \acf{HMPDP}}
The \acf{HMPDP} learning rule proposed by~\citeasnoun{Albers_etal16} is derived from an objective function similar to that of the \ac{MPDP} rule but with opposite sign, as it aims to balance the membrane potential of the post-synaptic neuron between two fixed thresholds: the resting potential and the spiking threshold of the neuron. Hence, the \ac{MPDP} and the \ac{HMPDP} implement a Hebbian and a homeostatic mechanism, respectively. In addition, the \ac{HMPDP} differs from the other described models by inducing plasticity only at inhibitory synapses.
\citeasnoun{Albers_etal16} use a conductance-based neuron and synapse model, similar to the \acs{CMPDP} and the \acs{DPSS} rules. The continuous weight updates of the \ac{HMPDP} rule depend on the instantaneous membrane potential $V(t)$ and the pre-synaptic spike trace $\sum_{k} \epsilon(t-t_{i}^{k})$ as expressed in Eq.~\eqref{eq:hmpdp} whose variables are described in Tab.~\ref{tab:hmpdp}.
\begin{equation}
\label{eq:hmpdp}
\frac{dw_{i}}{dt} = \eta \left(-\gamma[V(t) - \vartheta_D]_+ + [\vartheta_P - V(t)]_+\right)\sum_{k} \epsilon(t-t_{i}^{k})
\end{equation}
The authors claim that their model is able to learn precise spike times by keeping a homeostatic membrane potential between two thresholds. This definition differs from the homeostatic spike rate definition of the \acs{CMPDP} rule by~\citeasnoun{Sheik_etal16}.
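A sketch of Eq.~\eqref{eq:hmpdp} with assumed threshold values shows the homeostatic band in which no plasticity is induced:

```python
# Homeostatic membrane-potential-dependent update (toy values).
ETA, GAMMA = 1e-3, 1.0
V_D, V_P = -50.0, -65.0   # depression / potentiation thresholds (mV)

def relu(x):
    """The rectifying bracket [x]_+ of the rule."""
    return x if x > 0.0 else 0.0

def hmpdp_dw(v, pre_trace):
    """Weight change rate: pushes the membrane potential back into
    the band between the two thresholds, gated by the pre trace."""
    return ETA * (-GAMMA * relu(v - V_D) + relu(V_P - v)) * pre_trace

assert hmpdp_dw(-45.0, 0.5) < 0   # V above the upper threshold -> depress
assert hmpdp_dw(-70.0, 0.5) > 0   # V below the lower threshold -> potentiate
assert hmpdp_dw(-60.0, 0.5) == 0  # inside the homeostatic band -> no change
```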
\begin{table}[H]
\centering
\caption{Variables of the \ac{HMPDP} rule.}
\label{tab:hmpdp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$w_i$& Synaptic weight \cr
$\eta$& Learning rate \cr
$\gamma$& Scaling factor for LTD/LTP\cr
$[.]_{+}$& Rectifying bracket $[x]_+ = x$ if $x>0$, $[x]_+ =0$ otherwise\cr
$V(t)$& Instantaneous membrane potential\cr
$\vartheta_P/\vartheta_D$& Thresholds for plasticity induction\cr
$\sum_{k} \epsilon(t-t_{i}^{k})$& Pre-synaptic spike trace - integrative\cr
$t_{i}^{k}$& Time of the k-th spike at the i-th synapse\cr
$\epsilon(s)$& Kernel for pre-synaptic spikes \cr
\bottomrule
\end{tabular}
\end{table}
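To make the role of the two plasticity thresholds concrete, the update of Eq.~\eqref{eq:hmpdp} can be sketched in a few lines of Python. The threshold and gain values below are illustrative assumptions, not parameters taken from~\citeasnoun{Albers_etal16}.

```python
def hmpdp_dw(V, trace, eta=1e-3, gamma=0.5, theta_D=-50.0, theta_P=-60.0):
    """Instantaneous HMPDP weight change (all constants are assumed).

    Depression while V exceeds theta_D, potentiation while V falls
    below theta_P, and no update while V stays between the thresholds.
    `trace` is the integrated pre-synaptic spike trace.
    """
    relu = lambda x: x if x > 0.0 else 0.0  # rectifying bracket [.]_+
    return eta * (-gamma * relu(V - theta_D) + relu(theta_P - V)) * trace
```

Note that the update vanishes whenever the membrane potential lies between the two thresholds, which is exactly the homeostatic set point the rule aims for.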
\subsection{Sheik et al. (2016): \acf{CMPDP}}
The \acf{CMPDP} learning rule~\cite{Sheik_etal16} was proposed with the explicit intention to have a local, spike-timing-based rule that would be sensitive to the order of spikes arriving at different synapses and that could be ported onto neuromorphic hardware.
Similarly to the \ac{DPSS} rule, the \ac{CMPDP} rule uses a conductance-based neuron model. However, instead of relying on mean rates, it relies on the exact timing of the spikes. Furthermore, as for the \ac{HMPDP} rule, \citeasnoun{Sheik_etal16} propose to add a homeostatic element to the rule that targets a desired output firing rate.
This learning rule is very hardware-efficient because it depends only on the pre-synaptic spike time and not on the post-synaptic one.
The equation that governs its behavior is Eq.~\eqref{eq:cmpdp-update}.
The weight update, triggered by the pre-synaptic spike, depends on a membrane voltage component (see Eq.~\eqref{eq:cmpdp-update-voltage}) and on a homeostatic one (see Eq.~\eqref{eq:cmpdp-update-homo}).
All equation variables are described in Tab.~\ref{tab:cmpdp}.
\begin{equation}
\Delta W= \Delta W_{v} + \Delta W_{h}
\label{eq:cmpdp-update}
\end{equation}
\begin{equation}
\Delta W_{v}=[\delta(V_{m}(t+1)>V_{\mathrm{lth}}) \eta_{+} -\delta(V_{m}(t+1)<V_{\mathrm{lth}}) \eta_{-}] S(t-t_{\mathrm{pre}})
\label{eq:cmpdp-update-voltage}
\end{equation}
\begin{equation}
\Delta W_{h}=\eta_{h}(Ca_{t}-Ca) S(t-t_{\mathrm{pre}})
\label{eq:cmpdp-update-homo}
\end{equation}
The post-synaptic membrane voltage dependent weight update shown in Eq.~\eqref{eq:cmpdp-update-voltage} depends on the values of the membrane voltage $V_{m}$ and an externally set threshold $V_{\mathrm{lth}}$, which determines the switch between \ac{LTP} and \ac{LTD}.
The homeostatic weight update in Eq.~\eqref{eq:cmpdp-update-homo} is proportional to the difference in post-synaptic activity represented by the post-synaptic spike trace $Ca$ and an externally set threshold $Ca_{t}$.
The authors show that this learning rule, using the spike timing together with conductance-based neurons, is able to learn spatio-temporal patterns in noisy data and to differentiate between inputs that have the same first-moment statistics but different higher-moment ones.
Although they gear the rule toward neuromorphic hardware implementations, they do not propose circuits for the learning rule.
\begin{table}[H]
\centering
\caption{Variables of the \ac{CMPDP} rule.}
\label{tab:cmpdp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$W$& Synaptic weight\cr
$\Delta W_{v}$& Voltage-based weight update\cr
$\Delta W_{h}$& Homeostatic weight update\cr
$\delta$& Boolean indicator function\cr
$V_{m}$& Membrane potential \cr
$V_{\mathrm{lth}}$& Threshold on membrane potential \cr
$\eta_{+}$ / $\eta_{-}$ / $\eta_{h}$ & Magnitude of LTP/LTD/Homeostasis\cr
$S(t-t_{\mathrm{pre}})$& Pre-synaptic spike trains\cr
$t_{\mathrm{pre}}$& Pre-synaptic spike time\cr
$Ca$& Post-synaptic spike trace (Calcium) - integrative\cr
$Ca_{t}$& Calcium target concentration trace\cr
\bottomrule
\end{tabular}
\end{table}
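A behavioral sketch of Eqs.~\eqref{eq:cmpdp-update}--\eqref{eq:cmpdp-update-homo} in Python; the threshold, target, and learning-rate values are illustrative assumptions, not those used by~\citeasnoun{Sheik_etal16}.

```python
def cmpdp_dw(V_next, Ca, S_pre, V_lth=-55.0, Ca_t=0.2,
             eta_p=0.010, eta_m=0.008, eta_h=0.001):
    """CMPDP weight update, evaluated on a pre-synaptic spike.

    All constants are assumed for illustration.  S_pre is the
    pre-synaptic spike train sample (1.0 at a spike, 0.0 otherwise),
    so the whole update is gated by pre-synaptic activity only.
    """
    # Voltage-dependent term: LTP above V_lth, LTD below it.
    dw_v = ((eta_p if V_next > V_lth else 0.0)
            - (eta_m if V_next < V_lth else 0.0)) * S_pre
    # Homeostatic term: drives the calcium trace Ca toward the target Ca_t.
    dw_h = eta_h * (Ca_t - Ca) * S_pre
    return dw_v + dw_h
```

Because both terms are multiplied by the pre-synaptic spike train, no post-synaptic spike time is needed, which is what makes the rule attractive for hardware.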
\subsection{Payeur et al. (2021): \acf{BDSP}}
The \acf{BDSP} learning rule~\cite{Payeur_etal21} has been proposed to enable online, local, spike-based solutions to the credit assignment problem in hierarchical networks~\cite{Zenke_Neftci21}, i.e.\ how can neurons high up in a hierarchy signal to other neurons, sometimes multiple synapses apart, whether to engage in \ac{LTP} or \ac{LTD} to improve behavior.
The \ac{BDSP} learning rule is formulated in Eq.~\eqref{eq:bdsp} whose variables are described in Tab.~\ref{tab:bdsp}.
\begin{equation}
\label{eq:bdsp}
\frac{dw_{ij}}{dt} = \eta [B_i(t) - \overline{P}_i(t) E_i(t)] \widetilde{E}_j(t)
\end{equation}
where an event $E_i(t)$ is said to occur either at the time of an isolated spike or at the time of the first spike in a burst, whereas a burst $B_i(t)$ is defined as any occurrence of at least two spikes (at the second spike) with an inter-spike interval less than a pre-defined threshold. Any additional spike within the time threshold belongs to the same burst. Hence, \ac{LTP} and \ac{LTD} are triggered by a burst and an event, respectively. Since a burst is always preceded by an event, every potentiation is preceded by a depression. However, the potentiation through the burst is larger than the previous depression, which results in an overall potentiation.
The moving average $\overline{P}_i(t)$ regulates the relative strength of burst-triggered potentiation and event-triggered depression. It has been established that such a mechanism exists in biological neurons~\cite{Maki-Marttunen_etal20}. It is formulated as a ratio between averaged post-synaptic burst and event traces.
The authors show that manipulating the moving average $\overline{P}_i(t)$ (i.e.\ the probability that an event becomes a burst) controls the occurrence of \ac{LTP} and \ac{LTD}, while changing the pre- and post-synaptic event rates simply modifies the rate of change of the weight while keeping the same transition point between \ac{LTP} and \ac{LTD}. Hence, the \ac{BDSP} rule paired with the control of bursting provided by apical dendrites enables a form of top-down steering of synaptic plasticity in an online, local and spike-based manner.
Moreover, the authors show that this dendrite-dependent bursting combined with short-term plasticity supports multiplexing of feed-forward and feedback signals, which means that the feedback signals can steer plasticity without affecting the communication of bottom-up signals. Taken together, these observations show that combining the \ac{BDSP} rule with short-term plasticity and apical dendrites can provide a local approximate solution to the credit assignment problem. In fact, the learning rule has been shown to implement an approximation of gradient descent for hierarchical circuits and to achieve good performance on standard machine learning benchmarks.
\begin{table}[H]
\centering
\caption{Variables of the \ac{BDSP} rule.}
\label{tab:bdsp}
\begin{tabular}{@{}cc@{}}
\toprule
\textbf{Variable} & \textbf{Description} \cr
\midrule
$w_{ij}$ & Synaptic weight between pre- and post-synaptic neurons $j$ and $i$ \cr
$\eta$ & Learning rate \cr
$B_i(t)$ & Post-synaptic bursts \cr
$\overline{P}_i(t)$ & Exponential moving average of the proportion of post-synaptic bursts \cr
$E_i(t)$ & Post-synaptic events \cr
$\widetilde{E}_j(t)$ & Pre-synaptic spike trace \cr
\bottomrule
\end{tabular}
\end{table}
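The event/burst classification described above can be made explicit with a small Python helper; the inter-spike-interval threshold is an assumed value, not the one used by~\citeasnoun{Payeur_etal21}.

```python
def events_and_bursts(spike_times, isi_max=0.016):
    """Classify a sorted spike train into events and bursts.

    An event occurs at an isolated spike or at the first spike of a
    burst; a burst is registered at the second spike whose interval to
    the previous spike is below `isi_max`.  Further spikes within the
    threshold extend the same burst without triggering anything new.
    `isi_max` is an assumed value.
    """
    events, bursts = [], []
    in_burst = False
    for i, t in enumerate(spike_times):
        if i == 0 or t - spike_times[i - 1] > isi_max:
            events.append(t)   # isolated spike or first spike of a burst
            in_burst = False
        elif not in_burst:
            bursts.append(t)   # second spike: the burst is registered
            in_burst = True
    return events, bursts
```

With this classification, the elements of `bursts` drive the potentiating term $B_i(t)$ of Eq.~\eqref{eq:bdsp}, while the elements of `events` drive the depressing term, matching the description above.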
\subsection{Common model variables}
Tables \ref{tab:models-variablesI} and \ref{tab:models-variablesII} list the major variables shared between the different models, allowing an easy comparison of the formalism of each rule.
\begin{center}
\begin{table}[H]
\caption{Variables in common between rules Part I}
\label{tab:models-variablesI}
\resizebox{\linewidth}{!}{%
\begin{tabular}{>{\centering}m{0.15\linewidth}>{\centering}m{0.15\linewidth}>{\centering}m{0.15\linewidth}>{\centering}m{0.15\linewidth}>{\centering}m{0.15\linewidth}>{\centering}m{0.15\linewidth}>{\centering}m{0.15\linewidth}}
\toprule
\textbf{Variables}&\textbf{\acs{STDP}}&\textbf{\acs{TSTDP}}&\textbf{\acs{SDSP}}&\textbf{\acs{VSTDP}}&\textbf{\acs{CSTDP}}&\textbf{\acs{SBCM}}\cr
\midrule
Synaptic weight &w &w & X & $w$ & $\rho$ & $w_{ij}$\cr
\midrule
Weight bounds& & & $X_{max}$ & $w_{max}$& $0$ / $1$ &\cr
\midrule
Traces& &$o_{1}$ / $o_{2}$ / $r_{1}$ / $r_{2}$ & $C(t)$ & $\overline{u}_{-}(t)$ / $\overline{u}_{+}(t)$ / $\overline{x}(t)$ & $c(t)$ & $a_i$ / $a_j$\cr
\midrule
Time constants & $\tau_{+}$ / $\tau_{-}$ & & & & $\tau$ & $\tau$ \cr
\midrule
Membrane potential& & &$V(t)$ & $u(t)$ & &\cr
\midrule
Thresholds & & & $\theta_{V}$ / $\theta_{up}^{\mathrm{l}}$ / $\theta_{up}^{\mathrm{h}}$ / $\theta_{down}^{\mathrm{l}}$ / $\theta_{down}^{\mathrm{h}}$ / $\theta_{X}$ & $\theta_{-}$ / $\theta_{+}$ & $\rho_{\star}$ / $\theta_p$ / $\theta_d$ &$\theta$\cr
\midrule
Amplitudes & $\mathrm{A}_{+}$ / $\mathrm{A}_{-}$ & $\mathrm{A}_{2+}$ / $\mathrm{A}_{2-}$ / $\mathrm{A}_{3+}$ / $\mathrm{A}_{3-}$ & $a$ / $b$ / $\alpha$ / $\beta$ & $A_{LTP}$ / $A_{LTD}$ & $C_{pre}$ / $C_{post}$ / $\gamma_{p}$ / $\gamma_{d}$ & $\kappa$ / $\alpha_j$ \cr
\bottomrule
\end{tabular}
}
\end{table}
\end{center}
\begin{center}
\begin{table}[H]
\caption{Variables in common between rules Part II }
\label{tab:models-variablesII}
\resizebox{\linewidth}{!}{%
\begin{tabular}{>{\centering}m{0.155\linewidth}>{\centering}m{0.125\linewidth}>{\centering}m{0.125\linewidth}>{\centering}m{0.125\linewidth}>{\centering}m{0.15\linewidth}>{\centering}m{0.15\linewidth}>{\centering}m{0.125\linewidth}}
\toprule
\textbf{Variables}&\textbf{\acs{MPDP}}&\textbf{\acs{DPSS}}&\textbf{\acs{RDSP}}&\textbf{\acs{HMPDP}}&\textbf{\acs{CMPDP}}&\textbf{\acs{BDSP}}\cr
\midrule
Synaptic weight & $w$ & $w_{i}$ & $w$ & $w_i$ & $W$ & $w_{ij}$ \cr
\midrule
Weight bounds & & & $w_{max}$ & & & \cr
\midrule
Traces & $\Psi$ / $K(\tau - t_i^s)$ & $PSP_i(t)$ / $\Delta_{i}$ & $x_{pre}$ & $\sum_{k} \epsilon(t-t_{i}^{k})$ & $Ca$ / $Ca_{t}$ & $\widetilde{E}_j(t)$ \cr
\midrule
Time constants & T & $\tau_{\Delta}$ & & & & \cr
\midrule
Membrane potential & $V(\tau)$ & $U$ & & $V(t)$ & $V_{m}$ & \cr
\midrule
Thresholds & $\theta_{dep}$ / $\theta_{pot}$ & & $x_{tar}$ & $\vartheta_P$ / $\vartheta_D$ & $V_{\mathrm{lth}}$ & \cr
\midrule
Amplitudes & & $\eta$ & $\eta$ / $u$ & $\eta$ / $\gamma$ & $\eta_{+}$ / $\eta_{-}$ / $\eta_{h}$ & $\eta$ / $\overline{P}_i(t)$ \cr
\bottomrule
\end{tabular}
}
\end{table}
\end{center}
\section{\ac{CMOS} implementations of synaptic plasticity}
Our comparison of plasticity models has highlighted many common functional primitives that are shared among the rules. These primitives can be grouped according to their function into the following blocks: low-pass filters, eligibility traces, and weight updates.
These blocks can be readily implemented in \ac{CMOS} technology, and they can be combined to implement different learning circuits.
An overview of the proposed \ac{CMOS} learning circuits that implement some of the models discussed is shown in Table~\ref{tab:circuits}.
To better link the \ac{CMOS} implementations with the models presented, we named all the current and voltage variables of our circuits to match those in the model equations.
\subsection{\ac{CMOS} building blocks}
The basic building blocks required for building neuromorphic learning circuits can be grouped into four families.
\begin{description}
\item[Eligibility trace blocks] These are implemented using either a current-mode integrator circuit, such as the \acf{DPI}, or other non-linear circuits that produce slowly decaying signals.
Input spikes can either increase the trace amplitude, decrease it, or completely reset it. The rate at which the trace decays back to its resting state can be typically modulated with externally controllable parameters.
Circuit blocks implementing eligibility traces are highlighted in green in the schematics.
\item [Comparator blocks] They are typically implemented using \acf{WTA} current-mode circuits, voltage-mode transconductance amplifiers, or \acp{OPAMP}. The comparator block changes its output based on which input is greater. Circuit blocks implementing comparators are highlighted in yellow in the schematics.
\item [Weight update blocks] They typically comprise a capacitor that stores a voltage related to the amplitude of the weight. Charging and discharging pathways connected to the capacitor enable potentiation and depression of the weight depending on the status of other signals.
These blocks are similar to the eligibility trace ones, except that they can produce both positive and negative changes.
Circuit blocks implementing weight updates are highlighted in purple in the schematics.
\item[Bistability blocks] These are typically implemented using a \ac{TA} connected in feedback operation which compares the weight voltage to a reference voltage.
Depending on the value of the weight voltage the bistability circuit will push the weight to the closest stable state.
In its simplest form, a bistability block has a single reference voltage, but it can be expanded to produce multiple stable states.
Circuit blocks implementing bistability are highlighted in red in the schematics.
\end{description}
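At the behavioral level, the eligibility trace block reduces to a leaky integrator with a spike-triggered interaction. A minimal discrete-time sketch follows; the time step, time constant, and jump size are assumed values.

```python
def decay_step(trace, dt=1e-3, tau=1e-2):
    """One forward-Euler step of the trace's exponential decay
    toward its resting value (here taken to be zero)."""
    return trace * (1.0 - dt / tau)

def on_spike(trace, jump=1.0, mode="add"):
    """Spike interaction: input spikes can increase, decrease,
    or completely reset the trace, as described above."""
    if mode == "add":
        return trace + jump
    if mode == "subtract":
        return trace - jump
    return 0.0  # mode == "reset"
```

In the circuits, `tau` corresponds to the externally controllable bias that sets the decay rate of the trace.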
\begin{center}
\begin{table}[H]
\caption{Neuromorphic circuits for spike-based local synaptic plasticity models}
\label{tab:circuits}
\resizebox{\textwidth}{!}{%
\begin{tabular}{>{\hspace{0pt}}m{0.1\linewidth}>{\centering\hspace{0pt}}m{0.3\linewidth}>{\centering\hspace{0pt}}m{0.37\linewidth}>{\centering\hspace{0pt}}m{0.23\linewidth}}
\hline
\textbf{Rule} & \textbf{Paper} & \textbf{Difference with the model} & \textbf{Implementation} \cr
\hline
\multirow{25}{*}{\acs{STDP}} &~\cite{Bofill-i-Petit_etal01}$^1$ & / & \SI{0.6}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Indiveri02b} & All-to-all spike interaction + bistable weights & \SI{1.5}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Bofill-i-Petit_Murray04} & / & \SI{0.6}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Cameron_etal05} & Anti-STDP + Non-exponential spike trace & \SI{0.35}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Indiveri_etal06} & Bistable weights & \SI{1.6}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Arthur_Boahen06}$^2$ & All-to-all interaction + binary weights & \SI{0.25}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Koickal_etal07} & Soft bounds & \SI{0.6}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Liu_Mockel08} & All-to-all spike interaction + asymmetric bounds (soft lower bound + hard upper bound) & \SI{0.35}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Tanaka_etal09} & / & \SI{0.25}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Bamford_etal12} & All-to-all spike interaction & \SI{0.35}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Gopalakrishnan_Basu14} & All-to-all spike interaction + asymmetric bounds (soft lower bound + hard upper bound) & \SI{0.35}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Mastella_etal20} & / & \SI{0.15}{\um} Simulated \cr
\hline
\multirow{3}{*}{\acs{TSTDP}} &~\cite{Mayr_etal10} & / & Simulated \cr
\cline{2-4}
&~\cite{Azghadi_etal13} & / & \SI{0.35}{\um} Simulated \cr
\cline{2-4}
&~\cite{Gopalakrishnan_Basu17} & / & \SI{0.35}{\um} Fabricated \cr
\hline
\multirow{6}{*}{\acs{SDSP}} &~\cite{Fusi_etal00} & No post-synaptic spike trace + no stop-learning mechanism & \SI{1.2}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Chicca_Fusi01} & No post-synaptic spike trace + no stop-learning mechanism & \SI{0.6}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Chicca_etal03} & No post-synaptic spike trace + no stop-learning mechanism & \SI{0.6}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Giulioni_etal08} & Analog weights & \SI{0.35}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Mitra_etal09} & Analog weights & \SI{0.35}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Chicca_etal14b} & Analog weights & \SI{0.35}{\um} Fabricated \cr
\hline
\multirow{1}{*}{\acs{CSTDP}} &~\cite{Maldonado_etal16} & Hard bounds & \SI{0.18}{\um} Fabricated \cr
\hline
\multirow{4}{*}{\acs{RDSP}} &~\cite{Hafliger_etal97} & Nearest spike interaction + reset of pre-synaptic spike trace at post-spike + very small soft bounds & \SI{2}{\um} Fabricated \cr
\cline{2-4}
&~\cite{Ramakrishnan_etal11} & Nearest spike interaction + asymmetric bounds (soft lower bound + hard upper bound) & \SI{0.35}{\um} Fabricated \cr
\hline
\end{tabular}%
}
\footnotesize{$^1$ Potentiation and depression triggers done with digital logic gates. \newline
$^2$ Weight storage in digital SRAM.}
\end{table}
\end{center}
\subsection{\acf{STDP}}
\begin{center}
\begin{figure}[H]
\includegraphics[width=0.7\textwidth]{figures/STDP_f_mod.pdf}
\centering
\caption{\acs{STDP} circuit with the \ac{CMOS} building blocks used highlighted: eligibility traces (in green) and weight updates (in violet). The voltage and current variables reflect the model equation. Adapted from:~\protect\citeasnoun{Indiveri_etal06}.}
\label{fig:stdp}
\end{figure}
\end{center}
Following the formalization of the \ac{STDP} model in 2000 (see Eq.~\eqref{eq:stdp}), many \ac{CMOS} implementations have been proposed. Most implement the model as described in the Section above~\cite{Bofill-i-Petit_etal01,Indiveri03,Bofill-i-Petit_Murray04,Arthur_Boahen06,Bamford_etal12}; however, some exploit the physics of single transistors to propose floating-gate implementations~\cite{Liu_Mockel08,Gopalakrishnan_Basu14}.
\citeasnoun{Indiveri_etal06} presented the implementation in Fig.~\ref{fig:stdp}.
This circuit increases or decreases the analog voltage $V_{w}$ across the capacitor $C_{w}$ depending on the relative timing of the pulses $pre$ and $post$. Upon arrival of a pre-synaptic pulse $pre$, a potentiating waveform $V_{pot}$ is generated within the pMOS-based trace block (see Fig.~\ref{fig:stdp}). $V_{pot}$ has a sharp onset and decays linearly with an adjustable slope set by $V_{\tau +}$; it serves to keep track of the most recent pre-synaptic spike. Analogously, when a post-synaptic spike ($post$) occurs, $V_{dep}$ and $V_{\tau -}$ create a trace of post-synaptic activity. Because $V_{pot}$ and $V_{dep}$ remain below the threshold of the transistors they are connected to, the exponential current-voltage relation of the sub-threshold regime reproduces the model's exponential dependence on the spike time difference $\Delta t$. While $V_{A+}$ and $V_{A-}$ set the upper bounds of the amount of current that can be injected into or removed from $C_{w}$, the decaying traces $V_{pot}$ and $V_{dep}$ determine the value of $I_{A+}$ or $I_{A-}$ and ultimately the weight increase or decrease on the capacitor $C_{w}$ within the weight update block (see Fig.~\ref{fig:stdp}).
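Behaviorally, the circuit approximates the pair-based exponential \ac{STDP} window of Eq.~\eqref{eq:stdp}. A Python sketch with assumed amplitudes and time constants:

```python
import math

def stdp_dw(delta_t, A_plus=0.010, A_minus=0.012,
            tau_plus=0.020, tau_minus=0.020):
    """Pair-based STDP window (all constants are assumed).

    delta_t = t_post - t_pre in seconds: pre-before-post
    (delta_t > 0) potentiates, post-before-pre depresses.
    In the circuit sketch above, the amplitudes map to V_A+/V_A-
    and the time constants to the slopes set by V_tau+/V_tau-.
    """
    if delta_t >= 0.0:
        return A_plus * math.exp(-delta_t / tau_plus)
    return -A_minus * math.exp(delta_t / tau_minus)
```

The linearly decaying traces $V_{pot}$ and $V_{dep}$, passed through the exponential sub-threshold transistor characteristic, yield this exponential dependence on $\Delta t$.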
\subsection{\acf{TSTDP}}
\begin{center}
\begin{figure}[H]
\includegraphics[width=\textwidth, angle = 0 ]{figures/TSTDP_f.pdf}
\centering
\caption{\acs{TSTDP} circuit with the \ac{CMOS} building blocks used highlighted: eligibility traces with leaky integrators (in green) and weight updates (in violet). The voltage and current variables reflect the model equation. The $r$ and $o$ detectors of the model are also reported in this circuit figure. Adapted from:~\protect\citeasnoun{Azghadi_etal13}.}
\label{fig:tstdp}
\end{figure}
\end{center}
As for the pair-based \ac{STDP}, there are many implementations of the \ac{TSTDP} rule. While some implement the equations of the model directly~\cite{Mayr_etal10,Meng_etal11,Rachmuth_etal11,Azghadi_etal13}, others exploit the properties of floating gates~\cite{Gopalakrishnan_Basu17}.
Specifically,~\citeasnoun{Mayr_etal10} as well as~\citeasnoun{Rachmuth_etal11} and~\citeasnoun{Meng_etal11} implement learning rules that model the conventional pair-based \ac{STDP} together with the \ac{BCM} rule. \citeasnoun{Azghadi_etal13} is, to our knowledge, the first to not only reproduce the functionality but also implement the equations presented in~\citeasnoun{Pfister_etal06} (see Eq.~\eqref{eq:tstdp}).
Figure~\ref{fig:tstdp} shows the circuit proposed by~\citeasnoun{Azghadi_etal13} to model the \ac{TSTDP} rule. It faithfully implements the equations by providing independent circuits and biases for the model parameters $A_{2}^{-}$, $A_{2}^{+}$, $A_{3}^{-}$, and $A_{3}^{+}$. These parameters correspond to spike pairs or spike triplets: post-pre, pre-post, pre-post-pre, and post-pre-post, respectively.
In this implementation, the voltage across the capacitor $C_{w}$ determines the weight of the specific synapse. Here, a high potential at the node $W$ corresponds to a largely discharged capacitor and indicates a low synaptic weight, i.e.\ a depressed synapse. In the same way, a low potential at this node corresponds to a more strongly charged capacitor and represents a strong synaptic weight, i.e.\ a potentiated synapse. The capacitor is charged and discharged by the two currents $I_{pot}$ and $I_{dep}$, respectively. These two currents are gated by the most recent pre- and post-synaptic spikes through the transistors controlled by $\overline{pre(n)}$ and $post(n)$ within the weight update block (see Fig.~\ref{fig:tstdp}).
The amplitude of the depression current $I_{dep}$ and the potentiation current $I_{pot}$ is given by the recent spiking activity of the pre- and post-synaptic neurons. On the arrival of a pre-synaptic spike, the capacitors $C_{+}$ and $C_{x}$ (in the trace - leaky integrator blocks r1 and r2 in Fig.~\ref{fig:tstdp}) are charged by the currents $I_{A2+}$ and $I_{A3-}$. Analogously, the capacitors $C_{-}$ and $C_{y}$ (in the trace - leaky integrator blocks o1 and o2 in Fig.~\ref{fig:tstdp}) are charged at the arrival of a post-synaptic spike by the currents $I_{A2-}$ and $I_{A3+}$. Here, both currents $I_{A2+}$ and $I_{A2-}$ depend on an externally set constant input current plus the currents generated by the o2 and r2 blocks, respectively. These additional blocks o2 and r2 activated by previous spiking activity realize the triplet-sensitive behavior of the rule.
All capacitors within the ``Trace - leaky integrator'' blocks ($C_{+}$, $C_{-}$, $C_{x}$, $C_{y}$) constantly discharge with individual rates given by $I_{\tau+}$, $I_{\tau-}$, $I_{\tau x}$, and $I_{\tau y}$, respectively.
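The interplay of the four amplitudes and the fast and slow traces can be summarized behaviorally in Python. The amplitude values are assumed for illustration, and `o2_prev`/`r2_prev` denote the slow traces sampled just before the current spike, as in~\citeasnoun{Pfister_etal06}.

```python
def dw_at_post(r1, o2_prev, A2p=0.005, A3p=0.006):
    """Potentiation at a post-synaptic spike: the fast pre trace r1
    gates the pair term A2+, while the slow post trace o2 (sampled
    just before the spike) adds the triplet term A3+."""
    return r1 * (A2p + A3p * o2_prev)

def dw_at_pre(o1, r2_prev, A2m=0.004, A3m=0.005):
    """Depression at a pre-synaptic spike: the fast post trace o1
    gates the pair term A2-, while the slow pre trace r2 adds the
    triplet term A3-."""
    return -o1 * (A2m + A3m * r2_prev)
```

In the circuit, the constant inputs to $I_{A2+}$ and $I_{A2-}$ realize the pair terms, while the additional currents from the o2 and r2 blocks realize the triplet sensitivity.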
\subsection{\acf{SDSP}}
\begin{center}
\begin{figure}[H]
\includegraphics[width=\textwidth, angle = 0]{figures/SDSP_f.pdf}
\centering
\caption{\acs{SDSP} circuit with the \ac{CMOS} building blocks used highlighted: eligibility traces with a \acs{DPI} (in green), weight updates (in violet), bistability (in red) and comparators with \acs{WTA} (in yellow). The voltage and current variables reflect the model equation. Adapted from:~\protect\citeasnoun{Chicca_etal14b}.}
\label{fig:sdsp}
\end{figure}
\end{center}
A sequence of theoretical works on spike-based learning rules, designed within the framework of attractor neural networks and mean-field theory, preceded the \acf{SDSP} formalization by~\citeasnoun{Brader_etal07}. Several hardware implementations by~\citeasnoun{Fusi_etal2000},~\citeasnoun{Dante_etal2001} and~\citeasnoun{Chicca_etal03} accompanied this theoretical work. After the formalization by~\citeasnoun{Brader_etal07}, many implementations of the \ac{SDSP} rule were proposed, following the desire to build smarter, larger, and more autonomous networks.
The implementations by~\citeasnoun{Chicca_etal03},~\citeasnoun{Mitra_etal09},~\citeasnoun{Giulioni_etal08} and~\citeasnoun{Chicca_etal14b} share similar building blocks: trace generators, comparators, blocks implementing the weight update and bistability mechanism. Here, we present the most complete design by~\citeasnoun{Chicca_etal14b}, shown in Fig.~\ref{fig:sdsp}, which replicates more closely the model equations (see Eq.~\eqref{eq:sdsp}).
At each pre-synaptic spike $pre$, the weight update block (see Fig.~\ref{fig:sdsp}) charges or discharges the capacitor $C_{x}$ altering the voltage $V_{x}$, depending on the values of $V_{a}$ and $V_{b}$. Here, $V_{x}$ represents the synaptic weight. If $I_{a} > I_{b}$, $V_{x}$ increases, while in the opposite case $V_{x}$ decreases. Moreover, over long time scales, in the absence of pre-synaptic spikes, $V_{x}$ is slowly driven toward the bistable states $V_{stableH}$ or $V_{stableL}$ depending on whether $V_{x}$ is higher or lower than $\theta_{x}$ respectively (see bistability block in Fig.~\ref{fig:sdsp}).
The $V_{a}$ and $V_{b}$ signals are continuously computed in the learning block, which compares the membrane potential of the neuron ($V$) to the threshold $\theta_{V}$ and evaluates in which region the Calcium concentration $V_{c}$ lies. The neuron's membrane potential is compared to the threshold $\theta_{V}$ by a transconductance amplifier. If $V > \theta_{V}$, $V_{mhi}$ is high and $V_{mlo}$ is low, while if $V < \theta_{V}$, $V_{mhi}$ is low and $V_{mlo}$ is high. At the same time, the post-synaptic neuron spikes ($post$) are integrated by a \ac{DPI} to produce the Calcium concentration $V_{c}$ (see trace - \ac{DPI} block in Fig.~\ref{fig:sdsp}), which is then compared with three Calcium thresholds by three \ac{WTA} circuits (see comparator circuits in Fig.~\ref{fig:sdsp}). In the lower comparator, $I_{c}$ is compared to $I_{\theta{C1}}$; if $I_{c} < I_{\theta{C1}}$, none of the learning conditions of the \ac{SDSP} rule is satisfied and there is no weight update. Assuming that $I_{c} > I_{\theta{C1}}$, the two upper comparators set the signals $V_{a}$ and $V_{b}$. If $V_{mlo}$ is high and $I_{c} < I_{\theta{C2}}$, $V_{b}$ is increased, setting the strength of the nMOS-based pull-down branch in the weight update block. If $V_{mhi}$ is high and $I_{c} < I_{\theta{C3}}$, $V_{a}$ is decreased, setting the strength of the pMOS-based pull-up branch of the weight update block. These two branches in the weight update block are activated by the $pre$ input spike.
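The stop-learning logic implemented by the three calcium comparators can be condensed into a short Python sketch; all threshold and jump values are assumed for illustration.

```python
def sdsp_dw(V, Ca, theta_V=0.8,
            theta_C1=0.2, theta_C2=2.0, theta_C3=8.0,
            a=0.1, b=0.1):
    """SDSP update evaluated at a pre-synaptic spike.

    Potentiate by `a` when the membrane potential is above theta_V
    and the calcium trace lies in (theta_C1, theta_C3); depress by
    `b` when it is below theta_V and the trace lies in
    (theta_C1, theta_C2); otherwise learning stops (no update).
    """
    if Ca <= theta_C1:
        return 0.0  # lower comparator: no learning condition satisfied
    if V > theta_V and Ca < theta_C3:
        return a
    if V <= theta_V and Ca < theta_C2:
        return -b
    return 0.0
```

The three calcium thresholds mirror the currents $I_{\theta{C1}}$, $I_{\theta{C2}}$ and $I_{\theta{C3}}$ of the comparator circuits, and the membrane comparison mirrors the transconductance amplifier producing $V_{mhi}$ and $V_{mlo}$.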
\subsection{\acf{CSTDP}}
\begin{center}
\begin{figure}[H]
\includegraphics[width=0.8\textwidth, angle =0]{figures/CA_STDP_f.pdf}
\centering
\caption{\ac{CSTDP} circuit with the \ac{CMOS} building blocks used highlighted: eligibility traces with a \acs{DPI} (in green), weight updates (in violet), bistability (in red) and comparators with \acs{WTA} (in yellow). Not shown is the circuit that implements the pre-synaptic spike extension. The voltage and current variables reflect the model equation. Adapted from:~\protect\citeasnoun{Maldonado_etal16}.}
\label{fig:cstdp}
\end{figure}
\end{center}
The \ac{CSTDP} rule proposed by~\citeasnoun{Graupner_Brunel07} (see Eq.~\eqref{eq:cstdp}) attracted the attention of circuit designers thanks to its claim to closely replicate biological findings and to explain synaptic plasticity in relation to both spike timing and rate.
To implement it in \ac{CMOS}, \citeasnoun{Maldonado_etal16} made small adaptations to the original model and proposed the circuit shown in Fig.~\ref{fig:cstdp}.
Specifically, they proposed to convert the soft bounds of the efficacy update to hard bounds, resulting in the following model for the update of the synaptic efficacy:
\begin{equation}
\begin{split}
\tau \frac{d\rho}{dt} = -k_{bs}\rho(1 - &\rho)(\rho_{\star} - \rho) + \gamma_{p}\Theta[c(t) - \theta_p] - \gamma_{d}\Theta[c(t) - \theta_d] \\
&\rho > 1 \rightarrow \rho = 1 \\
&\rho < 0 \rightarrow \rho = 0
\end{split}
\label{eq:ca_plasticity_simple}
\end{equation}
with $k_{bs}$ a constant that scales the bistability dynamics, $\Theta$ the Heaviside function implementing the threshold comparisons, and the hard bounds realized by clipping $\rho$ to the interval $[0,1]$.
The building blocks implemented in this work are shown in Fig.~\ref{fig:cstdp}. The trace block implements the local spike trace $c(t)$ represented by the voltage $V_{c}(t)$.
It consists of a \ac{DPI} with two input branches. On the arrival of either a post-synaptic spike ($post$) or the delayed pre-synaptic spike ($pre\_D$) the capacitor $C_{ca}$ is charged by a current defined by the gain of the \ac{DPI} ($V_{gCa}$) and $V_{Cpost}$ or $V_{Cpre}$, respectively. Charging the capacitor decreases the voltage $V_{c}(t)$. In the absence of input pulses, the capacitor discharges at a rate controlled by $V_{\tau Ca}$ towards its resting voltage $V_{cref}$.
The voltage $V_{c}(t)$ of the trace block sets the amplitude of the current $I_{c}(t)$ within the comparator blocks (see Fig.~\ref{fig:cstdp}). The current $I_{c}(t)$ is compared with the potentiation and depression thresholds defined by the currents $I_{\theta p}$ and $I_{\theta d}$, respectively. The \ac{WTA} functionality of the comparator circuits implements the Heaviside comparison of the local spike trace $c(t)$ with the thresholds for potentiation ($\theta_p$) and depression ($\theta_d$) in the model (see Eq.~\eqref{eq:cstdp}).
While the Calcium current $I_{c}(t)$ is greater than the potentiation threshold current $I_{\theta p}$, the synapse efficacy capacitor $C_{\rho}$ within the weight update block (see Fig.~\ref{fig:cstdp}) is continuously charged by a current defined by the parameter $V_{\gamma p}$. Similarly, as long as $I_{c}(t)$ is greater than the depression threshold current $I_{\theta d}$, $C_{\rho}$ is constantly discharged with a current controlled by $V_{\gamma d}$. The voltage across the synapse capacitor $V_{\rho}$ represents the efficacy $\rho$ of the synapse.
To implement the bistability behavior of the synaptic efficacy, \citeasnoun{Maldonado_etal16} use a \ac{TA} in positive feedback configuration with a very small gain defined by $V_{b}$ (see Fig.~\ref{fig:cstdp}). As long as the synaptic efficacy voltage $V_{\rho}$ is above the bistability threshold $V_{\rho_{\star}}$, the positive feedback constantly charges the capacitor $C_{\rho}$ and drives $V_{\rho}$ towards the upper limit defined by $V_{wh}$. In the case that $V_{\rho}$ is below $V_{\rho_{\star}}$, the \ac{TA} discharges the capacitor and drives $V_{\rho}$ toward the lower limit defined by $V_{wl}$.
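A forward-Euler integration of Eq.~\eqref{eq:ca_plasticity_simple} illustrates the behavior the circuit reproduces; all parameter values below are assumed for illustration.

```python
def cstdp_step(rho, c, dt=1e-3, tau=0.15, k_bs=1.0, rho_star=0.5,
               gamma_p=0.003, gamma_d=0.002, theta_p=1.3, theta_d=1.0):
    """One Euler step of the hard-bounded efficacy dynamics.

    The cubic term implements bistability around rho_star; the
    calcium trace c drives potentiation above theta_p and
    depression above theta_d; rho is clipped to [0, 1].
    """
    drift = -k_bs * rho * (1.0 - rho) * (rho_star - rho)
    drive = gamma_p * (c > theta_p) - gamma_d * (c > theta_d)
    rho = rho + (dt / tau) * (drift + drive)
    return min(1.0, max(0.0, rho))  # hard bounds
```

Without calcium input, efficacies above $\rho_{\star}$ drift toward the potentiated state and those below it toward the depressed state, which is exactly what the \ac{TA}-based bistability block implements.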
\subsection{\acf{RDSP}}
\begin{center}
\begin{figure}[H]
\includegraphics[width=0.7\textwidth, angle =0]{figures/RDSP_f.pdf}
\centering
\caption{\acs{RDSP} circuit with the \ac{CMOS} building blocks used highlighted: eligibility traces (in green), weight updates (in violet) and comparators with a differential pair (in yellow). Adapted from:~\protect\citeasnoun{Hafliger_etal97}.}
\label{fig:rdsp}
\end{figure}
\end{center}
The first \ac{CMOS} implementation of a spike-based learning rule, by~\citeasnoun{Hafliger_etal97}, pre-dates the formalization of the \ac{RDSP} model by almost 20 years~\cite{Diehl_Cook15}. It is one of the most apparent cases of how building electronic circuits that mimic biological behavior can lead to the discovery of mechanisms useful for solving real-world problems.
The algorithmic definition of their learning rule is based on a correlation signal, local to each synapse, which keeps track of the pre-synaptic spike activity. The correlation signal is refreshed at each pre-synaptic event and decays over time. When a post-synaptic spike arrives, the weight is either increased or decreased depending on the value of the correlation signal, which is then reset.
Similarly, the \ac{RDSP} rule relies on the pre-synaptic spike time information and is triggered when a post synaptic spike arrives. The direction of weight update depends on a target value $x_{tar}$, which determines the threshold between depression and potentiation.
The two main differences between the circuit by~\citeasnoun{Hafliger_etal97} (see Fig.~\ref{fig:rdsp}) and the \ac{RDSP} rule (see Eq.~\eqref{eq:rdsp}) are that the correlation signal in~\citeasnoun{Hafliger_etal97} is binary, and that it is compared to a fixed threshold voltage (the switching threshold of the first inverter), which resembles a fixed $x_{tar}$.
In the~\citeasnoun{Hafliger_etal97} implementation, the voltage $V_{w}$ across the capacitor $C_{w}$ represents the synaptic weight and the voltage $V_{xpre}$ at the capacitor $C_{xpre}$ represents the correlation signal. At the arrival of a pre-synaptic input spike ($pre$), the voltage $V_{w}$ determines the amplitude of the current towards the soma ($V_{mem}$) of the post-synaptic neuron. At the same time, the capacitor $C_{xpre}$ is fully discharged and $V_{xpre}$ is low. In the absence of pre-synaptic and post-synaptic spikes ($pre$ and $post$ are low), $C_{xpre}$ is slowly charged towards $Vdd$ by the pMOS branch in the trace block (see Fig.~\ref{fig:rdsp}).
The voltage $V_{xpre}$ is constantly compared to the threshold voltage (resembling $x_{tar}$) of the first inverter it is connected to. At the arrival of a post-synaptic spike ($post$ is high) the weight capacitor $C_{w}$ is either charged (depressed) or discharged (potentiated) depending on the momentary level of $V_{xpre}$. If $V_{xpre}$ is above the inverter threshold voltage, the right branch of the weight update block (see Fig.~\ref{fig:rdsp}) is inactive, while the left branch is active and the pMOS-based current mirror charges the capacitor $C_{w}$. In the opposite case, where $V_{xpre}$ is below the inverter threshold voltage, the right branch is active while the output of the second inverter disables the left branch of the weight update block. This results in a discharge of the capacitor $C_{w}$ controlled by the nMOS-based current mirror. The amplitude for potentiation and depression is set by the two biases $V_{\eta}$ and $V_{amp}$. At the end of a post-synaptic spike the correlation signal $V_{xpre}$ is reset to $Vdd$.
A similar approach implementing a nearest-spike interaction scheme and a fixed $x_{tar}$ was implemented by~\citeasnoun{Ramakrishnan_etal11} exploiting the properties of floating gates.
\subsection{Other model implementations}
\label{sec:cmos_other}
To the best of our knowledge, there have been no dedicated \ac{CMOS}-based implementations of the other models presented in Sec.~\ref{sec:models}. Although the \ac{VSTDP} rule proposed by~\citeasnoun{Clopath_etal10} and~\citeasnoun{Clopath_Gerstner10} shares similarities with the \ac{TSTDP} rule and can be related to the \ac{BCM} rule~\cite{Gjorgjieva_etal11}, its complexity for implementations comes from its multiple transient signals on different timescales. To this end, emerging novel technologies, such as memristors~\cite{Cantley_etal11,Li_etal13,Li_etal14,Ziegler_etal15,Diederich_etal18} and neuristors~\cite{Abraham_etal18}, offer promising solutions for implementing different timescales in a compact and efficient manner.
Similarly, implementations for the \ac{DPSS} rule~\cite{Urbanczik_Senn14} are difficult due to the increased complexity of the required multi-compartment neuron models. Recently, implementations based on hybrid memristor-\ac{CMOS} systems~\cite{Nair_etal17,Payvand_etal20} or using existing neuromorphic processors to exploit neuron structures to replicate the multi-compartment model~\cite{Cartiglia_etal20} have been proposed.
A detailed view on these implementations is beyond the scope of this review and the authors refer the readers to the original publications.
However, introducing \ac{CMOS} implemented models through the lens of functional building blocks allows us to quickly look for analogies and differences between the implemented and other models. Throughout this Section, we have highlighted the similarities and differences of each of the implemented models.
Focusing on functional building blocks also allows for a broader generalization to all the models that have not been implemented yet: using the basic building block we presented (e.g.\ Traces, Comparators, Weight updates, and Bistability) one could potentially construct all the learning models we have discussed in Sec.~\ref{sec:models}.
\section{Discussion and conclusion}
\subsection{Toward a unified synaptic plasticity framework}
In this survey, we highlighted the similarities and differences of representative synaptic plasticity models and provided examples of \ac{CMOS} neuromorphic circuits that can be used to implement their principles of computation.
We highlighted how the principle of locality in learning and neural computation in general is fundamental and enables the development of fast, efficient and scalable neuromorphic processing systems.
We highlighted how the different features of the plasticity models can be summarized in (1) synaptic weight properties, (2) plasticity update triggers and (3) local variables that can be exploited to modify the synaptic weight (see also Table~\ref{tab:models}). Although all local variables of these rules are similar in nature, the plasticity rules can be subdivided in the following way:
\begin{itemize}
\item Pre-synaptic spike trace: \ac{RDSP}.
\item Pre- and post-synaptic spike traces: \ac{STDP}, \ac{TSTDP}, \ac{CSTDP}, \ac{SBCM}, \ac{BDSP}.
\item Pre-synaptic spike trace + post-synaptic membrane voltage: \ac{VSTDP}, \ac{DPSS}, \ac{MPDP}, \ac{HMPDP}.
\item Post-synaptic membrane voltage + post-synaptic spike trace: \ac{SDSP}, \ac{CMPDP}.
\end{itemize}
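To make the role of these local variables concrete, a minimal sketch of a rule from the second group (pair-based \ac{STDP} driven by pre- and post-synaptic spike traces) could look as follows; the amplitudes and time constants are illustrative assumptions, not taken from any specific model:

```python
import math

def decay_trace(x, dt, tau):
    """Exact decay of an exponential trace dx/dt = -x/tau over dt."""
    return x * math.exp(-dt / tau)

def pair_stdp(pre_times, post_times, a_plus=0.01, a_minus=0.012,
              tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP sketch using only local pre/post spike traces."""
    w, x_pre, x_post = 0.0, 0.0, 0.0
    t_last = 0.0
    events = sorted([(t, 'pre') for t in pre_times] +
                    [(t, 'post') for t in post_times])
    for t, kind in events:
        dt = t - t_last
        x_pre = decay_trace(x_pre, dt, tau_plus)
        x_post = decay_trace(x_post, dt, tau_minus)
        if kind == 'pre':
            w -= a_minus * x_post   # depression: pre arrives after post
            x_pre += 1.0            # update local pre-synaptic trace
        else:
            w += a_plus * x_pre     # potentiation: post arrives after pre
            x_post += 1.0           # update local post-synaptic trace
        t_last = t
    return w
```

The other groups differ only in which local variables are sampled at the update triggers, e.g.\ replacing the post-synaptic spike trace with the membrane voltage for the \ac{VSTDP}-like rules.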
Many possibilities arise when exploring how the local variables used by these rules interact (e.g.\ comparison, addition, multiplication, etc.). This leads to a wide range of additional models that could be proposed and to a large number of biological experiments that could be carried out to verify the hypotheses and predictions made by the rules.
It is difficult to predict whether a unified rule of synaptic plasticity can be formulated, based on the observation that several plasticity mechanisms coexist in the brain~\cite{Abbott_Nelson00,Bi_Poo01}, and that different problems may require different plasticity mechanisms.
Nevertheless, we provided here a single unified framework that allowed us to do a systematic comparison of the features of many representative models of synaptic plasticity presented in the literature, developed following experiment-driven bottom-up approaches and/or application-driven top-down approaches~\cite{Frenkel_etal21b}. While the bottom-up approach can help in explaining the plasticity mechanisms found in the brain, top-down guidance can help to find the right level of abstraction from biology to get the best performance for solving problems in the context of efficient and adaptive artificial systems. In line with the neuromorphic engineering perspective, this work bridges the gap between both approaches.
\subsection{Overcoming back-propagation limits for online learning}
\label{sec:gradient-learning}
Local synaptic plasticity in neuromorphic circuits offers a promising solution for online learning in embedded systems. However, due to the very local nature of this approach, there is no direct way of implementing global learning rules in multi-layer neural networks, such as the gradient-based back-propagation algorithm~\cite{LeCun_etal98,Schmidhuber_etal07}.
This algorithm has been the workhorse of \acp{ANN} training in deep learning over the last decade. Gradient-based learning has recently been applied to offline training of \acp{SNN}, where the \ac{BP} algorithm coupled with surrogate gradients is used to solve two critical problems. First, the temporal credit assignment problem, which arises due to the temporal inter-dependencies of the \ac{SNN} activity; it is solved offline with \ac{BPTT} by unrolling the \ac{SNN} like standard \acp{RNN}~\cite{Neftci_etal19}. Second, the spatial credit assignment problem, where the credit or ``blame'' with respect to the objective function is assigned to each neuron across the layers.
However, \ac{BPTT} is not biologically plausible~\cite{Bengio_etal15,Lillicrap_etal20} and not practical for on-chip and online learning due to the non-local learning paradigm.
On one hand, \ac{BPTT} is not local in time as it requires keeping all the network activities for the duration of the trial.
On the other hand, \ac{BPTT} is not local in space as it requires information to be transferred across multiple layers. Indeed, synaptic weights can only be updated after complete forward propagation, loss evaluation, and back-propagation of error signals, which leads to the so-called ``locking effect''~\cite{Czarnecki_etal17}.
Recently, intensive research in neuromorphic computing has been dedicated to bridging the gap between back-propagation and local synaptic plasticity rules by reducing the non-local information requirements, at the cost of accuracy on complex problems~\cite{Eshraghian_etal21}.
The temporal credit assignment problem can be handled by using eligibility traces~\cite{Zenke_Ganguli18,Bellec_etal20} that solve the distal reward problem by bridging the delay between the network output and a feedback signal that may arrive later in time~\cite{Izhikevich07}.
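A minimal sketch of such an eligibility-trace mechanism (a generic three-factor rule; all constants are illustrative) could be:

```python
def three_factor_update(coincidences, reward_time, reward, T,
                        tau_e=200.0, lr=0.1):
    """Eligibility-trace sketch of a three-factor rule.

    Local pre/post coincidences load a decaying trace e, and a delayed
    modulatory reward signal converts it into a weight change,
    bridging the gap to the distal reward.
    """
    e, w = 0.0, 0.0
    for t in range(T):
        e *= (1.0 - 1.0 / tau_e)   # exponential decay of the eligibility trace
        if t in coincidences:
            e += 1.0               # local Hebbian event marks the synapse
        if t == reward_time:
            w += lr * reward * e   # modulatory factor gates the update
    return w
```

The eligibility trace keeps a decaying memory of the local pre/post coincidence, so a modulatory signal arriving much later can still convert it into a weight change.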
Similarly, inspired by recent progress in deep learning, several strategies have been explored to solve the spatial credit assignment problem using feedback alignment~\cite{Lillicrap_etal16}, direct feedback alignment~\cite{Nokland16}, random error \ac{BP}~\cite{Neftci_etal17} or by replacing the backward pass with an additional forward pass whose input is modulated with error information~\cite{Dellaferrera_2022}.
However, these approaches only partially solve the problem~\cite{Eshraghian_etal21}, since they still suffer from the locking effect, which can nonetheless be tackled by replacing the global loss by a number of local loss functions~\cite{Mostafa_etal18,Neftci_etal19,Kraiser_etal20,Halvagal_Zenke22} or by using direct random target projection~\cite{Frenkel_etal21b}.
Assigning credit locally, especially within recurrent \acp{SNN}, is still an open question and an active field of research~\cite{Christensen_etal21}.
The local synaptic plasticity models and circuits presented in this survey do not require the presence of a teacher signal, in contrast with supervised learning using labeled data, which is neither biologically plausible~\cite{Halvagal_Zenke22} nor practical in most online scenarios~\cite{Muliukov_etal22}. Nevertheless, the main limit of spike-based local learning is the diminished performance on complex pattern recognition problems. Different approaches have been explored to bridge this gap, such as the \ac{DPSS}~\cite{Urbanczik_Senn14,Sacramento_etal18} and \ac{BDSP}~\cite{Payeur_etal21} learning rules that use multi-compartment neurons and show promising performance in approximating back-propagation with local mechanisms, or using multi-modal association to improve the self-organizing system's performance~\cite{Gilra_Gerstner17,Khacef_etal20,Rathi_Roy21}: in contrast to labeled data, multiple sensory modalities (e.g.\ sight, sound, touch) are freely available in the real-world environment.
\subsection{Structural plasticity and network topology}
Exploring local synaptic plasticity rules gives valuable insights into how plasticity and learning evolves in the brain. However, in bringing the plasticity of single synapses to the function of entire networks, many more factors come into play. Functionality at a network level is determined by the interplay between the synaptic learning rules, the spatial location of the synapse, and the neural network topology.
Furthermore, the network topology of the brain is itself plastic~\cite{Holtmaat_Svoboda09}.
\citeasnoun{LeBe_Markram06} provided the first direct demonstration of induced rewiring (i.e.\ sprouting and pruning) of a functional circuit in the neocortex~\cite{Markram_etal11}, which requires hours of general stimulation.
Some studies suggest that glutamate release is a key determinant in synapse formation~\cite{Engert_Bonhoeffer99,Kwon_Sabatini11}, but additional investigations are needed to better understand the computational foundations of structural plasticity and how it is linked to the synaptic plasticity models we reviewed in this survey. Together, structural and synaptic plasticity are the local mechanisms that lead to the emergence of the global structure and function of the brain. Understanding, modeling, and implementing the interplay between these two forms of plasticity is a key challenge for the design of self-organizing systems that can get closer to the unique efficiency and adaptation capabilities of the brain.
\subsection{\ac{CMOS} neuromorphic circuits}
The computational primitives that are shared by the different plasticity models were grouped together in corresponding functional primitives and circuit blocks that can be combined to map multiple plasticity models into corresponding spike-based learning circuits.
Many of the models considered rely on exponentially decaying traces. By operating the \ac{CMOS} circuits in the sub-threshold regime, this exponential dependency is given by the physical substrate of transistors showing an exponential relationship between current and voltage~\cite{Mead90}.
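For reference, the subthreshold drain current of a saturated MOS transistor grows exponentially with its gate-source voltage (standard first-order model; $I_0$ is the leakage scaling current, $\kappa$ the subthreshold slope factor and $U_T$ the thermal voltage):
\begin{equation}
I_{ds} \approx I_0 \, e^{\kappa V_{gs} / U_T}.
\end{equation}
This device-level exponential is what the trace circuits exploit to obtain exponential decays directly in the physical substrate.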
The circuits presented make use of both analog computation (e.g.\ analog weight updates) and digital communication (e.g.\ pre- and post-synaptic spike events). This mixed-signal analog/digital approach aligns with the observations that biological neural systems can be considered as hybrid analog and digital processing systems~\cite{Sarpeshkar98}.
Due to the digital nature of spike transmission in these neuromorphic systems, plasticity circuits that require the use of pre-synaptic traces need extra overhead to generate this information directly at the post-synaptic side.
The emergence of novel nanoscale memristive devices has high potential for allowing the implementation of such circuits at a low overhead cost, in terms of space and power~\cite{Demirag_etal21}.
In addition, these emerging memory technologies have the potential of allowing long-term storage of the synaptic weights in a non-volatile way, that would allow these neuromorphic systems to operate continuously, without having to upload the neural network parameters at boot time. This will be a significant advantage in large-scale systems, as Input/Output operations required to load network parameters can take a significant amount of power and time.
In addition, the properties of emerging memristive devices could be exploited to implement different features of the plasticity models proposed~\cite{Diederich_etal18}.
Overall, the number of proposed \ac{CMOS}-based analog or mixed-signal neuromorphic circuits over the past 25 years is relatively low, as this work was mainly driven by fundamental academic research. With the increasing need for low-power neural processing systems at the edge, the increasing maturity of novel technologies, and the rising interest in brain-inspired neural networks and learning for data processing, we can expect an increasing number of new mixed-signal analog/digital circuits implementing new plasticity rules, also for commercial exploitation. In this respect, this review can provide valuable information for making informed modeling and circuit design decisions when developing novel spike-based neuromorphic processing systems for online learning.
\ack
We would like to thank the BICS group for the fruitful discussions, with a special thank to Hugh Greatorex for providing valuable comments on the manuscript. We would also like to acknowledge the financial support of the CogniGron research center and the Ubbo Emmius Funds of the University of Groningen, the European Union's H2020 research and innovation programme under the H2020 BeFerrosynaptic project (871737), the Swiss National Science Foundation Sinergia project (CRSII5-18O316) and the ERC grant NeuroAgents (724295).
\section*{Data availability statement}
No new data were created or analyzed in this study.
\section*{ORCID IDs}
Lyes Khacef: https://orcid.org/0000-0002-4009-174X. \\
Philipp Klein: https://orcid.org/0000-0003-4266-2590. \\
Matteo Cartiglia: https://orcid.org/0000-0001-8936-6727. \\
Arianna Rubino: https://orcid.org/0000-0002-5036-1969. \\
Giacomo Indiveri: https://orcid.org/0000-0002-7109-1689. \\
Elisabetta Chicca: https://orcid.org/0000-0002-5518-8990.
\section*{References}
\section{Discussion}
\label{ssec:keyobservation}
\subsection{Key Observations from Layer and Neuron Analysis}
\paragraph{\textbf{Number of neurons and the task complexity}:}
We noticed that the simple properties, such as gender and channel, requires less neurons (as little as 1\% neurons of the network) to capture the information. One possible explanation can be the nature of the tasks -- since both gender and channel information are salient and easily distinguishable in the acoustic signal, a handful of neurons is sufficient to capture the information during classification. However, for the complex properties, such as voice identity verification, a significant amount of the neurons are required to represent the variability in the signal.
\paragraph{\textbf{Localized vs Distributive}:} We noticed that the salient neurons, for most of the tasks, are localised in the upper layers of the pretrained models. This observation is more pronounced for the voice identity verification task in the SRE model, since only the last layer of the network encodes such information, using almost 50-75\% of the layer's neurons. We hypothesize that the neurons in each layer are more informative than those in the preceding layer, and that more contextual information is captured as we go deeper in the network, which helps to discriminate, e.g., the variability in speaker voice.
\paragraph{\textbf{Task-specific Redundancy}:} We observed task-specific redundancy, at both the layer and neuron level, for the gender and channel properties. Due to the distinguishable acoustic signature of these properties, the redundant information is learnt across the network and can be represented by a small number of neurons.
A neuron-level redundancy is also observed for the language property in the ADI model. We speculate that a small number of neurons is sufficient to capture the variability between the languages when transferred from the pretrained model -- trained to distinguish dialects of a language family.
\paragraph{\textbf{Polysemous Neurons}:} We noticed that these salient neurons are sometimes shared among properties. For example, in the SRE pretrained model, we noticed that a subset (around 40\%) of the voice-identity neurons is shared with the gender neurons. This sharing of neurons across properties reflects the main training objective -- i.e., to recognise speakers -- of the pretrained model, as both gender and voice variability are needed to verify speakers' identity. Hence, when creating speaker verification pairs to evaluate the performance of a speaker recognition model, we tend to select pairs from the same gender, removing the advantage of gender-based discrimination.
\paragraph{\textbf{Bias}:} Our fine-grained neuron analysis
reflects the property-based bias present in the pretrained network and highlights the parts of the network (neurons) responsible for encoding the information. For example, using layer- and neuron-level analysis, we show that the ADI pretrained model is more susceptible to gender bias than the other pretrained models. By identifying the neurons that
capture gender information in the network, we can manipulate and control the system's behavior to eradicate the bias. We leave this exploration for future work.
\paragraph{\textbf{Robustness}:} Through our diagnostic tests, we observed that the pretrained networks are robust towards unknown speakers. This increases the reliability of the predictions of the pretrained models.
Moreover, using this information we can identify the parts of the network susceptible to capturing such identity information, and use only those parts to fine-tune the pretrained model for any future speaker-dependent downstream task, at lower computational cost.
\subsection{Cross-architectural Comparison:}
\paragraph{\textbf{Network Size and its Encoding Capabilities}:} From the cross-architectural comparison of the pretrained models, we noticed that the small transformers give relatively poor performance compared to the large transformers and CNNs. We hypothesise that the small pretrained transformers encode low-level information and can perform well only when fine-tuned for a downstream task. On the other hand, large architectures are better feature extractors,
due to their ability to capture more meaningful and abstract information (such as the language property). Our findings resonate with a previous study carried out by \cite{liu2020mockingjay}.
\paragraph{\textbf{Storing Knowledge}:}
We observed a tetrahedral pattern in storing knowledge. As the network gets deeper, the neurons become more informative and store more knowledge than those in the preceding layers.
For the large architectures -- CNNs and the large transformer -- we notice that the task-oriented information is captured in the upper layers of the network, encoding more abstract information, whereas vocal features are mostly captured in the lower CNN layers. This reinforces the previous findings that the lower layers of the network act as feature extractors and the upper layers as task-oriented classifiers.
For the small transformer pretrained model, the information is more distributed.
\paragraph{\textbf{Re-using Pretrained Architectures for Transfer Learning}:} In the pretrain--fine-tune paradigm, the most popular architecture choice is the transformer, in the context of pretrained language modelling and speech representation models. However, in the research community, there is an abundance of trained large CNN models. In line with \cite{tay2021pretrained}, our findings suggest potential in (re-)using these large CNNs as pretrained models for transferring knowledge to another task, irrespective of their pretraining objectives. Our results show that re-using these pretrained CNNs can give better or comparable performance compared to the transformer models, while using fewer computational resources.
\subsection{Potential Applications}
\label{sec:application}
Our results demonstrate that a minimal neuron set can represent the learned property-based information without compromising accuracy.
Identifying such a small set of salient neurons
can help in understanding the network's behaviour and its predictions. Such a neuron set can be used effectively to identify neurons or parts of the network to prune, or to find a sparse subnetwork, as indicated in \cite{frankle2018lottery}.
Furthermore, these salient neurons can be used as important features for downstream tasks. Moreover, identifying the salient neurons for each property can pinpoint sensitive parts of the network, as seen in Section \ref{t1layer} for the gender property in ADI.
\subsection{Limitations}
\label{ssec:limitation}
\paragraph{Complexity of the probe} From the methodological point of view, we used a simple logistic regression classifier, motivated by its theoretical simplicity and widespread use in the literature.
However, some studies \cite{conneau2018you} showed that a deeper classifier might be needed to capture more nuanced encoded knowledge. Linear probes are particularly important for our method as we use the learned weights as a proxy to measure the importance of each neuron. We leave the exploration of more complex probes for the future.
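A minimal numpy sketch of this setup, assuming a binary property and using the magnitude of the learned probe weights as the neuron-importance proxy (the hyperparameters are arbitrary and not the ones used in our experiments), could be:

```python
import numpy as np

def train_linear_probe(X, y, lr=0.5, epochs=500, l2=1e-3):
    """Logistic-regression probe trained with batch gradient descent.

    X: (n_samples, n_neurons) activations; y: binary labels in {0, 1}.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid predictions
        grad_w = X.T @ (p - y) / n + l2 * w       # L2-regularized gradient
        w -= lr * grad_w
        b -= lr * np.mean(p - y)
    return w, b

def rank_neurons(w):
    """Neurons sorted by descending absolute probe weight (most salient first)."""
    return np.argsort(-np.abs(w))
```

For multi-class probes, the per-neuron importance is typically aggregated over the class-wise weight vectors before ranking.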
\paragraph{Dependence on supervision}
We used pre-defined tasks and annotations to train our probes, upon which we carry out our layer-wise and fine-grained neuron analyses.
A downside to this approach is that our analysis is limited to the pre-defined properties for which annotations are available. Unsupervised analysis is required to unearth what other information is captured within the network and whether machine-learned features correspond to human-engineered features.
Another limitation of probing classifiers is that the analysis is biased by the limitations (sparsity, genre, etc.) of the annotated data. It is important to conduct the analysis under various data conditions to corroborate the findings. We leave this exploration for the future.
\paragraph{Connecting interpretation with prediction}
While probing methods are useful for analyzing and pinpointing important information captured within the network, this approach does not necessarily indicate how this information is used by the network during prediction (causation) \cite{belinkov2019analysis}. For example, to eliminate a bias in the output of the system, one must identify the neurons that are relevant to that property and then also identify which of these neurons are critical during prediction. Combining the two pieces of information, one can effectively control the system's behavior towards that property. This is a challenging research frontier that we invite researchers in speech modeling to explore.
\section{Results: Fine-grained Neuron Analysis}
\label{sec:resultlc}
Next, we carried out a fine-grained neuron-level analysis for deeper insights. We first evaluate the efficacy of the neuron ranking algorithm in Section \ref{ssec:ranking_eval}; second, we compare our oracle (ALL) results with the control tasks for further validation; and third, we investigate the following question (RQ3): what is the minimal set of neurons that captures a property? We also highlight the parts of the networks that predominantly capture these properties.
\begin{table}[!ht]
\centering
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{} & ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$Neurons$ & 20\% & 20\% & 20\% & 20\% \\\hline
\hline\hline
\multicolumn{5}{|c|}{T1: GC} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc_t (Masked)$ & \multicolumn{1}{r}{98.23} & \multicolumn{1}{r}{97.58} & \multicolumn{1}{r}{95.43} & \multicolumn{1}{r}{93.65} \\\hline
$Acc_b$ (Masked) & 53.36 & 58.65 & 77.13 & 55.87 \\\hline
$Acc_r$ (Masked) & 98.06 & 87.08 & 95.93 & 92.64 \\\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{} & ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$Neurons$ & 20\% & 20\% & 20\% & 20\% \\\hline
\hline\hline
\multicolumn{5}{|c|}{T3: LID} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc_t$ (Masked) & 65.38 & 64.89 & 50.45 & 70.51 \\
\hline
$Acc_b$ (Masked) & 16.30 & 17.26 & 21.13 & 17.14 \\
\hline
$Acc_r$ (Masked) & \multicolumn{1}{l|}{58.83} & \multicolumn{1}{l|}{42.94} & 48.40 & 58.84 \\ \hline
\end{tabular}
}
\medskip
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T4: DID} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc_t$ (Masked) & 52.75 & 31.29 & 23.14 & 36.21 \\
\hline
$Acc_b $(Masked) & \multicolumn{1}{l|}{22.33} & \multicolumn{1}{l|}{23.67} & 24.21 & 22.40 \\
\hline
$Acc_r$ (Masked) & \multicolumn{1}{l|}{34.01} & \multicolumn{1}{l|}{25.78} & 24.97 & 29.05 \\
\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T5:CC} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc_t$ (Masked) & 88.60 & 86.48 & 79.40 & 88.69 \\
\hline
$Acc_b$ (Masked) & 41.00 & 36.64 & 49.69 & 33.14 \\
\hline
$Acc_r$ (Masked) & 87.49 & 75.36 & 80.28 & 88.87 \\
\hline
\end{tabular}
}
\caption{Reported accuracy (Acc), indicating the efficacy of the neuron ranking and selection algorithm for the proxy tasks T1:GC, T3:LID, T4:DID and T5:CC using masked 20\% t/b/r neurons. $Acc_{*}$ with t=\textit{top}, b=\textit{bottom} and r=\textit{random}.
Reported performance is averaged over 5 runs. }
\label{tab:task-efficacy}
\end{table}
\subsection{Efficacy of the Neuron Ranking}
\label{ssec:ranking_eval}
We evaluate the effectiveness of the neuron selection method. We report the probe's accuracy when masking out 80\% of the top/bottom neurons or keeping 20\% of random neurons. See the results (accuracy) for the different tasks under study in Table \ref{tab:task-efficacy}, and the EER for T2:SV in Table \ref{tab:sv_neuron}. Comparing the accuracy of the top neurons versus the bottom neurons, the former is always higher than the latter, showing the efficacy of the ranking algorithm. This is also true comparing top versus random neurons, although in some cases, e.g.\ T2:SV-$ST_{large}$, the EER\footnote{A lower value indicates better performance.} of the random set (15.06) is lower than the EER of the top set (15.62). This indicates that the information is redundant and distributed across the network for the studied complex tasks. We will discuss this further as we move on to our fine-grained neuron analysis.
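The masking evaluation used here can be sketched as follows (an illustrative helper, not our exact evaluation code): given a saliency ranking, all but a chosen fraction of top, bottom, or random neurons are zeroed out before the probe's accuracy is re-measured.

```python
import numpy as np

def mask_neurons(X, ranking, keep_frac, mode="top", seed=0):
    """Zero out all but a fraction of neurons in the activation matrix X.

    ranking: neuron indices sorted from most to least salient.
    mode: "top", "bottom", or "random" selection of the kept neurons.
    """
    d = X.shape[1]
    k = max(1, int(round(keep_frac * d)))
    if mode == "top":
        keep = ranking[:k]
    elif mode == "bottom":
        keep = ranking[-k:]
    else:
        keep = np.random.default_rng(seed).choice(d, size=k, replace=False)
    mask = np.zeros(d)
    mask[keep] = 1.0
    return X * mask     # masked activations, fed to the frozen probe
```

The frozen probe is then evaluated on the masked activations, so the accuracy gap between the top, bottom, and random modes directly reflects the quality of the ranking.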
\begin{table}[!htb]
\centering
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{} & ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$\#Neurons $& 11100 & 11100 & 2304 & 9216 \\
\hline\hline
\multicolumn{5}{|c|}{T1: GC} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc$ (Maj-C) &\multicolumn{4}{c|}{56.70} \\
\hline
$Acc$ (ALL) & 98.20 & 96.79 & 99.16 & 98.14 \\ \hline
$Acc$ (R.INIT) & 68.14 & 68.14 & 56.17 & 56.60 \\\hline
$Sel_a$ & 42.78 & 67.28 & 52.83 & 72.53 \\\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{} & ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$\#Neurons $& 11100 & 11100 & 2304 & 9216 \\
\hline\hline
\multicolumn{5}{|c|}{T3: LID} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc$ (Maj-C) &\multicolumn{4}{c|}{14.96} \\
\hline
$Acc$ (All) & 86.00 & 76.01 & 57.35 & 76.24 \\
\hline
$Acc$ (R.INIT) & \multicolumn{1}{c}{13.20} & \multicolumn{1}{c|}{13.20} & \multicolumn{1}{c|}{15.58} & \multicolumn{1}{c|}{14.23} \\
\hline
$Sel_a$ & \multicolumn{1}{c|}{75.69} & \multicolumn{1}{c|}{69.20 } & \multicolumn{1}{c|}{41.18 } & \multicolumn{1}{c|}{61.76} \\ \hline
\end{tabular}
}
\medskip
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T4: DID} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc$ (Maj-C) &\multicolumn{4}{c|}{23.06} \\
\hline
$Acc$ (ALL) & 55.63 & 39.12 & 36.66 & 39.22 \\
\hline
$Acc$ (R.INIT) & \multicolumn{1}{c|}{20.24} & \multicolumn{1}{c|}{20.24} & \multicolumn{1}{c|}{16.70} & \multicolumn{1}{c|}{22.45} \\
\hline
$Sel_a$ & 36.7 & 16.89 & 13.34 & 19.20 \\ \hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T5:CC} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc$ (Maj-C) &\multicolumn{4}{c|}{32.12} \\
\hline
$Acc$ (ALL) & 93.93 & 85.51 & 86.80 & 96.55 \\
\hline
$Acc$ (R.INIT) &26.52 & 28.54 & 37.74 & 37.32 \\
\hline
$Sel_a$ & 63.81 & 77.65 & 68.17 & 83.76 \\ \hline
\end{tabular}
}
\caption{Control tasks: reported accuracy (Acc) for the proxy tasks T1:GC, T3:LID, T4:DID and T5:CC using the majority baseline (Maj-C), the oracle (ALL: with all neurons), random initialisation (R.INIT) of neuron weights, and selectivity ($Sel_a$).
Reported performance for $Sel_a$ is averaged over 5 runs. }
\label{tab:task-baseline}
\end{table}
\subsection{Control Tasks}
\label{ssec:control_eval}
We report the baseline performances -- the majority baseline and random initialisation of neuron weights instead of embeddings from the pretrained models -- in Table \ref{tab:task-baseline} and compare them with our oracle (ALL) results.
To show that the reported performance indicates the strength of the encoded representation rather than the probe's memorising capability, we present selectivity ($Sel_a$) in Table \ref{tab:task-baseline}. We show that the oracle (the probe trained on all neurons, $Acc$ (ALL)) significantly outperforms both the probes with randomly initialised weights and the control task -- selectivity (described in Section \ref{ssec:control}).
\subsection{Minimal Neurons}
\label{ssec:neuron_eval}
\begin{table} [!ht]
\centering
\scalebox{0.8}{
\begin{tabular}{l|cccc}
\multicolumn{1}{l}{} & ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$\#Neurons $& 11100 & 11100 & 2304 & 9216 \\
\hline\hline
$Acc$ (ALL) & \multicolumn{1}{r}{98.20} & \multicolumn{1}{r}{96.79} & \multicolumn{1}{r}{99.16} & \multicolumn{1}{r}{98.14} \\
\hline\hline
\multicolumn{5}{c}{Re-trained with Minimal Neuron sets} \\
\hline\hline
$Neu_t$ & \multicolumn{1}{r}{5\%} & \multicolumn{1}{r}{50\%} & \multicolumn{1}{r}{15\%} & \multicolumn{1}{r}{10\%} \\\hline
$Acc_t$ (Re-trained) & \multicolumn{1}{r}{98.68} & \multicolumn{1}{r}{96.54} & \multicolumn{1}{r}{98.32} & \multicolumn{1}{r}{98.14} \\\hline
$Acc_r$ (Re-trained) & 94.65 & 95.28 & 97.44 & 87.99 \\\hline
\end{tabular}}
\caption{Reported accuracy (Acc) for proxy task T1:GC using fine-grained neuron analysis. $Acc_{*}$ with t=\textit{top}, b=\textit{bottom} and r=\textit{random}; \textit{Neu}: neurons. Reported performance is averaged over 5 runs. }
\label{tab:gender}
\vspace{-0.4cm}
\end{table}
\subsubsection*{T1: Gender Classification (GC)}
We further study the possibility of extracting a minimal neuron subset with comparable performance. We retrain the classifier with the selected top and random neurons.
We observed that only a small set of neurons ($Acc_t (Re-trained)$), selected from the upper layers of the networks, is sufficient to achieve an accuracy close to the `ALL' set ($Acc (ALL)$). This shows that the information can be represented using a small set of neurons (e.g.\ 5-15\%).
Furthermore, we observed a small accuracy difference (within a threshold of 5\%) between the small top and random subsets for most of the pretrained models. These observations indicate the presence of some redundancy of the gender information throughout the network.
On the contrary, in SRE, we noticed that the accuracy of the probe drops when re-trained with the top 50\% of neurons (with respect to the masked accuracy, in Table \ref{tab:task-efficacy}, and the oracle accuracy, $Acc $(ALL)). We speculate that this behavior is due to the nature of the pretrained model and its training objective. Note that the primary objective of the pretrained SRE model is to discriminate speakers, where gender recognition is first-line information for such discrimination. Therefore, the oracle gender classification probe -- trained with all the neurons of the pretrained network -- outperforms the newly re-trained probe with minimal neurons, which indicates that the gender property is not redundant information for an SRE model and all the neurons capture some variant information. Such speculation is also affirmed when comparing the cardinality of the minimal neuron set of the SRE model ($50\%$ of neurons) {\em vs} the remaining pretrained models (5-15\% of neurons).
\begin{table} [!ht]
\centering
\scalebox{0.75}{
\begin{tabular}{l|cccccc}
\hline
EER & \multicolumn{1}{c}{$L_{b}$} & $EER(L_{b})$ & \multicolumn{1}{c}{$Neu_{t}$} & \multicolumn{1}{c}{$EER_t$} & \multicolumn{1}{c}{$EER_r$} & \multicolumn{1}{c}{$EER_b$} \\
\hline\hline
\multicolumn{7}{c}{EN} \\
\hline\hline
ADI & FC1 &22.27 & 75\% & 22.03 & 22.32 & 22.50 \\
\hline
SRE & FC2 & 6.81 & 75\% & 6.96 & 6.96 & 7.05 \\
\hline
$ST_{base}$ & L3 & 28.12 & 5\% & 27.57 & 31.04 & 32.43 \\
\hline
$ST_{large}$ & L11 & 32.31 & 5\% & 26.64 & 34.11 & 39.36 \\
\hline
\hline
\multicolumn{7}{c}{ZH} \\
\hline\hline
ADI & FC1 & 13.55 & 50\% & 14.37 & 15.85 & 14.51 \\
\hline
SRE & FC2 & 5.47 & 50\% & 6.06 & 6.56 & 6.10 \\
\hline
$ST_{base}$ & L3 & 13.90 & 20\% & 13.78 & 15.19 & 20.49 \\
\hline
$ST_{large}$ & L11 & 15.90 & 5\% & 15.62 & 15.06 & 26.29 \\
\hline
\hline
\multicolumn{7}{c}{RU} \\
\hline\hline
ADI & FC1 & 13.47 & 50\% & 12.81 & 14.09 & 14.66 \\
\hline
SRE & FC2 & 4.05 & 50\% & 4.59 & 4.41 & 4.93 \\
\hline
$ST_{base}$ & L3 &16.63 & 10\% & 16.25 & 16.75 & 19.29 \\
\hline
$ST_{large}$ & L11 & 16.27 & 10\% & 9.09 & 15.04 & 24.44 \\
\hline
\end{tabular}
}
\caption{Reported equal error rate (EER) for proxy Task T2:SV using fine-grained neuron analysis. $EER_{*}$ with t=\textit{top}, b=\textit{bottom} and r=\textit{random} neurons (\textit{Neu}). $L_b$ represents the layer of the pretrained model with the lowest EER. $Neu_t$ represents the percentage of neurons selected. Reported performance is averaged over 5 runs. }
\label{tab:sv_neuron}
\end{table}
\subsubsection*{T2: Speaker Verification (SV)}
From the layer-wise representations, we observed that speaker-variant information is present only in the last layer of the speaker recognition model. Studying this best layer ($L_b$) further, we noticed that $\approx75$\% (EN) of its neurons are used to represent the information, indicating that the information is distributed throughout the last layer. Moreover, these salient neurons are also shared with other properties such as gender, which aligns with our earlier hypothesis from the gender classification probe. Similar observations hold for the Chinese and Russian datasets.
\begin{table}[!ht]
\centering
\scalebox{0.65}{
\begin{tabular}{|l|c|c|c|c|}
\hline
& \multicolumn{1}{l|}{ADI} & \multicolumn{1}{l|}{SRE} & \multicolumn{1}{l|}{$ST_{base}$} & \multicolumn{1}{l|}{$ST_{large}$} \\
\hline\hline
$\#Neurons$ & 11100 & 11100 & 2304 & 9216 \\
\hline\hline
\multicolumn{5}{|c|}{LID} \\
\hline\hline
$Acc$ (All) & 86.00 & 76.01 & 57.35 & 76.24 \\
\hline
$Acc$ (R.INIT) & \multicolumn{1}{l|}{13.20} & \multicolumn{1}{l|}{13.20} & \multicolumn{1}{l|}{15.58} & \multicolumn{1}{l|}{14.23} \\
\hline
$Sel_a$ & \multicolumn{1}{l|}{75.69} & \multicolumn{1}{l|}{69.20 } & \multicolumn{1}{l|}{41.18 } & \multicolumn{1}{l|}{61.76} \\
\hline\hline
$Neu_t$ & 20\% & 20\% & 20\% & 20\% \\\hline
$Acc_t$ (Masked) & 65.38 & 64.89 & 50.45 & 70.51 \\
\hline
$Acc_b$ (Masked) & 16.30 & 17.26 & 21.13 & 17.14 \\
\hline
$Acc_r$ (Masked) & \multicolumn{1}{l|}{58.83} & \multicolumn{1}{l|}{42.94} & 48.40 & 58.84 \\
\hline\hline
$Neu_t$ & 20\% & 10\% & 75\% & 50\% \\
\hline
$Acc_t$ (Re-) & 85.30 & 78.97 & 57.45 & 76.43 \\
\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
& \multicolumn{1}{l|}{ADI} & \multicolumn{1}{l|}{SRE} & \multicolumn{1}{l|}{$ST_{base}$} & \multicolumn{1}{l|}{$ST_{large}$} \\
\hline\hline
$\#Neurons $& 11100 & 11100 & 2304 & 9216 \\
\hline\hline
\multicolumn{5}{|c|}{DID} \\
\hline\hline
$Acc$ (ALL) & 55.63 & 39.12 & 36.66 & 39.22 \\
\hline
$Acc$ (R.INIT) & \multicolumn{1}{l|}{20.24} & \multicolumn{1}{l|}{20.24} & \multicolumn{1}{l|}{16.70} & \multicolumn{1}{l|}{22.45} \\
\hline
$Sel_a$ & \multicolumn{1}{l|}{36.7} & \multicolumn{1}{l|}{16.89} & \multicolumn{1}{l|}{13.34} & \multicolumn{1}{l|}{ 19.20 } \\
\hline\hline
$Neu_t$ & 20\% & 20\% & 20\% & 20\% \\\hline
$Acc_t$ (Masked) & 52.75 & 31.29 & 23.14 & 36.21 \\
\hline
$Acc_b $(Masked) & \multicolumn{1}{l|}{22.33} & \multicolumn{1}{l|}{23.67} & 24.21 & 22.40 \\
\hline
$Acc_r$ (Masked) & \multicolumn{1}{l|}{34.01} & \multicolumn{1}{l|}{25.78} & 24.97 & 29.05 \\
\hline\hline
$Neu_t$ & 25\% & 5\% & 50\% & 15\% \\
\hline
$Acc_t$(Re-) & 55.43 & 40.82 & 36.01 & 38.06 \\ \hline
\end{tabular}
}
\caption{Reported accuracy (Acc) for proxy Tasks T3:LID and T4:DID using fine-grained neuron analysis. $Acc_{*}$ with t=\textit{top}, b=\textit{bottom} and r=\textit{random} neurons (\textit{Neu}). Selectivity, $Sel_{a}$, represents the control task using all (ALL) neurons. R.INIT represents accuracy using randomly initialised input features instead of embeddings from the pretrained models. Reported performance is averaged over 5 runs. }
\label{tab:task3-4}
\end{table}
\subsubsection*{T3: Language Identification (LID)}
\label{lang_neuron}
Our observations suggest a fine difference between the CNN and transformer architectures in capturing the language property. For the CNN architecture, a small neuron set (10-20\% of the total network) is enough to encode the language property, whereas the property is more distributed in the transformers (see Tables \ref{tab:base_dist}-\ref{tab:large_dist} in the Appendix).
One hypothesis for this architectural difference lies in the training objectives. Both CNN pretrained models are trained with an objective that is either a special case of language identification (the dialect identification objective -- ADI) or for which language discrimination is an important criterion for model prediction (speaker recognition -- SRE); the network thus innately captures language representations in the outer/upper layers. The transformer architectures, in contrast, are trained with a self-supervised approach -- reconstructing masked input signals -- and capture language-discriminating properties distributed throughout all the transformer layers ($|$minimal neuron set$| =$ 50-75\%). Consequently, when the minimal neuron set is selected, neurons from the last layers of the CNNs (see Table \ref{tab:sup_dist}) suffice to capture the represented information, whereas for the transformers the neurons are distributed throughout the network, each capturing some of the properties that represent language (see Tables \ref{tab:base_dist}-\ref{tab:large_dist}).
\subsubsection*{T4: Regional Dialect Identification (DID)}
Results of the neuron analysis for T4, regional dialect identification, are presented in Table \ref{tab:task3-4}, indicating high selectivity for the ADI network only.
We noticed that only the representations from the ADI pretrained model are able to capture this information. This observation holds both layer-wise (see Section \ref{t4layer}) and at the neuron level.
The ranked neurons from the ADI model indicate that regional dialectal information is neither distributed throughout the ADI network nor redundant.
To analyse the distributive nature, we first compared the accuracy of the auxiliary probe re-trained on the top-ranked neurons (the minimal set) with the `ALL' set. Using only 25\% of the network's neurons, the probe achieves an accuracy close to the `ALL' set ($Acc$ (ALL)), reflecting the capability of the minimal set to represent dialectal information in the network. Further analysis, shown in Appendix \ref{appen:cnn} for both ADI and DID, shows that the regional dialectal information is predominantly present in the upper layers of the ADI network.
To analyse redundancy in the network, we explored the masked-neuron accuracy ($Acc_*$ (Masked)) using the top and random 20\% of neurons.
We notice a large performance drop ($\approx 18.8$) between $Acc_t$ (Masked) and $Acc_r$ (Masked), indicating that dialectal information may not be redundant in the network. To strengthen this finding, we also re-trained the probe with a random 25\% of neurons (matching the size of the minimal neuron set) and compared it with $Acc_t$ (Re-trained). Using the random neurons, we obtained an accuracy of 50.32 -- a significant decrease compared to $Acc_t$ (Re-trained): 55.43 -- confirming that the regional dialectal information is not redundant in the pretrained ADI model.
\begin{table}
\centering
\scalebox{0.8}{
\begin{tabular}{l|c|c|c|c}
\hline
& ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$\#Neurons$ & 11100 & 11100 & 2304 & 9216 \\
\hline\hline
$Acc$ (ALL) & 93.93 & 85.51 & 86.80 & 96.55 \\
\hline
$Acc$ (R.INIT) &26.52 & 28.54 & 37.74 & 37.32 \\
\hline
$Sel_a$ & 63.81 & 77.65 & 68.17 & 83.76 \\
\hline\hline
$Neu_t$ & 20\% & 20\% & 20\% & 20\% \\ \hline
$Acc_t$ (Masked) & 88.60 & 86.48 & 79.40 & 88.69 \\
\hline
$Acc_b$ (Masked) & 41.00 & 36.64 & 49.69 & 33.14 \\
\hline
$Acc_r$ (Masked) & 87.49 & 75.36 & 80.28 & 88.87 \\
\hline\hline
$Neu_t$ & 10\% & 1\% & 20\% & 10\% \\
\hline
$Acc_t$ (Re-trained) & 94.56 & 85.04 & 86.27 & 95.71 \\
\hline
\end{tabular}
}
\caption{Reported accuracy (Acc) for proxy Task T5:CC using fine-grained neuron analysis. $Acc_{*}$ with t=\textit{top}, b=\textit{bottom} and r=\textit{random} neurons (\textit{Neu}). Selectivity, $Sel_{a}$, represents the control task using all (ALL) neurons. R.INIT represents accuracy using randomly initialised input features instead of embeddings from the pretrained models. Reported performance is averaged over 5 runs. }
\label{tab:channel}
\end{table}
\subsubsection*{T5: Channel Classification (CC)}
From the results in Table \ref{tab:channel}, we observe that the accuracy using all the neurons of the pretrained network, $Acc$ (ALL), significantly outperforms both the accuracy with randomly initialised input features, $Acc$ (R.INIT), and the control task ($Sel_a$).
From the neuron-level analysis, we notice that only a handful (1-20\%) of neurons can represent the property with an accuracy close to $Acc$ (ALL). The small size of this set is indicative of the pervasive nature of channel information.
Comparing the pretrained networks, we observe that, unlike in the CNNs (Table \ref{tab:sup_dist}), channel information is more distributed across the layers of the transformers (Tables \ref{tab:base_dist}-\ref{tab:large_dist}). Moreover, the masked-neuron accuracy shows that a substantial number of neurons are redundant for representing channel information, given the closeness of the top and random neuron accuracies in most of the pretrained networks.
\section{Discussion}
\label{ssec:keyobservation}
\paragraph{Localized {\em vs.} distributed properties} Our layer-wise analysis shows that properties such as gender and channel are present throughout the network, whereas language information is localized in the upper layers in the majority of the models. Different properties are thus encoded with different degrees of distribution across the pretrained networks.
In contrast, for complex properties, such as dialects and speaker voice identity, the information is encoded only in the respective task-specific network, \textbf{localised} in the upper layers.
\paragraph{Shared neurons} In our neuron analysis, we identified salient neurons for each property. We noticed that these salient neurons are sometimes \textbf{shared among properties}. For example, in the speaker recognition (SRE) pretrained model, a subset of the voice-identity neurons is shared with the gender neurons. Such polysemous neurons reflect the main training objective of the pretrained network -- speaker identification -- for which gender is an important discriminating feature.
\paragraph{Redundancy} Our neuron analysis (Table \ref{tab:task-efficacy}) showed that, for some tasks, the top and random 20\% of neurons encode the task-specific information with similar accuracy, demonstrating that the network distributes such information redundantly. For the gender and channel information, we observed that $\approx 80\%$ of neurons are redundant in both the ADI model and the transformer models. In contrast, for the dialectal property the information is not redundant and is predominantly preserved in the higher, task-specific layers of the ADI model; a similar observation was made for the voice-identity property in the speaker recognition (SRE) pretrained model. We speculate that properties closely tied to the training objective remain localized and non-redundant, while incidental properties are spread redundantly across the network.
\paragraph{Robustness to unseen speakers} Comparing the encoded properties across the pretrained networks, we observed that these networks capture \textbf{speaker-invariant representations}: the diagnostic probes perform comparably for known and unknown speakers. Unlike simply measuring a model's end-task accuracy on unseen speakers, the probing analysis localizes where in the network this invariance emerges.
\paragraph{Minimal neuron subsets} Using our neuron ranking and selection experiments, we observed that it is possible to \textbf{extract a small subset of neurons} that represents the information needed for the downstream task. With these minimal neuron sets, we obtain results comparable to (and sometimes better than) the full set of network neurons. For example, only 10-20\% of the neurons in the CNNs effectively encode language information; similarly, 25\% of the neurons in the pretrained ADI model are sufficient to represent regional dialectal information.
Such an approach can be used to select important features efficiently for downstream tasks -- reducing task and computational complexity -- and points towards applications in efficient transfer learning and model distillation.
The size of the minimal neuron set also indicates how prevalent or scarce a property is in the raw acoustic signal. The small minimal neuron sets for the channel property (1-20\% of network neurons), for example, suggest that this information is readily available in the acoustic signal and easily captured by the pretrained network.
\paragraph{Bias} Our fine-grained neuron analysis pinpoints \textbf{property-based bias present in the pretrained network} and highlights the parts of the network (neurons) responsible for encoding the information. For example, using layer- and neuron-level analysis, we show that the ADI pretrained model is more susceptible to gender bias than the other pretrained models. By identifying the neurons that capture gender in the network, one can potentially manipulate them to control the system's behavior and mitigate the bias. We leave this exploration for future work.
\paragraph{Comparing architectures} From a cross-architectural comparison of the pretrained models, we noticed that the small transformer gives relatively poor performance compared to the large transformer and the CNNs. We hypothesise that the small pretrained transformer encodes low-level information and performs well only when fine-tuned for a downstream task. In contrast, the large architectures \textbf{are better feature extractors}, given their ability to capture more meaningful and abstract information such as the language property. Our findings resonate with the previous study \cite{liu2020mockingjay}.
\paragraph{Task-specific knowledge in upper CNN layers} For the CNNs, we notice that task-oriented information (such as dialects and voice identity in the ADI and SRE pretrained models) is captured in the upper layers of the network, which encode more abstract information, whereas vocal features are mostly captured in the lower CNN layers. This reinforces the previous finding that the lower layers of a network act as feature extractors and the upper layers as a task-oriented classifier.
Moreover, in line with \cite{tay2021pretrained}, our findings suggest the potential of re-using large \textbf{CNNs as pretrained models} for transfer learning, irrespective of their pretraining objectives. Our results show that re-using pretrained CNNs can give comparable or better performance than the transformer models, with fewer computational resources.
\subsection{Potential Applications}
\label{ssec:application}
We observed that, for each studied property, it is possible to extract a minimal neuron set that performs comparably with the oracle (`ALL') set. This observation aligns with the findings in \cite{han2015learning}, demonstrating that fewer parameters can be used to represent the learned function of the network.
Identifying such a small salient neuron subset helps in understanding the network's behaviour and its predictions. Such a neuron set can be used to identify neurons or parts of the network to prune, or to find a sparse sub-network, as indicated in \cite{frankle2018lottery}.
Furthermore, these salient neurons can serve as important features for downstream tasks.
Moreover, identifying the salient neurons for each property can pinpoint sensitive parts of the network. These sensitive neurons can later be used for bias mitigation (e.g. debiasing the ADI network for the gender property) and for enhancing model generalization capabilities (e.g. for the channel property). Identifying this information thus gives us the opportunity to manipulate the responsible neurons and control the model's behavior and outcome.
The techniques adopted in this study can also facilitate a deeper understanding of a designed speech network: what non-task-oriented information is encoded; where in the network this information is captured; and which properties are redundant for a particular task. Such knowledge can help guide architectural and task design.
\subsection{Limitations}
\label{ssec:limitation}
\paragraph{Complexity of the probe} From a methodological point of view, we used a simple logistic regression classifier, motivated by the simplicity of its theoretical understanding and its strong use in the literature.
However, some studies \cite{conneau2018you} showed that a deeper classifier might be needed to capture more nuanced encoded knowledge. Linear probes are particularly important for our method, as we use the learned weights as a proxy to measure the importance of each neuron. We leave the exploration of more complex probes for the future.
\paragraph{Dependence on supervision} We used pre-defined tasks and annotations to train our probes, upon which we carry out our layer-wise and fine-grained neuron analyses.
A downside of this approach is that the analysis is limited to pre-defined properties for which annotations are available. Unsupervised analysis is required to unearth what other information is captured within the network, and whether machine-learned features correspond to human-engineered features.
Another limitation of probing classifiers is that the analysis is biased by the limitations (sparsity, genre, etc.) of the annotated data. It is important to conduct the analysis under various data conditions to corroborate the findings. We leave this exploration for the future.
\paragraph{Connecting interpretation with prediction} While probing methods are useful to analyze and pinpoint important information captured within the network, this approach does not necessarily indicate how this information is used by the network during prediction \cite{belinkov2019analysis}. For example, to eliminate bias in the output of a system, one must identify the neurons that are relevant to the property and then identify which of these neurons are critical during prediction. Combining the two pieces of information, one can effectively control the system's behavior towards that property. This is a challenging research frontier that we invite researchers in speech modeling to explore.
\section{Neuron Analysis}
\label{sec:resultlc}
In this section, we carry out a more fine-grained, neuron-level analysis of the representations. The section is divided into two parts: i) we first evaluate the efficacy of the neuron ranking algorithm; ii) we draw task-wise and architectural comparisons in light of the discovered neurons.
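As a rough illustration of the weight-based ranking idea used below (a simplified sketch, not the exact algorithm of Section \ref{ssec:fg-neuron}; the synthetic data and the informative-neuron indices are assumptions for the example), neurons can be scored by the magnitude of the probe weights assigned to them:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic embeddings in which neurons 3 and 17 carry a binary property.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 32))
y = (X[:, 3] + X[:, 17] > 0).astype(int)

probe = LogisticRegression(max_iter=1000).fit(X, y)

# Score each neuron by the absolute weight the probe assigns to it,
# summed over classes, and rank neurons from most to least salient.
scores = np.abs(probe.coef_).sum(axis=0)
ranking = np.argsort(scores)[::-1]

top_10pct = ranking[: int(0.10 * X.shape[1])]  # the top 10% of neurons
print("Most salient neurons:", ranking[:5])
```

On this toy data the two informative neurons dominate the ranking, which is the behaviour the ranking algorithm relies on.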
\begin{table}[!ht]
\centering
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{} & ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$Neurons$ & 20\% & 20\% & 20\% & 20\% \\\hline
\hline\hline
\multicolumn{5}{|c|}{T1: GC} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc_t (Masked)$ & \multicolumn{1}{r}{98.23} & \multicolumn{1}{r}{97.58} & \multicolumn{1}{r}{95.43} & \multicolumn{1}{r}{93.65} \\\hline
$Acc_b$ (Masked) & 53.36 & 58.65 & 77.13 & 55.87 \\\hline
$Acc_r$ (Masked) & 98.06 & 87.08 & 95.93 & 92.64 \\\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{} & ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$Neurons$ & 20\% & 20\% & 20\% & 20\% \\\hline
\hline\hline
\multicolumn{5}{|c|}{T3: LID} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc_t$ (Masked) & 65.38 & 64.89 & 50.45 & 70.51 \\
\hline
$Acc_b$ (Masked) & 16.30 & 17.26 & 21.13 & 17.14 \\
\hline
$Acc_r$ (Masked) & \multicolumn{1}{l|}{58.83} & \multicolumn{1}{l|}{42.94} & 48.40 & 58.84 \\ \hline
\end{tabular}
}
\medskip
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T4: DID} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc_t$ (Masked) & 52.75 & 31.29 & 23.14 & 36.21 \\
\hline
$Acc_b $(Masked) & \multicolumn{1}{l|}{22.33} & \multicolumn{1}{l|}{23.67} & 24.21 & 22.40 \\
\hline
$Acc_r$ (Masked) & \multicolumn{1}{l|}{34.01} & \multicolumn{1}{l|}{25.78} & 24.97 & 29.05 \\
\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T5:CC} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc_t$ (Masked) & 88.60 & 86.48 & 79.40 & 88.69 \\
\hline
$Acc_b$ (Masked) & 41.00 & 36.64 & 49.69 & 33.14 \\
\hline
$Acc_r$ (Masked) & 87.49 & 75.36 & 80.28 & 88.87 \\
\hline
\end{tabular}
}
\caption{Reported accuracy (Acc), indicating the efficacy of the neuron ranking and selection algorithm, for proxy Tasks T1:GC, T3:LID, T4:DID and T5:CC using masked top/bottom/random 20\% of neurons. $Acc_{*}$ with t=\textit{top}, b=\textit{bottom} and r=\textit{random} neurons.
Reported performance is averaged over 5 runs. }
\label{tab:task-efficacy}
\end{table}
\subsection{Efficacy of the Neuron Ranking}
\label{ssec:ranking_eval}
First, we evaluate the efficacy of the neuron selection algorithm. To do so, we mask out 80\% of the neurons and retain the top/random/bottom 20\% when calculating the accuracy of the classifier. Table \ref{tab:task-efficacy} presents the results on the different tasks (T1, T3-T5).
Comparing the top neurons against the bottom neurons, we found the former to always give higher accuracy than the latter, showing that our rankings are correct.
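The masking evaluation can be sketched as follows. This is a hedged, self-contained illustration on synthetic data; in our experiments the probe is instead trained on the pretrained-model embeddings, and the ranking comes from the algorithm of Section \ref{ssec:fg-neuron}.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for layer representations: only neurons 0-4 matter.
rng = np.random.default_rng(2)
X = rng.normal(size=(600, 50))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ranking = np.argsort(np.abs(probe.coef_).sum(axis=0))[::-1]

def masked_accuracy(keep):
    # Zero out every neuron except those in `keep`, then score the
    # *already trained* probe on the masked test inputs.
    mask = np.zeros(X_te.shape[1])
    mask[keep] = 1.0
    return probe.score(X_te * mask, y_te)

k = int(0.20 * X.shape[1])                  # retain 20% of the neurons
acc_top = masked_accuracy(ranking[:k])      # keep top-ranked neurons
acc_bottom = masked_accuracy(ranking[-k:])  # keep bottom-ranked neurons
print(f"top-20%: {acc_top:.2f}  bottom-20%: {acc_bottom:.2f}")
```

A correct ranking shows exactly the pattern reported in Table \ref{tab:task-efficacy}: keeping the top neurons preserves accuracy, while keeping the bottom neurons degrades it towards chance.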
\subsection{Minimal Neuron Set}
\label{ssec:ns2}
To analyze individual neurons, we extract a minimal subset of neurons for each task. We first obtain a ranking of neurons for each auxiliary task using the algorithm described in Section \ref{ssec:fg-neuron}. We then iteratively select the top N\% of neurons from the ranking until the probe gives performance comparable to the oracle (within a certain threshold). Table \ref{tab:minimal-neurons} gives the minimal number of neurons extracted for each task.
We discuss results for the different tasks below. We also use the minimal set of neurons to carry out our redundancy analysis. More specifically, we define task-specific redundancy as follows: i) if the minimal set of neurons achieves at least 97\% of the oracle performance, then the remaining neurons are redundant w.r.t.\ the task; ii) if a randomly selected N\% of neurons achieves the same accuracy as the top N\% of neurons, the information is redundant w.r.t.\ the task.
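The iterative selection against the 97\% oracle threshold can be sketched as follows (a hedged illustration; the percentage grid and the synthetic data are assumptions for the example, not our experimental configuration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def minimal_neuron_set(X_tr, y_tr, X_te, y_te, ranking, threshold=0.97):
    """Grow the top-N% neuron set until a probe re-trained on only those
    neurons reaches `threshold` times the oracle ('ALL') accuracy."""
    oracle = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    n = X_tr.shape[1]
    for pct in (0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.50, 0.75, 1.00):
        keep = ranking[: max(1, int(pct * n))]
        acc = (LogisticRegression(max_iter=1000)
               .fit(X_tr[:, keep], y_tr).score(X_te[:, keep], y_te))
        if acc >= threshold * oracle:
            return pct, acc, oracle
    return 1.0, oracle, oracle

# Synthetic demo: only neurons 0 and 1 carry the property.
rng = np.random.default_rng(3)
X = rng.normal(size=(600, 40))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
ranking = np.argsort(np.abs(probe.coef_).sum(axis=0))[::-1]

pct, acc, oracle = minimal_neuron_set(X_tr, y_tr, X_te, y_te, ranking)
print(f"minimal set: top {pct:.0%} of neurons, acc {acc:.2f} vs oracle {oracle:.2f}")
```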
\begin{table}[!ht]
\centering
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T1: GC} \\
\hline\hline
& ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$Acc$ (ALL) & 98.20 & 96.79 & 99.16 & 98.14 \\\hline
$Neu_t$ & 5\% & 50\% & 15\% & 10\% \\\hline
$Acc_t$ (Re-trained) & 98.68 & 96.54 & 98.32 & 98.14 \\\hline
$Acc_r$ (Re-trained) & 94.65 & 95.28 & 97.44 & 87.99 \\\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T3: LID} \\
\hline\hline
& ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$Acc$ (All) & 86.00 & 76.01 & 57.35 & 76.24 \\ \hline
$Neu_t$ & 20\% & 10\% & 75\% & 50\% \\
\hline
$Acc_t$ (Re-trained) & 85.30 & 78.97 & 57.45 & 76.43 \\
\hline
$Acc_r$ (Re-trained) & 82.46 & 70.00 & 55.68 & 72.53 \\\hline
\end{tabular}
}
\medskip
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T4: DID} \\
\hline\hline
& ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$Acc$ (ALL) & 55.63 & 39.12 & 36.66 & 39.22 \\ \hline
$Neu_t$ & 25\% & 5\% & 50\% & 15\% \\
\hline
$Acc_t$(Re-trained) & 55.43 & 40.82 & 36.01 & 38.06 \\ \hline
$Acc_r$ (Re-trained) & 50.32 & 37.52 & 33.45 & 31.45 \\\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T5:CC} \\
\hline\hline
& ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$Acc$ (ALL) & 93.93 & 85.51 & 86.80 & 96.55 \\
\hline
$Neu_t$ & 10\% & 1\% & 20\% & 10\% \\
\hline
$Acc_t$ (Re-trained) & 94.56 & 85.04 & 86.27 & 95.71 \\
\hline
$Acc_r$ (Re-trained) & 92.82 & 75.85 & 84.70 & 92.49 \\
\hline
\end{tabular}
}
\caption{Reported re-trained accuracy (Acc) with the minimal neuron set for proxy Tasks T1, T3, T4 and T5. $Acc_{*}$ with t=\textit{top} and r=\textit{random}; \textit{Neu}: neurons. Reported performance averaged over 5 runs. }
\label{tab:minimal-neurons}
\end{table}
\subsubsection*{T1: Gender Classification (GC)}
We observed that only a small set of neurons (5-15\%)
is sufficient to achieve
an accuracy close to the oracle performance
`ALL' ($Acc$ (ALL)), i.e., within 97\% of it.
Furthermore, we observed a
small accuracy difference (within a threshold of 5\%) between the
top and random subsets
in most of the pretrained models.
This indicates the presence of redundancy for the gender information throughout the network.
In the SRE model, we noticed that the accuracy of the probe drops when re-trained with the top 50\% of neurons, with respect to both the masked accuracy (in Table \ref{tab:task-efficacy}) and the oracle accuracy ($Acc$ (ALL)). We speculate that this behavior is due to the nature of the pretrained model and its training objective. Note that the primary objective of the pretrained SRE model is to discriminate speakers, for which gender is a first-line cue. Therefore, the oracle gender classification probe -- trained with all the neurons of the pretrained network -- outperforms the newly re-trained probe with minimal neurons, indicating that the gender property is not redundant information in an SRE model and all the neurons capture some variant information.
This is also affirmed when comparing the cardinality of the minimal neuron set of the SRE model ($50\%$ of neurons) {\em vs} the rest of the pretrained models (5-15\% of neurons).
\begin{table} [!ht]
\centering
\scalebox{0.75}{
\begin{tabular}{l|cccccc}
\hline
EER & \multicolumn{1}{c}{$L_{b}$} & $EER(L_{b})$ & \multicolumn{1}{c}{$Neu_{t}$} & \multicolumn{1}{c}{$EER_t$} & \multicolumn{1}{c}{$EER_r$} & \multicolumn{1}{c}{$EER_b$} \\
\hline\hline
\multicolumn{7}{c}{EN} \\
\hline\hline
ADI & FC1 &22.27 & 75\% & 22.03 & 22.32 & 22.50 \\
\hline
SRE & FC2 & 6.81 & 75\% & 6.96 & 6.96 & 7.05 \\
\hline
$ST_{base}$ & L3 & 28.12 & 5\% & 27.57 & 31.04 & 32.43 \\
\hline
$ST_{large}$ & L11 & 32.31 & 5\% & 26.64 & 34.11 & 39.36 \\
\hline
\hline
\multicolumn{7}{c}{ZH} \\
\hline\hline
ADI & FC1 & 13.55 & 50\% & 14.37 & 15.85 & 14.51 \\
\hline
SRE & FC2 & 5.47 & 50\% & 6.06 & 6.56 & 6.10 \\
\hline
$ST_{base}$ & L3 & 13.90 & 20\% & 13.78 & 15.19 & 20.49 \\
\hline
$ST_{large}$ & L11 & 15.90 & 5\% & 15.62 & 15.06 & 26.29 \\
\hline
\hline
\multicolumn{7}{c}{RU} \\
\hline\hline
ADI & FC1 & 13.47 & 50\% & 12.81 & 14.09 & 14.66 \\
\hline
SRE & FC2 & 4.05 & 50\% & 4.59 & 4.41 & 4.93 \\
\hline
$ST_{base}$ & L3 &16.63 & 10\% & 16.25 & 16.75 & 19.29 \\
\hline
$ST_{large}$ & L11 & 16.27 & 10\% & 9.09 & 15.04 & 24.44 \\
\hline
\end{tabular}
}
\caption{Reported equal error rate (EER) for proxy Task T2:SV using fine-grained neuron analysis. $EER_{*}$ with t=\textit{top}, b=\textit{bottom} and r=\textit{random}; \textit{Neu}: neurons. $L_b$ represents the best layer from the pretrained model with the lowest EER. $Neu_t$ represents the percentage of neurons selected. Reported performance averaged over 5 runs. }
\label{tab:sv_neuron}
\end{table}
\subsubsection*{T2: Speaker Verification (SV)}
In our layer-wise
results we observed that the speaker-variant information is present only in the last layer of the speaker recognition (SRE) model. Further studying the best-performing layer ($L_b$), we noticed that $\approx75$\% (EN) of the (last) layer neurons are used to represent this information (see Table \ref{tab:sv_neuron}).
Comparing top versus random neurons, in SRE, we found the EER of the random set (6.96) is equal to the EER of the top set (6.96). However, the obtained minimal representation does not outperform the complete FC2 neuron set. This shows that all the neurons of FC2 are relevant to the task.
\subsubsection*{T3: Language Identification (LID)}
For the CNN models, in contrast to the pretrained speech transformers, a small neuron set (10-20\% of the total network) is sufficient to encode the language property (see Table \ref{tab:minimal-neurons}).
Moreover, we observed that $\approx$ 80\% of the neurons are redundant for the language property in the ADI model only. We hypothesize that this redundancy is due to the nature of the task and the training objective of the model.
Since the core task of, e.g., ADI is to discriminate dialects, a small number of the pretrained model's neurons is effective enough to store the knowledge for the language property.
\subsubsection*{T4: Regional Dialect Identification (DID)}
In our layer-wise analysis, we noticed that only the representations from the ADI pretrained model have the ability to capture such information. This observation is further validated in the neuron analysis.
Table \ref{tab:minimal-neurons}
shows that with 25\% of the ADI network neurons, the probe achieves an accuracy comparable to the `ALL' neuron set ($Acc$ (ALL)).
Moreover, we noticed that when re-training with a random 25\% of neurons, the accuracy difference (with respect to the top neurons) is higher than the tolerated threshold. This indicates that dialectal information
is not redundant in the ADI network.\footnote{A significant drop in performance is also noticed when experimenting with top {\em vs} random Masked-Accuracy, presented in Table \ref{tab:task-efficacy}, thus reconfirming our finding.}
\begin{table}[!htb]
\centering
\scalebox{0.7}{
\begin{tabular}{l|cccccc}
\hline
Layers & CNN1 & CNN2 & CNN3 & CNN4 & FC1 & FC2 \\ \hline
\#Neu/Layers & 2000 & 2000 & 2000 & 3000 & 1500 & 600 \\ \hline
\#Neu (total) & \multicolumn{6}{c}{11100} \\
\hline
\hline
Tasks ($Neu_t$ \%) & \multicolumn{6}{c}{Network: ADI} \\
\hline
\hline
LID (20) & 0 (0.0\%) & 153 (1.4\%) & 0.4 (0.0\%) & 576.6 (5.2\%) & 1034.2 (9.3\%) & 455.8 (4.1\%) \\ \hline
DID (25) & 0 (0.0\%) & 162.2 (1.5\%) & 0.2 (0.0\%) & 776.8 (7.0\%) & 1264.2 (11.4\%) & 571.6 (5.1\%) \\\hline
GC (5) & 0 (0.0\%) & 10.6 (0.1\%) & 1 (0.0\%) & 74.8 (0.7\%) & 329.8 (3.0\%) & 138.8 (1.3\%) \\\hline
CC (10) & 0.8 (0.0\%) & 63.6 (0.6\%) & 4 (0.0\%) & 184 (1.7\%) & 608.8 (5.5\%) & 248.8 (2.2\%) \\
\hline
\hline
Tasks ($Neu_t$ \%) & \multicolumn{6}{c}{Network: SRE} \\
\hline
\hline
LID (10) & 25.8 (0.2\%) & 104.2 (0.9\%) & 102 (0.9\%) & 36.8 (0.3\%) & 577.2 (5.2\%) & 264 (2.4\%) \\
DID (5) & 28.8 (0.3\%) & 47.8 (0.4\%) & 56 (0.5\%) & 35.8 (0.3\%) & 382.2 (3.4\%) & 4.4 (0.0\%) \\
GC (50) & 938.2 (8.5\%) & 948.8 (8.5\%) & 936.4 (8.4\%) & 1394.8 (12.6\%) & 838 (7.5\%) & 493.8 (4.4\%) \\
CC (1) & 0 (0.0\%) & 0.4 (0.0\%) & 0 (0.0\%) & 0.6 (0.0\%) & 69.4 (0.6\%) & 40.6 (0.4\%) \\\hline
\end{tabular}
}
\caption{Distribution of top neurons across the ADI and SRE network for each task. Reported number of neurons averaged over 5 runs. }
\label{tab:sup_dist}
\end{table}
\subsubsection*{T5: Channel Classification (CC)}
We notice that only a handful (1-20\%) of neurons can represent the property and give an
accuracy comparable to $Acc$ (ALL).
The small
number of neurons required for the task
indicates the pervasive nature of the channel information.
From the re-trained top and random accuracies (in Table \ref{tab:minimal-neurons}), we observed that a substantial number of neurons are redundant for representing channel information in most of the pretrained networks.
\subsection{Localization {\em vs} Distributivity}
\label{ssec:ns3}
Now we highlight the parts of the networks where
the top neurons for each property predominantly reside.
We present the distribution of the top salient (minimal) neurons
across the network in Tables \ref{tab:sup_dist}-\ref{tab:large_dist}, to study how distributed or localized the spread of information is.
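The per-layer counts reported in these tables can be recovered from a flat neuron ranking with a small helper. The sketch below assumes neurons are indexed by concatenating the layers in network order (the layer sizes used in the demo are those of the ADI network from Table \ref{tab:sup_dist}); it is an illustration, not the authors' code.

```python
from bisect import bisect_right

def layer_distribution(top_neurons, layer_sizes):
    """Count how many of the selected (top) neurons fall in each layer,
    assuming flat indices over the layers concatenated in order.
    Returns {layer: (count, % of all network neurons)}."""
    bounds, total = [], 0
    for size in layer_sizes.values():
        total += size
        bounds.append(total)          # cumulative upper bound per layer
    names = list(layer_sizes)
    counts = dict.fromkeys(names, 0)
    for idx in top_neurons:
        counts[names[bisect_right(bounds, idx)]] += 1
    return {n: (c, 100.0 * c / total) for n, c in counts.items()}

# Layer sizes of the ADI network (see Table tab:sup_dist): 11100 neurons.
adi_sizes = {"CNN1": 2000, "CNN2": 2000, "CNN3": 2000,
             "CNN4": 3000, "FC1": 1500, "FC2": 600}
dist = layer_distribution([0, 1999, 2000, 11099], adi_sizes)
```

Each cell of the distribution tables then corresponds to one `(count, %)` pair.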
\begin{table}
\centering
\scalebox{0.7}{
\begin{tabular}{l|ccc}
\hline
\multicolumn{1}{l}{} & \multicolumn{3}{c}{Network: $ST_{base}$} \\
\hline\hline
Tasks ($Neu_t$ \%) & L1 & L2 & L3 \\
\hline\hline
LID (75) & 605 (26.3\%) & 556.6 (24.2\%) & 566.4 (24.6\%)~ \\\hline\hline
DID (50) & 273.2 (11.9\%) & 582.4 (25.3\%) & 296.4 (12.9\%)~ \\\hline\hline
GC (15) & 42.4 (1.8\%) & 137.2 (6.0\%) & 165.4 (7.2\%)~ \\\hline\hline
CC (20) & 124.2 (5.4\%) & 231 (10.0\%) & 104.8 (4.5\%)\\\hline
\end{tabular}
}
\caption{Distribution of top neurons across the $ST_{base}$ network for each task. Number of neurons in each layer = 768. Total neurons (`ALL') in the network = 2304. The table reports the number of neurons (\#) in each layer that are members of the top $Neu_t$\% set. Each cell also reports the \% the selected neurons represent with respect to the `ALL' neurons in the network. Reported number of neurons averaged over 5 runs. }
\label{tab:base_dist}
\end{table}
\subsubsection*{T1: Gender Classification (GC)}
Tables \ref{tab:sup_dist}-\ref{tab:large_dist}
show that
the salient neurons (e.g., 5\% of the ADI network) for the gender property
are present mostly in the upper layers.
These findings align with the previous task-specific layer- and neuron-level minimal set analyses, indicating that the information is redundantly distributed and any small set of neurons (from any part of the network -- see Table \ref{tab:minimal-neurons}, Acc$_r$)
can give results close to using the entire network.
\subsubsection*{T2: Speaker Verification (SV)}
From the minimal set, in the SRE pretrained model, we noticed that the majority of the neurons ($\approx75$\% -- EN) of the layer are needed to represent the information,
indicating that the information is distributed throughout the last layer. This aligns with our layer-wise observation. Moreover, these salient neurons are also shared with other properties such as gender. A similar observation is seen for the Chinese and Russian datasets.
\begin{table}[!ht]
\centering
\scalebox{0.7}{
\begin{tabular}{l||c||c||c||c}
\hline
\multicolumn{5}{c}{Network: $ST_{large}$} \\
\hline\hline
\multirow{2}{*}{Layers} & \multicolumn{4}{c}{Tasks ($Neu_t$ \%)} \\
\cline{2-5}
& LID (50) & DID (15) & GC (10) & CC (10) \\
\hline\hline
L1 & 423.6 (4.6\%) & 26.2 (0.3\%) & 11 (0.1\%) & 13.2 (0.1\%) \\
L2 & 171 (1.9\%) & 7.2 (0.1\%) & 6.2 (0.1\%) & 3 (0\%) \\
L3 & 124 (1.3\%) & 10.4 (0.1\%) & 2.4 (0\%) & 5.2 (0.1\%) \\
L4 & 153 (1.7\%) & 11.4 (0.1\%) & 2.6 (0\%) & 124.4 (1.3\%) \\
L5 & 247.6 (2.7\%) & 50.4 (0.5\%) & 12 (0.1\%) & 129.8 (1.4\%) \\
L6 & 266.8 (2.9\%) & 33.2 (0.4\%) & 3.2 (0\%) & 28.6 (0.3\%) \\
L7 & 305.6 (3.3\%) & 53.4 (0.6\%) & 10.8 (0.1\%) & 23 (0.2\%) \\
L8 & 327.6 (3.6\%) & 64.6 (0.7\%) & 9.8 (0.1\%) & 14.2 (0.2\%) \\
L9 & 615.6 (6.7\%) & 175.4 (1.9\%) & 74.6 (0.8\%) & 58.4 (0.6\%) \\
L10 & 662.2 (7.2\%) & 337.2 (3.7\%) & 227.6 (2.5\%) & 146 (1.6\%) \\
L11 & 686 (7.4\%) & 378.8 (4.1\%) & 284 (3.1\%) & 159 (1.7\%) \\
L12 & 625 (6.8\%) & 233.8 (2.5\%) & 276.8 (3\%) & 216.2 (2.3\%)\\\hline
\end{tabular}
}
\caption{Distribution of top neurons across the $ST_{large}$ network for each task. Number of neurons in each layer = 768. Total neurons (`ALL') in the network = 9216. The table reports the number of neurons (\#) in each layer that are members of the top $Neu_t$\% set. Each cell also reports the \% the selected neurons represent with respect to the `ALL' neurons in the network. Reported number of neurons averaged over 5 runs. }
\label{tab:large_dist}
\end{table}
\subsubsection*{T3: Language Identification (LID)}
\label{lang_neuron}
For the language property, the information is more distributed in the pretrained transformers (see Tables \ref{tab:base_dist}-\ref{tab:large_dist}), and localised in the upper layers for the CNNs (see Table \ref{tab:sup_dist}).
We hypothesize that such a difference in the models' behaviour is due to their contrasting training objectives (CNN: task-related supervision {\em vs} Transformers: self-supervision). Both CNN pretrained models are trained with an objective for which language discrimination is either a special case (dialect identification -- ADI) or an innate criterion for model prediction (speaker recognition -- SRE). Hence the neurons in the upper layers of the CNNs capture more language-representative information than the preceding layers.
\subsubsection*{T4: Regional Dialect Identification (DID)}
From the minimal set, we observed that only 25\% of the network neurons are sufficient to encode the regional dialects in ADI.
Further analysis, presented in Tables \ref{tab:sup_dist}-\ref{tab:large_dist}, shows that the regional dialectal information is predominantly localised in the upper layers of the ADI network.
\subsubsection*{T5: Channel Classification (CC)}
We observed that salient neurons are localised in the upper layers of CNNs (in Table \ref{tab:sup_dist}), whereas in transformers the information is distributed in middle and the upper layers (Table \ref{tab:base_dist}-\ref{tab:large_dist}).
\subsection{Summary}
\label{koneuron}
Our neuron analysis shows that: i) it is possible to extract a minimal set of neurons to represent the encoded information;
ii) complex tasks (e.g., voice identity)
require more neurons to encode the information compared to simple tasks (e.g., gender classification); iii) the
networks store task-specific information redundantly for
simple tasks
such as gender and channel identification; iv)
for complex tasks such as dialect identification and speaker verification, the information
is not redundant and is only captured by the task-specific network; and v)
for most of the properties, the salient minimal neurons are localised in the upper layers of the pretrained models.
\section{Conclusion}
\label{sec:concl}
In this study, we analyzed intermediate layers and salient neurons in end-to-end speech CNN and transformer architectures for speaker (gender and voice identity), language (and its variants -- dialects), and channel information.
We explored the architectures, using proxy classifiers, to investigate: whether the information is captured;
where it is learnt; how distributed or focused the representations are; the minimal number of neurons needed to represent it; and how the learning behaviour changes across pretrained models.
Our findings suggest that channel and gender information are omnipresent, with redundant information, unlike the voice identity and dialectal information, and require only a small subset (mostly 1-20\%) of neurons to be represented.
We observed that for complex tasks (e.g., dialect identification), the information is captured only in the task-oriented model, is localised in the last layers, and can be encoded using a small salient neuron set (e.g., 25\% of the network). These salient neurons are sometimes shared with other properties and are indicative of potential bias in the network. Furthermore, this study also suggests that, in the era of pretrained models, in addition to the popular transformers, CNNs are also effective as pretrained models and should be explored further.
To the best of our knowledge, this is the first attempt to conduct layer-wise and neuron-level analysis, putting pretrained speech models under the microscope.
In future work, we plan to extend the study to other available architectures, such as autoencoders, with low-level information such as phonemes and graphemes, and to dig deeper into class-wise properties.
\section{Layer-wise Analysis}
\label{sec:resultlc}
We first discuss the results from training layer-wise proxy classifiers, addressing the following questions: i) RQ1: whether the understudied property is captured in the network; ii) RQ2: which parts of the network predominantly learn this property. We compare the results with the majority baseline (assigning the most frequent class to every input) and with the oracle (a classifier trained using all the network layers, `ALL'), along with the other control tasks mentioned in Section \ref{ssec:ns1}.
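As a concrete picture of layer-wise probing, the sketch below scores one layer's frozen representations with a simple classifier and compares it with the majority baseline. We use a nearest-class-mean classifier purely as a lightweight stand-in for the paper's proxy classifier; the data and setup are synthetic assumptions.

```python
import numpy as np

def majority_baseline(labels):
    """Accuracy of always predicting the most frequent class."""
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / len(labels)

def probe_layer(train_x, train_y, test_x, test_y):
    """Nearest-class-mean probe: scores how separable the property is
    in one layer's representations."""
    classes = list(np.unique(train_y))
    means = np.stack([train_x[train_y == c].mean(axis=0) for c in classes])
    # distance of every test point to every class mean -> nearest wins
    dists = np.linalg.norm(test_x[:, None, :] - means[None, :, :], axis=2)
    pred = np.asarray(classes)[dists.argmin(axis=1)]
    return float((pred == test_y).mean())

# Synthetic "layer" where the probed property (two classes) is separable.
rng = np.random.default_rng(0)
x0 = rng.normal(-3.0, 0.5, size=(50, 8))
x1 = rng.normal(3.0, 0.5, size=(50, 8))
train_x = np.vstack([x0[:40], x1[:40]])
train_y = np.array([0] * 40 + [1] * 40)
test_x = np.vstack([x0[40:], x1[40:]])
test_y = np.array([0] * 10 + [1] * 10)
layer_acc = probe_layer(train_x, train_y, test_x, test_y)
```

Running this per layer yields curves like those in Figures \ref{fig:speaker_info_gc}-\ref{fig:ch_info_lid}; a layer whose probe accuracy sits near the majority baseline does not encode the property.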
\subsection{Control Tasks}
\label{ssec:ns1}
We report the baseline performances -- the majority baseline and random initialisation of neuron weights in place of embeddings from the pretrained models -- in Table \ref{tab:task-baseline} and compare them with our oracle (ALL) results.
To show that the reported performance reflects the strength of the encoded representation rather than the probe's capacity to memorise, we present selectivity ($Sel_a$) in Table \ref{tab:task-baseline}. We show that the oracle, $Acc$ (ALL), significantly outperforms both the probes with randomly initialised weights and the control task -- selectivity (described in Section \ref{ssec:control}).
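For readers unfamiliar with the selectivity control, the sketch below shows the idea: a control task assigns each input a fixed random label, so a probe can only solve it by memorising, and selectivity is the gap between real-task and control-task accuracy. The helper names and the per-utterance relabelling scheme are our assumptions about the setup described in Section \ref{ssec:control}.

```python
import random

def control_task_labels(example_ids, label_set, seed=0):
    """Assign every distinct example id a fixed random label.  The
    resulting task carries no signal about the real property, so probe
    accuracy on it measures pure memorisation capacity."""
    rng = random.Random(seed)
    mapping = {}
    return [mapping.setdefault(eid, rng.choice(label_set))
            for eid in example_ids]

def selectivity(task_acc, control_acc):
    """High selectivity => the probe reads out information encoded in
    the representation rather than memorising the training set."""
    return task_acc - control_acc

# Repeated ids always get the same (random but fixed) label.
labels = control_task_labels(["utt1", "utt2", "utt1", "utt3"], ["M", "F"])
```

A large $Sel_a$, as in Table \ref{tab:task-baseline}, therefore indicates genuinely encoded information.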
\begin{table}[!htb]
\centering
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{} & ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$\#Neurons $& 11100 & 11100 & 2304 & 9216 \\
\hline\hline
\multicolumn{5}{|c|}{T1: GC} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc$ (Maj-C) &\multicolumn{4}{c|}{56.70} \\
\hline
$Acc$ (ALL) & 98.20 & 96.79 & 99.16 & 98.14 \\ \hline
$Acc$ (R.INIT) & 68.14 & 68.14 & 56.17 & 56.60 \\\hline
$Sel_a$ & 42.78 & 67.28 & 52.83 & 72.53 \\\hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|l|}{} & ADI & SRE & $ST_{base}$ & $ST_{large}$ \\
\hline\hline
$\#Neurons $& 11100 & 11100 & 2304 & 9216 \\
\hline\hline
\multicolumn{5}{|c|}{T3: LID} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc$ (Maj-C) &\multicolumn{4}{c|}{14.96} \\
\hline
$Acc$ (All) & 86.00 & 76.01 & 57.35 & 76.24 \\
\hline
$Acc$ (R.INIT) & \multicolumn{1}{c}{13.20} & \multicolumn{1}{c|}{13.20} & \multicolumn{1}{c|}{15.58} & \multicolumn{1}{c|}{14.23} \\
\hline
$Sel_a$ & \multicolumn{1}{c|}{75.69} & \multicolumn{1}{c|}{69.20 } & \multicolumn{1}{c|}{41.18 } & \multicolumn{1}{c|}{61.76} \\ \hline
\end{tabular}
}
\medskip
\scalebox{0.7}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T4: DID} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc$ (Maj-C) &\multicolumn{4}{c|}{23.06} \\
\hline
$Acc$ (ALL) & 55.63 & 39.12 & 36.66 & 39.22 \\
\hline
$Acc$ (R.INIT) & \multicolumn{1}{c|}{20.24} & \multicolumn{1}{c|}{20.24} & \multicolumn{1}{c|}{16.70} & \multicolumn{1}{c|}{22.45} \\
\hline
$Sel_a$ & 36.7 & 16.89 & 13.34 & 19.20 \\ \hline
\end{tabular}
\quad
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{T5:CC} \\
\hline\hline
& \multicolumn{1}{c|}{ADI} & \multicolumn{1}{c|}{SRE} & \multicolumn{1}{c|}{$ST_{base}$} & \multicolumn{1}{c|}{$ST_{large}$} \\
\hline\hline
$Acc$ (Maj-C) &\multicolumn{4}{c|}{32.12} \\
\hline
$Acc$ (ALL) & 93.93 & 85.51 & 86.80 & 96.55 \\
\hline
$Acc$ (R.INIT) &26.52 & 28.54 & 37.74 & 37.32 \\
\hline
$Sel_a$ & 63.81 & 77.65 & 68.17 & 83.76 \\ \hline
\end{tabular}
}
\caption{Reported accuracy (Acc) for the proxy Tasks T1:GC, T3:LID, T4:DID and T5:CC using the majority baseline (Maj-C), the oracle (ALL: with all neurons), random initialisation (R.INIT) of neuron weights, and selectivity ($Sel_a$). Reported performance averaged over 5 runs. }
\label{tab:task-baseline}
\end{table}
\subsection{Encoded Properties}
\label{t1layer}
\subsubsection*{T1: Gender Classification (GC)}
\begin{figure}[!htb]
\centering
\scalebox{0.8}{
\includegraphics[width=\linewidth]{figures/gender_classification_layers.png}
}
\caption{Reported accuracy (Acc) for proxy Task T1:GC using intermediate network layers. Figure \ref{fig:speaker_info_gc}a presents the layer-wise performance of ADI and SRE models and Figure \ref{fig:speaker_info_gc}b presents the layer-wise accuracy for $ST_{base}$ and $ST_{large}$. Reported performance averaged over 5 runs. Majority baseline (assigning most frequent class): 56.70 accuracy.}
\label{fig:speaker_info_gc}
\end{figure}
Our layer-wise results reported in Figure \ref{fig:speaker_info_gc} show that gender information is encoded in all layers of the network, indicating the property's distributive nature.
Comparing the pretrained models, we noticed the model trained towards the ADI task is more susceptible to gender information than those trained towards other downstream tasks. We speculate that the ADI model does poorly with the higher-pitched and breathier (mainly female) voices due to a high gender imbalance in its training data (with $< \frac{1}{3}$ female speakers). In comparison, the other studied pretrained models (SRE and transformers) do not exhibit this problem. Data imbalance in gender representation is unfortunately very common in speech data \cite{garnerin2020gender}, and in most cases it affects the performance of speech models (e.g., ASR). Our layer- and neuron-wise analyses (Section \ref{ssec:ns2}) identify the relevant parts of the network that highlight such bias. Such an analysis may be useful for debiasing the network (e.g., for gender or other properties). We leave this exploration for the future.
Next we demonstrate how redundantly information is distributed in the network.
Task-specific redundancy \cite{dalvi2020analyzing} highlights the parts/features of the network, that are redundant with respect to a downstream task - e.g. gender classification.
We found that for the CNN architecture, the layers above CNN4 are redundant with respect to the understudied task. In other words, the higher layers do not possess features that improve performance on this task by more than 1\% beyond the best performing layer. We found this to be true in both the SRE and ADI models.
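The 1\% criterion above can be stated as a small helper (an illustrative formalisation with hypothetical layer accuracies, not the authors' code): find the best performing layer, then flag every higher layer whose accuracy does not exceed the best by more than the tolerance.

```python
def redundant_upper_layers(layer_acc, tolerance=1.0):
    """Layers above the best-performing one are redundant for a task if
    they do not beat the best accuracy by more than `tolerance` points.
    `layer_acc` maps layer name -> probe accuracy, in network order."""
    names, accs = list(layer_acc), list(layer_acc.values())
    best_idx = max(range(len(accs)), key=accs.__getitem__)
    best = accs[best_idx]
    return [name for name, acc in zip(names[best_idx + 1:],
                                      accs[best_idx + 1:])
            if acc <= best + tolerance]

# Hypothetical gender-probe accuracies peaking at CNN4.
acc = {"CNN1": 70.2, "CNN2": 80.1, "CNN3": 85.4,
       "CNN4": 90.0, "FC1": 89.8, "FC2": 89.1}
```

Here `redundant_upper_layers(acc)` returns `['FC1', 'FC2']`, matching the observation that the layers above CNN4 are redundant for gender.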
\subsubsection*{T2: Speaker Verification (SV)}
\label{t2layer}
We noticed that the majority of the models learn a speaker-invariant representation (T2:SV -- see Figure \ref{fig:speaker_info_sv}), and thus perform poorly at the speaker verification task.
This demonstrates the robustness of the models towards unknown speakers and their ability to generalize. Comparing the performance across networks, we noticed that only the speaker recognition model (SRE) learned voice identity, in the final layers of the network. These findings were also consistent for the models trained on the other languages, Chinese and Russian (see Figure \ref{fig:speaker_info_sv}).
\begin{figure}[!ht]
\centering
\scalebox{0.8}{
\includegraphics[width=\linewidth]{figures/sv_layers.png}
}
\caption{Reported Equal Error Rate (EER) for proxy Task T2:SV using intermediate network layers. Figure \ref{fig:speaker_info_sv}a presents the layer-wise performance of the ADI and SRE models and Figures \ref{fig:speaker_info_sv}b-c present the layer-wise EER for $ST_{base}$ and $ST_{large}$. Reported for English - EN, Chinese - ZH and Russian - RU. For EER, lower is better. }
\label{fig:speaker_info_sv}
\end{figure}
\subsubsection*{T3: Language Identification (LID)}
\label{t3layer}
In the language information task (T3:LID -- see Figure \ref{fig:language_info_lid}a), we found that language markers are encoded in the upper layers (predominantly the second-to-last layer) of most of the understudied models. This shows the importance of language information for the speaker and dialect identification tasks.
Additionally, this emphasises that the self-supervised models also capture language information as subordinate information when encoding the input signal.
Comparing architectures, we observed the CNNs to be better at capturing the language representation than the transformers (see Section \ref{lang_neuron} for further explanation). We also found that the ADI model performed particularly well in our probing task, especially using the CNN4 layer. This shows that a model that learns to distinguish between dialects is also proficient at discriminating languages and can be fine-tuned for language identification tasks.
\begin{figure}[!htb]
\centering
\scalebox{1.0}{
\includegraphics[width=\linewidth]{figures/lid_layers.png}
}
\caption{Reported accuracy (Acc) for proxy Tasks T3:LID and T4:DID using intermediate network layers. Figure \ref{fig:language_info_lid}a presents the layer-wise performance of all the pretrained models for the language identification task and Figure \ref{fig:language_info_lid}b presents the layer-wise accuracy for dialect identification task. Reported performance averaged over 5 runs. Majority baseline: T3 - 14.96 and T4 - 23.06 accuracy.}
\label{fig:language_info_lid}
\end{figure}
\subsubsection*{T4: Regional Dialect Identification (DID)}
\label{t4layer}
While probing for regional dialectal information, we found that the discriminating properties of dialectal information are not captured in most of the networks (see Figure \ref{fig:language_info_lid}b), reflecting the complexity of distinguishing between dialects.
We found that only the task-specific ADI model is able to capture the dialectal variation, in the upper layers (CNN4-FC2) of the network. This suggests that the original pretrained models do not capture sufficient information, and that task-specific supervision is required to solve this complex task, whose features are then preserved in the upper layers of the model.
\subsubsection*{T5: Channel Classification (CC)}
\label{t5layer}
Similar to the gender information task (T1:GC), we observed that channel information (see Figure \ref{fig:ch_info_lid}) is omnipresent in the network: all layers of the network perform consistently well on this task.
This capability of the model to capture channel information can be misleading as to whether the model has learned the desired property or is just discriminating environmental factors.
This type of misinterpretation occurs mainly when incorporating data from varying environments (e.g., variation in microphones, or in the source of data -- broadcast TV, YouTube, among others).
\begin{figure}[!ht]
\centering
\scalebox{0.8}{
\includegraphics[width=\linewidth]{figures/ch_layers.png}
}
\vspace{-0.3cm}
\caption{Channel classification layer-wise accuracy. Reported performance averaged over 5 runs. Majority baseline (assigning most frequent class): 32.12 accuracy.}
\label{fig:ch_info_lid}
\vspace{-0.5cm}
\end{figure}
\subsection{Summary}
\label{kolayer}
Our layer-wise analysis shows that: i) channel and gender information are encoded and distributed throughout the pretrained networks; ii) the gender information is redundantly encoded in the ADI and SRE pretrained models; iii) the ADI network is more prone to gender bias compared to the others; iv) all the pretrained models (apart from SRE) learn speaker-invariant representations; v) the voice identity is encoded only in the upper layers of the SRE model; vi) the language property is captured in the upper layers of the pretrained models;
vii) pretrained models trained towards the task of dialect identification are a better choice for transferring knowledge to the language identification downstream task; and viii) unlike the language property, the regional dialectal information is captured only in the task-specific network (ADI), in the upper (task) layers of the network.
\section{Related Work}
\label{sec:related}
Given their black-box nature, the rise of Deep Neural Networks has seen a subsequent rise in interpretability studies.
One of the commonly used interpretation techniques is the probing-tasks, or diagnostic-classifiers, framework.
This approach has been used to probe for different linguistic properties captured within the network. For example, researchers probed for: (i) morphology using attention weights \cite{ghader2017does}, or recurrent neural network (RNN)/transformer representations \cite{peters2018dissecting,shi2016does,blevins2018deep}, in neural machine translation (NMT) \cite{belinkov2017neural,dalvi2017understanding} and language model (LM) \cite{dalvi2019one, dalvi2019neurox} neurons;
(ii) anaphora \cite{voita2018context}; (iii) lexical semantics with LM and NMT states \cite{belinkov2017neural,dalvi2017understanding}; and (iv) word presence \cite{liu2018lstms}, subject-verb agreement \cite{linzen2016assessing}, relative islands \cite{chowdhury2018rnn}, number agreement \cite{gulordava2018colorless}, semantic roles \cite{ettinger2016probing}, and syntactic information \cite{shi2016does,linzen2016assessing,conneau2018you,chowdhury2018rnn,merlo2019probing}, among others, using hidden states. A detailed comprehensive survey is presented in \cite{belinkov2019analysis}.
In the arena of speech modeling, a handful of properties have been examined namely: (i) utterance length, word presence, homonym disambiguation using audio-visual RNN states \cite{chrupala2017representations}; (ii) phonemes and other phonetic features \cite{belinkov2017analyzing} using CNN activation in end-to-end speech recognition, \cite{nagamine2015exploring,nagamine2016role, chaabouni2017learning} using activations in feed-forward network of acoustic model, \cite{silfverberg2021rnn} using RNNs; along with (iii) formants and other prosodic features examined in the CNNs trained from raw speech \cite{beguvs2021interpreting}; (iv) gender \cite{nagamine2015exploring, wang2017does, chowdhury2020does}; (v) speaker information \cite{wang2017does, chowdhury2020does}, style and accent \cite{elloumi2018analyzing} using network activations; (vi) channel \cite{wang2017does, chowdhury2020does} using activations; and (vii) fluency, pronunciation and other audio features from transformers \cite{shah2021all}.
Apart from classification, other methods for finding association includes:
\begin{itemize}
\setlength\itemsep{-0.3em}
\item Computing correlations; for e.g. with other acoustic features \cite{wu2016investigating};
\item Regression; e.g. sentence length with encoder neurons from NMT \cite{shi2016neural};
\item Clustering; e.g. word class using A/V CNN embeddings \cite{harwath2017learning};
\item Detecting change point of activation; e.g. using RNN gates for finding phoneme boundaries \cite{wang2017gate};
\item Visualisation; e.g. in deep CNN models by sampling image patches \cite{girshick2016,zhou6856object,bauvisualizing} or by generating images \cite{nguyen2016synthesizing} that maximize the activation of each individual neuron among others.
\end{itemize}
In line with the work done in \cite{chowdhury2020does,dalvi2019one}, we also employ proxy classifiers to understand the encoded information in pretrained deep learning networks.
However, our approach differs from theirs, as we analyzed different types of pretrained models with varying objective functions and architectures. In addition to the dialectal model, ADI, used in \cite{chowdhury2020does}, we also examined a speaker recognition model of similar architecture. Moreover,
the current study is the first to probe pretrained speech models with a fine-grained neuron-level analysis.
Unlike the studies \cite{wang2017does, chowdhury2020does}, here we analyzed individual neurons, or groups of neurons, that can effectively encode the properties. Drawing motivation from \cite{dalvi2019one} -- where the authors probed the neurons of pretrained language models for linguistic properties -- this is the first attempt to put pretrained speech networks under a microscope and examine the efficacy of their neurons in encoding the speaker, language and channel properties.
\section{Introduction}
\label{sec:intro}
Deep Neural Networks have constantly pushed the state of the art in speech technologies, for example automatic speech recognition (ASR) \cite{amodei2016deep,miao2015eesen,pratap2019wav2letter++,chan2016listen,chowdhury2021onemodel,ali2021csmodel}, pretrained speech transformers \cite{baevski2020wav2vec,liu2020mockingjay,liu2020tera,chi2020audio}, and dialect, language and speaker identification \cite{jin2017end, trong2016deep, heigold2016end, nagrani2017voxceleb, snyder2017deep, shon2018convolutional, snyder2018x} models;
along with other fields in Artificial Intelligence, including Natural Language Processing (NLP) \cite{deng2018deep} and Computer Vision (CV) \cite{voulodimos2018deep}. While end-to-end deep architectures are simple, elegant and provide a flexible training mechanism, they are inherently black-boxes. Unlike the traditional models they lack the explanation of what information is captured within the learned representations, and how it is encoded and used in the decision process. This opaqueness hinders practitioners from understanding the internal mechanics of the models and the causation process which is critical for debugging, ethical decision making and fairness in these systems \cite{doshi2017towards,lipton2018mythos}.
To this end, researchers have investigated deep neural models for auxiliary knowledge they encode in the learned representation through probing classifiers \cite{dalvi2019one, dalvi2019neurox,belinkov2017neural,dalvi2017understanding}, via visualizations \cite{zhang2018visual,zeiler2014visualizing}, through ablation studies \cite{sheikholeslami2021autoablation}, and using unsupervised methods \cite{harwath2017learning}.
Work done using \emph{Diagnostic Classifiers} analyzes representations at the level of individual layers \cite{elloumi2018analyzing, chowdhury2020does,shah2021all,wang2017does}, attention heads \cite{ghader2017does} and at a more fine-grained neuron level \cite{dalvi2019one,durrani2020analyzing, qian2016analyzing}. These studies reveal interesting findings such as how different linguistic properties (such as morphology and syntax) are captured in different parts of the network, and how certain properties are more localized or distributed than others.
While some work has been done to interpret representations in speech models \cite{beguvs2021interpreting, elloumi2018analyzing, chowdhury2020does,shah2021all,wang2017does,becker2018interpreting}, no prior work has been carried out to perform a neuron-level analysis. Analyzing individual neurons facilitates a deeper understanding of the network \cite{girshick2016, nguyen2016synthesizing, bau2017network,dalvi2019one,durrani2020analyzing, qian2016analyzing,shi2016neural} and entails many potential benefits, such as manipulating the system's output \cite{indivdualneuron:arxiv19} while debiasing the network w.r.t. certain properties (like gender or racial elements), model distillation and compression \cite{rethmeier2019txray,frankle2018lottery}, domain adaptation \cite{gu2021pruningthenexpanding}, feature selection for downstream tasks \cite{dalvi2020analyzing}, and guiding architectural search. To develop a better understanding of the learned representations, along with the presence of bias and redundancy, we carry out a layer- and neuron-level analysis on the speech models.
Speech signals are long, have variable lengths and are of complex hierarchical structure. Learning meaningful information in such a continuous modality is difficult. Moreover, the environmental\footnote{e.g. \textit{channel information} -- signal recording and transmission quality of the speech.} and speaker\footnote{\textit{voice identity, gender, age, language.}} influenced variables in speech models impose different characteristics on the acoustic input, resulting in added complexity, thus making it harder to understand models' decisions. For example, in speaker recognition, does the model learn speaker identity or environmental properties such as the microphone?
In this study, we focus on post-hoc functional interpretability of pretrained speech models.
We carry out a layer-wise and fine-grained neuron-level analysis on the pretrained speech models for (a) speaker -- gender and voice identity; (b) language and its dialectal variants; and (c) channel information, using utterance-level representations. We frame our study around the following research questions:
\textit{(RQ1)} Is the understudied information (e.g. channel) captured in the network's learned representation? \textit{(RQ2)} Where in the network is it preserved and how localized or distributed is it?
\textit{(RQ3)} Can we extract a minimal number of neurons that can represent the same information? and \textit{(RQ4)} How do the learning patterns change across different pretrained models?
We use a framework based on probing classifiers \cite{hupkes2018visualisation,conneau2018you}. The idea is to extract the representations (of speech utterances) from the understudied pretrained model and train an auxiliary classifier towards predicting the studied properties. The accuracy of the classifier serves as a proxy to the quality of the learned representation with respect to that property. We train classifiers using different layers to carry our layer-wise analysis and then do a more fine-grained analysis by probing for neurons that capture these properties.
We investigate 4 pretrained end-to-end architectures -- 2 CNN architectures trained towards the task of speaker recognition and dialect identification;
and 2 transformer architectures trained to reconstruct the masked signal. Our cross architectural analysis shows that:
\begin{itemize}
\setlength\itemsep{0em}
\item \textit{Task complexity and Minimal neuron set:} Fewer neurons ($\approx$ 1-5\%) are required to encode simple properties (e.g. gender) than complex properties such as regional dialect identification or voice identification. ($\rightarrow$ RQ1, RQ3)
\item \textit{Task-specific Redundancy:} Layer-wise and neuron-level task-specific redundancy is observed in capturing gender and channel information. These phenomena are distributed throughout the network as redundant information, and can thus be captured with a small number of neurons from any part of the network. Such redundancy is also noticed when encoding the language property. ($\rightarrow$ RQ3)
\item \textit{Localized vs Distributive:} Salient neurons are localised in the final layers of the pretrained models, indicating that the deeper neurons of the network are more informative than their predecessors and capture more abstract information. This observation holds for both simple and complex tasks. ($\rightarrow$ RQ2)
\item \textit{Robustness:} Most of the pretrained models learn speaker-invariant information. ($\rightarrow$ RQ1, RQ4)
\item \textit{Polysemous Neurons:} The salient neurons are sometimes shared between multiple properties in a pretrained model. For example, some neurons were found to be salient for both voice identity, and gender properties. ($\rightarrow$ RQ2, RQ3)
\item \textit{Bias:} It is possible to pinpoint neurons responsible for sensitive information (e.g., gender) and can be used in future to mitigate model bias. ($\rightarrow$ RQ3)
\item \textit{Pretrained Models for Transfer learning:} Pretrained CNNs give performance comparable to (and even outperform) the speech transformers and can easily be used as feature extractors (or for fine-tuning) in downstream tasks. In line with \cite{tay2021pretrained}, our findings suggest the potential of re-using large CNNs as pretrained networks with similar efficacy as the pretrained speech transformers. ($\rightarrow$ RQ4)
\end{itemize}
\noindent To the best of our knowledge, this is the first attempt to carry out a layer-wise and fine-grained neuron-level analysis on pretrained CNN and Transformer speech models.
\section{Results and Discussion}
\label{sec:result}
\subsection{Speaker Information}
\subsubsection{T1: Gender Classification, GC}
\subsubsection{T2: Speaker Verification, SV}
\subsection{Language Information}
\subsubsection{T3: Language Identification, LID}
\subsubsection{T4: Regional Dialect Identification, DID}
\subsection{Channel Information}
\subsection{Key Observations}
\subsection{Limitations}
\section{Conclusion}
\label{sec:concl}
Our findings suggest that channel and gender information are omnipresent across the network and require only a small subset of neurons, signaling the simplicity of these tasks.
As for voice identity, the information is captured in the last layer of the speaker recognition model, utilising the majority of the neurons. Furthermore, our study showed that the neurons (50-75\%) encoding voice identity are also shared with other properties, mainly gender and language.
From probing the language and dialectal information, we observed that the CNN architectures capture the information using a small number of salient neurons, indicating the presence of redundancy in the network. We also observed how hard it is for a small pretrained transformer architecture, without any supervision, to encode language and dialect information, and how increasing the model parameters can help.
Our study also showcases how the size of the minimal neuron set reflects the task complexity and the presence of information. In addition, we re-confirm the finding that the lower layers of the network act as feature extractors and do not always encode task-oriented properties. Nonetheless, they are important for capturing general and meaningful representations that can encapsulate the structure of the high-dimensional input signal.
\section{Methodology}
\label{sec:methods}
Our methodology is based on the probing framework known as \emph{Diagnostic Classifiers}. We train a
classifier using the activations generated from the pretrained neural network $\mathds{M}$, with $L$ layers: $\{l_1, l_2, \ldots, l_L\}$, as static features, towards the task of predicting a certain property (see Figure \ref{fig:pipeline}). The underlying
assumption is that if the classifier can predict the property,
the representations implicitly encode this information. We train layer- and neuron-wise probes using logistic-regression classifiers. More details are given in Section \ref{ssec:ul-representation} and Section \ref{ssec:proxy_classifier}.
We extract representations from the individual layers for our layer-wise analysis and from the entire network for the neuron-level analysis. We use
\emph{Correlation Analysis} to generate a neuron ranking with respect to the understudied property: given the trained classifier,
the algorithm extracts a ranking of the neurons
in the model based on the weight distribution. The entire process is presented in Figure \ref{fig:pipeline}.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{figures/meth4.png}
\caption{The experimental pipeline of the study.
Figure \ref{fig:pipeline}(a) shows the steps to convert the frame-level representation, $\ddot{\mathcal{R}}_l$, to the utterance-level representation, $\mathcal{R}_l$, using average/statistical pooling.
Figure \ref{fig:pipeline}(b) presents the proxy classifier, and Figures \ref{fig:pipeline}(c) and \ref{fig:pipeline}(d) indicate how the trained weights from the proxy classifier are used to rank the neurons.}
\label{fig:pipeline}
\end{figure}
Our designed methodology serves two goals in this study: \textit{(i)} analyzing speech models to understand what properties are encoded in the networks, and \textit{(ii)} how localized or distributed these properties are across different layers in the network.
More specifically, we probe for the following properties: \textit{(a)} Speaker Information -- gender and voice identity; \textit{(b)} Language Information -- language and dialect identification; and \textit{(c)} transmission channel information.
\subsection{Utterance-level Representation}
\label{ssec:ul-representation}
For a given temporally ordered feature sequence input, with $F$ frames and feature dimension $D$ ($D \times F$), we first extract the latent frame-level, $\ddot{\mathcal{R}}_l$, or utterance-level, $\mathcal{R}_l$, speech representation from the layers ($l$) of the pretrained neural network model, $\mathds{M}$.
Since our goal is to study the utterance-level representation $\mathcal{R}_l$, for a layer $l$, we aggregate the frame-level representations by passing them through a statistical/average pooling function ($\varphi^{[l]}$), $\mathcal{R}_l = \varphi^{[l]}(\ddot{\mathcal{R}}_l)$.
For the entire network representation, we concatenate the layers ($l$) to obtain $ALL = \mathcal{R}_{l_1} + \mathcal{R}_{l_2} + \ldots + \mathcal{R}_{l_L}$.
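The aggregation above can be sketched as follows (a minimal NumPy illustration with hypothetical dimensions; the actual pooling functions are internal to the pretrained models):

```python
import numpy as np

def utterance_repr(frames, statistical=True):
    """Aggregate a frame-level representation (D x F) to utterance level.

    Average pooling keeps the mean over the frame axis; statistical
    pooling concatenates mean and standard deviation.
    """
    mean = frames.mean(axis=1)
    if not statistical:
        return mean                                     # D values
    return np.concatenate([mean, frames.std(axis=1)])   # 2D values

def network_repr(layer_frames):
    """Network-level 'ALL' representation: concatenate per-layer vectors."""
    return np.concatenate([utterance_repr(f) for f in layer_frames])
```

Note that statistical pooling doubles the dimensionality per layer (mean plus standard deviation), so the concatenated `ALL` representation grows with both depth and layer width.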
\subsection{Proxy Classifier}
\label{ssec:proxy_classifier}
We then design proxy classifiers, $\mathds{M}_{T}$, for a task $\mathcal{T}$.
Given a dataset of $N$ utterances, $\mathbb{D}=\{u_1, u_2, ..., u_N\}$, with a corresponding set of annotation classes $C=\{c_1, c_2, ..., c_C\}$, we map each utterance $u_i$ in the data to its latent utterance-level representation using the pretrained model $\mathds{M}$.
As a proxy classifier, we opt for a simple linear model -- logistic regression -- trained by minimising the cross-entropy loss, $H$. The trained classifier, $\mathds{M}_{T}$, is then used to evaluate the strength of the input representation by measuring classification performance.
The probe not only provides insights into the strength of the encoded information, but also has the potential to identify informative neurons in the network. For the fine-grained neuron analysis, we modified the proxy classifier by adding elastic-net regularization \cite{zou2005regularization}, as shown below:
\begin{equation}
\label{eq:loss}
\mathcal{L}(\theta)= H_{\theta}+ \lambda_1\left \|\theta \right \| _{1} + \lambda_2\left \|\theta \right \| _{2}^{2}
\end{equation}
\noindent where $\theta$ denotes the learned weights of the classifier, and $\lambda_1\left \|\theta \right \| _{1}$ and $\lambda_2\left \|\theta \right \| _{2}^{2}$ correspond to the $L_{1}$ and $L_{2}$ regularization terms. The combination of $L_{1}$ and $L_{2}$ regularization creates a balance between selecting very focused localised features ($L_{1}$) \emph{vs} distributed neurons ($L_{2}$) shared among many properties.
Using the modified loss with the $\lambda_*$ parameters, we trained the proxy classifiers and used the learned weights to measure the importance of each neuron.
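A minimal sketch of such a probe (binary case, plain NumPy subgradient descent; the actual classifiers are multi-class and trained with Adam, see Section \ref{ssec:model_setting}):

```python
import numpy as np

def train_probe(X, y, lam1=1e-5, lam2=1e-5, lr=0.1, epochs=300):
    """Binary logistic-regression probe with elastic-net regularization.

    Minimizes cross-entropy + lam1*||theta||_1 + lam2*||theta||_2^2
    by (sub)gradient descent. X: (N, D) activations, y: (N,) in {0, 1}.
    """
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ theta)))   # sigmoid
        grad = X.T @ (p - y) / n                 # cross-entropy gradient
        grad += lam1 * np.sign(theta) + 2 * lam2 * theta  # elastic net
        theta -= lr * grad
    return theta

def neuron_importance(theta):
    """|weight| of each input neuron serves as its importance score."""
    return np.abs(theta)
```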
\subsection{Selecting Salient Neurons}
\label{ssec:fg-neuron}
Our goal is to investigate which intermediate neurons are more salient with respect to certain task properties.
For this, we first rank the neurons based on their importance per class label $c$. Given the trained proxy classifier, $\mathds{M}_{T}$, we first sort the absolute weight values of the neurons, $|\theta_{n}^{c}|$, from $\mathds{M}_{T}$ in descending order and calculate the cumulative weight vector. We then select the top $p$\% of neurons, corresponding to the percentage of the cumulative weights of the total mass.
Finally, we combine all the selected class-wise neurons to get the overall top neurons of the network (see details in Algorithm 1).
To obtain task-wise salient neurons, an initial small percentage ($p=0.1\%$) of the total mass is used to find the salient neurons for all the classes. Then, the percentage is iteratively increased, while adding the newly discovered important neurons ($salient\_neurons$ $\leftarrow$ TopNeurons($\theta$,p) $\setminus$ $salient\_neurons$) to the list ordered by importance towards the task \cite{dalvi2019neurox}. The algorithm terminates when the salient neurons obtain accuracy close to the \emph{Oracle} (accuracy using the entire network) within a certain threshold. See Section \ref{sec:minimalNeurons} for details.
\begin{algorithm}[h]
\label{algo:top}
\caption{Top Neuron Ranking}
\SetAlgoLined
\SetKwFunction{FMain}{TopNeurons}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\FMain{$\theta, p$}}{
$\theta_{top, c} \longleftarrow [ ][ ]$ \Comment{To store top neurons per class-label}
\ForEach{$ class, c \in C $}
{
$ tm \longleftarrow \sum_{n=1}^{N}|\theta_{n}^{c}|$ \Comment{total mass}
$\theta_{s} \longleftarrow sort(\theta^{c})$ \Comment{sorted list by weight}
$\theta_{cm} \longleftarrow cumulativeSum(\theta_s)$
$\theta_{top, c} \longleftarrow \theta_{cm} < p* tm$ \Comment{top neurons per class with threshold p}
}
$\theta_{top} \longleftarrow \bigcup_{c=1}^{C}\theta_{top, c}$ \Comment{top neurons for all the classes}
\textbf{return} $\theta_{top} $
}
\end{algorithm}
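Algorithm 1 can be rendered in NumPy as follows (the per-class weight matrix $\theta$ here is hypothetical):

```python
import numpy as np

def top_neurons(theta, p):
    """theta: (C, N) weight matrix of the trained probe; p in (0, 1).

    For each class, take the highest-|weight| neurons whose cumulative
    weight mass stays below p * total mass, then union over classes.
    """
    selected = set()
    for w in np.abs(theta):              # one row per class label
        order = np.argsort(w)[::-1]      # sort by |weight|, descending
        cum = np.cumsum(w[order])
        keep = order[cum < p * w.sum()]  # threshold on cumulative mass
        selected.update(keep.tolist())
    return sorted(selected)
```

Note that because the final set is the union over classes, it can be considerably larger than any single class's selection.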
\subsubsection{Efficacy of the Ranking}
To evaluate the effectiveness of the neuron ranking per encoded information, we selected the top/bottom/random $20\%$ of neurons from the ranked list, while masking out\footnote{We assigned zero to the activations of the masked neurons.} the rest.
We then re-evaluated the test set using the previously trained proxy classifier.
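This masking evaluation can be sketched as follows (assuming a binary linear probe with weight vector `clf_theta` and a precomputed ranking; the names are illustrative):

```python
import numpy as np

def mask_and_score(clf_theta, X, y, ranking, mode="top", frac=0.2):
    """Keep only 20% of neurons chosen from a ranked list (zeroing the
    rest) and re-score the already-trained linear probe on the test set.
    """
    n = X.shape[1]
    k = int(frac * n)
    if mode == "top":
        keep = ranking[:k]
    elif mode == "bottom":
        keep = ranking[-k:]
    else:                                   # random control selection
        keep = np.random.default_rng(0).choice(n, size=k, replace=False)
    Xm = np.zeros_like(X)
    Xm[:, keep] = X[:, keep]                # masked neurons -> 0 activation
    pred = (Xm @ clf_theta > 0).astype(float)
    return (pred == y).mean()               # accuracy
```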
\subsubsection{Minimal Neuron Set}
\label{sec:minimalNeurons}
To obtain a minimal neuron set for the proxy task, we iteratively gather an array of the top $Neu$\% of neurons and re-train the proxy classifier with the selected neurons only. We repeat this procedure\footnote{For top: 1, 5, 10, 15, 20, 25, 50 and 75\% of neurons.} until we reach a performance (accuracy) close to that of the \textit{Oracle} -- the model trained with all the neurons (`ALL') of the network. We define a close performance by a threshold $\delta=1.0$, and select the neuron set with the highest accuracy within $[Acc(ALL)\pm\delta]$.
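This search can be sketched as follows (the two callables stand in for re-training and scoring the probe on the selected neurons; the fraction schedule mirrors the footnote above):

```python
def minimal_neuron_set(train_probe_fn, score_fn, ranking, oracle_acc,
                       fractions=(0.01, 0.05, 0.10, 0.15, 0.20,
                                  0.25, 0.50, 0.75),
                       delta=1.0):
    """Grow the top-Neu% neuron set until a probe re-trained on it alone
    scores within `delta` accuracy points of the all-neuron Oracle.
    """
    n = len(ranking)
    for frac in fractions:
        keep = ranking[: max(1, int(frac * n))]
        acc = score_fn(train_probe_fn(keep), keep)
        if acc >= oracle_acc - delta:    # close-to-Oracle stopping rule
            return keep, acc
    return ranking, oracle_acc           # fall back to the full set
```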
\subsection{Control tasks}
\label{ssec:control}
Recent research on probing classifiers has pointed to a potential pitfall of the approach: \textit{is the accuracy of the probe a true reflection that the intermediate representation actually encodes the property or is it memorizing the task?} To ensure that the reported performance of the proxy classifiers is indicative of the representation strength for the encoded information, and not the capability of the probe to memorize the property, we designed the following two control tasks: firstly, we test the strength of the extracted embedding by training the classifier with randomly initialized features; secondly, we test the ability of the probe to memorize random assignment of the class labels using selectivity criterion \cite{hewitt2019designing}.
Given all the classes ($C_T$) of the proxy task, we randomly assign the training instances to $c_{i}$ ($c_{i} \in C_T$), maintaining the original class distribution in the train set. The selectivity is computed by subtracting the performance of the probe with re-assigned labels from the reported proxy task performance, $Sel_a= Acc(ALL) - Acc_{R}(ALL)$, where $R$ indicates the dataset with re-assigned labels.
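A sketch of this selectivity control (permuting the label vector preserves the class distribution exactly; the callables stand in for probe training and scoring):

```python
import numpy as np

def selectivity(acc_all, train_fn, score_fn, y, seed=0):
    """Control task: re-assign labels at random (via a permutation, so
    the class distribution is preserved) and subtract the control
    accuracy from the real probe accuracy.
    """
    y_rand = np.random.default_rng(seed).permutation(y)
    acc_control = score_fn(train_fn(y_rand))
    return acc_all - acc_control
```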
\section{Experimental Setup}
\label{sec:expsetup}
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{figures/architectures.png}
\caption{Architecture of the pretrained models. Figure \ref{fig:archi}a presents the CNN and Figure \ref{fig:archi}b presents the transformer architecture. FC - fully-connected layer, CNN - convolution layer. }
\label{fig:archi}
\end{figure}
\subsection{Pretrained Models}
\label{ssec:models}
We experimented with temporal convolutional neural networks trained with two different objective functions (see Section \ref{sssec:cnn}), and two speech transformer architectures (see Section \ref{sssec:transformer}). The choice of architectures is motivated by the following: CNNs -- due to their effectiveness in modelling different speech phenomena; and transformers -- due to their increasing popularity for speech and language modeling.
\subsubsection{CNN:}
\label{sssec:cnn}
The CNN models (see Figure \ref{fig:archi}a) contain four temporal CNN layers followed by two feed-forward layers,\footnote{See \ref{appen:cnn} for detailed model parameters.} and are optimized for the following tasks: (i) Arabic dialect identification (\textit{ADI}) and (ii) speaker recognition (\textit{SR}).
Arabic dialect identification is more challenging than other dialect identification tasks \cite{ali2021connecting}: dialectal Arabic is shared among 22 countries, with more than 20 mutually unintelligible dialects and a common phonetic and morphological inventory.
The model is trained using the Arabic Dialect Identification 17 (ADI17) dataset \cite{adi17,mgb5}.
For the speaker recognition task, we experimented with English (SRE). We adapted the model from \cite{shon2018frame} and trained it using the VoxCeleb1 development set (containing 1211 speakers and $\approx$ 147K utterances).
\subsubsection{Transformer}
\label{sssec:transformer}
We included two transformer-encoder architectures\footnote{See \ref{appen:transformer} for detailed model parameters.} based on Mockingjay \cite{liu2020mockingjay}, of which we tried two variations differing in the number of encoder layers (3 or 12; see Figure \ref{fig:archi}b). We refer to the former as the base ($ST_{base}$) and the latter as the large ($ST_{large}$) model.
The base model, $ST_{base}$, is trained using Mel features as the reconstruction target, whereas for the large model, $ST_{large}$, the reconstruction target is a linear-scale spectrogram. The models were trained using the train-clean-360 subset of the LibriSpeech corpus \cite{panayotov2015librispeech}.
\subsection{Proxy Tasks}
\label{ssec:probing}
We experimented with the following proxy tasks for our study:
\subsubsection*{T1 - Gender Classification (GC)}
For gender classification, we trained proxy classifiers using the \textit{VoxCeleb1}-test \cite{nagrani2017voxceleb} (English) dataset, which includes videos of celebrities of different ethnicities, accents, professions and ages, uploaded to YouTube.
For the task, we used a gender-balanced train and test sets with no overlapping speakers.
Detailed label distribution for the task is shown in Figure \ref{fig:data_dist}(a).
\begin{figure} [!htb]
\centering
\scalebox{0.7}{
\includegraphics[width=\linewidth]{figures/dist3.png}
}
\caption{Data distribution for the proxy classification tasks. $\#Spk$ represents the number of speakers. Figure \ref{fig:data_dist}(a): GC -- Gender Classification (Task 1); Figure \ref{fig:data_dist}(b): SV -- Speaker Verification (Task 2), where the blue bars represent positive verification pairs (same speaker) and the orange bars negative pairs (different speakers); Figure \ref{fig:data_dist}(c): LID -- Language Identification (Task 3); Figure \ref{fig:data_dist}(d): DID -- Regional Dialect Identification (Task 4); and Figure \ref{fig:data_dist}(e): CC -- Channel Classification (Task 5).}
\label{fig:data_dist}
\end{figure}
\subsubsection*{T2 - Speaker Verification (SV)}
For voice identity verification, our experiment is two-fold.
First, we performed a `generic speaker verification' on pairs of input signals, to verify if they are from the same or different speaker. To do this, we extracted length normalized embeddings from individual layers and their combination (`ALL')
and computed the cosine similarity between pairs.
Second, for the neuron-level analysis, we trained a proxy classifier -- for the speaker recognition task -- using embeddings from the layers of the pretrained models. We selected the minimal neurons, using the algorithm described in Section \ref{sec:minimalNeurons} and then used it to test our verification pairs.
For training the proxy speaker recognition model, we used the \textit{VoxCeleb2}-test set \cite{chung2018voxceleb2} containing $118$ speakers, with $4,911$ videos and $36,237$ utterances.
For speaker verification tasks, we used a multi-lingual subset of the \textit{Common Voice} dataset \cite{ardila2019common} and the \textit{Voxceleb1} official verification test set (Voxceleb1-tst)\footnote{In this case, we used the official verification pairs to evaluate.} \cite{nagrani2017voxceleb}.
The \textit{Common Voice corpus} contains more than $2,500$ hours of speech data from $\approx39$ languages,\footnote{Last accessed: April 10, 2020.} collected and validated via a crowdsourcing approach. It is one of the largest multilingual datasets available for speech research, recorded using the Common Voice project's website or iPhone application.
We performed speaker verification using three languages including English (EN) -- from the Voxceleb1-tst, and a subset\footnote{Randomly selected $\approx$4 hours from each language.} from the Common Voice -- Russian (RU) and Chinese (ZH) datasets.
We constructed the RU and ZH verification pair trials by randomly picking up utterance pairs from speakers with the same gender, maintaining a balanced distribution between positive and negative targets. Details of the verification pairs are given in Figure \ref{fig:data_dist}(b).
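The cosine-similarity scoring used for the generic verification step above can be sketched as follows (the fixed decision threshold is illustrative; in practice the score distribution over trials is what gets evaluated):

```python
import numpy as np

def verify(emb_a, emb_b, threshold=0.5):
    """Generic speaker verification on a pair of utterance embeddings:
    length-normalize, take cosine similarity, threshold to same/different.
    """
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    score = float(a @ b)                 # cosine similarity in [-1, 1]
    return score, score >= threshold
```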
\subsubsection*{T3 - Language Identification (LID)}
For the language identification task, we designed classifiers for discriminating between the $7$ languages selected from the Common Voice dataset. The language subset used for this study includes: Arabic (AR), English (EN), Spanish (ES), German (DE), French (FR), Russian (RU) and Chinese (ZH). The distribution of the datasets for training and testing the classifiers are shown in Figure \ref{fig:data_dist}(c).
\subsubsection*{T4 - Regional Dialect Classification (DID)}
For training the regional dialect classification model, we used the Arabic ADI-5 dataset \cite{ali2017speech}, which is composed of the following five dialects:
Egyptian (EGY), Levantine (LAV), Gulf (GLF), North African Region (NOR) and Modern Standard Arabic (MSA). The dataset contains satellite cable recordings (SatQ) in the official training split and high-quality (HQ) broadcast videos in the development and test sets. For the classification, we used the balanced train set to design the proxy task and tested using the test split.
Details of the class distribution is reported in Figure \ref{fig:data_dist}(d).
\subsubsection*{T5 - Channel Classification (CC)}
We used the ADI-5 \cite{ali2017speech} dataset and the CALLHOME\footnote{\url{https://catalog.ldc.upenn.edu/LDC97S45}} \cite{kumar2014translations,billa1997multilingual} dataset to probe the channel classification task. Our multi-class classifier uses labels indicating the input signal quality: Satellite recording (SatQ), High-quality archived videos (HQ), {\em or} Telephony data (TF).
The SatQ data was built using the ADI-5 train data, and the HQ data was built using the ADI-5 dev and test sets. For the TF data, we upsampled the CALLHOME data to a 16KHz sampling rate, used a VAD to split the conversations into speech segments, and then randomly selected segments with a duration greater than 2.5 secs.
For the proxy task, we then randomly picked balanced samples from each class and divided them into train and test sets using a 60-40\% split.
Distribution of the dataset is given in Figure \ref{fig:data_dist}(e).\footnote{A similar pattern is observed when experimenting using just SatQ and HQ labels and removing the upsampled TF. Therefore, for brevity, we are presenting the experiments with SatQ, HQ and TF.}
\subsection{Classifier Settings}
\label{ssec:model_setting}
We trained logistic regression models with elastic-net regularization. The classifiers are trained by minimising the cross-entropy loss, using the Adam optimizer with the default learning rate for 20 epochs and a batch size of 128.
We used fixed values for $\lambda_*$ ($\lambda_*=0$ and $\lambda_*=10^{-5}$ in our experiments\footnote{In our preliminary results, we found no significant difference in neuron distribution between $\lambda_*=0$ and $\lambda_*=10^{-5}$.}).
\section{Appendix}
\subsection{CNN Architecture}
\label{appen:cnn}
The input to the models is 40-coefficient MFCC features, computed from a spectrogram with a 25ms window and a 10ms frame rate over 16kHz audio.
The architecture of the models includes four temporal convolutional neural network (1D-CNN) layers, followed by a global (statistical) pooling layer to aggregate the frame-level representations into utterance-level representations.\footnote{We followed a similar approach to extract utterance-level representations from the first 3 CNN layers for our study (see Figure \ref{fig:pipeline}).}
For the CNN layers, we used filter sizes of 40$\times$5, 1000$\times$7, 1000$\times$1, 1000$\times$1 with 1-2-1-1 strides and 1000-1000-1000-1500 filters respectively.
This utterance level representation is then passed to two fully connected layers (hidden units: 1500 and 600). We used Rectified Linear Units (ReLUs) as activation functions of the network.
The network is trained using the stochastic gradient descent (SGD) optimizer with a learning rate of 0.001.
\subsection{Transformer Architecture}
\label{appen:transformer}
The input to the models is Mel features, which are transformed into high-level representations. For the transformation, the input is first downsampled to adapt to long input sequences: consecutive frames are stacked into one step, reducing the number of frames used in the architecture. These input frames are then projected to a fixed dimension of 768 before a sinusoidal function is applied for positional encoding.
The frames are then passed through a multi-layer transformer encoder with multi-head self-attention for left-and-right bidirectional encoding. Each transformer encoder layer outputs hidden states of dimension 768. The transformers are trained for $50,000$ steps with a learning rate of 0.001.
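The input pipeline described above (downsampling by frame stacking, projection to the 768-dimensional encoder width, and sinusoidal positional encoding) can be sketched as follows; the stacking factor and the random projection are illustrative stand-ins for the learned components:

```python
import numpy as np

def downsample_stack(x, r=3):
    """Stack r consecutive Mel frames into one step: (T, D) -> (T//r, r*D)."""
    T, D = x.shape
    T = (T // r) * r                     # drop the remainder frames
    return x[:T].reshape(T // r, r * D)

def sinusoidal_positions(T, d_model=768):
    """Standard fixed positional encoding added before the encoder."""
    pos = np.arange(T)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def project(x, d_model=768, seed=0):
    """Hypothetical linear projection (learned in the real model)."""
    W = np.random.default_rng(seed).normal(size=(x.shape[1], d_model))
    return x @ W
```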
\subsection{ADI and SRE Model Performance}
The overall performance of the trained ADI model on the official MGB-5 dialect test set \cite{mgb5} is: accuracy -- $82.0$\%, $F_1$ -- $82.7$\%.
To evaluate the SR model performance, we performed speaker verification using the embedding from the last intermediate layer (the second fully-connected layer, FC2) of the SRE model with the VoxCeleb1 official test verification pairs, obtaining $EER=6.81$.
1,477,468,751,386 | arxiv | \subsection*{Question} Let $\mathcal{GI}$ and $\mathcal{GP}$ denote the classes of Gorenstein injective and Gorenstein projective modules, respectively. There is a ``Gorenstein version'' of the aforementioned recollements in \cite{Gil16}, i.e. $\xymatrix{
\mathbf{K}_{ex}(\mathcal{GI})\ar[r]^{} & \mathbf{K}(\mathcal{GI}) \ar[r]^{}\ar@<-0.6ex>[l]^{} \ar@<0.6ex>[l]_{} &\mathbf{D}(R)\ar@<-0.6ex>[l]^{} \ar@<0.6ex>[l]_{}}$ and $\xymatrix{
\mathbf{K}_{ex}(\mathcal{GP})\ar[r]^{} & \mathbf{K}(\mathcal{GP}) \ar[r]^{}\ar@<-0.6ex>[l]^{} \ar@<0.6ex>[l]_{} &\mathbf{D}(R)\ar@<-0.6ex>[l]^{} \ar@<0.6ex>[l]_{}}$.
If the underlying ring is left-Gorenstein, it follows from \cite{Chen10} that $\mathbf{K}(\mathcal{GP})\simeq \mathbf{K}(\mathcal{GI})$; we also recovered this equivalence in \cite{Ren19}. However, we do not know whether $\mathbf{K}_{ex}(\mathcal{GP})\simeq \mathbf{K}_{ex}(\mathcal{GI})$ holds. We remark that one cannot get an answer by simply restricting the equivalence $\mathbf{K}(\mathcal{GP})\simeq \mathbf{K}(\mathcal{GI})$ of \cite{Chen10}, or by the methods in \cite{Ren19}.
\section{\bf The proof of the theorem}
Throughout the paper, let $R$ be a left-Gorenstein ring. All modules are left $R$-modules.
Let $\mathcal{A}$ be an abelian category with enough projectives and injectives. A pair of classes $(\mathcal{X}, \mathcal{Y})$ in $\mathcal{A}$ is a cotorsion pair provided that $\mathcal{X} = {^\perp}\mathcal{Y}$ and $\mathcal{Y} = \mathcal{X}^{\perp}$, where $^{\perp}\mathcal{Y} = \{X \mid \mathrm{Ext}^{1}_{\mathcal{A}}(X, Y) = 0,~~\forall~~Y\in \mathcal{Y}\}$ and
$\mathcal{X}^{\perp} = \{Y \mid \mathrm{Ext}^{1}_{\mathcal{A}}(X, Y) = 0,~~\forall~~X\in \mathcal{X}\}$.
The cotorsion pair $(\mathcal{X}, \mathcal{Y})$ is complete provided that for any $M\in \mathcal{A}$,
there exist short exact sequences $0\rightarrow Y\rightarrow X \stackrel{f}\rightarrow M \rightarrow 0$ and $0\rightarrow M\stackrel{g}\rightarrow Y^{'} \rightarrow X^{'} \rightarrow 0$ with $X, X^{'}\in \mathcal{X}$ and $Y, Y^{'}\in \mathcal{Y}$. In this case, for any $N\in \mathcal{X}$, $\mathrm{Hom}_{\mathcal{A}}(N, f): \mathrm{Hom}_{\mathcal{A}}(N, X)\rightarrow \mathrm{Hom}_{\mathcal{A}}(N, M)$ is surjective since $\mathrm{Ext}^{1}_{\mathcal{A}}(N, Y) = 0$, and then $f: X\rightarrow M$ is said to be a special $\mathcal{X}$-precover of $M$. Dually, $g: M\rightarrow Y^{'}$ is called a special $\mathcal{Y}$-preenvelope of $M$.
By \cite[Theorem 2.2]{Hov02}, an abelian model structure on $\mathcal{A}$ is equivalent to a triple $(\mathcal{A}_{c}, \mathcal{A}_{tri}, \mathcal{A}_{f})$ of subcategories, for which $\mathcal{A}_{tri}$ is thick and both $(\mathcal{A}_{c}, \mathcal{A}_{f}\cap \mathcal{A}_{tri})$ and $(\mathcal{A}_{c}\cap \mathcal{A}_{tri}, \mathcal{A}_{f})$ are complete cotorsion pairs; see also \cite[Chapter VIII]{BR07}. In this case, $\mathcal{A}_{c}$ is the class of cofibrant objects, $\mathcal{A}_{tri}$ is the class of trivial objects and $\mathcal{A}_{f}$ is the class of fibrant objects. The model structure is called ``abelian'' since it is compatible with the abelian structure of the category in the following way: (trivial) cofibrations are monomorphisms with (trivially) cofibrant cokernel, (trivial) fibrations are epimorphisms with (trivially) fibrant kernel, and weak equivalences are morphisms which factor as a trivial cofibration followed by a trivial fibration.
For convenience, we will use the triple $(\mathcal{A}_{c}, \mathcal{A}_{tri}, \mathcal{A}_{f})$ to denote the corresponding model structure. The following is immediate from \cite[Section 2]{Bec14} or \cite[Theorem 4.7]{Gil08}.
\begin{lemma}\label{lem 1}
On the category $\mathrm{Ch}(R)$ of complexes, there is a singular contraderived model structure
$\mathcal{M}_{sing}^{ctr} = (ex\widetilde{\mathcal{P}}, (ex\widetilde{\mathcal{P}})^{\perp}, \mathrm{Ch}(R))$, and a singular coderived model structure $\mathcal{M}_{sing}^{co} = (\mathrm{Ch}(R), {^{\perp}}(ex\widetilde{\mathcal{I}}), ex\widetilde{\mathcal{I}})$.
\end{lemma}
For a bicomplete abelian category $\mathcal{A}$ with the model structure $\mathcal{M} = (\mathcal{A}_{c}, \mathcal{A}_{tri}, \mathcal{A}_{f})$, the associated homotopy category $\mathrm{Ho}(\mathcal{M})$ is constructed by localizing with respect to the weak equivalences. The homotopy category of an abelian model category is always a triangulated category. There is an equivalence of categories $\mathrm{Ho}(i): \mathcal{A}_{cf}/\omega = {\mathcal{A}_{cf}/\sim}\rightarrow \mathrm{Ho}(\mathcal{M})$ induced by the inclusion functor $i: \mathcal{A}_{cf}\rightarrow \mathcal{A}$, where $\mathcal{A}_{cf}=\mathcal{A}_{c}\cap \mathcal{A}_{f}$, and $f\sim g$ for morphisms $f, g: M\rightarrow N$ if $g-f$ factors through an object in $\omega = \mathcal{A}_{c}\cap \mathcal{A}_{tri}\cap \mathcal{A}_{f}$; see e.g. \cite[Section 1.2]{Hov99}.
\begin{corollary}\label{cor 1}
There are equivalences $\mathrm{Ho}(\mathcal{M}_{sing}^{ctr})\simeq \mathbf{K}_{ex}(\mathcal{P})$ and
$\mathrm{Ho}(\mathcal{M}_{sing}^{co})\simeq \mathbf{K}_{ex}(\mathcal{I})$.
\end{corollary}
\begin{proof}
We use $\widetilde{\mathcal{P}}$ (resp. $\widetilde{\mathcal{I}}$) to denote the subcategory of contractible complexes of projective (resp. injective) modules. It is well known that a complex $P\in \widetilde{\mathcal{P}}$ if and only if $P$ is exact and each $\mathrm{Ker}d_{i}^{P}$ is a projective module; the complexes in $\widetilde{\mathcal{I}}$ are characterized similarly. Note that for any chain maps $f$ and $g$, if $g-f$ factors through a complex in $\widetilde{\mathcal{P}}$ (or a complex in $\widetilde{\mathcal{I}}$), then $f$ is chain homotopic to $g$, denoted by $f\sim g$. Since $ex\widetilde{\mathcal{P}}\cap (ex\widetilde{\mathcal{P}})^{\perp}=\widetilde{\mathcal{P}}$ and $ex\widetilde{\mathcal{I}}\cap {^{\perp}}(ex\widetilde{\mathcal{I}}) = \widetilde{\mathcal{I}}$, the equivalences follow directly.
\end{proof}
Let $F=\Lambda\Omega$ and $G=\Lambda\Theta$ be functors on $\mathrm{Ch}(R)$, where $\Omega$ and $\Theta$ are functors from $\mathrm{Ch}(R)$ to $\mathrm{Mod}(R)$ such that for any $X\in \mathrm{Ch}(R)$, $\Omega(X)= X_0/\mathrm{Im}d_{1}^{X}$ and $\Theta(X)= \mathrm{Ker}d_{0}^{X}$, and where $\Lambda: \mathrm{Mod}(R)\rightarrow \mathrm{Ch}(R)$ is the functor which sends a module to the stalk complex concentrated in degree zero.
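Unwinding these definitions for an arbitrary complex $X$ gives a concrete picture of the composite functors (a worked illustration, not part of the original construction): both produce stalk complexes concentrated in degree zero,

```latex
F(X) = \Lambda\Omega(X):\ \cdots \longrightarrow 0 \longrightarrow X_0/\mathrm{Im}\,d_1^{X} \longrightarrow 0 \longrightarrow \cdots,
\qquad
G(X) = \Lambda\Theta(X):\ \cdots \longrightarrow 0 \longrightarrow \mathrm{Ker}\,d_0^{X} \longrightarrow 0 \longrightarrow \cdots
```

In particular, if $X = \Lambda(M)$ is itself a stalk complex, then $\Omega(X) = \Theta(X) = M$ and hence $F(X) = G(X) = \Lambda(M)$; this observation is used repeatedly below, e.g. in the identity $FG(Y) = G(Y)$.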
\begin{lemma}\label{lem 2}
Let $X$, $Y$ be any $R$-complexes, and $f: X\rightarrow Y$ a monomorphism of complexes. If $f$ is a quasi-isomorphism, then $\Omega(f)$ is also a monomorphism of $R$-modules.
\end{lemma}
\begin{proof}
We consider the following commutative diagram
$$\xymatrix{
0\ar[r] & \mathrm{Ker}d_{0}^{X} / \mathrm{Im}d_{1}^{X} \ar[r]\ar[d] & X_{0} / \mathrm{Im}d_{1}^{X} \ar[r]\ar[d]_{\Omega(f)}
& X_{0} / \mathrm{Ker}d_{0}^{X}\ar[r]\ar[d] &0\\
0\ar[r] & \mathrm{Ker}d_{0}^{Y} / \mathrm{Im}d_{1}^{Y} \ar[r] & Y_{0} / \mathrm{Im}d_{1}^{Y} \ar[r] & Y_{0} / \mathrm{Ker}d_{0}^{Y}\ar[r] &0 }$$
Since $f$ is a quasi-isomorphism, we have an isomorphism induced by $f$:
$$\mathrm{H}_0(f): \mathrm{H}_0(X)=\mathrm{Ker}d_{0}^{X} / \mathrm{Im}d_{1}^{X}\longrightarrow \mathrm{Ker}d_{0}^{Y} / \mathrm{Im}d_{1}^{Y}=\mathrm{H}_0(Y).$$
Since the chain map $f$ is monic, the induced map of modules $X_{0} / \mathrm{Ker}d_{0}^{X}\cong \mathrm{Im}d_{0}^{X}\longrightarrow \mathrm{Im}d_{0}^{Y}\cong Y_{0} / \mathrm{Ker}d_{0}^{Y}$ is also monic. Hence, applying the ``Five Lemma'' to the above diagram, we get that $\Omega(f): X_{0} / \mathrm{Im}d_{1}^{X}\longrightarrow Y_{0} / \mathrm{Im}d_{1}^{Y}$ is a monomorphism. We mention that the injectivity of $\Omega(f)$ can also be checked directly by a diagram chase.
\end{proof}
For model categories $\mathcal{C}$ and $\mathcal{D}$, recall that an adjunction $(F, G): \mathcal{C}\rightarrow \mathcal{D}$ is a Quillen adjunction if $F$ is a left Quillen functor, or equivalently $G$ is a right Quillen functor. That is, $F$ preserves cofibrations and trivial cofibrations; equivalently, $G$ preserves fibrations and trivial fibrations.
\begin{proposition}\label{prop 1}
$(F, G): (\mathrm{Ch}(R), \mathcal{M}_{sing}^{ctr})\rightarrow (\mathrm{Ch}(R), \mathcal{M}_{sing}^{co})$ is a Quillen adjunction.
\end{proposition}
\begin{proof}
Let $X$, $Y$ be any $R$-complexes. It follows from \cite[Lemma 3.1]{Gil04} that $(\Omega, \Lambda): \mathrm{Ch}(R)\rightarrow \mathrm{Mod}(R)$
and $(\Lambda, \Theta): \mathrm{Mod}(R)\rightarrow \mathrm{Ch}(R)$ are adjunctions. Then we have the following natural isomorphisms:
$\mathrm{Hom}_{\mathrm{Ch}(R)}(F(X), Y)\cong \mathrm{Hom}_{R}(\Omega(X), \Theta(Y))\cong \mathrm{Hom}_{\mathrm{Ch}(R)}(X, G(Y))$.
This implies that $(F, G): \mathrm{Ch}(R)\rightarrow \mathrm{Ch}(R)$ is an adjunction.
It suffices to show that $F$ preserves cofibrations and trivial cofibrations. Let $f: X\rightarrow Y$ be a cofibration in $\mathcal{M}_{sing}^{ctr}$, i.e. $f$ is a monomorphism with $\mathrm{Coker}f \in ex\widetilde{\mathcal{P}}$. This yields that $f$ is a quasi-isomorphism, and by Lemma \ref{lem 2}, $\Omega(f)$ is monic. Then, we have an exact sequence $$0\longrightarrow F(X)\stackrel{F(f)}\longrightarrow F(Y)\longrightarrow F(\mathrm{Coker}f)\longrightarrow 0.$$ Since every complex is a cofibrant object in $\mathcal{M}_{sing}^{co}$, this implies that $F(f)$ is a cofibration.
Now suppose $f: X\rightarrow Y$ is a trivial cofibration in $\mathcal{M}_{sing}^{ctr}$, i.e. $f$ is a monomorphism with $\mathrm{Coker}f \in ex\widetilde{\mathcal{P}}\cap (ex\widetilde{\mathcal{P}})^{\perp}= \widetilde{\mathcal{P}}$. Then we have an exact sequence $$0\longrightarrow F(X)\stackrel{F(f)}\longrightarrow F(Y)\longrightarrow F(\mathrm{Coker}f)\longrightarrow 0.$$
Note that $\Omega(\mathrm{Coker}f)$ is a projective module. For any complex $I\in ex\widetilde{\mathcal{I}}$, it is easy to show that any chain map $F(\mathrm{Coker}f) = \Lambda\Omega(\mathrm{Coker}f)\rightarrow I$ is null homotopic, and then $F(\mathrm{Coker}f)\in {^{\perp}}(ex\widetilde{\mathcal{I}})$. Thus $F(f)$ is a trivial cofibration in $\mathcal{M}_{sing}^{co}$. This completes the proof.
\end{proof}
Recall that a module $M$ is Gorenstein projective if $M$ is a syzygy of a totally acyclic complex of projective modules; Gorenstein injective modules are defined dually; see \cite{EJ00}. We use $\mathcal{GP}$ and $\mathcal{GI}$ to denote the classes of Gorenstein projective and Gorenstein injective modules, respectively. It is well known that over a left-Gorenstein ring, $(\mathcal{GP}, \mathcal{W})$ and $(\mathcal{W}, \mathcal{GI})$ are complete cotorsion pairs, where $\mathcal{W}$ is the class of modules of finite projective (equivalently, injective) dimension. In \cite[Theorem 2.7]{Ren18} we showed that the cotorsion pair $(\mathcal{GP}, \mathcal{W})$ is cogenerated by a set, i.e. there exists a set $S$ such that $\mathcal{W} =\{S\}^{\perp}$. This also implies the completeness of $(\mathcal{GP}, \mathcal{W})$, and generalizes the Gorenstein projective model structure on $\mathrm{Mod}(R)$ in \cite[Theorems 8.3 and 8.6]{Hov02} from Iwanaga-Gorenstein rings to left-Gorenstein rings.
\begin{lemma}\label{lem 3}
Let $X$, $Y$ be complexes in $ex\widetilde{\mathcal{P}}$, and $f: X\rightarrow Y$ a chain map. If $F(f)$ is a weak equivalence in $\mathcal{M}_{sing}^{co}$, then $f$ is a weak equivalence in $\mathcal{M}_{sing}^{ctr}$.
\end{lemma}
\begin{proof}
In the model category $(\mathrm{Ch}(R), \mathcal{M}_{sing}^{ctr})$, we can factor $f: X\rightarrow Y$ as a trivial cofibration $i: X\rightarrow Z$ followed by a fibration $p: Z\rightarrow Y$. By Proposition \ref{prop 1}, $F(i)$ is a trivial cofibration in $\mathcal{M}_{sing}^{co}$, and then $F(i)$ is a weak equivalence. Then $F(f) = F(p)F(i)$ is a weak equivalence if and only if so is $F(p)$.
Let $L = \mathrm{Coker}i$ and $K = \mathrm{Ker}p$. It follows from the exact sequence $0\longrightarrow X\stackrel{i}\longrightarrow Z\longrightarrow L\longrightarrow 0$ that $Z\in ex\widetilde{\mathcal{P}}$, where $X\in ex\widetilde{\mathcal{P}}$ and $L\in ex\widetilde{\mathcal{P}}\cap (ex\widetilde{\mathcal{P}})^{\perp} = \widetilde{\mathcal{P}}$. Moreover, it follows from the exact sequence $0\longrightarrow K\longrightarrow Z\stackrel{p}\longrightarrow Y\longrightarrow 0$ that $K\in ex\widetilde{\mathcal{P}}$.
Let $M$ be any Gorenstein injective module. Then there exists a totally acyclic complex of injective modules, say $I$, such that $M\cong \Theta(I)$. It follows from \cite[Lemma 4.2]{Gil08} that there is an isomorphism
$$\mathrm{Ext}^{1}_{\mathrm{Ch}(R)}(F(K), I) = \mathrm{Ext}^{1}_{\mathrm{Ch}(R)}(\Lambda\Omega(K), I)\cong \mathrm{Ext}^{1}_{R}(\Omega(K), \Theta(I)).$$
Since $F(p)$ is a weak equivalence, $F(K) = \mathrm{Ker}(F(p)) \in {^{\perp}(ex\widetilde{\mathcal{I}})}$. By the isomorphism above, it follows that $\mathrm{Ext}^{1}_{R}(\Omega(K), M) = 0$ for any Gorenstein injective module $M$. Since $(\mathcal{W}, \mathcal{GI})$ is a cotorsion pair, we get that $\Omega(K)$ is a module of finite projective dimension. Moreover, $\Omega(K)$ is a Gorenstein projective module since $R$ is a left-Gorenstein ring and $K\in ex\widetilde{\mathcal{P}}$ is a totally acyclic complex of projective modules. By \cite[Proposition 10.2.3]{EJ00}, the projective dimension of any Gorenstein projective module is either zero or infinity, so $\Omega(K)$ is a projective module. Considering the exact sequences $0\rightarrow \mathrm{Ker}d_{i}^{K}\rightarrow K_{i}\rightarrow \mathrm{Ker}d_{i-1}^{K}\rightarrow 0$ inductively, we can show that each syzygy of $K$ is projective, that is, $K$ is a complex in $\widetilde{\mathcal{P}}$. This implies that $p: Z\rightarrow Y$ is a trivial fibration, and hence $f = pi$ is a weak equivalence, as desired.
\end{proof}
\begin{lemma}\label{lem 4}
Let $Y$ be an exact complex of injective $R$-modules. Then $\varepsilon: FG(Y)\rightarrow Y$ is a weak equivalence in $\mathcal{M}_{sing}^{co}$, where $\varepsilon$ is the counit of the adjoint pair $(F, G)$.
\end{lemma}
\begin{proof}
For $Y$, $G(Y)=\Lambda\Theta(Y)= \cdots\rightarrow 0 \rightarrow \mathrm{Ker}d_0^Y\rightarrow 0\rightarrow\cdots$ is a stalk complex with $\mathrm{Ker}d_0^Y$ concentrated in degree zero. It is easy to see that $FG(Y) = G(Y)$. Then the map $\varepsilon: FG(Y)\rightarrow Y$ is given by a natural embedding $\varepsilon_0: \mathrm{Ker}d_0^Y\rightarrow Y_0$ and $\varepsilon_i =0$ for any $i\neq 0$.
Let $C= \mathrm{Coker}\varepsilon$. Then $C=\cdots\longrightarrow Y_2\stackrel{d_2^Y}\longrightarrow Y_1\stackrel{0}\longrightarrow \mathrm{Im}d_0^Y\stackrel{\iota}\longrightarrow Y_{-1}\stackrel{d_{-1}^Y}\longrightarrow Y_{-2}\longrightarrow\cdots$, where $\iota$ is an embedding. Let $Y_{\sqsupset}=\cdots\rightarrow Y_2\stackrel{d_2^Y}\rightarrow Y_1\rightarrow 0$ be a hard truncation, $D=0\rightarrow \mathrm{Im}d_0^Y\stackrel{\iota}\rightarrow Y_{-1}\stackrel{d_{-1}^Y}\rightarrow Y_{-2}\rightarrow\cdots$. Then there is an exact sequence of complexes $0\longrightarrow Y_{\sqsupset}\longrightarrow C\longrightarrow D\longrightarrow 0$.
Let $E$ be any $R$-complex in $ex\widetilde{\mathcal{I}}$. Since $R$ is left-Gorenstein, $E$ is totally acyclic, and for any $Y_i$, $\mathrm{Hom}_{R}(Y_i, E)$ is an exact complex. By \cite[Lemma 2.4]{CFH06}, the complex $\mathrm{Hom}_{R}(Y_{\sqsupset}, E)$ is exact.
Note that $D$ is an exact sequence, and then $\mathrm{Hom}_{R}(D, E_i)$ is an exact complex for any $i\in \mathbb{Z}$. By \cite[Lemma 2.5]{CFH06}, the complex $\mathrm{Hom}_{R}(D, E)$ is exact. Moreover, it follows from the short exact sequence
$$0\longrightarrow \mathrm{Hom}_{R}(D, E)\longrightarrow \mathrm{Hom}_{R}(C, E)\longrightarrow \mathrm{Hom}_{R}(Y_{\sqsupset}, E)\longrightarrow 0$$
that the complex $\mathrm{Hom}_{R}(C, E)$ is exact. This implies that every map from $C$ to any complex in $ex\widetilde{\mathcal{I}}$ is null homotopic, and then $C\in {^{\perp}}(ex\widetilde{\mathcal{I}})$. Hence, $\varepsilon: FG(Y)\rightarrow Y$ is a trivial cofibration in $\mathcal{M}_{sing}^{co}$, and in particular a weak equivalence.
\end{proof}
\begin{lemma}\label{lem 5}
Let $Y$ be an exact complex of injective $R$-modules. Then $F(q): FQG(Y)\rightarrow FG(Y)$ is a weak equivalence in $\mathcal{M}_{sing}^{co}$, where $q: QG(Y)\rightarrow G(Y)$ is a cofibrant replacement in the model category $(\mathrm{Ch}(R), \mathcal{M}_{sing}^{ctr})$.
\end{lemma}
\begin{proof}
For $Y$, $G(Y) = FG(Y) = \cdots\rightarrow 0 \rightarrow \mathrm{Ker}d_0^Y\rightarrow 0\rightarrow\cdots$. By the completeness of the cotorsion pair $(\mathcal{GP}, \mathcal{W})$, there is an exact sequence of $R$-modules $0\rightarrow W\rightarrow M\rightarrow \mathrm{Ker}d_0^Y\rightarrow0$ with $M\in \mathcal{GP}$ and $W\in \mathcal{W}$. Taking a totally acyclic complex $P$ of projective modules with $M$ as a syzygy, we obtain a short exact sequence $0\rightarrow K\rightarrow P\stackrel{q}\rightarrow G(Y)\rightarrow 0$; see the following diagram
$$\xymatrix@C=20pt@R=10pt{
K=\cdots \ar[r] &P_{1}\ar[dd]_{=}\ar[r]^{} &K_{0}\ar[dd]\ar@{-->}[rd]^{\pi}\ar[rr]^{} & &P_{-1}\ar[r]\ar[dd]_{=}&P_{-2}\ar[r]\ar[dd]_{=}&\cdots \\
& & & W\ar@{-->}[ur]\ar@{-->}[dd]^{}\\
P= \cdots \ar[r] &P_{1}\ar[dd]_{}\ar[r]^{} &P_{0}\ar[dd]\ar@{-->}[rd]^{}\ar[rr]^{} & &P_{-1}\ar[r]\ar[dd]_{}&P_{-2}\ar[r]\ar[dd]_{}&\cdots\\
& & & M\ar@{-->}[ur]\ar@{-->}[dd]^{}\\
G(Y) = \cdots \ar[r] &0\ar[r]^{} &\mathrm{Ker}d_0^Y\ar@{==}[rd]\ar[rr]^{} & &0\ar[r]_{} &0\ar[r]&\cdots\\
& & & \mathrm{Ker}d_0^Y \ar@{-->}[ur]
}$$
Let $K_{0\supset}= \cdots\rightarrow P_2\rightarrow P_1\rightarrow \mathrm{Ker}\pi\rightarrow 0$ and $K_{\subset0}= 0\rightarrow W\rightarrow P_{-1}\rightarrow P_{-2}\rightarrow\cdots$. Then there is a short exact sequence of complexes $0\longrightarrow K_{0\supset}\longrightarrow K\longrightarrow K_{\subset0}\longrightarrow 0.$
Let $T$ be any complex in $ex\widetilde{\mathcal{P}}$. Note that $T$ is totally acyclic. Then it follows from
\cite[Lemma 2.5]{CFH06} that the complex $\mathrm{Hom}_{R}(T, K_{\subset0})$ is exact, and this implies that $K_{\subset0}\in (ex\widetilde{\mathcal{P}})^{\perp}$. Note that $K_{0\supset}$ is an exact complex. For any morphism
$f: T\rightarrow K_{0\supset}$, we consider the following diagram:
$$\xymatrix@C=40pt{
\cdots \ar[r] &T_{2}\ar[d]_{f_2}\ar[r]^{} &T_{1}\ar[d]_{f_1}\ar[r]^{}\ar@{-->}[ld]_{s_1} &T_{0}\ar[r]\ar[d]_{f_{0}}\ar@{-->}[ld]_{s_0}
&T_{-1}\ar[r]\ar[d]^{}\ar@{-->}[ld]_{s_{-1}} &\cdots \\
\cdots \ar[r] &P_{2}\ar[r] &P_{1}\ar[r]^{} &\mathrm{Ker}\pi \ar[r]&0\ar[r]&\cdots
}$$
Let $s_i=0$ for any $i< 0$. Since $d_1^K: P_1\rightarrow \mathrm{Ker}\pi$ is epic and $T_0$ is a projective module, there is a map
$s_0: T_0\rightarrow P_1$ such that $f_0 = d_{1}^{K}s_0$. Since $d_{1}^{K}(f_{1} - s_{0}d_{1}^{T}) = d_{1}^{K}f_{1} - d_{1}^{K}s_{0}d_{1}^{T} = d_{1}^{K}f_{1} - f_{0}d_{1}^{T} = 0$, the map $f_{1} - s_{0}d_{1}^{T}$ lands in $\mathrm{Ker}d_{1}^{K}$, and there exists a map $s_1: T_1\rightarrow P_2$ such that $f_{1} - s_{0}d_{1}^{T} = d_{2}^{K}s_{1}$. Arguing as in the comparison theorem, we inductively obtain homotopy maps $\{s_i\}$ such that $f$ is null homotopic. Then $K_{0\supset}\in (ex\widetilde{\mathcal{P}})^{\perp}$. Thus, we have $K\in (ex\widetilde{\mathcal{P}})^{\perp}$.
Note that for any object in the model category $(\mathrm{Ch}(R), \mathcal{M}_{sing}^{ctr})$, its cofibrant replacement is precisely a special $ex\widetilde{\mathcal{P}}$-precover. Then it follows from the short exact sequence $0\rightarrow K\rightarrow P\stackrel{q}\rightarrow G(Y)\rightarrow 0$ that $P$ is a cofibrant replacement of $G(Y)$, and we can set $QG(Y) = P$.
Note that $F(K)= \cdots \rightarrow 0\rightarrow W\rightarrow 0\rightarrow\cdots$. Since $W$ is a module of finite projective dimension, for any complex $E\in ex\widetilde{\mathcal{I}}$, $\mathrm{Hom}_{R}(W, E)$ is exact. This implies that $F(K)\in {^{\perp}}(ex\widetilde{\mathcal{I}})$.
For $F(K)$, there is an exact sequence $0\rightarrow F(K)\rightarrow I\rightarrow L\rightarrow 0$ with $I\in ex\widetilde{\mathcal{I}}$ and $L\in {^{\perp}}(ex\widetilde{\mathcal{I}})$. We consider the following push-out diagram:
$$\xymatrix@C=20pt@R=20pt{ & 0\ar[d] & 0\ar[d] \\
0 \ar[r]^{} &F(K) \ar[d] \ar[r] & F(P) \ar@{-->}[d]_{i}
\ar[r]^{F(q)} &FG(Y) \ar@{=}[d] \ar[r] &0 \\
0 \ar[r] & I \ar@{-->}[r] \ar[d] & J \ar[r]^{p} \ar[d] & FG(Y) \ar[r] & 0\\
& L \ar[d] \ar@{=}[r] & L\ar[d]\\
& 0 & 0
}$$
It is clear that $i$ is a trivial cofibration. By the left column, we have $I\in {^{\perp}}(ex\widetilde{\mathcal{I}})$. Then $I\in ex\widetilde{\mathcal{I}}\cap {^{\perp}}(ex\widetilde{\mathcal{I}})$, and $p$ is a trivial fibration. Hence $F(q) = pi$ is a weak equivalence in $\mathcal{M}_{sing}^{co}$.
\end{proof}
\subsection*{The proof of the theorem}
It follows from Proposition \ref{prop 1} that $(F, G): (\mathrm{Ch}(R), \mathcal{M}_{sing}^{ctr})\longrightarrow (\mathrm{Ch}(R), \mathcal{M}_{sing}^{co})$ is a Quillen adjunction.
By \cite[Corollary 1.3.16]{Hov99}, there is a useful criterion for checking whether a given Quillen adjunction is a Quillen equivalence. Specifically, we need to show that $F$ reflects weak equivalences between cofibrant objects in $\mathcal{M}_{sing}^{ctr}$ (i.e. complexes in $ex\widetilde{\mathcal{P}}$), see Lemma \ref{lem 3}; moreover, for every fibrant object $Y$ in $\mathcal{M}_{sing}^{co}$ (i.e. $Y\in ex\widetilde{\mathcal{I}}$), we need to show that the composition $FQG(Y)\stackrel{F(q)}\rightarrow FG(Y)\stackrel{\varepsilon}\rightarrow Y$ is a weak equivalence, where $\varepsilon$ is the counit of the adjunction $(F, G)$ and $q: QG(Y)\rightarrow G(Y)$ is a cofibrant replacement of $G(Y)$, see Lemmas \ref{lem 4} and \ref{lem 5}. Consequently, $(F, G)$ is a Quillen equivalence, which completes the proof.
\begin{ack*}
The author is supported by National Natural Science Foundation of China (11871125), Natural Science Foundation of Chongqing (cstc2018jcyjAX0541) and the Science and Technology Research Program of Chongqing Municipal Education Commission (No. KJQN201800509).
\end{ack*}
\bigskip
\section{Introduction}
Rapidly evolving technologies create a continuous demand for solid-state materials with one or more functionalities tailored for specific applications. An important category of single-phase multifunctional materials is that of magnetoelectric multiferroics, which possess spontaneous coexisting magnetic and ferroelectric orders \cite{Spaldin2005,Khomskii2006,Fiebig2016}. Although a number of materials are known to have these properties, those with potential for real-world applications belong to a smaller subset which exhibit substantial coupling between their magnetic and electric properties. Moreover, the multiferroics which exhibit strong interactions between electric and magnetic orders have very complex microscopic coupling mechanisms \cite{Ramakrishnan2019}, a deep understanding of which is essential to optimize their properties. \\ \par
In-depth understanding of multiferroicity on a microscopic level requires a combination of exhaustive experiments and theoretical analyses, often employing computational tools such as density functional theory (DFT). Neutron scattering is the method of choice to understand magnetic structure and quantum mechanical interactions at the fundamental level, and techniques using photons from the terahertz to X-ray regimes offer a wealth of complementary information. X-ray spectroscopic techniques (like X-ray magnetic circular dichroism, for example) are being regularly used to determine element specific electronic and magnetic properties in multiferroics. Ideally, one needs a technique which combines spectroscopy with diffraction to examine the long-range order of certain fine aspects of electronic and magnetic structure in a comprehensive manner.
\\ \par
Resonant X-ray diffraction (RXD) is a technique that effectively combines diffraction with core-level absorption spectroscopy to observe long-range order in crystalline materials with element-specific electronic information. It has thus proved to be a useful tool to study fine details of magnetic arrangements and different types of magnetoelectric interactions in multiferroics over the years \cite{Wilkins2009,Walker2011,Windsor2015,Ramakrishnan2019}. In standard RXD experiments, one obtains the long-range ordered electronic properties, such as magnetic dipoles and anisotropies in the electron density distribution (also referred to as orbital order), within the material system. However, combining RXD with \textit{ab initio} calculations provides access to information regarding magnetic interactions which is difficult to obtain using other experimental techniques \cite{Mannix2007, Lovesey2009, Dmitrienko2014, Ramakrishnan2017}. In particular, RXD is sensitive to long-range-ordered localized multipoles, including the exotic magnetoelectric multipoles \cite{Arima2005,Staub2009,Scagnoli2011}. Magnetoelectric multipoles are ground-state localized entities which simultaneously break parity and time-reversal symmetries \cite{Arima2005,Lovesey2009} and whose magnitudes and orientations in space can be calculated using DFT \cite{Spaldin2013}. These multipoles have the appropriate symmetries to provide a single order parameter in material systems lacking inversion and time-reversal symmetries \cite{Zimmermann2014}. More recently, it has also been suggested that they could be the order parameter for the pseudogap phase of high-temperature cuprate superconductors \cite{Shekhter2009,Scagnoli2011,Fechner2016}. Even though a great deal of theoretical work has been performed on magnetoelectric multipoles \cite{Spaldin2013,Fechner2014,Thole2016,Meier2019}, the lack of model systems where their presence can be indisputably confirmed using RXD experiments has hindered progress in this field.
In this study, we take advantage of the recent progress in the FDMNES code \cite{Joly2001,Joly2012} which allows for a spherical tensor expansion of the scattering amplitudes contributing to a particular Bragg reflection as a function of energy, x-ray polarization and azimuthal angle, to disentangle the various multipolar contributions in h-YMO.
\\ \par
Hexagonal manganites with a general formula RMnO$_3$ (R = Y, In, Sc, Dy, Ho, Er, Tm, Yb, Lu) are one of the most studied classes of multiferroics. These are type-I multiferroics in which ferroelectricity sets in at a temperature $T_C$, which is well above the magnetic transition temperature $T_N$. Several types of antiferromagnetic orderings have been observed in compounds with different R atoms due to a complex interplay of geometrical frustration, spin-orbit coupling, lattice distortions and magnetic exchange interactions \cite{Fiebig2016,Brown2006}. One prominent system of this family, hexagonal YMnO$_3$ (h-YMO), crystallizes in the space group $P6_3/mmc$ at high temperatures and undergoes a geometrically driven ferroelectric transition around 1259 K, below which the symmetry changes to $P6_3cm$ \cite{Lilienblum2015}. The ferroelectricity in this material arises due to the buckling of the MnO$_5$ bi-pyramids \cite{Aken2004,Artyukhin2014}, which is dissimilar to the displacement of the B-site cations seen in orthorhombic ABO$_3$ perovskites. The onset of magnetic order takes place at $T_N \, \approx \, 71 K$, below which the Mn$^{3+}$ moments order in a non-collinear arrangement with magnetic symmetry \textit{P}6'$_3$\textit{cm}'.
\\ \par
Even though the ferroelectric and magnetic transitions in h-YMO occur independently of one another, anomalies are found in the dielectric susceptibility at $T_N$, indicating strong magnetoelectric coupling \cite{Tomuta2001,Giraldo2021}. Magneto-elastic displacements of the Mn$^{3+}$ ions have also been observed below $T_N$ \cite{Lee2008}. The $d$-orbitals of the Mn$^{3+}$ ions are strongly anisotropic, and hence canting of the magnetic moments perpendicular to the {\emph{ab}} plane is energetically favorable \cite{Solovyev2012}. An indication of this spin canting along the \textbf{\emph{c}}-axis has been obtained from optical measurements \cite{Degenhardt2001}, but not from neutron scattering. \\ \par
In this article, we present our resonant X-ray magnetic diffraction studies to observe possible spin cantings and magnetoelectric multipoles in h-YMO. The article is organized as follows. In Sec. \ref{sec:exp}, the RXD experiment is described and basic results are analyzed. Detailed first-principles calculations of the RXD spectral profiles and magnetoelectric multipoles are described in Sec. \ref{sec:disc}, and the major outcomes are summarized in the concluding paragraphs.
\section{Experimental Details}
\label{sec:exp}
\subsection{Sample Preparation and Characterization}
\label{ssec:exp_samp}
Crystalline hexagonal YMnO$_3$ was prepared by the optical floating zone melting technique using a Cyberstar mirror furnace \cite{Lichtenberg2017}. Starting materials were powders of Y$_2$O$_3$ (Alfa Aesar, 99.99 \%, Lot B02X020) and Mn$_2$O$_3$ (MaTeck, 99.9 \%, Ch. 250708). The chemical composition of the Mn$_2$O$_3$ powder was checked by heating a small amount (about 70 mg) up to 1100 \degree C at 10 \degree C/min under a flow of synthetic air in a thermogravimetric analyzer NETZSCH TG 209 F1 Libra or NETZSCH STA 449 C Jupiter. The dwell time at 1100 \degree C was 5 min followed by a cooling down to 100 \degree C at -30 \degree C/min. A relatively large and step-like weight loss was observed in the temperature range of about 900 - 1000 \degree C, followed by a constant weight at higher temperatures. The chemical composition of Mn$_2$O$_3$ was confirmed since the observed weight loss above about 900 \degree C corresponds precisely to the mass loss which is expected from the well-known transformation of Mn$_2$O$_3$ into Mn$_3$O$_4$. Further details, results as well as pictures are presented in the thermogravimetry section of Ref. \onlinecite{Lichtenberg2017}.
\subsection{X-ray Diffraction}
\label{ssec:exp_xrd}
The experiments were carried out at the endstation RESOXS \cite{Staub2010} at the X11MA beamline \cite{Flechsig2010} of the Swiss Light Source. The single crystal was mounted such that the [001] direction lay in the horizontal scattering plane. Linear horizontal ($\pi$) and vertical ($\sigma$) polarized X-rays were focused at the sample with a spot size of 130 $\times$ 50 $\mu m$. The beamline provided monochromatic x-rays with an energy resolution of about 0.15 eV at the Mn $L_{2,3}$ edges. The sample was manually rotated in-situ with an accuracy of $\pm 3 \degree$ for the azimuthal angle ($\Psi$) scans. Reciprocal space scans along $(0,0,L)$ were performed as a function of energy and temperature with $\pi$-polarized x-rays. \\ \par
\section{Results and Interpretation}
\label{sec:res}
In h-YMO, the $(0,0,1)$ Bragg reflection is forbidden according to the $P6_3cm$ space group, but a strongly resonant diffraction signal was observed below the N\'eel temperature $T_N$. Fig. \ref{fig:tdep1} shows reciprocal space scans along $(0,0,L)$ with an intense diffraction peak. The intensity of the reflection was found to decrease with increasing temperature and eventually vanish at $T_N$, thereby proving its magnetic origin. \\ \par
\begin{figure*}[ht]
\includegraphics[width=1.0\textwidth]{images/NEW_th2thTdep}
\caption{(a) Figure showing the magnetic $(0,0,1)$ diffraction peak in h-YMO for various temperatures, measured at E = 642.75 eV (Mn $L_3$ edge) using $\pi$-polarized x-rays. The dashed vertical lines indicate the positions of the magnetic peak which undergoes refraction at the Mn L$_3$ edge and is consequently not centered exactly at $L=1$, and the residual peak at 70 K which originates from the $\lambda/2$ leakage of the monochromator diffracting off the symmetry allowed $(0,0,2)$ Bragg reflection. (b) Azimuthal dependence of the ratio $I_\pi/(I_\sigma+I_\pi)$ in h-YMO at $T$ = 10 K, for the $L_3$ and $L_2$ edges of Mn. The error bars are within the symbols, and the solid lines are a guide to the eye. The inset shows the diffraction geometry.}
\label{fig:tdep1}
\end{figure*}
\subsection{Origin of the forbidden Bragg reflection}
To understand the nature of the magnetic form factor contributing to this Bragg reflection, the dependence of the scattering intensity on x-ray polarization and azimuthal angle was investigated. Fig. \ref{fig:tdep1} shows the ratio $I_\pi/(I_\sigma+I_\pi)$ of the $(0,0,1)$ reflection as a function of the azimuthal angle $\Psi$. The intensity is independent of the azimuthal angle within experimental accuracy, and the intensities in the two polarization channels are equal. The reflection also shows identical spectral shapes across the Mn $L_{2,3}$ edges with both $\sigma$- and $\pi$-polarized x-rays. The precise nature of the magnetic moments (and/or other scattering tensors) contributing to this reflection can be understood by evaluating the structure factor. The structure factor for a resonant Bragg reflection can be written in the most general form as
\begin{equation}
\label{eq:strfac}
S(h,k,l) = \sum_n{f_n (E) e^{i 2\pi (h\hat{x}+k\hat{y}+l\hat{z})\mathbf{\cdot r}_n}}
\end{equation}
where $h, k, l$ are the Miller indices and $n$ runs over all the resonant atoms (Mn) in the unit cell. In the magnetic unit cell (which is the same as the structural unit cell in h-YMO), there are six Mn atoms: three $(n=1,2,3)$ at $z=0$ and three $(n=4,5,6)$ at $z=0.5$. For the $(0,0,1)$ Bragg reflection with $h=k=0$, Eq. (\ref{eq:strfac}) reduces to
\begin{equation}
\label{eq:strfac3}
S(0,0,1) = (f_1 + f_2 + f_3) - (f_4 + f_5 + f_6)
\end{equation}
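The sign pattern in the reduced structure factor above follows from the geometric phase factors alone. The following minimal sketch (using only the schematic fractional coordinates stated in the text, i.e. three Mn layers at $z=0$ and three at $z=0.5$) verifies numerically that the two layers enter with opposite signs:

```python
import cmath

# Schematic fractional z-coordinates of the six Mn sites: three at z = 0
# and three at z = 1/2, as stated in the text.
z_coords = [0.0, 0.0, 0.0, 0.5, 0.5, 0.5]
l = 1  # Miller index l of the (0,0,1) reflection

# Phase factor exp(2*pi*i*l*z) entering the structure factor for (0,0,l)
phases = [cmath.exp(2j * cmath.pi * l * z) for z in z_coords]

# Sites at z = 0 contribute with +1 and sites at z = 1/2 with
# exp(i*pi) = -1, so S(0,0,1) = (f1 + f2 + f3) - (f4 + f5 + f6):
# any part of the form factor that is identical in the two layers cancels.
for z, p in zip(z_coords, phases):
    print(f"z = {z:3.1f}  phase = {p.real:+.0f}")
```

In particular, the six phases sum to zero, so contributions that are identical on all six sites (e.g. the charge scattering) cancel, and only the part of $f_n$ that is antisymmetric between the two layers survives.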
Since the scattering intensity is equal for both $\sigma$ and $\pi$ polarizations, the same structure factor is valid for both polarizations. As a first approximation, we look at the scattering terms within the form factor $f_n$ originating from the $E1E1$ process alone \cite{Hill1996}. Here $E1$ refers to an electric dipole transition between the core and valence atomic levels involved ($E$ stands for electric, and the number denotes the $\Delta l$ between the atomic states; $E2$ and $M1$ would thus refer to an electric quadrupole and a magnetic dipole transition, respectively. Since RXD is a two-photon process, we always look at combinations of two transitions.) Equal intensities in both polarization channels ($I_\sigma=I_\pi$) and the absence of azimuthal dependence imply that
\begin{equation}
\label{eq:fapprox}
f_n \propto m_n^{z} \sin\theta
\end{equation}
where $m_n^{z}$ is the spin component along the $\mathbf{\hat{z}}$ direction; this also indicates scattering only in the rotated polarization channels ($\sigma \rightarrow \pi'$ and $\pi \rightarrow \sigma'$)\cite{Joly2009,Ramakrishnan2017}. Due to the negative sign in Eq. (\ref{eq:strfac3}), only the antiferromagnetic (AFM) component of the spins contributes to the structure factor. In other words, the $(0,0,1)$ reflection directly measures the AFM spin canting along the \textbf{c}-axis of the crystal. An indication for such a spin canting has been reported earlier from optical second-harmonic generation (SHG) studies \cite{Degenhardt2001}. Polarized neutrons are not sensitive to spin components along the Bragg wavevector; hence, this AFM canting of the Mn moments along the \textbf{c}-axis does not contribute to any $(0,0,L)$-type reflections in neutron diffraction experiments. It should be noted that the magnetic $(0,0,1)$ reflection which has been observed in neutron diffraction of h-HoMnO$_3$ \cite{Lonkai2004} originates from the long-range ordering of the Ho$^{3+}$ magnetic moments, unlike the case of the Y$^{3+}$ ions, which do not have ordered magnetic moments \cite{Lonkai2002}.
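For readers who wish to check the polarization argument, the rank-1 (magnetic dipole) part of the $E1E1$ amplitude can be written as $f \propto -i\,(\boldsymbol{\varepsilon}'^{*}\times\boldsymbol{\varepsilon})\cdot\mathbf{m}$ (see e.g. \cite{Hill1996}). The sketch below sets up the specular geometry with the moment along the scattering vector; the Bragg angle is an arbitrary illustrative value, and this is only a numerical illustration, not part of the actual data analysis:

```python
import numpy as np

theta = np.deg2rad(30.0)       # illustrative Bragg angle
m = np.array([0.0, 0.0, 1.0])  # AFM moment component along c || Q (lab z)

# Specular geometry: Q = k' - k points along lab z
k_in  = np.array([np.cos(theta), 0.0, -np.sin(theta)])
k_out = np.array([np.cos(theta), 0.0,  np.sin(theta)])

sigma_in  = np.array([0.0, 1.0, 0.0])  # perpendicular to scattering plane
sigma_out = np.array([0.0, 1.0, 0.0])
pi_in  = np.cross(k_in,  sigma_in)     # in the scattering plane
pi_out = np.cross(k_out, sigma_out)

def amp(eps_out, eps_in):
    """Magnetic-dipole E1E1 amplitude, up to the common factor -i F(E)."""
    return float(np.dot(np.cross(eps_out, eps_in), m))

a_ss = amp(sigma_out, sigma_in)  # sigma -> sigma'
a_pp = amp(pi_out, pi_in)        # pi -> pi'
a_sp = amp(pi_out, sigma_in)     # sigma -> pi'
a_ps = amp(sigma_out, pi_in)     # pi -> sigma'

# Unrotated channels vanish; both rotated channels give -m_z*sin(theta).
# Since m is parallel to Q, rotating the sample about Q (an azimuthal
# scan) leaves m, and hence all four amplitudes, unchanged.
print(a_ss, a_pp, a_sp, a_ps)
```

Both rotated channels thus carry the same amplitude $\propto m^z\sin\theta$, reproducing Eq. (\ref{eq:fapprox}): equal $\sigma$ and $\pi$ intensities and no azimuthal dependence.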
\subsection{Spectral shape evolution with temperature}
\label{ssec:rxd_spec}
\begin{figure*}[ht]
\includegraphics[width=1.0\textwidth]{images/NEW_efixQTdep.pdf}
\caption{(a) The spectral intensity profile of the (0,0,1) reflection at the Mn $L_{2,3}$ edges at various temperatures. (b) The temperature dependence of the diffraction intensity at the energies indicated by A and B, and the square of the magnetic moment obtained from neutron diffraction (adapted with permission from Ref. \onlinecite{Munoz2000}. Copyrighted by the American Physical Society). The intensities have been normalized to unity at the lowest temperature, and the temperature axis has been adjusted so that the $T_N$ of both experiments coincide. The solid lines are guides to the eye.}
\label{fig:efixq}
\end{figure*}
In the expression for the structure factor for the $(0,0,1)$ magnetic Bragg reflection presented earlier, the energy dependence of the form factor was ignored. However, near an atomic absorption edge, the form factor $f_n(E)$ has a strong dependence on energy. The resulting energy dependence of the diffraction intensity at the absorption edge, called the spectral profile, contains detailed information regarding the symmetry and magnetoelectric interactions \cite{Ramakrishnan2017}. Hence, we measured the spectral profile of the magnetic $(0,0,1)$ reflection for several temperatures (see Fig. \ref{fig:efixq}). The spectra were obtained by integrating the intensity of the reciprocal space scans of the $(0,0,1)$ reflection at every energy point around the Mn $L_{2,3}$ edges. \\ \par
A striking observation is that the shape of the spectrum changes with temperature. For example, the relative intensity of the two peak-like features A and B at the Mn $L_3$ edge varies with temperature. This is also clearly seen in measurements of the scattered intensity at different energies with finer temperature steps. Fig. \ref{fig:efixq} shows the intensity at each temperature normalized with respect to the intensity at the base temperature, for two different energies (corresponding to A and B in Fig. \ref{fig:efixq}(a)). A detailed discussion of the origin of this observation is provided in the following section. \\ \par
\section{Discussion and Calculations}
\label{sec:disc}
\subsection{Antiferromagnetic canting}
\label{ssec:disc_mag}
The series of hexagonal manganites features a wide variety of magnetic configurations whose origins have been extensively studied by numerous techniques. Resonant x-ray diffraction at the Mn $L_{2,3}$ edges is sensitive to electronic ordering phenomena local to the Mn$^{3+}$ ions. The $(0,0,1)$ Bragg reflection, which is forbidden according to the $P6_3cm$ space group, shows strong resonant scattering below $T_N$. As seen in the previous section, this Bragg peak originates from the antiferromagnetic canting of spins along the \textbf{c}-axis. This is yet another demonstration of the ability of RXD to investigate features like spin canting with high sensitivity. However, it must be mentioned that it is not possible to obtain a quantitative estimate of the canting angle in this particular case. This type of spin canting is allowed under the magnetic space group $P6'_3cm'$. \\ \par
Even though there have been indications of such a spin canting from optical SHG experiments\cite{Degenhardt2001}, it had not yet been reported in any scattering experiment. Spin cantings in ME systems are usually described by the antisymmetric exchange mechanism based on relativistic Dzyaloshinskii-Moriya (DM) interactions. It would be of fundamental interest to understand whether the strength of the DM interactions, and thereby the canted moments, can be controlled using strain. Hence, one needs to repeat these experiments on differently strained epitaxial films, where the magnetic ordering temperature has been reported to change as a function of strain \cite{Wu2013}. \\ \par
\subsection{Changes in Spectral Shape}
\label{ssec:disc_spec}
In Sec. \ref{ssec:rxd_spec}, it was seen that the spectral shape of the $(0,0,1)$ Bragg reflection differs between temperatures. Changes in symmetry at the site of the resonant atom or in its position in the unit cell can, in principle, alter the local electronic distribution and thereby affect the RXD spectra \cite{Ramakrishnan2017}. However, it has been observed experimentally that movement of the Mn atoms within the unit cell occurs only at temperatures close to $T_N$, and no structural changes have been observed below 40 K \cite{Lee2008}. Hence, at temperatures below 40 K, spectral-shape changes induced by atomic motion can be ruled out. Fine changes in the magnetic structure, such as the canting angle, can also be ruled out since the temperature dependence of the $(0,0,1)$ reflection coincides with that of the total magnetic moment in the system as observed with neutrons (see Fig. \ref{fig:efixq}). \\ \par
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{images/NEW_specDiff.pdf}
\caption{(a) The normalized spectral shapes of the magnetic $(0,0,1)$ reflection obtained at T = 10 K and T = 40 K. The intensities were normalized such that the difference spectrum averages to zero when integrated over the given energy range. (b) Spectral shapes obtained with and without contributions from ME multipoles, calculated using FDMNES. In addition to $E1E1$, the $E1E2$ and $E1M1$ transition processes were also included in the calculation for the ME multipoles. The difference between the two profiles is also shown in both panels. (c) The comparison of the experimental and calculated difference spectral profiles.}
\label{fig:specDiff}
\end{figure}
To understand other possible causes for this spectral change, we need to revisit the approximation made to arrive at Eq. \ref{eq:fapprox}, where we limited the form factor $f_n(E)$ to scattering terms originating from the $E1E1$ process. Higher-order electric ($E1E2$, $E2E2$) and mixed electric-magnetic ($E1M1$) processes have been found to contribute to resonant x-ray scattering in several correlated electron materials ($E2$: electric quadrupole transition, $M1$: magnetic dipole transition) \cite{Matteo2005}. Since scattering terms originating from different resonant processes can have different amplitudes and phases as a function of energy, the final spectral shape is the result of the interference of all such contributions:
\begin{equation}
\label{eq:ftotal}
\begin{split}
f_n (E) & \propto f_n^{E1E1} (E) + f_n^{E1E2} (E) \\
& + f_n^{E1M1} (E)+ f_n^{E2E2} (E)
\end{split}
\end{equation}
where $f_n^{E1E1}(E) \propto m_n^{z} \sin\theta$ (given by Eq. \ref{eq:fapprox}). The higher-order terms $f_n^{E1E2}(E)$, $f_n^{E1M1}(E)$, and $f_n^{E2E2}(E)$ denote the combined form factors of all allowed multipoles from the respective processes. Only those multipoles which are long-range ordered with a Fourier component along the $(0,0,1)$ wavevector contribute to this reflection \cite{Lovesey2009,Scagnoli2011,Staub2010}. Moreover, since there is no dependence of the scattering intensity on the azimuthal angle, the relevant atomic multipoles must also be symmetric with respect to any rotation about the \textbf{c}-axis. \\ \par
To investigate the interference of one or more atomic multipoles in our experimental spectra, we look at the difference of the normalized spectra at 10 K and 40 K. Fig. \ref{fig:specDiff} shows the normalized spectra of the $(0,0,1)$ reflection measured at 10 K and 40 K, together with the difference spectral profile. The difference spectrum is obtained after normalizing the two spectra so that they have equal weight when integrating the intensity over the edges. The difference resembles a scenario in which different scattering terms interfere \cite{Dmitrienko2014,Sessoli2015}. A quantitative evaluation is not feasible with the computational tools currently available. However, we employ a combination of DFT and phenomenology to provide a semi-quantitative description of this interference using a model of resonant scattering from the magnetoelectric multipoles. \\ \par
\subsection{\textit{Ab initio} Calculations}
\label{ssec:abinitio}
The presence of higher-order multipoles in the scattering signal can be addressed using the \textit{ab initio} FDMNES code \cite{Joly2001,Joly2009}. The package uses a given crystal and starting magnetic structure to compute the spin-polarized density of states of a material using density functional theory (DFT). Following this, the x-ray absorption and diffraction spectra are calculated. The code enables one to choose the transition processes for which the absorption or diffraction spectra are calculated. FDMNES does not compute the relaxed crystal structure, and hence we used the crystal structure provided in Ref. \onlinecite{Munoz2000}. The magnetic structure corresponding to the space group $P6'_3cm'$ given in Ref. \onlinecite{Brown2006} was used. To correlate with our findings, an antiferromagnetic canting of 1$\degree$ was added to this input magnetic structure. The fully relativistic calculations were performed in the self-consistent mode using the multiple scattering approach to obtain the magnetic ground state \cite{Joly2001,Joly2012}. A cluster radius of 4~\AA\ and a uniform broadening of 0.1 eV were used for all calculations. The polarization-resolved scattering intensities for the $(0,0,1)$ Bragg reflection were calculated for $E1E1$ alone and for a combination of $E1E1$, $E1E2$ and $E1M1$ processes, including correction factors for self-absorption. Further details and a sample code are given in Ref. \onlinecite{RamakrishnanThesis2017}. Due to inherent limitations in the estimation of core-hole effects and other excited-state interactions, the multiplet structure of the partially filled $3d$ orbitals is not accurately computed. Hence, the shapes of the spectra at the Mn $L_{2,3}$ edges are not reproduced. However, we can obtain a semi-quantitative estimate of the relative scattering contributions from the magnetic canting and other higher-order multipoles. \\ \par
For simplicity, we limit our calculations to the $E1E1$, $E1E2$ and $E1M1$ processes. Even though the latter processes are much weaker than $E1E1$, their interference with it gives visible changes in the spectral profiles. Including this interference, the total scattering intensity for a combination of the above processes can be approximated as:
\begin{equation}
\label{eq:Itot}
\begin{split}
I^{tot} (E) & \propto |f_n^{E1E1} (E) + f_n^{E1E2} (E) \\
& + f_n^{E1M1} (E)+ f_n^{E2E2} (E)|^2 \\
& \approx |f_n^{E1E1} (E)|^2 + |f_n^{E1E1} (E)f_n^{E1E2} (E)| \\
& + |f_n^{E1E1} (E)f_n^{E1M1} (E)|
\end{split}
\end{equation}
The squares and combinations of higher-order scattering terms can be neglected since they are generally too weak to be detected in such an experiment. The interference terms $|f_n^{E1E1}(E)\,f_n^{E1E2}(E)|$ and $|f_n^{E1E1}(E)\,f_n^{E1M1}(E)|$ enable us to observe the weak scattering from the higher-order multipoles. In the calculation using FDMNES, we can selectively calculate the scattering intensities from each process or from any combination of them (to account for any interference) \cite{Joly2009}. Thus, we calculate the scattering intensities for (i) the $E1E1$ transition process alone, and (ii) a combination of the $E1E1$, $E1E2$ and $E1M1$ transition processes. We focus only on the scattered intensity in the rotated light channels for the $(0,0,1)$ reflection, in accordance with the experimental observations. The intensity in these channels for case (i) is exclusively the scattering due to the AFM canting of the magnetic dipole moments along the hexagonal \textbf{c}-axis. For case (ii), the diffraction amplitudes from the higher-order multipoles interfere with the strong scattering signal from the canted AFM dipoles. On subtracting the spectra obtained for cases (i) and (ii), we observe a clear difference, of the order of a few percent of the total diffraction intensity. The calculations do not reproduce all features of the experimental spectra given in Fig. \ref{fig:specDiff}; in particular, the spectrum is shifted on the energy axis due to the inaccurate determination of the Fermi energy of the system in the presence of a core-hole. The difference spectrum, along with the intensity profiles for calculations (i) and (ii) described above, is plotted in Fig. \ref{fig:specDiff}. The difference spectrum obtained experimentally can now be compared with the calculated one [see Fig. \ref{fig:specDiff}(c)].
The mismatch between experiment and calculation is likely due to the fact that effects like the localization of electronic states in the presence of the core-hole are not well accounted for in DFT-based calculations of $3d$ systems. \\ \par
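As a sanity check of the weak-amplitude expansion used in Eq. \ref{eq:Itot}, one can verify numerically that dropping the purely higher-order terms changes the intensity only at second order. Here the cross terms are written in the $2\,\mathrm{Re}(f^{*}g)$ form, and all amplitudes are made-up numbers with the $\sim$1\% ratio suggested by the calculation:

```python
# illustrative complex amplitudes, not computed cross-sections
a = 1.0 + 0.3j       # dominant f^{E1E1}
b = 0.01 - 0.005j    # weak f^{E1E2}, ~1% of |a|
c = -0.008 + 0.012j  # weak f^{E1M1}

exact = abs(a + b + c) ** 2
# keep |a|^2 plus the first-order interference terms; drop |b|^2, |c|^2, b-c terms
approx = abs(a) ** 2 + 2 * (a.conjugate() * (b + c)).real

print(exact, approx)  # the difference is exactly |b + c|^2, here ~5e-5
```

The neglected remainder is $|b+c|^2$, second order in the weak amplitudes, which justifies keeping only the interference terms in Eq. \ref{eq:Itot}.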
\subsection{Multipolar Analysis}
\label{ssec:calc_mult}
In Sec. \ref{ssec:abinitio}, we established that the anomalous evolution of the spectral shapes with temperature can be explained by considering the interference of scattering signals from the canted AFM dipoles and higher-order multipoles. The FDMNES code also allows the scattering tensor to be expanded in Cartesian and/or spherical tensors, and the contribution of individual atomic multipoles to be obtained. We expand the intensities in spherical tensors and use the notation introduced in Refs. \onlinecite{Lovesey2005,Lovesey2009}. The atomic tensors derived from spherical harmonics are denoted by $\langle X^K_Q \rangle$, where $X$ is the tensor type ($T$: parity-even and non-magnetic; $U$: parity-odd and non-magnetic, called polar multipoles; $G$: parity-odd and magnetic, called magnetoelectric multipoles), $K$ is the rank of the tensor (0: monopolar, 1: dipolar, 2: quadrupolar, etc.) and $Q$ is the projection of the tensor on the chosen basis. The multipolar contributions to the scattering intensity are shown in Fig. \ref{fig:me}. The figure shows the square of the form factor for all the nonzero multipoles obtained from the $E1E1$, $E1E2$ and $E1M1$ processes as a function of energy, in the FDMNES calculation for h-YMO. The strongest scattering term is the \emph{magnetoelectric octupole}, represented as $\langle G^3_3 \rangle - \langle G^3_{-3} \rangle$. This spherical octupole resembles an \textit{f} orbital that is symmetric with respect to rotations about the principal axis. In our calculations, the magnitude of the form factor corresponding to this octupole is about $1\%$ of that of the scattering from the magnetic dipole. One should note that even though the scattering intensities from the individual multipoles shown in the figure are small, they interfere with each other, affecting the overall spectral shape significantly. Certain multipoles contribute to scattering in both the $E1E2$ and the $E1M1$ processes, albeit with different spectral shapes.
For the Mn $L_{2,3}$ edges in h-YMO, the intensities resulting from the $E1E2$ process are generally stronger than those from $E1M1$, exemplified by the intensity of the magnetoelectric quadrupole $\langle G^2_0 \rangle$. However, to our knowledge, no experimental evaluation of the overall cross-sections of these two processes in RXD has been done to date. Yet another quantity of great interest is $\langle G^0_0 \rangle$, a magnetic rank-zero tensor. This entity, referred to as the magnetoelectric monopole or the magnetic charge, is fundamentally different from the monopole forbidden by classical electromagnetism \cite{Spaldin2013}. It has nonzero intensity in our calculations, even though its contribution is weaker than those of the other multipoles. \\ \par
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{NEW_me}
\caption{Calculated spectral intensities for (a) the overall form factor of the $(0,0,1)$ reflection in h-YMO obtained following interference of magnetic dipole and magnetoelectric multipoles, (b) the magnetic dipole term, and (c) the magnetoelectric $\langle G^K_Q \rangle$ multipoles at the Mn $L_{2,3}$ edges.}
\label{fig:me}
\end{figure}
One key aspect that has not been discussed so far concerns how the interference of magnetoelectric terms leads to a different spectral shape at 40 K compared to 10 K. Ideally, one should calculate the RXD spectra as a function of temperature, but there is a dearth of computational tools to quantitatively simulate the spectra at temperatures other than absolute zero; DFT-based methods are usually employed to deal with the ground state of a system. For these reasons, we can only provide a phenomenological explanation for the observed temperature dependence of the RXD spectra. The temperature dependence of a purely magnetic term contributing to scattering can be measured in an experiment. Since the intensity of the $(0,0,1)$ Bragg reflection in h-YMO is heavily dominated by magnetic scattering, we can assume its temperature dependence to follow that of a pure magnetic dipole. Upon fitting the normalized intensity with a mean-field model $I \propto (T_N - T)^{2\beta_{mag}}$, we obtain $\beta_{mag} \approx 0.38$, where $\beta_{mag}$ is the critical exponent of the magnetic scattering from the canted AFM moments. Magnetoelectric multipoles, on the other hand, are products of spatial and spin-density terms \cite{Matteo2005,Spaldin2013}. Therefore, they have distinct temperature dependences, based on their actual tensorial form. Since the polar toroidal octupole is by far the strongest higher-order scattering term, we ignore the other multipoles to simplify our analysis. This octupole, which is a product of the spin-density term and a spatial term to the power of two \cite{Matteo2005}, can be approximated as:
\begin{equation}
\vert \langle G^3_3 \rangle - \langle G^3_{-3} \rangle \vert (T) \, \propto \, \mu(T) \, r^2(T)
\label{eq:tdep_oct}
\end{equation}
where $\mu(T)$ and $r(T)$ are the temperature-dependent spin-density and spatial terms. From the literature, for spatially dependent electric polarization (as in ferroelectric materials), which depends linearly on $r(T)$, the value of the critical exponent $\beta_r$ (the critical exponent for the spatial dependence) falls within the range 0.24 to 0.62 \cite{Kadanoff1967,Say2010,Sarikaya2013}. From Eq. \ref{eq:tdep_oct}, $\beta_{oct} = \beta_{mag} + 2\beta_r$, and hence we can approximate the value of the overall critical exponent $\beta_{oct}$ for the octupoles to lie between 0.86 and 1.62. Based on these critical exponents, we can model the scattering intensity as a function of temperature for the magnetic dipole moments and the octupoles, as shown in Fig. \ref{fig:tdepME}. It is clear that the scattering contribution from magnetoelectric octupoles decreases at a comparatively higher rate with increasing temperature. Hence, the overall spectral shape is expected to change as a function of temperature and, for $T \approx 40$ K, one can assume that there is a relatively smaller contribution from $f_n^{E1E2}(E)$ and $f_n^{E1M1}(E)$ compared to $f_n^{E1E1}(E)$.
\\ \par
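The competing temperature dependences can be sketched directly from the mean-field model above. The Néel temperature ($T_N \approx 72$ K, an approximate literature value for h-YMO) and the normalization at base temperature are assumptions of this illustration:

```python
import numpy as np

T_N = 72.0                       # assumed Neel temperature of h-YMO (K)
T = np.linspace(10.0, 70.0, 7)   # temperatures below T_N

def intensity(T, beta):
    # mean-field-like model I ~ (T_N - T)^(2*beta), normalized at the lowest T
    I = np.clip(T_N - T, 0.0, None) ** (2.0 * beta)
    return I / I[0]

I_dip = intensity(T, 0.38)               # canted AFM dipole, beta_mag = 0.38
I_oct_lo = intensity(T, 0.38 + 2 * 0.24) # octupole lower bound, beta_r = 0.24
I_oct_hi = intensity(T, 0.38 + 2 * 0.62) # octupole upper bound, beta_r = 0.62

# the octupole contribution dies off faster than the dipole on warming
print(I_dip[-1], I_oct_lo[-1], I_oct_hi[-1])
```

Because $\beta_{oct} > \beta_{mag}$ throughout the estimated range, the octupole intensity falls off faster on warming for any choice of $\beta_r$ within the quoted bounds.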
\begin{figure*}[ht]
\includegraphics[width=0.7\textwidth]{images/NEW_TdepSim_motif.png}
\caption{The comparison of the temperature dependence of the magnetic scattering with the upper and lower bounds of the scattering intensity from the magnetoelectric octupole $\langle G^3_3\rangle - \langle G^3_{-3} \rangle$, according to our model. The arrangement of the canted part of the magnetic moment and of the magnetoelectric octupoles is also shown.}
\label{fig:tdepME}
\end{figure*}
In RXD, the dipole-quadrupole $E1E2$ process is usually invoked in studies involving the pre-edge region of the $K$ edges of transition metals (where the $E2$ excitation $1s \rightarrow nd$ probes the partially filled $d$-states), or the $L$ edges of rare earths (where the $E2$ excitation $2p \rightarrow nf$ probes the $f$-states). The fact that we find a measurable cross-section for the $E1E2$ process at the Mn $L_{2,3}$ edges is therefore very interesting from a fundamental perspective. For example, this could be due to a strong $d$-$f$ hybridization leading to an $f$-like character of the final states. Note that this effect is visible because the small spin canting leads to an effective \textbf{c}-axis projection of the dipole moment that is reduced in strength by approximately two orders of magnitude.\\ \par
\begin{comment}
Whether this is, in fact, due to the inter-atomic hybridization between the Mn $d$-orbitals and the low lying empty $f$-orbitals of Y remains to be investigated.
\end{comment}
Earlier reports of changes in spectral shapes in resonant diffraction have involved systems with either atomic motion or macroscopic changes like spin rotation \cite{Staub2017}. In the absence of any such observable changes, a change in the spectral shape due to magnetoelectric multipoles is the most probable explanation. More investigations are needed to understand this phenomenon, complemented by dedicated theoretical and computational studies, ultimately leading to a comprehensive understanding of diffraction anomalous fine structure (DAFS) over large energy ranges in correlated electron materials. \\ \par
\section{Summary}
We investigate the $(0,0,1)$ Bragg reflection below $T_N$ in a single crystal of hexagonal YMnO$_3$ using resonant X-ray diffraction (RXD). Following a detailed examination of the dependence of the diffraction intensity on X-ray polarization and azimuthal angle, we conclude that this reflection, which is forbidden according to the $P6_3cm$ space group, originates from an antiferromagnetic canting of the Mn$^{3+}$ magnetic moments perpendicular to the crystallographic \emph{ab} plane. We also observe that the shape of the RXD spectra changes with temperature. Using \textit{ab initio} calculations and phenomenological arguments, we discuss this behavior from the perspective of the interference between scattering from the magnetic dipole and parity-odd atomic multipoles on the Mn ions. A detailed microscopic theory of the behavior of magnetoelectric multipoles at temperatures above absolute zero is necessary to validate our hypothesis and, in general, to expand the scope of this method in the broader field of multiferroics.
\section*{Acknowledgements}
The authors are grateful to N. A. Spaldin and M. Fiebig for insightful discussions and comments on the manuscript. We thank J. -G. Park and Seongsu Lee for providing the structural data published in Ref. \onlinecite{Lee2008}, and A. Mu\~noz for permission to reuse data from Ref. \onlinecite{Munoz2000}. The RXD experiments were carried out at X11MA beamline of the Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland. The authors thank the X11MA beamline staff for experimental support. The financial support of the Swiss National Science Foundation (SNSF) is gratefully acknowledged (Projects No. CRSII2\_147606 and No. 200020\_159220). E.M.B. and U.S. acknowledge financial support from NCCR MUST (No. 51NF40-183615) and NCCR MARVEL (No. 182892), a research instrument of the SNSF, and funding from the European Community’s Seventh Framework Program (FP7/2007-2013) under Grant No. 290605 (COFUND:PSI-FELLOW). F.L. thanks Barbara Scherrer, former member of the division Nonmetallic Inorganic Materials of the Department of Materials of the ETH Zurich, for her assistance concerning thermogravimetry with the system NETZSCH STA 449 C Jupiter.
\bibliographystyle{unsrt}
\setcitestyle{numbers}
\subfile{sections/Introduction.tex}
\section{Methods}
\subfile{sections/Methods.tex}
\section{Results}\label{sec:results}
\subfile{sections/Results.tex}
\section{Discussion}
\subfile{sections/Discussion.tex}
\section{Conclusions}
\subfile{sections/Conclusion.tex}
\section{Disclaimer}
This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal
liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not
necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes.
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344.
\section{Funding Sources}
This work represents a multi-institutional effort. Funding sources include: Lawrence Livermore National Laboratory internal funds; the National Nuclear Security Administration; GlaxoSmithKline, LLC; and federal funds from the National Cancer Institute, National Institutes of Health, and the Department of Health and Human Services, under Contract No. 75N91019D00024.
\section{Appendix}
\subfile{sections/Appendix.tex}
\subsection{Data}
\subfile{sections/Data.tex}
\subsection{Experimental design for regression pharmacokinetic models}
To evaluate AMPL's performance, we built a total of 11,552 models on 15 pharmacokinetic datasets and 26 bioactivity datasets. These models include 9,422 regression models and 2,130 classification models.
We evaluated a variety of deep learning model types and architectures and compared them to baseline random forest models. We explored the performance of four types of features: ECFP fingerprints, MOE descriptors, Mordred descriptors, and graph convolution-based latent vectors. For the neural network models, we searched over many combinations of learning rates, numbers of layers, and nodes per layer. For each combination of neural network hyperparameters, we trained for up to 500 epochs and used a validation set performance metric ($R^2$ for regression, ROC AUC for classification) to choose an early stopping epoch for the final model. For random forest models, the only hyperparameter varied was the maximum tree depth, as previous experiments showed that other model hyperparameters had a minimal effect for our datasets. The complete set of hyperparameters varied was as follows:
\begin{itemize}
\item Splitter Types: scaffold and random
\item Fraction for train set: 0.7
\item Fraction for validation set: 0.1
\item Fraction for holdout set: 0.2
\item Feature types: ECFP, MOE, mordred, and graph convolution
\item Model types: neural network and random forest
\item Neural network learning rates: 0.0001, 0.00032, 0.001, 0.0032, 0.01
\item Maximum number of epochs: 500
\item Number of layers: 1, 2
\item Layer size options: 1024, 256, 128, 64, 32, 16, 8, 4, 1
\item Maximum final layer size: 16
\item Dropout rate: 0.1
\end{itemize}
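A sketch of how such a grid could be enumerated follows; the interpretation of the layer-size constraints (final layer of at most 16 nodes) is ours, and the actual search may have pruned combinations differently:

```python
from itertools import product

splitters = ["scaffold", "random"]
features = ["ecfp", "moe", "mordred", "graphconv"]
learning_rates = [0.0001, 0.00032, 0.001, 0.0032, 0.01]
layer_sizes = [1024, 256, 128, 64, 32, 16, 8, 4, 1]
max_final_layer = 16

# 1- and 2-layer architectures whose final layer has at most 16 nodes
one_layer = [(s,) for s in layer_sizes if s <= max_final_layer]
two_layer = [(s1, s2) for s1, s2 in product(layer_sizes, repeat=2)
             if s2 <= max_final_layer]
architectures = one_layer + two_layer

grid = list(product(splitters, features, learning_rates, architectures))
print(len(architectures), len(grid))  # 40 architectures, 1600 NN configurations
```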
\subsection{Analysis of modeling performance}
To identify which featurization type generated the most predictive models for each model type, models with the best validation set $R^2$ score were selected for each model/splitter/dataset combination. The number of ``best'' models for which each feature type yielded the highest test set $R^2$ score is plotted in Figure \ref{fig:regression_feat_perf}.
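The selection logic can be written as a short sketch; the records below are toy data, not results from the study:

```python
from collections import Counter

# toy results table: one record per trained model (illustrative values only)
runs = [
    {"dataset": "d1", "model": "NN", "features": "ecfp",      "valid_r2": 0.30},
    {"dataset": "d1", "model": "NN", "features": "graphconv", "valid_r2": 0.55},
    {"dataset": "d1", "model": "RF", "features": "moe",       "valid_r2": 0.60},
    {"dataset": "d1", "model": "RF", "features": "mordred",   "valid_r2": 0.50},
    {"dataset": "d2", "model": "NN", "features": "graphconv", "valid_r2": 0.45},
    {"dataset": "d2", "model": "RF", "features": "moe",       "valid_r2": 0.52},
]

# keep the best model per (dataset, model type) by validation R^2 ...
best = {}
for r in runs:
    key = (r["dataset"], r["model"])
    if key not in best or r["valid_r2"] > best[key]["valid_r2"]:
        best[key] = r

# ... then tally which feature type produced each winner
wins = Counter(r["features"] for r in best.values())
print(wins)
```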
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/r2_score_pk_feature_perf.pdf}
\caption{Number of times each featurization type produces the best model for the 15 PK datasets}
\label{fig:regression_feat_perf}
\end{figure}
Figure \ref{fig:regression_feat_perf} shows that the chemical descriptors generated by the commercial MOE software outperformed those produced by the open-source Mordred package in most cases. DeepChem's graph convolution networks outperformed all other feature types for neural network models.
The model/featurization combination with the most accurate predictions on the holdout set is shown in Figure \ref{fig:regression_test_model_feat_perf}. MOE featurization with random forest models most frequently outperformed other featurization/model type combinations for both types of splitters.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figures/r2_score_pk_model_feat_test_perf.pdf}
\caption{Number of times each featurization type/ model type combination produces the best model for the 15 PK datasets}
\label{fig:regression_test_model_feat_perf}
\end{figure}
Figure \ref{fig:regression_model_perf} confirms that random forest models tend to outperform neural network models for the evaluated datasets.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/r2_score_pk_model_perf.pdf}
\caption{Number of times each model type produces the best model for the 15 PK datasets}
\label{fig:regression_model_perf}
\end{figure}
\subsection{Investigation into neural network performance}
Neural networks are known to perform more poorly on smaller datasets, so we wanted to examine the relationship between the size of a dataset and the test set $R^2$ values for the best random forest and neural network models for that dataset. Figure \ref{fig:regression_num_samples_perf} shows the test set $R^{2}$ values for the best neural network and random forest models for each dataset, where best is defined as the model with the highest validation set $R^2$ value. The figure shows that as the dataset size increases, the $R^2$ score for the test set increases as well.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/r2_score_pk_regression_perf_num_samples.pdf}
\caption{Plot of best test set $R^2$ values versus the dataset size for neural network and random forest models}
\label{fig:regression_num_samples_perf}
\end{figure}
This pattern holds for the overall best model, regardless of type, for both regression and classification, as shown in Figure \ref{fig:numcpds_perf}. These results indicate that we will need to augment our datasets to further improve model performance. We plan to explore multiple avenues to address this requirement: conducting additional experiments, running simulations, sourcing public data, building multi-task models, and experimenting with transfer learning approaches.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/numcpds_perf.png}
\caption{Per-dataset model accuracy versus dataset size}
\label{fig:numcpds_perf}
\end{figure}
We also examined the architectures that yielded the best model for each feature type among the neural network models. Our hypothesis was that larger datasets would perform better with larger networks. Figure \ref{fig:params_samples} shows the number of parameters in the hidden layers of the model versus the size of the dataset. The color indicates the dataset and the shape indicates the featurizer type. The number of parameters for the 2-layer networks was calculated by multiplying the first and second layer sizes together. We can see a clear lower bound on the number of parameters of the best network for all featurizer types as the dataset size increases.
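The parameter-count convention used here can be made explicit; note this is the product of hidden-layer sizes described above, not a full weight count including input and output dimensions:

```python
def hidden_params(layers):
    # product of hidden-layer sizes; a 1-layer network contributes
    # just its single layer size
    n = 1
    for size in layers:
        n *= size
    return n

print(hidden_params([256]), hidden_params([1024, 16]))  # 256 16384
```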
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/r2_score_pk_regression_num_params_num_samples.pdf}
\caption{Number of hidden layer parameters versus number of samples for the best model for each dataset/featurizer combination}
\label{fig:params_samples}
\end{figure}
\subsection{Summary of model performance}
Figure \ref{fig:regression_perf_random} and Figure \ref{fig:regression_perf_scaffold} show the full set of test set $R^2$ values for the best model for each molecular featurization representation and model type for random and scaffold splits, respectively (picked, as before, by the best validation set $R^2$ value). Random sampling inflates the $R^2$ values of the holdout set, which is expected since there is greater structural overlap between the compounds in the training and holdout sets.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/r2_score_Random_Split_pk_regression_perf.pdf}
\caption{Performance accuracy for regression for random split}
\label{fig:regression_perf_random}
\end{figure}
For scaffold split-generated holdout sets, there is a very clear relationship between dataset size and $R^2$ value, although the complexity of the predicted property and the quality of the dataset obviously also have an effect.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/r2_score_Scaffold_Split_pk_regression_perf.pdf}
\caption{Performance accuracy for regression for scaffold split}
\label{fig:regression_perf_scaffold}
\end{figure}
\subsection{Model tuning results}
To evaluate whether hyperparameter search improves model performance, the test set $R^2$ for a baseline model was compared with the test set $R^2$ of the best-performing model, selected by validation set $R^2$ (Figure \ref{fig:hyperparam_perf}). Small datasets and ECFP-based models, which showed poor neural network performance overall, showed little to no improvement, while better-performing datasets and featurizers benefited more from hyperparameter search. This suggests that data augmentation will be necessary to improve prediction performance on the smaller, problematic datasets, and that ECFP is a poor featurizer regardless of the hyperparameters.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/r2_score_hyperparam_improvement.pdf}
\caption{Histogram of improvement in $R^2$ values for the test set for the four featurizers for neural network models}
\label{fig:hyperparam_perf}
\end{figure}
\subsection{Classification experiments}
A set of classification model experiments was also conducted for a panel of 28 bioactivity datasets, without any hyperparameter tuning. In total, 2,130 neural network and random forest models were generated. A dose concentration threshold was used to label active and inactive compounds on a per-dataset basis, using thresholds provided by domain experts at GlaxoSmithKline. The classes were extremely unbalanced, which partially explains the high ROC-AUC scores shown in Figure \ref{fig:class_perf}.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/classification_perf_vexp.pdf}
\caption{Performance accuracy for classification}
\label{fig:class_perf}
\end{figure}
\subsection{Uncertainty quantification}
To explore the utility of the uncertainty quantification values produced by neural network and random forest models, a case study is presented for three representative PK parameter datasets: rat plasma clearance (\textit{in vivo}), human microsomal clearance, and human plasma protein binding HSA. These datasets were selected to represent small, medium, and large sized datasets with low, medium, and high $R^2$ values.
\subsubsection{Precision-recall plot analysis}
Precision-recall curves measure the fraction of low-error predictions made at varying UQ thresholds. Precision is defined as the fraction of predictions with UQ values below the UQ threshold whose error falls below a predefined error threshold. For this analysis we use the mean logged error and define ``low-error'' samples as those with logged error below the mean (the log transform serves to normalize the distribution). Recall reports the fraction of all low-error samples that pass the UQ filter threshold. Ideally, the UQ value can be used as a threshold to identify low-error samples at a higher rate than in the overall test set. Table \ref{tab:errors} shows the percentage of low-error samples in the test set as a whole for each dataset/model/featurizer combination.
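As a concrete illustration of these definitions, the following Python sketch computes precision and recall for a single UQ threshold (a toy example, not AMPL's implementation; the data and thresholds are arbitrary):

```python
def uq_precision_recall(errors, uq, uq_threshold, err_threshold):
    """Precision/recall of a UQ filter, per the definitions above.

    Precision: fraction of predictions passing the UQ filter whose
    error is below err_threshold. Recall: fraction of all low-error
    samples that pass the UQ filter.
    """
    passed = [e for e, u in zip(errors, uq) if u <= uq_threshold]
    low_error_total = sum(1 for e in errors if e <= err_threshold)
    if not passed or low_error_total == 0:
        return 0.0, 0.0
    low_error_passed = sum(1 for e in passed if e <= err_threshold)
    return low_error_passed / len(passed), low_error_passed / low_error_total

# Toy example where low UQ tracks low error (well-calibrated UQ).
errors = [0.1, 0.2, 0.8, 0.9]
uq = [0.1, 0.2, 0.7, 0.9]
prec, rec = uq_precision_recall(errors, uq, uq_threshold=0.3, err_threshold=0.5)
```

Sweeping the UQ threshold over a range of values traces out precision-recall curves such as those shown in the figures below.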
\begin{table}[]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|l|c|}
\hline
\textbf{Dataset} & \textbf{Model and featurizer type} & \textbf{\makecell{Percent of total \\low error samples}} \\ \hline
Rat Plasma Clearance (\textit{In Vivo}) & Neural network + ECFP & 41.4\% \\ \hline
Rat Plasma Clearance (\textit{In Vivo}) & Neural network + GraphConv & 41.8\% \\ \hline
Rat Plasma Clearance (\textit{In Vivo}) & Neural network + MOE & 42.9\% \\ \hline
Rat Plasma Clearance (\textit{In Vivo}) & Neural network + Mordred & 40.5\% \\ \hline
Rat Plasma Clearance (\textit{In Vivo}) & Random forest + ECFP & 42.5\% \\ \hline
Rat Plasma Clearance (\textit{In Vivo}) & Random forest + MOE & 41.7\% \\ \hline
Rat Plasma Clearance (\textit{In Vivo}) & Random forest + Mordred & 42.0\% \\ \hline
Human Microsomal Clearance & Neural network + ECFP & 41.0\% \\ \hline
Human Microsomal Clearance & Neural network + GraphConv & 41.0\% \\ \hline
Human Microsomal Clearance & Neural network + MOE & 39.0\% \\ \hline
Human Microsomal Clearance & Neural network + Mordred & 39.8\% \\ \hline
Human Microsomal Clearance & Random forest + ECFP & 39.5\% \\ \hline
Human Microsomal Clearance & Random forest + MOE & 38.5\% \\ \hline
Human Microsomal Clearance & Random forest + Mordred & 39.6\% \\ \hline
Human Plasma Protein Binding HSA & Neural network + ECFP & 43.4\% \\ \hline
Human Plasma Protein Binding HSA & Neural network + GraphConv & 43.0\% \\ \hline
Human Plasma Protein Binding HSA & Neural network + MOE & 43.1\% \\ \hline
Human Plasma Protein Binding HSA & Neural network + Mordred & 43.5\% \\ \hline
Human Plasma Protein Binding HSA & Random forest + ECFP & 42.0\% \\ \hline
Human Plasma Protein Binding HSA & Random forest + MOE & 42.8\% \\ \hline
Human Plasma Protein Binding HSA & Random forest + Mordred & 42.5\% \\ \hline
\end{tabular}%
}
\caption{Percent of total low-error samples in the test set for the specified dataset/model/featurizer combinations}
\label{tab:errors}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/InVivo_Clearance_rat_precision_recall.pdf}
\caption{Precision-recall plot for rat plasma clearance (\textit{in vivo}), varying UQ value}
\label{fig:invivo_rat_pr_uq}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Microsomal_Clearance_human_precision_recall.pdf}
\caption{Precision-recall plot for human microsomal clearance, varying UQ value}
\label{fig:microsomal_human_pr_uq}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Plasma_Protein_Binding_HSA_human_precision_recall.pdf}
\caption{Precision-recall plot for human plasma protein binding HSA, varying UQ value}
\label{fig:ppb_human_pr_uq}
\end{figure}
In general, a low UQ threshold with accurate uncertainty would correspond to a precision of 1, which means confident predictions correspond to low-error predictions. To have the greatest utility, the curve should keep fairly high precision as the recall increases. UQ successfully filters out low confidence predictions in some cases but performance varies widely with the model/featurization type and the dataset. Figures \ref{fig:invivo_rat_pr_uq}, \ref{fig:microsomal_human_pr_uq} and \ref{fig:ppb_human_pr_uq} show that precision drops quickly as recall increases and for some models precision is poor even when applying the lowest UQ threshold. Nevertheless, for each dataset there exists a UQ threshold for at least one model which could be used to increase the fraction of low error predictions over the baseline percentages shown in Table \ref{tab:errors}. For example, Figure \ref{fig:ppb_human_pr_uq} suggests that applying a UQ threshold could increase precision to 65\% from around 42\% with a recall of 10\%. Later it is shown that for the human plasma protein binding HSA dataset, this could still yield a collection of compounds with a diverse range of response values.
\subsubsection{Calibration curves}
To further investigate how error changes as uncertainty increases, we plotted calibration curves of mean error per uncertainty bucket, with the 95\% confidence interval of the error shown as error bars for each bucket. We would like uncertainty to serve as a proxy for error, so we would hope to see the mean error of the samples in a bucket increase as the bucket's UQ range increases. Results for neural network and random forest models built on MOE feature vectors, and for neural network graph convolution models, are shown to demonstrate the variation in performance.
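A minimal sketch of this bucketing computation, assuming equal-width UQ buckets and a normal-approximation 95\% confidence interval (the actual bucketing scheme may differ):

```python
import math
import statistics

def calibration_buckets(errors, uq, n_buckets=3):
    """Group samples into equal-width UQ buckets; return the mean error
    and a normal-approximation 95% CI half-width for each bucket."""
    lo, hi = min(uq), max(uq)
    width = (hi - lo) / n_buckets or 1.0
    buckets = [[] for _ in range(n_buckets)]
    for e, u in zip(errors, uq):
        idx = min(int((u - lo) / width), n_buckets - 1)
        buckets[idx].append(e)
    out = []
    for b in buckets:
        mean = statistics.fmean(b) if b else float("nan")
        sd = statistics.stdev(b) if len(b) > 1 else 0.0
        half_width = 1.96 * sd / math.sqrt(len(b)) if b else float("nan")
        out.append((mean, half_width))
    return out

# Toy data where error rises with UQ, as a well-calibrated model would show.
errors = [0.1, 0.2, 0.5, 0.6, 1.0, 1.1]
uq = [0.0, 0.1, 0.4, 0.5, 0.9, 1.0]
curve = calibration_buckets(errors, uq)
```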
For rat plasma clearance (\textit{in vivo}), there is an overall upward trend in all three calibration curves, but none of them is monotonically increasing. This is the smallest dataset in our case study, so increasing the bucket size may reduce the choppiness of these curves; overall, however, UQ does not look like it would be a reliable proxy for error for this dataset.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/InVivo_Clearance_rat_NN_moe_CI.pdf}
\caption{Mean error per uncertainty bucket for rat plasma clearance (\textit{in vivo}) neural network model with MOE features}
\label{fig:invivo_nn_moe_ci}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/InVivo_Clearance_rat_RF_moe_CI.pdf}
\caption{Mean error per uncertainty bucket for rat plasma clearance (\textit{in vivo}) random forest model with MOE features}
\label{fig:invivo_rf_moe_ci}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/InVivo_Clearance_rat_NN_graphconv_CI.pdf}
\caption{Mean error per uncertainty bucket for rat plasma clearance (\textit{in vivo}) neural network model with Graph Convolution features}
\label{fig:invivo_nn_gc_ci}
\end{figure}
Human microsomal clearance shows greater variation in the calibration curves. For MOE features with a neural network model, Figure \ref{fig:microsomal_nn_moe_ci} shows an inverse pattern, where the error actually decreases as the uncertainty increases.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Microsomal_Clearance_human_NN_moe_CI.pdf}
\caption{Mean error per uncertainty bucket for human microsomal clearance neural network model with MOE features}
\label{fig:microsomal_nn_moe_ci}
\end{figure}
For MOE features with a random forest model, there seems to be no correlation, except in the very highest bucket.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Microsomal_Clearance_human_RF_moe_CI.pdf}
\caption{Mean error per uncertainty bucket for human microsomal clearance random forest model with MOE features}
\label{fig:microsomal_rf_moe_ci}
\end{figure}
The graph convolution model, conversely, shows an upward trend, although it is not monotonically increasing.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Microsomal_Clearance_human_NN_graphconv_CI.pdf}
\caption{Mean error per uncertainty bucket for human microsomal clearance neural network model with Graph Convolution features}
\label{fig:microsomal_nn_gc_ci}
\end{figure}
These curves show that the featurizer and model type have a strong effect on the relationship between UQ and error.
For human plasma protein binding HSA, which is the largest dataset with over 123,000 compounds, all calibration curves display the desired behavior: error increases as uncertainty increases, and the 95\% confidence intervals are small.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Plasma_Protein_Binding_HSA_human_NN_moe_CI.pdf}
\caption{Mean error per uncertainty bucket for human plasma protein binding HSA neural network model with MOE features}
\label{fig:ppb_nn_moe_ci}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Plasma_Protein_Binding_HSA_human_RF_moe_CI.pdf}
\caption{Mean error per uncertainty bucket for human plasma protein binding HSA random forest model with MOE features}
\label{fig:ppb_rf_moe_ci}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Plasma_Protein_Binding_HSA_human_NN_graphconv_CI.pdf}
\caption{Mean error per uncertainty bucket for human plasma protein binding HSA neural network model with Graph Convolution features}
\label{fig:ppb_nn_gc_ci}
\end{figure}
\subsubsection{Examining the relationship between UQ and predicted value}
Since the UQ values quantify the variation in predictions, the relationship between UQ and the predicted values was checked for evidence of correlation by plotting UQ against the predicted values.
Rat plasma clearance (\textit{in vivo}) shows a somewhat negative relationship, where the variation in predictions decreases as the magnitude of the predicted value increases. We found a similar though much less pronounced trend when examining error versus predicted value, so overall the model appears to predict better for compounds with higher clearance values.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/InVivo_Clearance_rat_NN_moe_std_pred.pdf}
\caption{Uncertainty value versus Predicted for rat plasma clearance (\textit{in vivo}) neural network model with MOE features}
\label{fig:invivo_nn_moe_std_pred}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/InVivo_Clearance_rat_RF_moe_std_pred.pdf}
\caption{Uncertainty value versus Predicted for rat plasma clearance (\textit{in vivo}) random forest model with MOE features}
\label{fig:invivo_rf_moe_std_pred}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/InVivo_Clearance_rat_NN_graphconv_std_pred.pdf}
\caption{Uncertainty value versus Predicted for rat plasma clearance (\textit{in vivo}) neural network model with Graph Convolution features}
\label{fig:invivo_nn_gc_std_pred}
\end{figure}
For human microsomal clearance, MOE feature vectors yield models where the UQ is strongly biased by the predicted value, especially for the neural network model, as seen in Figure \ref{fig:microsomal_nn_moe_std_pred}. Error versus predicted value does not show this trend, which likely indicates that the UQ carries no real information for this model.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Microsomal_Clearance_human_NN_moe_std_pred.pdf}
\caption{Uncertainty value versus Predicted for human microsomal clearance neural network model with MOE features}
\label{fig:microsomal_nn_moe_std_pred}
\end{figure}
This trend exists for the MOE random forest model as well, although it levels off, suggesting slightly less biased UQ values.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Microsomal_Clearance_human_RF_moe_std_pred.pdf}
\caption{Uncertainty value versus Predicted for human microsomal clearance random forest model with MOE features}
\label{fig:microsomal_rf_moe_std_pred}
\end{figure}
The graph convolution model displays a more balanced relationship between UQ and predicted value, mirroring the finding of the previous two subsections that this model's UQ is more informative of error than that of the MOE models.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Microsomal_Clearance_human_NN_graphconv_std_pred.pdf}
\caption{Uncertainty value versus Predicted for human microsomal clearance neural network model with Graph Convolution features}
\label{fig:microsomal_nn_gc_std_pred}
\end{figure}
Human plasma protein binding HSA, which showed the best calibration curves, also shows the least correlation between UQ and predicted value. UQ has a wide range of values for all predicted values.
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Plasma_Protein_Binding_HSA_human_NN_moe_std_pred.pdf}
\caption{Uncertainty value versus Predicted for human plasma protein binding HSA neural network model with MOE features}
\label{fig:ppb_nn_moe_std_pred}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Plasma_Protein_Binding_HSA_human_RF_moe_std_pred.pdf}
\caption{Uncertainty value versus Predicted for human plasma protein binding HSA random forest model with MOE features}
\label{fig:ppb_rf_moe_std_pred}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/Plasma_Protein_Binding_HSA_human_NN_graphconv_std_pred.pdf}
\caption{Uncertainty value versus Predicted for human plasma protein binding HSA neural network model with Graph Convolution features}
\label{fig:ppb_nn_gc_std_pred}
\end{figure}
\subsubsection{Correlation between UQ and error}
While these plots provide useful ways to visualize the behavior of uncertainty quantification, we wanted a single value summarizing whether a given model's UQ results can be trusted. Since we want the certainty of the model to be reflected in accurate predictions, we calculated the Spearman correlation coefficient between binned prediction error and UQ. Results are shown in Figure \ref{fig:error_uq_corr}. Correlations range from $-0.088$ to $0.33$. While these correlations are fairly low, all p-values are $<0.05$, and all but one are $\ll 0.01$, so the weak correlations identified are statistically significant.
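The Spearman coefficient is simply the Pearson correlation of the ranked values; a self-contained sketch follows (in practice one would use \texttt{scipy.stats.spearmanr}, and the error-binning step is omitted here):

```python
def _ranks(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```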
\begin{figure}[htp]
\centering
\includegraphics[width=\columnwidth]{figures/pk_binned_error_vs_uncert.pdf}
\caption{Spearman correlation coefficient between error and uncertainty values}
\label{fig:error_uq_corr}
\end{figure}
\end{document}
\subsection{Data curation}
AMPL includes several modules to curate data into machine learning-ready datasets. Functions are provided to represent small molecules with canonicalized SMILES strings using RDKit \cite{landrum_fingerprints_nodate} and the MolVS package \cite{matt_swain_2017_260237}, by default stripping salts and preserving isomeric forms. Data curation procedures are provided with AMPL as Jupyter notebooks \cite{noauthor_jupyterlab_nodate}, which can be used as examples for curating new datasets. Procedures allow for averaging response values for compounds with replicate measurements and filtering compounds with high variability in their measured response values. AMPL also provides functions to assess the structural diversity of the dataset, using either Tanimoto distances between fingerprints, or Euclidean distances between descriptor feature vectors.
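As an illustration of the replicate handling, the sketch below averages replicate measurements per compound and drops compounds whose replicates are too variable. This is a simplified stand-in for AMPL's curation functions; the \texttt{max\_std} cutoff is an arbitrary illustrative knob, not an AMPL parameter.

```python
import statistics
from collections import defaultdict

def average_replicates(records, max_std=1.0):
    """Collapse replicate measurements per compound to a mean response,
    discarding compounds with highly variable replicates.
    `records` is a list of (compound_id, response_value) pairs."""
    groups = defaultdict(list)
    for cid, value in records:
        groups[cid].append(value)
    curated = {}
    for cid, values in groups.items():
        if len(values) > 1 and statistics.stdev(values) > max_std:
            continue  # high replicate variability: drop the compound
        curated[cid] = statistics.fmean(values)
    return curated

data = [("C1", 5.0), ("C1", 5.2), ("C2", 1.0), ("C2", 9.0), ("C3", 3.0)]
curated = average_replicates(data)
```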
Data ingestion and curation-related parameters include:
\begin{itemize}
\item Unique human readable name for training file
\item Data privilege access group
\item Parameter for overriding the output files/dataset object names
\item ID for the metadata + dataset
\item Boolean flag for using an input file from the file system
\item Name of column containing compound IDs
\item Name of column containing SMILES strings
\item List of prediction task names
\item Number of classes for classification
\item User specified list of names of each class
\item Boolean switch for using transformation on regression output. Default is True
\item Response column normalization type
\item Minimum number of dataset compounds considered adequate for model training. A warning message will be issued if the dataset size is less than this.
\end{itemize}
\subsection{Featurization}
AMPL provides an extensible featurization module which can generate a variety of molecular feature types, given SMILES strings as input. They include:
\begin{itemize}
\item Extended connectivity fingerprints (ECFP) with arbitrary radius and bit vector length \cite{rogers_extended-connectivity_2010}
\item Graph convolution latent vectors, as implemented in DeepChem \cite{duvenaud_convolutional_2015}
\item Chemical descriptors generated by the Mordred open source package \cite{moriwaki_mordred:_2018}
\item Descriptors generated by the commercial software package Molecular Operating Environment (MOE) \cite{noauthor_chemical_nodate}
\item User-defined custom feature classes
\end{itemize}
Because some types of features are expensive to compute, AMPL supports two kinds of interaction with external featurizers: a dynamic mode in which features are computed on-the-fly and a persistent mode whereby features are read from precomputed tables and matched by compound ID or SMILES string. In the persistent mode, when SMILES strings are available as inputs, the featurization module matches them against the precomputed features where possible, and computes features dynamically for the remainder. Because precomputed feature tables may span hundreds or thousands of feature columns for millions of compounds, the module uses the feather format \cite{noauthor_feather_nodate} to speed up access.
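The persistent-mode lookup with dynamic fallback can be sketched as follows (function and variable names here are illustrative, not AMPL's API):

```python
def featurize(smiles_list, precomputed, compute_fn):
    """Look up precomputed feature vectors by SMILES string, then
    compute features on-the-fly only for the compounds not found."""
    features = {}
    missing = []
    for smi in smiles_list:
        if smi in precomputed:
            features[smi] = precomputed[smi]
        else:
            missing.append(smi)
    if missing:
        features.update(compute_fn(missing))  # dynamic fallback
    return features

# Hypothetical example: one SMILES is precomputed, the other is not.
precomputed = {"CCO": [1.0, 2.0]}
computed = featurize(["CCO", "CCN"], precomputed,
                     lambda smis: {s: [0.0, 0.0] for s in smis})
```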
Featurized datasets for feature types that support persistent mode (currently, all except ECFP fingerprints and graph convolution format) are saved in the filesystem or remote datastore, so that multiple models can be trained on the same dataset. This also facilitates reproducibility of model results.
Chemical descriptor sets such as those generated by MOE often contain descriptors that are exact duplicates or simple functions of other descriptors. In addition, large blocks of descriptors may be strongly correlated with one another, often because they scale with the size of the molecule. The featurization module deals with this redundancy by providing an option to remove duplicate descriptors and to scale a subset of descriptors by the number of atoms in the molecule (while preserving the atom count as a distinct feature). Factoring out the size dependency often leads to better predictivity of models.
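The size-scaling option can be illustrated as below (the descriptor names are hypothetical; AMPL's actual implementation operates on its internal feature tables):

```python
def scale_by_atom_count(rows, atom_counts, scaled_cols):
    """Divide size-dependent descriptor columns by the molecule's atom
    count, while keeping the atom count itself as a distinct feature."""
    out = []
    for row, n_atoms in zip(rows, atom_counts):
        scaled = {k: (v / n_atoms if k in scaled_cols else v)
                  for k, v in row.items()}
        scaled["atom_count"] = n_atoms  # preserve size as its own feature
        out.append(scaled)
    return out

# Hypothetical descriptors: one scales with molecule size, one does not.
rows = [{"vdw_volume": 150.0, "logP": 2.0}]
scaled = scale_by_atom_count(rows, [10], {"vdw_volume"})
```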
The featurization module can be easily extended to handle descriptors generated by other software packages, latent vectors generated by autoencoders, and other types of chemical fingerprints. In most cases, this can be accomplished by writing a small function to invoke the external feature generation software, and by adding an entry to a table of descriptor types, listing the generated feature columns to be used. In more complicated cases, one may need to write a custom subclass of one of the base featurization classes.
Featurization-relevant input parameters include:
\begin{itemize}
\item Type of molecule featurizer
\item Feature matrix normalization type
\item Boolean flag for loading in previously featurized data files
\item Type of transformation for the features
\item Radius used for ECFP generation
\item Size of ECFP bit vectors
\item Type of autoencoder, e.g. molvae, jt
\item Trained model HDF5 file path, only needed for MolVAE featurizer
\item Type of descriptors, e.g. MOE, Mordred
\item Max number of CPUs to use for Mordred descriptor computations. None means use all available
\item Base of key for descriptor table file
\end{itemize}
\subsection{Dataset partitioning}
AMPL supports several options for partitioning datasets for model training and evaluation. By default, datasets are split into three parts: a training set, a validation set (for parameter selection), and a holdout test set (for evaluation). Alternatively, AMPL offers a k-fold cross-validation option, to assess the performance impact of sampling from the modeled dataset. Under k-fold cross-validation, the holdout test set is selected first, and the remainder is divided into k folds for training and validation.
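This partitioning scheme, with the holdout test set selected before the k folds are formed, can be sketched as follows (random splitting only, for illustration):

```python
import random

def kfold_with_holdout(ids, k=3, test_frac=0.2, seed=0):
    """Select the holdout test set first, then divide the remainder
    into k folds for cross-validated training/validation."""
    rng = random.Random(seed)
    shuffled = ids[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    test, rest = shuffled[:n_test], shuffled[n_test:]
    folds = [rest[i::k] for i in range(k)]  # round-robin fold assignment
    return test, folds

ids = list(range(10))
test, folds = kfold_with_holdout(ids, k=2, test_frac=0.2)
```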
AMPL offers a number of dataset splitting algorithms, which take different approaches to the problem of building models that generalize from training data to novel chemical space. It supports several of the methods included in DeepChem, including random splits, Butina clustering, Bemis-Murcko scaffold splitting, and a simple algorithm based on fingerprint dissimilarity \cite{wu_moleculenet:_2018}. In addition, we implemented temporal splitting and a modified version of the asymmetric validation embedding (AVE) debiasing algorithm \cite{wallach_most_2018}. We compared random splitting with Bemis-Murcko scaffold splitting for our benchmarking experiments.
Input parameters related to data splitting include:
\begin{itemize}
\item Type of splitter to use: index, random, scaffold, Butina, ave\_min, temporal, fingerprint, or stratified
\item Boolean flag for loading in previously-split train, validation, and test CSV files
\item UUID for CSV file containing train, validation, and test split information
\item Choice of splitting type between k-fold cross validation and a normal train/valid/test split
\item Number of k-folds to use in k-fold cross validation
\item Type of splitter to use for train/validation split if temporal split used for test set (random, scaffold, or ave\_min)
\item Cutoff Tanimoto similarity for clustering in Butina splitter
\item Cutoff date for test set compounds in temporal splitter
\item Column in dataset containing dates for temporal splitter
\item Fraction of data to put in validation set for train/valid/test split strategy
\item Fraction of data to put in held-out test set for train/valid/test split strategy
\end{itemize}
\subsection{Model training and tuning}
AMPL includes a train/tune/predict framework to create high-quality models. This framework supports a variety of model types from two main libraries: scikit-learn \cite{sklearn} and DeepChem \cite{deepchem}. Currently, specific input parameters are supported for:
\begin{itemize}
\item Random forest models from scikit-learn
\item XGBoost models \cite{xgboost}
\item Fully connected neural network models
\item Graph convolution neural network models \cite{graphconv}
\end{itemize}
As with the featurization module, AMPL supports integration of custom model sub-classes. Parameters for additional models can be easily added to the parameter parser module.
Model-relevant input parameters include:
\begin{itemize}
\item Type of model to fit (neural network, random forest, or xgboost)
\item Prediction type (regression or classification)
\item Singletask or multitask model
\item Number of decision trees in the forest for random forest models
\item Max number of features to split random forest nodes
\item Number of estimators to use in random forest models
\item Batch size for neural network model
\item Optimizer type for neural network model
\item Optimizer specific for graph convolutional models, defaults to ``adam''
\item Model batch size for neural network model
\item List of hidden layer sizes for neural network model
\item List of dropout rates per layer for neural network model
\item List of standard deviations per layer for initializing weights for neural network model
\item The type of penalty to use for weight decay, either ``l1'' or ``l2''
\item The magnitude of the weight decay penalty to use
\item List of initial bias parameters per layer for neural network model
\item Learning rate for dense neural network models
\item Epoch for evaluating baseline neural network model performance, if desired
\item Maximum number of training epochs for neural network model
\item Type of score function used to choose best epoch and/or hyperparameters
\item Boolean flag for computing uncertainty estimates for regression model predictions
\end{itemize}
\subsubsection{Epoch selection for neural network models}
Early stopping is an essential strategy to avoid overfitting of neural networks, thus the number of training epochs is one of the key hyperparameters that must be optimized. To implement early stopping, AMPL trains neural network models for a user-specified maximum number of epochs, evaluating the model on the validation set after each epoch, and identifies the epoch at which a specified performance metric is maximized. By default this metric is the coefficient of determination $R^2$ for regression models, and the area under the receiver operating characteristic curve (ROC AUC) for classification models.
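The epoch selection amounts to tracking the validation metric across epochs (up to the user-specified maximum) and keeping the epoch that maximizes it; as a sketch:

```python
def best_epoch(valid_scores):
    """Index of the epoch with the best validation metric (e.g.
    validation R^2 or ROC AUC), i.e. the early-stopping point."""
    return max(range(len(valid_scores)), key=lambda i: valid_scores[i])

# Validation R^2 rises, then falls as the model begins to overfit;
# training runs to the maximum epoch count, but the epoch-3 model is kept.
scores = [0.10, 0.35, 0.52, 0.61, 0.58, 0.50]
chosen = best_epoch(scores)
```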
\subsubsection{Model persistence}
Serialized models are saved after training and prediction generation are complete, along with detailed metadata to describe the model. This supports traceability and reproducibility, as well as model sharing. AMPL supports saving models and results either using the file system or optionally through a collection of database services. The metadata can be stored in a mongoDB database \cite{mongo} or as JSON files. AMPL has functions for saving models and loading pre-built models for prediction generation.
\subsection{Model performance metrics}
AMPL calculates a variety of performance metrics for predictions on the training, validation and test sets. Metrics may be saved in a mongoDB database or in JSON files.
For regression models, we calculate:
\begin{itemize}
\item Coefficient of determination ($R^2$). This is calculated using sklearn's metrics function. Note that this score can be negative, since a model can be arbitrarily worse than a baseline that always predicts the mean.
\begin{equation}
R^2(y, \hat{y}) = 1 -\frac{\sum_{i=1}^{n} (y_i - \hat{y_i})^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
\end{equation}
\item Mean Absolute Error (MAE). An advantage of MAE is that it has a clear interpretation: the average absolute difference between the measured value $y_i$ and the predicted value $\hat{y_i}$. This works well for cellular activity assay datasets, which use log-normalized dose concentration values with similar concentration ranges across different assays. PK parameters are measured on different scales for some assays, which prevents comparison between assays with this metric.
\begin{equation}
\mathrm{MAE} = \frac{\sum_{i=1}^{n} |y_i - \hat{y_i}|}{n}
\end{equation}
\item Mean Square Error (MSE). This is a risk metric corresponding to the expected value of the squared error (or loss).
\begin{equation}
\mathrm{MSE}(y, \hat{y}) = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y_i})^2
\end{equation}
\end{itemize}
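These regression metrics transcribe directly into code; a short Python version of the formulas above:

```python
def regression_metrics(y_true, y_pred):
    """R^2, MAE and MSE, following the standard definitions."""
    n = len(y_true)
    y_bar = sum(y_true) / n
    ss_res = sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred))
    ss_tot = sum((y - y_bar) ** 2 for y in y_true)
    r2 = 1 - ss_res / ss_tot
    mae = sum(abs(y - yh) for y, yh in zip(y_true, y_pred)) / n
    mse = ss_res / n
    return r2, mae, mse

r2, mae, mse = regression_metrics([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```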
For classification models, we calculate:
\begin{itemize}
\item Area Under the Receiver Operating Characteristics Curve (ROC AUC). The ROC curve plots the True Positive Rate versus the False Positive Rate as a binary classifier's discrimination threshold is varied. The ROC AUC score is calculated by finding the area under the ROC curve. This value ranges from 0 to 1, where 1 is the best score.
\item Precision (Positive Predictive Value)
\begin{equation}
\mathrm{Precision} = \frac{TP}{TP+FP}
\end{equation}
where TP = number of true positives and FP = number of false positives
\item Recall (true positive rate / sensitivity)
\begin{equation}
\mathrm{Recall} = \frac{TP}{TP+FN}
\end{equation}
where TP = number of true positives and FN = number of false negatives
\item Area under the precision-recall curve (PRC-AUC). The precision-recall curve plots precision versus recall as a binary classifier's discrimination threshold is varied. It is a good measure of success of prediction when classes are very imbalanced. High scores show that the classifier is returning accurate results (high precision), as well as returning a majority of all positive results (high recall).
\item Negative Predictive Value (NPV)
\begin{equation}
\mathrm{NPV} = \frac{TN}{TN+FN}
\end{equation}
where TN = number of true negatives and FN = number of false negatives
\item Cross entropy (log loss)
\begin{equation}
\mathrm{Cross\ entropy} = -\sum_{c=1}^{M} y_{o,c} \log(p_{o,c})
\end{equation}
where $M$ is the number of classes, $y_{o,c}$ indicates whether class $c$ is the correct label for observation $o$, and $p_{o,c}$ is the predicted probability that observation $o$ belongs to class $c$
\item Accuracy
\begin{equation}
\mathrm{Accuracy} = \frac{TP+TN}{TP+TN+FP+FN}
\end{equation}
where terms are defined as above.
\end{itemize}
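Several of these metrics can be computed directly from confusion-matrix counts; a short Python sketch using the standard definitions (with recall $= TP/(TP+FN)$):

```python
def classification_metrics(tp, tn, fp, fn):
    """Precision, recall, NPV and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, npv, accuracy

# Hypothetical counts for an unbalanced binary classification problem.
p, r, npv, acc = classification_metrics(tp=8, tn=85, fp=2, fn=5)
```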
\subsection{Uncertainty quantification}
Uncertainty quantification (UQ) attempts to measure confidence in a model's prediction accuracy by characterizing variance in model predictions. Some common objectives for UQ are to use it to guide active learning or to weight model ensembles. AMPL generates UQ values for both random forest and neural network models.
\subsubsection{Uncertainty quantification for random forest}
For random forest models, generating a value quantifying uncertainty is straightforward: the UQ value is taken to be the standard deviation of the predictions from the individual trees. This quantifies how variable the trees' predictions are, and thus how uncertain the model is in its prediction for a given sample.
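As a sketch, given the matrix of per-tree predictions (one row per sample):

```python
import statistics

def rf_uncertainty(per_tree_predictions):
    """Per-sample UQ: standard deviation of the individual tree
    predictions. Rows are samples, columns are trees."""
    return [statistics.stdev(trees) for trees in per_tree_predictions]

# Two samples: the trees agree on the first, disagree on the second.
uq = rf_uncertainty([[2.0, 2.1, 1.9], [1.0, 3.0, 5.0]])
```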
\subsubsection{Uncertainty quantification for neural networks}
Our neural network-based UQ uses the Kendall and Gal method\cite{kendall_what_2017} as implemented in DeepChem. This method combines aleatoric and epistemic uncertainty values.
Aleatoric uncertainty cannot be reduced by adding more data but can be estimated. It is estimated by modifying the loss function of the model to predict both the response variable and the error of the model.
Epistemic uncertainty arises because of limited data. It represents the uncertainty of the model. Normally this is calculated in a bootstrapped manner, as in the case of a random forest. However, since training neural networks is expensive, an alternate approach is to train one network to generate a set of predictions by applying a set of dropout masks during prediction. Prediction variability is then quantified to assess epistemic uncertainty.
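The dropout-mask idea can be illustrated with a minimal NumPy sketch (this is not the DeepChem implementation; the tiny two-layer network and all names are ours). The same trained weights are reused across passes, but a fresh random dropout mask is applied to the hidden layer each time, and the spread of the resulting predictions estimates epistemic uncertainty:

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, drop_rate=0.5, n_passes=100, rng=None):
    """Run repeated stochastic forward passes through a two-layer network,
    applying a fresh dropout mask to the hidden layer each pass.
    Returns (mean prediction, std across passes) per input row."""
    rng = np.random.default_rng(rng)
    preds = []
    for _ in range(n_passes):
        h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
        mask = rng.random(h.shape) >= drop_rate  # keep with prob (1 - drop_rate)
        h = h * mask / (1.0 - drop_rate)         # inverted dropout scaling
        preds.append((h @ W2).ravel())
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)
```

The standard deviation across passes plays the role of the epistemic term; in the Kendall and Gal method it is combined with the aleatoric estimate described above.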
\subsection{Visualization and analysis}
Plots generated by AMPL's visualization and analysis module are shown in the Results section. Additional options include plots of predicted vs. actual values, learning curves, ROC curves, precision vs. recall curves, and 2-D projections of feature vectors using UMAP \cite{mcinnes_umap_2018}. The module also includes functions for characterizing and visualizing chemical diversity. Chemical diversity analysis is crucial for analyzing domain of applicability, bias in dataset splitting, and novelty of \textit{de novo} compounds. This module supports a wide range of input feature types, distance metrics, and clustering methods.
\subsection{Hyperparameter optimization}
A module is available to support distributed hyperparameter search for HPC clusters. This module currently supports linear grid, logistic grid, and random hyperparameter searches, as well as iteration over user-specified values. To run the hyperparameter search, the user specifies the desired range of configurations in a JSON file. The user can either specify a single dataset file or a CSV file with a list of datasets. The script generates all valid combinations of the specified hyperparameters, accounting for model and featurization type, and submits jobs for each combination to the HPC job scheduler. The module includes an option to generate a pre-featurized and pre-split dataset before launching the model training runs, so that all runs operate on the same dataset split. The user can specify a list of layer sizes and dropouts to combine, along with the maximum final layer size and a list of the numbers of possible layers for a given model, and the module combines these different options based on the input constraints to generate a variety of model architectures. The search module can check the model tracker database to avoid retraining models that are already available. It also provides users the option to exclude hyperparameter combinations that lead to overparameterized models, by checking the number of weight and bias parameters for a proposed neural network architecture against the size of the training dataset. Finally, the search module throttles job submissions to prevent the user from monopolizing the HPC cluster.
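The architecture-enumeration logic described above can be sketched with `itertools` (illustrative only; the function and parameter names are ours and do not mirror AMPL's): generate all layer-size and dropout combinations for the allowed layer counts, then filter out architectures whose final layer is too large or whose parameter count exceeds a multiple of the training-set size.

```python
from itertools import product

def candidate_architectures(layer_sizes, dropouts, layer_counts,
                            max_final_layer, n_features, n_train,
                            size_factor=10):
    """Enumerate (layer sizes, dropouts) combinations subject to constraints."""
    keep = []
    for n_layers in layer_counts:
        for sizes in product(layer_sizes, repeat=n_layers):
            if sizes[-1] > max_final_layer:
                continue
            # weights + biases for all dense layers, including a single output node
            dims = [n_features, *sizes, 1]
            n_params = sum(a * b + b for a, b in zip(dims, dims[1:]))
            if n_params > size_factor * n_train:
                continue  # skip overparameterized models
            for drops in product(dropouts, repeat=n_layers):
                keep.append({"layer_sizes": list(sizes),
                             "dropouts": list(drops)})
    return keep
```

Each surviving dictionary would correspond to one training job submitted to the HPC scheduler.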
Input parameters for hyperparameter search include:
\begin{itemize}
\item Boolean flag indicating whether the hyperparameter search script is being run
\item UUID of the hyperparameter search run in which the model was generated
\item Comma-separated list of the numbers of layers to permute over for neural network architectures
\item Comma-separated list of dropout rates to permute over for neural network layers
\item Maximum number of nodes in the last layer
\item Comma-separated list of numbers of nodes per layer to permute over for neural network architectures
\item Maximum number of jobs allowed in the queue at one time on an HPC cluster
\item Scaling factor for constraining network size based on the number of parameters in the network
\item Boolean flag indicating whether to check the model tracker to see if a model with a particular parameter combination has already been built
\item Path to the pipeline file from which to run the hyperparameter search
\item Type of hyperparameter search to perform; options are grid, random, geometric, and user\_specified
\item CSV file containing the list of datasets of interest
\end{itemize}
\subsection{Running AMPL }
There are three ways to run AMPL:
\begin{itemize}
\item Using a config file: Create a JSON file with desired model parameters and run full pipeline via command line
\item Using command line arguments: Specify model parameters via standard command line arguments
\item Interactively in a Jupyter notebook using an argparse.Namespace object or a dictionary
\end{itemize}
\section{Benchmarking of AMPL on public datasets}
AMPL is open source and available for download at \\\texttt{http://github.com/ATOMconsortium/AMPL}. To support reproducibility of this pipeline, we provide model-building examples for three public datasets in AMPL's open source repository. These datasets include:
\begin{itemize}
\item Delaney et al. solubility dataset \cite{delaney}
\item Wenzel et al. human liver microsome intrinsic clearance \cite{clearance}
\item Drug Target Commons KCNH2 (hERG) inhibition assay \cite{herg}
\end{itemize}
Since the data from our main benchmarking experiments are proprietary, we also benchmarked AMPL on these publicly-available datasets. Results are shown below.
\begin{table*}[h!]
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{Dataset} & \textbf{Model and featurizer type} & \textbf{Train set $R^2$} & \textbf{Validation set $R^2$} & \textbf{Test set $R^2$}\\ \hline
Delaney solubility & Neural network + ECFP & 0.66 & 0.21 & 0.29 \\ \hline
Delaney solubility & Neural network + GraphConv & 0.76 & 0.55 & 0.54 \\ \hline
Delaney solubility & Neural network + Mordred & 0.79 & 0.67 & 0.74 \\ \hline
Delaney solubility & Random forest + ECFP & 0.91 & 0.27 & 0.37 \\ \hline
Delaney solubility & Random forest + Mordred & 0.99 & 0.72 & 0.73 \\ \hline
Wenzel microsomal clearance & Neural network + ECFP & 0.19 & 0.07 & 0.054 \\ \hline
Wenzel microsomal clearance & Neural network + GraphConv & 0.11 & 0.064 & 0.067 \\ \hline
Wenzel microsomal clearance & Neural network + Mordred & 0.40 & 0.21 & 0.13 \\ \hline
Wenzel microsomal clearance & Random forest + ECFP & 0.90 & 0.17 & 0.21 \\ \hline
Wenzel microsomal clearance & Random forest + Mordred & 0.92 & 0.15 & 0.12 \\ \hline
KCNH2 (hERG) inhibition & Neural network + ECFP & 0.30 & 0.22 & 0.15 \\ \hline
KCNH2 (hERG) inhibition & Neural network + GraphConv & 0.28 & 0.19 & 0.18 \\ \hline
KCNH2 (hERG) inhibition & Neural network + Mordred & 0.24 & 0.20 & 0.19 \\ \hline
KCNH2 (hERG) inhibition & Random forest + ECFP & 0.90 & 0.36 & 0.38 \\ \hline
KCNH2 (hERG) inhibition & Random forest + Mordred & 0.94 & 0.39 & 0.36 \\ \hline
\end{tabular}
\end{adjustbox}
\caption{$R^2$ scores for public dataset models}
\label{tab:public}
\end{table*}
\end{document}
\section{Dynamical stochastic derivative}
We write $I:=]a,b[$ with $a<b$, and $J:=[a,b]$ for the closure of $I$
in $\mathbb{R}$. Let $\mathbb{K}$ be a field and $d\in\mathbb{N}^*$. We fix a
probability space $(\Omega,\mathcal{A},P)$ carrying an increasing family
of $\sigma$-algebras $(\mathcal{P}_t)_{t\in J}$ and a decreasing family
of $\sigma$-algebras $(\mathcal{F}_t)_{t\in J}$. Following Yasue
\cite{ya1}, we introduce the following definition.
\begin{defi}
We denote by $\EuScript{C}^1_{\mathbb{K}}(J)$ the set of processes $X$ defined on
$J\times \Omega$, with values in $\mathbb{K}^d$, such that: $X$ is
adapted to $(\mathcal{P}_t)$ and $(\mathcal{F}_t)$; for every $t\in J$, $X_t\in
L^2(\Omega)$; the map $t\mapsto X_t$ from $J$ to $L^2(\Omega)$
is continuous; for every $t\in I$, the quantities
$DX_t=\lim_{h\rightarrow 0^+}h^{-1} E[X_{t+h}-X_t\mid {\mathcal P}_t
]$ and $D_* X_t=\lim_{h\rightarrow 0^+} h^{-1} E[X_t-X_{t-h}\mid
{\mathcal F}_t ]$ exist in $L^2(\Omega)$; and, finally, the
maps $t\mapsto DX_t$ and $t\mapsto D_*X_t$ are continuous from
$I$ to $L^2(\Omega)$.\\
The completion of $\EuScript{C}^1_{\mathbb{K}} (J)$ for the norm $\parallel
X\parallel=\sup_{t\in I} (\parallel X_t\parallel_{L^2(\Omega)}
+\parallel DX_t\parallel_{L^2(\Omega)} +\parallel D_*
X(t)\parallel_{L^2(\Omega)} )$ is still denoted by $\EuScript{C}^1_{\mathbb{K}}(J)$,
and simply by $\EuScript{C}^1(J)$ when $\mathbb{K}=\mathbb{R}$.
\end{defi}
The quantities $D$ and $D_*$ were introduced by Edward Nelson in
his dynamical theory of Brownian diffusions
(\textit{cf.}~\cite{ne1}). Let $\iota$ be the injection $\d \iota :
\left\{\begin{array}{ccc}
C^1(J) & \longrightarrow & \EuScript{C}^1 (J) \\
f & \longmapsto & \iota(f) : (\omega,t)\mapsto f(t)
\end{array}\right.$. We write $\mathcal{P}_{det}^k:=\iota(C^k(J))$. The extension
problem consists in finding an operator $\mathcal{D}: \EuScript{C}^1(I)\to \EuScript{C}^1_{\mathbb{C}}(I)$
satisfying:
\begin{itemize}
\item[(i)] (Gluing) $\d \mathcal{D}\iota(f)_t=\frac{df}{dt}(t)$ on $\Omega$,
\item[(ii)] ($\mathbb{R}$-linearity) $\mathcal{D}$ is $\mathbb{R}$-linear,
\item[(iii)] (Reconstruction) Writing ${\mathcal D}X =A(DX ,D_* X)+iB(DX,D_* X),$ where $A$ and $B$ are $\mathbb{R}$-linear
forms, we require the map $(x,y)
\mapsto (A(x,y),B(x,y))$ from $\mathbb{R}^2$ to $\mathbb{R}^2$ to be invertible.
\end{itemize}
The operator $\mathcal{D}$ thus extends the classical derivative
(\textit{cf.}~(i)) to a linear operator (\textit{cf.}~(ii)) on
$\EuScript{C}^1(J)$, and the knowledge of $\mathcal{D}X$ determines that of $DX$ and
$D_*X$ (\textit{cf.}~(iii)). We obtain:
\begin{lemm} The only operators $\EuScript{C}^1(I)\to \EuScript{C}^1_{\mathbb{C}}(I)$ satisfying (i), (ii) and (iii)
are $${\mathcal D}_{\mu} =\d {D+D_* \over 2} +i\mu {D-D_* \over 2} ,\ \mu =\pm 1
.$$
\end{lemm}
We write $\mathcal{D}:=\mathcal{D}_1$ and
$\overline{\mathcal{D}}:=\mathcal{D}_{-1}$. Defining the iterates of
$\mathcal{D}$ and $\overline{\mathcal{D}}$ requires extending these
operators to complex-valued processes. In what follows, we
extend $\mathcal{D}$ and $\overline{\mathcal{D}}$ by $\mathbb{C}$-linearity
to complex processes, \textit{i.e.} for all $X,Y\in\EuScript{C}^1(J)$,
$\mathcal{D}(X+iY)=\mathcal{D}X+i\mathcal{D}Y$. We denote by $\EuScript{C}^n(J)$ the set
of processes $X\in \EuScript{C}^1 (J)$ such that for every
$p\in\{1,\cdots,n\}$, $\mathcal{D}^p X_t$ exists at every point of $I$.
In Definition~(\ref{bonnesdiffusions}) we exhibit a set
$\Lambda$ which shows that $\EuScript{C}^1(J)$ is not trivial; indeed,
$\mathcal{P}_{det}^1\varsubsetneq\Lambda\subset\EuScript{C}^1(J)$
(\textit{cf.}~\cite{stoc} p.~26). The computation of $\mathcal{D}^p$ combines
the quantities $D$ and $D_*$ in a nontrivial way. As an
example, on $ \EuScript{C}^2(J)$ we obtain $\d
\mathcal{D}^2=\frac{DD_*+D_*D}{2}+i\frac{D^2-D_*^2}{2}$. The real
part of $\mathcal{D}^2$ therefore coincides with the acceleration
postulated by Nelson as the most relevant quantity for describing a
notion of acceleration for a Brownian diffusion (cf.~\cite{ne1} p.~82).
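For the reader's convenience (this computation is implicit in the text), the expression of $\mathcal{D}^2$ follows from a direct expansion that keeps track of the noncommutativity of $D$ and $D_*$:
$$
\mathcal{D}^2=\left(\frac{D+D_*}{2}+i\,\frac{D-D_*}{2}\right)^2
=\frac{(D+D_*)^2-(D-D_*)^2}{4}
+i\,\frac{(D+D_*)(D-D_*)+(D-D_*)(D+D_*)}{4}
=\frac{DD_*+D_*D}{2}+i\,\frac{D^2-D_*^2}{2}.
$$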
\section{Stochastic embedding procedure}
Using $\mathcal D$, we construct stochastic analogues of
nonlinear differential operators.
\begin{defi}[Stochastic embedding]\label{stochastisation}
\label{stoca} The stochastic embedding, relative to the
extension $\mathcal{D}_{\mu}$, of an operator $O$ written
in the form $O=a_0 (\cdot,t)+a_1 (\cdot,t)\frac{d}{dt} +\dots
+ a_n (\cdot,t)\frac{d^n}{dt^n}$ with $a_i \in C^1(\mathbb{R}^d\times J)$,
$n\in\mathbb{N}^*$, is the operator $\mathcal{O} =a_0 (\cdot,t) +a_1 (\cdot,t)
{\mathcal D}_{\mu} +\dots + a_n (\cdot,t) {{\mathcal D}_{\mu}^n }$
acting on $\EuScript{C}^n(J)$.\\
An operator $O$ written in the form $ O=\frac{d}{dt}\circ
a(\cdot,t),$ $a\in C^1(\mathbb{R}^d\times J)$, is embedded as $
\mathcal{O}=\mathcal{D}_{\mu}\circ a(\cdot,t)$, acting on a
subset of $\EuScript{C}^1(J)$ that depends on certain properties
of $a$.
\end{defi}
An operator of the form $ O=\frac{d}{dt}\circ a(\cdot,t)$ can
be rewritten as $\partial_x a(\cdot,t)\frac{d}{dt}$, which then embeds
as $\partial_x a(\cdot,t)\mathcal{D}$. The latter equals
$ \mathcal{O}=\mathcal{D}_{\mu}\circ a(\cdot,t)$ only in certain cases
(cf.~\cite{stoc} p.~52). In particular, this shows that the
stochastic embedding is not a map: it depends on the
chosen written form of the operator.
The notion of embedding an operator extends naturally
to that of embedding an equation defined by an
operator $O$ of order $n$:
$O\cdot(x,\frac{dx}{dt},\cdots,\frac{d^kx}{dt^k})=0$. We define
the embedded equation by
$\mathcal{O}\cdot(X,\mathcal{D}_{\mu}X,\cdots,\mathcal{D}_{\mu}^kX)=0$, where
$X\in \EuScript{C}^{n+k}(J)$. We now turn to the Lagrangian
case.
\begin{defi}An admissible Lagrangian is a function $L: \mathbb{R}^d\times \mathbb{C}^d \to \mathbb{C}$ of class $C^1$ in
its first variable $x$, holomorphic in its second variable
$y$, and real when $y$ is real. The equation
\begin{equation}\label{EL}
\frac{d}{dt}\partial_yL\left(x(t),\frac{dx}{dt}(t)\right)=\partial_xL\left(x(t),\frac{dx}{dt}(t)\right)
\end{equation}
is called the Euler--Lagrange equation.
\end{defi}
\begin{lemm}Let $L$ be an admissible Lagrangian. The stochastic embedding of (\ref{EL}) is given by
\begin{equation}\label{ELS}
\mathcal{D}\partial_yL\left(X_t,\mathcal{D}X_t\right)=\partial_xL\left(X_t,\mathcal{D}X_t\right).
\end{equation}
\end{lemm}
It is well known that equation (\ref{EL}) derives from a least
action principle (cf.~\cite{ar} p.~84). Is there a stochastic
least action principle from which equation (\ref{ELS}) can be
obtained? We show in \cite{stoc} chap.~7 that this is indeed
the case, and we give a lemma establishing the coherence of
the embedding procedure with respect to the least action
principles thus defined.
\section{Embedded Newton equation and the Schr\"{o}dinger equation}
Consider the admissible Lagrangian
$L(x,z)=\frac{1}{2}(z_1^2+\cdots+z_d^2)-U(x)$, where $(x,z)\in
\mathbb{R}^d\times\mathbb{C}^d$ and $U$ is a function of class $C^1$.
The associated equation (\ref{EL}) is Newton's equation $\d
\frac{d^2x}{dt^2}(t)=-\nabla U(x(t))$. The embedded Newton
equation is then
\begin{equation}
\mathcal{D}^2 X_t=-\nabla U(X_t)
\end{equation}
and coincides with the embedded Euler--Lagrange
equation (\ref{ELS}). We now study a result on the
density of a process solving this equation.
Following \cite{mns} and \cite{thieu}, we give in \cite{stoc}
p.~24 a space on which the first-order derivatives $D$ and
$D_*$ and the second-order derivatives $D^2$, $DD_*$, $D_*D$
and $D_*^2$ can be computed. Take $I=]0,1[$.
Let $(W_t)_{t\in J}$ be a standard Brownian motion in $\mathbb{R}^d$
defined on a filtered probability space
$(\Omega,\mathcal{A},(\mathcal{P}_t)_{t\in J},P)$.
\begin{defi}\label{bonnesdiffusions}
We denote by $\Lambda$ the space of diffusions $X$
satisfying the following conditions:
\begin{itemize}
\item[(i)] $X$ solves on $J$ the SDE
$dX_t=b(t,X_t)dt+\sigma(t,X_t)dW_t,\quad X_0=X^0$,
where $X^0\in L^2(\Omega)$, and $b:J\times \mathbb{R}^d\to\mathbb{R}^d$ and
$\sigma:J\times \mathbb{R}^d\to\mathbb{R}^d\otimes\mathbb{R}^d$ are measurable functions
satisfying the following hypothesis: there exists a constant $K$
such that for all $x,y\in\mathbb{R}^d$,\\
$\sup_t \left(\left|\sigma(t,x)-\sigma(t,y)\right|+\left|b(t,x)-b(t,y)\right|\right)\leq
K\left|x-y\right|$ and
$\sup_t \left(\left|\sigma(t,x)\right|+\left|b(t,x)\right|\right)\leq
K(1+\left|x\right|)$,
\item[(ii)] for every $t\in J$, $X_t$ has a density $p_t(x)$ at
$x\in\mathbb{R}^d$,
\item[(iii)] setting $a_{ij}=(\sigma\sigma^*)_{ij}$, for every $i\in\{1,\cdots,n\}$, every $t_0>0$ and every
bounded open set $\Xi\subset\mathbb{R}^d$, $\quad
\int_{t_0}^1 \int_{\Xi} \left|\partial_j(a_{ij}(t,x)p_t(x))\right|dxdt <
+\infty$,
\item[(iv)] the functions $b$ and $\d (t,x)\mapsto \frac{1}{p_t(x)}\partial_j(a_{kj}(t,x)p_t(x))$
belong to $C^1(I\times\mathbb{R}^d)$, are bounded, and all their
first- and second-order derivatives are bounded.
\end{itemize}
\end{defi}
We denote by $\Lambda_{\sigma}$ (resp. $\Lambda^g$) the subset
of $\Lambda$ formed by the diffusions whose diffusion coefficient is
constant equal to $\sigma$ (resp. whose drift is a gradient), and
we set $\Lambda_{\sigma}^g:=\Lambda_{\sigma}\cap\Lambda^g$.
\begin{theoreme}
Let $X\in \Lambda$ and let $f\in C^{1,2}(I\times \mathbb{R}^d)$ be such that
$\partial_t f$, $\nabla f$ and $\partial_{ij}f$ are bounded.
Adopting the Einstein summation convention on indices, we
obtain
\begin{eqnarray}
(\mathcal{D}X_t)_k & = & \left(b_k-\frac{1}{2p_t}\partial_j(a^{kj}p_t)+\frac{i}{2p_t}\partial_j(a^{kj}p_t)\right)(t,X_t),\\
\mathcal{D} f(t,X_t) & = & \left(\partial_t f + \mathcal{D}
X_t\cdot \nabla f +\frac{i}{2}a^{kj}\partial_{kj}f\right)(t,X_t)
.\label{deriv_fonc}
\end{eqnarray}
\end{theoreme}
We set $\mathcal{S}=\{X\in\Lambda \, \mid\, \mathcal{D}^2X_t=-\nabla
U(X_t)\}$, and, for $X\in\Lambda$ with drift $b$ and
density function $p_t(x)$, $\Theta_X=(\mathbb{R}^+\times\mathbb{R}^d)\setminus
\{(t,x)\, \mid\, p_t(x)=0\}$.
If $X\in \Lambda_{\sigma}^g$, then there exist functions $R$ and
$S$, differentiable on $\Theta_X$, such that
$\d \mathcal{D}X_t=(\nabla S+i\nabla R)(X_t)$, since
$\mathcal{D}X_t=\left(b-\frac{\sigma^2}{2}\nabla
\log(p_t)+i\frac{\sigma^2}{2}\nabla \log(p_t)\right)(X_t)$ and $b$
is a gradient. We choose $R(t,x)=\frac{\sigma^2}{2}
\log(p_t(x))$. The functions $R$ and $S$ were also
introduced by Nelson in \cite{ne1} p.~107.\\
We set $A=S-iR$ and $\d \Psi_X(t,x)=e^{\frac{iA(t,x)}{\sigma^2}}$.
\begin{theoreme}
\label{schro} If $X\in\mathcal{S}\cap\Lambda_{\sigma}^g$, then
$p_t(x)=|\Psi_X(t,x)|^2$ and $\Psi_X$ satisfies on $\Theta_X$
the linear Schr\"odinger equation $\d i\sigma^2\partial_t\Psi
+\frac{\sigma^4}{2}\Delta\Psi=U\Psi$.
\end{theoreme}
\textbf{Proof.} From the expressions $\d
\Psi_X(t,x)=e^{\frac{iA(t,x)}{\sigma^2}}$ and
$R(t,x)=\frac{\sigma^2}{2}\log(p_t(x))$, we deduce\\
$|\Psi_X(t,x)|^2=p_t(x)$. The embedded Newton equation can be
written $\overline{\mathcal{D}}^2X_t=-\nabla U(X_t)$, since $U$ is
real.\\
Now $\overline{\mathcal{D}}X_t=\nabla
A(t,X_t)=-i\sigma^2\frac{\nabla\Psi}{\Psi}(t,X_t)$. Hence
$-i\sigma^2\overline{\mathcal{D}}\frac{\nabla\Psi}{\Psi}(t,X_t)=-\nabla
U(X_t)$, and with (\ref{deriv_fonc}) it follows that\\
$i\sigma^2\left(\partial_t
\frac{\partial_k\Psi}{\Psi}+\overline{\mathcal{D}}X_t\cdot\nabla
\frac{\partial_k\Psi}{\Psi} -i
\frac{\sigma^2}{2}\Delta\frac{\partial_k\Psi}{\Psi}\right)(t,X_t)=\partial_kU(X_t)$.
The Schwarz lemma (symmetry of mixed partial derivatives) gives\\
$\overline{\mathcal{D}}X_t\cdot \nabla\frac{\partial_k\Psi}{\Psi}=
-\frac{i\sigma^2}{2}\partial_k\sum_{j=1}^d\left(\frac{\partial_j\Psi}{\Psi}\right)^2$
and $\Delta \frac{\partial_k\Psi}{\Psi}=\partial_k\sum_{j=1}^d
\left[\frac{\partial_j^2\Psi}{\Psi}-\left(\frac{\partial_j\Psi}{\Psi}\right)^2\right]$,
and therefore\\
$\ i\sigma^2\partial_k\left(\frac{\partial_t\Psi}{\Psi} -i
\frac{\sigma^2}{2}\frac{\Delta\Psi}{\Psi}\right)(t,X_t)=\partial_kU(X_t)$.
Integrating both sides of the last equation over $\Theta_X$
produces constants, which can be made zero by adding a suitable
function of $t$ to $S$. The result follows. $\square$
The real part of the embedded Newton equation coincides with
the stochastic Newton equation proposed by Nelson in his
dynamical theory of Brownian diffusions (\cite{ne1} p.~83).
Its imaginary part corresponds to the equation $(D^2-D_*^2)X=0$.
We conjecture that the latter forces the drift of $X$ to be a
gradient, so that this assumption is not needed in
Theorem~(\ref{schro}).
\section{INTRODUCTION}
Evolutionary game theory is a broadly used framework to understand how cooperation emerges among selfish individuals who would prefer defection individually \cite{sigmund2010calculus}. This conflict is the key obstacle when life steps onto a higher level at different stages of evolution \cite{nowak2004emergence,maynard_95,nowak2006evolutionary}. In the last decades, several mechanisms have been identified to explain this process \cite{nowak2006five}. One of them is network reciprocity which has collected significant research interest due to its broad occurrence in realistic situations \cite{szabo_pr07,perc2017statistical,roca_plr09,perc2013evolutionary}.
To explore the possible consequences of permanent and limited interactions, evolutionary graph theory was proposed \cite{lieberman2005evolutionary,allen2014games}, and the evolution of cooperation has been studied on various graphs including
isothermal graphs \cite{allen2019evolutionary}, temporal graphs \cite{li2020evolution}, heterogeneous graphs \cite{mcavoy2020social}, multilayer graphs \cite{su2022evolutionmultilayer}, and directed graphs \cite{su2022evolutionasymmetric}. It is generally believed that structured populations often promote cooperation \cite{lieberman2005evolutionary,su2022evolutionmultilayer,nowak1992evolutionary,ohtsuki2006simple}, but not always \cite{hauert2004spatial,su2019spatial}.
The core assumption of evolutionary dynamics is that individuals tend to imitate the strategy with a higher payoff. The general sensitivity of individuals to this difference is characterized by the strength of selection. Accordingly, models can use strong \cite{nowak1992evolutionary}, intermediate \cite{szabo1998evolutionary,szabo2002phase,wang2021public,wang2022modeling,wang2022between}, or weak selection scenarios \cite{lieberman2005evolutionary}. On the one hand, lab experiments indicated an intermediate selection strength in human populations \cite{rand_pnas13,zisis_srep15}. On the other hand, one may claim that the weak selection assumption is less relevant because it almost neglects the driving force of evolution. However, the rationality behind the weak selection assumption is that various factors contribute to an individual's fitness, and the fruit of game interactions is just one of these factors \cite{ohtsuki2006simple}. Furthermore, this assumption makes calculations analytically feasible, thus becoming an attractive playground for theoretical approaches \cite{ohtsuki_jtb06,wild_jtb07}. A key question for these calculations is to identify the threshold of ``dilemma strength'' over which cooperation is favored. For two-player games \cite{lieberman2005evolutionary}, calculations are now available for any population structure \cite{allen2017evolutionary}.
More generally, remarkable results have also been obtained for the analytical threshold favoring cooperation in multiplayer games. For the public goods game \cite{hauser2019social}, Li {\it et al.} \cite{li2014cooperation} deduced the threshold favoring cooperation on random regular graphs, and Su {\it et al.} \cite{su2018understanding,su2019spatial} deduced the threshold favoring cooperation on transitive graphs. Some scholars also argue that it is natural to study multiplayer games on hypergraphs \cite{burgio2020evolution,alvarez2021evolutionary}, but such a perspective remains to be explored.
While the selection strength determines an individual's sensitivity to a higher payoff when considering alternative strategies, a player's willingness to change its actual strategy is an independent factor. This aspect has been studied from different angles, such as strategy learning capacity \cite{szolnoki_epl07,chen_xj_ijmpc08,szolnoki_csf20b}, behavioral inertia \cite{szabo1998evolutionary,szolnoki_pre09,liu2010effects,zhang2011inertia,du2012effects,chang2018cooperation}, overconfidence \cite{johnson2011evolution,li2016coevolution,szolnoki2018reciprocity}, and stubbornness \cite{cimpeanu_srep22,szolnoki_pre14b,cimpeanu_kbs21}. Conceptually, it can be related to self-loops of the nodes \cite{tkadlec_ncom21,tkadlec_pcbi20} (note, however, that the present work does not introduce self-loops at the graph level but treats the focal weight as an independent factor). The common idea of these concepts is to introduce the focal agent's own weight into the strategy updating. When this weight is high, agents have higher inertia or overconfidence and are thus more reluctant to change strategy. Importantly, there is a significant difference between the selection strength and the focal weight: while the consequence of selection strength is bidirectional, enhancing (weakening) reproduction activity for a higher (lower) payoff, the impact of focal weight on strategy updating is unidirectional, simply decreasing its probability.
At first sight, introducing the same focal weight value for all individuals seems to be a strategy-neutral modification; its consequence for the competition of strategies is therefore not obvious. Indeed, some previous works revealed that a moderate focal weight in strategy updating can promote cooperation \cite{liu2010effects,du2012effects,chang2018cooperation}. These studies, however, applied numerical simulations to structured populations or theoretical analysis to well-mixed populations. In this work, we provide a theoretical analysis for a finite structured population by utilizing the so-called identity-by-descent (IBD) method of evolutionary graph theory \cite{allen2014games,su2019spatial}.
Our principal goal is to analytically explore the impact of the focal weight concept on the evolution of cooperation in structured populations. From this viewpoint, using the so-called death-birth strategy update is a logical choice because, traditionally, this protocol completely ignores the status of the focal player, which can be considered the zero-weight limit. In the generalized case, by introducing a nonzero weight, we can gradually depart from the classic dynamics and reveal the consequences of the modified dynamical rule. Technically, we apply a concept similar to the one introduced by Su {\it et al.} \cite{su2019spatial}, who considered the focal weight during the interactions. In our case, however, the focal weight determines the strategy update probability, not the payoff values originating from interactions. In the following, we define our model, in which the introduction of focal weight can be considered an extension of the classic death-birth dynamical rule.
\section{MODEL}\label{def}
\subsection{Joint transitive graphs and random walks}\label{graphs}
The population structure can be described by an interaction graph $\mathcal{G}_I$ and a dispersal graph $\mathcal{G}_R$. They are joint, which means they share the same node set $V=\{1,2,\dots,N\}$. Each node represents a player, where the population size is $N$. The joint interaction and dispersal graphs are both transitive: for all nodes $i$ and $j$, there is an isomorphism that transforms $i$ into $j$ \cite{su2019spatial,taylor2007evolution,debarre2014social}. Intuitively speaking, the nodes are uniform in perceiving the whole network structure. Common transitive graphs include but are not limited to ring networks, lattices with periodic boundary conditions, and fully connected populations.
Players play the games on the interaction graph $\mathcal{G}_I$ and update strategies on the dispersal graph $\mathcal{G}_R$. This work focuses on strategy updating; hence we simplify the interaction graph by assuming it is unweighted. Due to transitiveness, each node has the same degree; hence they all have $k$ neighbors on $\mathcal{G}_I$. Given a node, each link on the interaction graph has the same weight of $1/k$.
On the contrary, we assume a weighted dispersal graph. For a node $i$, each link to a neighbor $j$ on $\mathcal{G}_R$ may have a different weight, denoted by $e_{ij}$, yielding $\sum_{j\in V}e_{ij}=1, \forall i\in V$. In addition, we assume symmetry ($e_{ij}=e_{ji}, \forall i,j\in V$, i.e., all graphs are undirected in this work) and self-loop is excluded ($e_{ii}=0, \forall i\in V$) both for the dispersal graph and the unweighted interaction graph.
To quantify the payoff values obtained from the game, we define $(n,m)$-random walk on the joint graphs with $n$ steps on $\mathcal{G}_I$ and $m$ steps on $\mathcal{G}_R$ (no sequential requirement) \cite{su2019spatial,debarre2014social}. The probability that an $(n,m)$-random walk ends at the starting node is denoted by $p^{(n,m)}$. The probability that an $(n,m)$-random walk ends at a cooperative player is denoted by $s^{(n,m)}$. The expected payoff of players where an $(n,m)$-random walk ends is denoted by $\pi^{(n,m)}$. Because of transitivity, we can use the same notations of $p^{(n,m)}$, $s^{(n,m)}$, and $\pi^{(n,m)}$ for all nodes over stationary distribution. Next, we define the applied game which determines the payoff values of players.
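As an illustrative numeric check (not part of the paper's formalism), $p^{(n,m)}$ can be computed explicitly in the simplest case where $\mathcal{G}_I$ and $\mathcal{G}_R$ are assumed to be the same unweighted cycle, so every step uses the same transition matrix and $p^{(n,m)}$ reduces to the return probability of an ordinary $(n+m)$-step walk:

```python
import numpy as np

def return_probability(N, steps):
    """Probability that a 'steps'-step random walk on an unweighted cycle
    of N nodes ends at its starting node (the same for every node, by
    transitivity)."""
    P = np.zeros((N, N))
    for i in range(N):
        P[i, (i - 1) % N] = 0.5  # step to the left neighbor
        P[i, (i + 1) % N] = 0.5  # step to the right neighbor
    return np.linalg.matrix_power(P, steps)[0, 0]
```

For joint graphs with distinct weighted dispersal edges, the same idea applies with two transition matrices, one per graph.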
\subsection{Playing games on the interaction graph}
According to the evolutionary protocol, we randomly select a player to update its strategy.
The selected focal player plays $k$ games with its neighbors on the interaction graph. This work focuses on the simplest two-player game, the donation game \cite{ohtsuki2006simple}. In each donation game, players can adopt one of the two strategies: cooperation ($C$) or defection ($D$). Cooperation means donating $c$ to the recipient, and the other player receives an enlarged benefit $b$ ($b>c$). Alternatively, a defector player donates nothing to the partner. The total payoff of a player is the average over the $k$ games with its neighbors.
By using the terminology of random walks, we can write the expected payoff of the focal player where an $(n,m)$-random walk ends as
\begin{equation}\label{pitwoplayer}
\pi^{(n,m)}=-cs^{(n,m)}+bs^{(n+1,m)}.
\end{equation}
Here the first term is the donation of the focal player while the second term is the benefit originating from neighbors on the interaction graph.
\subsection{Updating strategies on the dispersal graph}
Having determined the payoff values of involved players, the focal player updates its strategy by the generalized death-birth rule with the consideration of fitness. We assume the fitness $F_i$ of player $i$ is calculated by $F_i=1-\delta+\delta\pi_i$ \cite{su2019spatial,su2018understanding}, where $\pi_i$ is the payoff of player $i$ and $\delta$ is the strength of selection. This work assumes weak selection in the $\delta\to 0$ limit.
Importantly, we propose a new parameter, $w$ ($0\leq w<1$), to measure the focal player's weight in the strategy updating process. When comparing fitness, the focal player measures its own fitness with weight $w$, and the fitness of other players with weight $1-w$. Intuitively, a greater $w$ implies less motivation to change the strategy.
According to the extended dynamical rule, the strategy updating probability depends not only on the neighbors' fitness, but also on the fitness of the focal player via appropriate weight factors. More precisely, the focal player $i$ copies the strategy of neighboring player $j$ with a probability
\begin{equation}\label{groupupdate}
P(i\gets j)=\frac{(1-w)e_{ij}F_j}{wF_i+(1-w)\sum_{l\in V}e_{il}F_l},~~~~\mbox{for }j\in V.
\end{equation}
Otherwise, player $i$ does not change its strategy. As we argued previously, Eq.~(\ref{groupupdate}) recovers the classic death-birth rule in the $w=0$ limit, where the state of the focal player $i$ plays no role. In the opposite limit $w\to 1$, the focal player keeps its original strategy and the system remains trapped in the initial state. Between these extreme cases, when $0<w<1$, we can explore how a nonzero weight value (i.e., a certain unwillingness of players to change strategies) may influence the evolution of cooperation in a structured population.
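A minimal simulation sketch of the update rule in Eq.~(\ref{groupupdate}) (illustrative only; the helper name is ours): given the focal player's fitness, its neighbors' fitnesses, and the dispersal edge weights, return the probability of copying each neighbor, with the leftover probability mass being the chance of keeping the current strategy.

```python
def copy_probabilities(F_i, neighbor_F, neighbor_e, w):
    """Probabilities that focal player i copies each neighbor j under the
    generalized death-birth rule: P(i <- j) proportional to (1-w) e_ij F_j,
    with the focal fitness entering the normalization with weight w."""
    Z = w * F_i + (1 - w) * sum(e * F for e, F in zip(neighbor_e, neighbor_F))
    probs = [(1 - w) * e * F / Z for e, F in zip(neighbor_e, neighbor_F)]
    stay = 1 - sum(probs)  # probability of keeping the current strategy
    return probs, stay
```

At $w=0$ the stay-probability vanishes for equal fitnesses, recovering the classic death-birth rule; as $w$ grows, probability mass shifts toward keeping the current strategy.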
\section{Theoretical analysis}
\subsection{The general condition of cooperation success}\label{seccondi}
In the following, we employ the previously mentioned IBD method to determine the necessary condition for successful cooperator spread \cite{nowak2010evolution,allen2014games}. In the low mutation limit $\mu\to 0$, where $\mu$ denotes the mutation rate, the condition favoring cooperation over defection has the following form \cite{nowak2010evolution}:
\begin{equation}\label{bd}
\left\langle\frac{\partial}{\partial\delta}(\mathcal{B}_i-\mathcal{D}_i)\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}>0,
\end{equation}
where the focal player $i$ is the only initial cooperative player in the system. Here, $\mathcal{B}_i$ denotes the probability that player $i$ reproduces its strategy, and $\mathcal{D}_i$ denotes the probability that player $i$ is replaced. Moreover, $\left\langle\cdot\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}$ means the average over the stationary distribution under neutral drift with a single cooperator player $i$.
The main goal of our theoretical analysis is to provide an analytical threshold for cooperation success under the generalized death-birth rule. This can be done by calculating the condition~(\ref{bd}). Evidently, a lower threshold means an easier condition for cooperation to spread.
In addition, we utilize the low mutation expansion taken from Ref.~\cite{allen2014games}, which is generally valid on transitive graphs. Namely,
\begin{equation}\label{equa}
s^{(n,m)}-s^{(n,m+1)}=\frac{\mu}{2}(Np^{(n,m)}-1)+\mathcal{O}(\mu^2),
\end{equation}
where the last term $\mathcal{O}(\mu^2)$ can be neglected.
\subsection{Application to the generalized death-birth rule}
As we stressed, in the generalized death-birth rule, we consider the status of the focal player via a weight factor, which has importance when directly calculating the condition~(\ref{bd}). But first, we need to clarify the following terms.
The ``Death'' of player $i$ can be described as follows. When choosing the focal player, player $i$ is selected with probability $1/N$. Afterwards, player $i$ adopts the strategy of a neighboring player $j$ with the probability given by Eq.~(\ref{groupupdate}).
Alternatively, the ``Birth'' process of player $i$ is the following. To be the focal player, a neighbor $j$ of player $i$ is selected with probability $1/N$. Then, the focal player $j$ adopts the strategy of player $i$ with the probability given by Eq.~(\ref{groupupdate}).
Therefore, $\mathcal{D}_i$ and $\mathcal{B}_i$ can be written as
\begin{subequations}\label{bdgroup}
\begin{align}
\mathcal{D}_i&=\frac{1}{N}\frac{(1-w)\sum_{j\in V}e_{ij}F_j}{wF_i+(1-w)\sum_{l\in V}e_{il}F_l}, \\
\mathcal{B}_i&=\frac{1}{N}\sum_{j\in V}\frac{(1-w)e_{ji}F_i}{wF_j+(1-w)\sum_{l\in V}e_{jl}F_l}.
\end{align}
\end{subequations}
By using these two terms and considering that the fitness is $F=1-\delta+\delta\pi$, the requested condition~(\ref{bd}) can be calculated as follows:
\begin{align}\label{condigroup}
&\left\langle\frac{\partial}{\partial\delta}(\mathcal{B}_i-\mathcal{D}_i)\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}>0\nonumber
\\
\Leftrightarrow&~\frac{1-w}{N}\left(
\left\langle\pi_i\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}
-w\left\langle \sum_{j\in V}e_{ji}\pi_j\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}\right.
\nonumber\\
&\left.-(1-w)\left\langle \sum_{j\in V}e_{ji}\sum_{l\in V}e_{jl}\pi_l\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}
\right)\nonumber
\\
&-\frac{1-w}{N}\left(-w
\left\langle\pi_i\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}
+w\left\langle \sum_{l\in V}e_{il}\pi_l\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}
\right)>0\nonumber
\\
\Leftrightarrow&\left\langle\pi_i\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}
-\frac{2w}{1+w}\left\langle \sum_{j\in V}e_{ij}\pi_j\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}\nonumber\\
&-\frac{1-w}{1+w}\left\langle \sum_{j,l\in V}e_{ji}e_{jl}\pi_l\right\rangle_{\begin{smallmatrix}\delta=0\\s_i=C\end{smallmatrix}}>0.
\end{align}
Player $i$ being the starting node of the random walk, Eq.~(\ref{condigroup}) can be written as
\begin{equation}\label{walkcondigroup}
\pi^{(0,0)}-\frac{2w}{1+w}\pi^{(0,1)}-\frac{1-w}{1+w}\pi^{(0,2)}>0,
\end{equation}
which is a specific form of the condition~(\ref{bd}). To get an explicit form for the threshold value, we first need to transform the expression of Eq.~(\ref{equa}) in the following way:
\begin{align}\label{equagroup}
&~s^{(n,m)}-\frac{2w}{1+w}s^{(n,m+1)}-\frac{1-w}{1+w}s^{(n,m+2)}\nonumber\\
=&~\frac{2w}{1+w}\left(s^{(n,m)}-s^{(n,m+1)}\right)\nonumber\\
&+\frac{1-w}{1+w}\left(s^{(n,m)}-s^{(n,m+1)}+s^{(n,m+1)}-s^{(n,m+2)}\right)\nonumber\\
=&~\frac{\mu}{2}\left(Np^{(n,m)}+\frac{1-w}{1+w}Np^{(n,m+1)}-\frac{2}{1+w}\right)\nonumber\\
&+\frac{2}{1+w}\mathcal{O}(\mu^2),
\end{align}
where the last term proportional to $\mathcal{O}(\mu^2)$ can be neglected. In the following, we utilize the simplified payoff structure of the donation game.
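The identity behind Eq.~(\ref{equagroup}) can be verified numerically: generate $s^{(n,m+1)}$ and $s^{(n,m+2)}$ from Eq.~(\ref{equa}) with the $\mathcal{O}(\mu^2)$ term dropped, then compare both sides. The Python sketch below uses arbitrary test inputs (all names are illustrative):

```python
def check_identity(N, mu, w, p_m, p_m1, s_m=0.37):
    """Return (lhs, rhs) of Eq. (equagroup) for arbitrary inputs.

    s_m is an arbitrary starting value; the next two s values are built
    from Eq. (equa) without the O(mu^2) correction.
    """
    s_m1 = s_m - mu / 2 * (N * p_m - 1)
    s_m2 = s_m1 - mu / 2 * (N * p_m1 - 1)
    lhs = s_m - 2 * w / (1 + w) * s_m1 - (1 - w) / (1 + w) * s_m2
    rhs = mu / 2 * (N * p_m + (1 - w) / (1 + w) * N * p_m1 - 2 / (1 + w))
    return lhs, rhs
```

The two sides agree for any choice of $N$, $\mu$, $w$, and the return probabilities, confirming the algebraic rearrangement.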
\subsection{Theoretical threshold for donation game}\label{secdgtheo}
To obtain the requested threshold value for the donation game, we start from Eq.~(\ref{walkcondigroup}), transform $\pi^{(n,m)}$ to $s^{(n,m)}$ by using Eq.~(\ref{pitwoplayer}), and substitute $s^{(n,m)}$ with $p^{(n,m)}$ by using Eq.~(\ref{equagroup}). That is,
\begin{align}\label{calcugrouptwo}
&~\pi^{(0,0)}-\frac{2w}{1+w}\pi^{(0,1)}-\frac{1-w}{1+w}\pi^{(0,2)}>0\nonumber
\\
\Leftrightarrow&
~\left(-cs^{(0,0)}+bs^{(1,0)}\right)-\frac{2w}{1+w}\left(-cs^{(0,1)}+bs^{(1,1)}\right)\nonumber\\
&-\frac{1-w}{1+w}\left(-cs^{(0,2)}+bs^{(1,2)}\right)>0\nonumber
\\
\Leftrightarrow&
-c\left(s^{(0,0)}-\frac{2w}{1+w}s^{(0,1)}-\frac{1-w}{1+w}s^{(0,2)}\right)\nonumber\\
&+b\left(s^{(1,0)}-\frac{2w}{1+w}s^{(1,1)}-\frac{1-w}{1+w}s^{(1,2)}\right)>0\nonumber
\\
\Leftrightarrow&
-c\left(Np^{(0,0)}+\frac{1-w}{1+w}Np^{(0,1)}-\frac{2}{1+w}\right)\nonumber\\
&+b\left(Np^{(1,0)}+\frac{1-w}{1+w}Np^{(1,1)}-\frac{2}{1+w}\right)>0.
\end{align}
It is easy to see that $p^{(0,0)}=1$, because one stays at the original position in the absence of movement. Similarly, $p^{(1,0)}=p^{(0,1)}=0$: since self-loops are not allowed, one cannot leave and return to the initial node within a single step.
The calculation of $p^{(1,1)}$, however, is case-dependent. The general case, where the two graphs overlap in an arbitrary way, is discussed in Sec.~\ref{extension}. Here, we consider the common situation when a node shares the same neighbors on the interaction and dispersal graphs. To characterize the local structure, we calculate the so-called Simpson degree \cite{allen2013spatial}. As we noted, there are $k$ neighbors to choose from on the interaction graph, each chosen with the same probability $1/k$. In the present case, these neighbors are also neighbors on the dispersal graph. The probability of a step between node $i$ and a neighboring node $l$ is $e_{li}$. Therefore, $p^{(1,1)}=\sum_{l\in V}{1/k\times e_{li}}=1/k$. Using these $p^{(n,m)}$ values, we can calculate the threshold value, yielding:
\begin{equation}\label{pointgrouptwo}
\frac{b}{c}>\frac{N-2+Nw}{N-2k-Nw}k\equiv
\left(\frac{b}{c}\right)^*,
\end{equation}
where the expression on the right-hand side of ``$>$'' is denoted by $(b/c)^*$, and ``$>$'' holds if $(b/c)^*>0$. Here, the value of $(b/c)^*$ identifies the threshold over which cooperation is favored. When $(b/c)^*<0$, the inequality in Eq.~(\ref{pointgrouptwo}) reverses to $b/c<(b/c)^*$, which means cooperation is unreachable because $b/c>0$ always holds.
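Equation~(\ref{pointgrouptwo}) is straightforward to evaluate numerically. The short Python helper below (with illustrative naming) matches, up to rounding, the threshold values quoted in Sec.~\ref{numeric}:

```python
def bc_threshold(N, k, w):
    """(b/c)* of Eq. (pointgrouptwo); a negative value means cooperation
    cannot be favored for any b > c > 0."""
    return (N - 2 + N * w) * k / (N - 2 * k - N * w)
```

For example, on a lattice with $N=36$ and $k=4$, the thresholds for $w=0$, $0.4$, $0.6$, $0.8$ are approximately $4.86$, $14.24$, $34.75$, and $-314$.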
\begin{figure}
\centering
\includegraphics
[width=0.47\textwidth]
{fig1.pdf}
\caption{The $(b/c)^*$ threshold for the success of cooperation as a function of $w$ described by Eq.~(\ref{pointgrouptwo}) with $k=4$. Panel~(a) shows a system of $N=36$ players. The threshold has a tipping point at $w^\star=1-2k/N=7/9$. If $w<w^\star$, then $(b/c)^*>0$, and cooperation is favored when $b/c>(b/c)^*$. If $w>w^\star$, then $(b/c)^*<0$; therefore, cooperation can never be reached. Panel~(b) shows the cases of $N=400$ and $N\to \infty$. When $N=400$, we have $w^\star=0.98$. In the range of $w<0.9$, cooperation is favored if $b/c>(b/c)^*$. The curves are close to each other, signaling that a population of $N=400$ players represents a sufficiently large system size for the approximation.}
\label{figtwoanaly}
\end{figure}
To give a deeper insight into how the threshold values depend on the weight factor, we consider a specific topology of square lattices with the von~Neumann neighborhood ($k=4$). The $w$ dependence of $(b/c)^*$ is shown in Fig.~\ref{figtwoanaly}, as calculated by Eq.~\eqref{pointgrouptwo}. Panel~(a) depicts the case where the population size is $N=36$. Here, we can detect a tipping point in $(b/c)^*$ at $w=w^\star$. When $w<w^\star$, the evolution favors cooperation if $b/c>(b/c)^*$. As panel (a) shows, the threshold $(b/c)^*$ increases by increasing $w$, which means favoring cooperation becomes more difficult for a larger focal weight. When $w>w^\star$, cooperation would be favored if $b/c<(b/c)^*$, which means cooperation is unreachable due to the $b>c>0$ constraint of the donation game. At these parameters, the tipping point is at $w^\star=7/9$, marked by a vertical dotted line in Fig.~\ref{figtwoanaly}(a).
The position of the $w^\star$ tipping point, where the value of $(b/c)^*$ flips from positive to negative infinity, can be given by the following form:
\begin{equation}\label{tipping}
w^\star=1-\frac{2k}{N}.
\end{equation}
This formula shows that $0<w^\star<1$ always holds when $k<N/2$. In this case, there is always a tipping point if cooperation is favored under the classic death-birth process (i.e., $(b/c)^*>0$ at $w=0$). Otherwise, when $k>N/2$, we have $w^\star<0$ and cooperation is unreachable, both for the traditional ($w=0$) and the generalized ($w>0$) updating rules.
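Eq.~(\ref{tipping}) and its role as the pole of $(b/c)^*$ can be checked directly; the helpers below are an illustrative, self-contained sketch:

```python
def w_star(N, k):
    # Tipping point of Eq. (tipping): (b/c)* diverges as w -> w_star.
    return 1 - 2 * k / N

def bc_threshold(N, k, w):
    # Finite-N threshold, Eq. (pointgrouptwo)
    return (N - 2 + N * w) * k / (N - 2 * k - N * w)
```

Just below $w^\star$ the threshold is a large positive number, and just above it the threshold flips to a large negative number, matching the tipping behavior in Fig.~\ref{figtwoanaly}(a).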
Next, we apply a sufficiently large system size, $N=400$, as depicted in Fig.~\ref{figtwoanaly}(b). Here the tipping point is at $w^\star=0.98$, which is very close to the $w=1$ limit. It practically means that cooperation can be reached for almost all $w$ values. The solid line for this system size shows that the $(b/c)^*$ threshold increases monotonically with $w$. Therefore, increasing the focal weight, i.e., strengthening players' willingness to keep their original strategies, makes cooperation harder.
For comparison, we also present the threshold values for the $N\to \infty$ limit. The analytical formula of the $(b/c)^*$ threshold value for $N\to \infty$ can be written as
\begin{equation}\label{largepointgrouptwo}
\left(\frac{b}{c}\right)^*_{N\to\infty}=\frac{1+w}{1-w}k,
\end{equation}
which is a generalization of the well-known $b/c>k$ rule \cite{ohtsuki2006simple}. The dashed line in Fig.~\ref{figtwoanaly}(b) shows this function. We can see that the curves of $N=400$ and $N\to \infty$ are close to each other for almost all $w$ values, indicating that $N=400$ can be considered a large population in the weak-selection limit, especially when $w$ is small. For example, the difference is less than 5\% when $w<0.6$.
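The approach of the finite-$N$ threshold to Eq.~(\ref{largepointgrouptwo}) can also be verified numerically; the sketch below (illustrative names) checks the quoted deviation bound at $N=400$ for small $w$:

```python
def bc_threshold(N, k, w):
    # Finite-N threshold, Eq. (pointgrouptwo)
    return (N - 2 + N * w) * k / (N - 2 * k - N * w)

def bc_threshold_inf(k, w):
    # N -> infinity limit, Eq. (largepointgrouptwo); generalizes b/c > k
    return (1 + w) / (1 - w) * k
```

At $w=0$ the limit recovers the classic $b/c>k$ rule, and for $N=400$, $k=4$ the relative deviation from the infinite-population formula stays below 5\% up to $w=0.6$.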
\section{Numerical simulation}\label{numeric}
To support our analytical predictions, we present the results of Monte Carlo (MC) simulations, also using the $L \times L$ square lattice topology with periodic boundary conditions. Accordingly, the population contains $N=L^2$ players in total. Following the standard simulation protocol, we randomly assign each agent's strategy as cooperation or defection, which provides a fraction $\rho_C\approx 0.5$ of cooperative agents when we launch the evolution.
During an elementary step, we randomly select a focal agent who plays the donation game with the four nearest neighbors. The average of the resulting payoff values determines the fitness of this agent according to the weak selection approach. The payoff and fitness values of neighbors are calculated in the same manner. After, the focal player adopts the strategy of a neighbor with the probability determined by the extended death-birth rule of Eq.~\eqref{groupupdate}. For a full MC step, we repeat the above-described procedure $N$ times, which ensures that each agent is selected once on average.
An independent run contains up to $4\times 10^5$ full MC steps. This relaxation time can be considered sufficiently long \cite{allen2017evolutionary} because, under a weak selection strength, the system can easily reach full cooperation ($\rho_C=1$) or full defection ($\rho_C=0$) absorbing states. If the system does not fix within the period mentioned above, then we take the portion of cooperative agents $\rho_C$ at the last time step as the result. To get reliable statistics, we perform independent runs $10^4-10^6$ times (depending on the system size) and average them. The resulting $\langle \rho_C\rangle$ values obtained for different weight factors and selection strengths are summarized in Fig.~\ref{figtwonume}.
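The simulation protocol described above can be condensed into a short Python sketch. All names and parameters below are illustrative, and for brevity no early exit at the absorbing states is implemented:

```python
import random

def run_mc(L=6, b=5.0, c=1.0, w=0.4, delta=0.01, steps=200, seed=0):
    """Donation game on an L x L periodic lattice with the generalized
    death-birth rule of Eq. (groupupdate); returns the final cooperator
    fraction. A sketch, not the full production protocol."""
    rng = random.Random(seed)
    N = L * L
    s = [rng.randint(0, 1) for _ in range(N)]  # 1 = cooperator, 0 = defector

    def neigh(i):
        # von Neumann neighborhood with periodic boundaries
        x, y = i % L, i // L
        return [(x + 1) % L + y * L, (x - 1) % L + y * L,
                x + ((y + 1) % L) * L, x + ((y - 1) % L) * L]

    def fitness(i):
        # average donation-game payoff over the four neighbors, weak selection
        pay = sum(-c * s[i] + b * s[j] for j in neigh(i)) / 4.0
        return 1 - delta + delta * pay

    for _ in range(steps * N):      # 'steps' full MC steps
        i = rng.randrange(N)        # random focal player
        nb = neigh(i)
        denom = w * fitness(i) + (1 - w) * sum(fitness(j) / 4.0 for j in nb)
        r, acc = rng.random(), 0.0
        for j in nb:
            acc += (1 - w) * (fitness(j) / 4.0) / denom
            if r < acc:             # copy neighbor j's strategy
                s[i] = s[j]
                break               # otherwise i keeps its strategy
    return sum(s) / N
```

Note that in the $w=1$ limit the copy probability vanishes, so the population stays frozen in its initial state, as argued after Eq.~(\ref{groupupdate}).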
\begin{figure}
\centering
\includegraphics
[width=0.48\textwidth]
{fig2.pdf}\\
\caption{MC simulations on square lattices of different sizes where the $b/c$ control parameter is varied by keeping $c=1$ fixed. The fractions of cooperators are plotted for different focal weight values, as indicated in the legend. Panel~(a) shows the results of $L=6$ linear size, where we average over $10^6$ independent runs. Panel~(b) depicts the results obtained for $L=20$, where we average over $10^4$ independent runs. In both panels we use $\delta=0.01$ selection strength, but the inset of panel~(a) shows results obtained for $\delta=0.0001$. Dashed vertical lines represent the position of theoretical threshold level for cooperation success. The numerical results confirm the analytical predictions for all weight values.}\label{figtwonume}
\end{figure}
Figure~\ref{figtwonume}(a) shows the results obtained for a $6\times 6$ lattice, where we applied the weight factors $w=0$, $0.4$, $0.6$, and $0.8$. The benefit-to-cost ratio is varied by increasing $b$ while $c=1$ remains fixed. As expected, by increasing $b/c$, the fraction of cooperators grows for small weight factors. The critical $(b/c)^*$ value is identified where $\langle \rho_C\rangle$ exceeds 0.5. For comparison, we also mark by vertical dashed lines the positions of the $(b/c)^*$ threshold values obtained from Eq.~\eqref{pointgrouptwo}. These values are $(b/c)^*=4.85$, $14.23$, and $34.75$ for $w=0$, $0.4$, and $0.6$, respectively. For completeness, we also study the $w=0.8$ case, which is beyond the $w^\star$ tipping point for this system size. Our theory predicts $(b/c)^*=-314$ here. Indeed, as the inset of panel~(a) illustrates, $\langle \rho_C\rangle$ decreases by increasing $b/c$, and the evolution favors cooperation only when $b/c<(b/c)^*$. We can see that the cooperation-favored parameter areas are consistent with those shown in Fig.~\ref{figtwoanaly}(a) for the same parameter values.
According to our theory, a $20\times 20$ system size can be considered comparable to the large population limit, where the tipping point is very close to $w=1$ and the system behavior is qualitatively similar for all the mentioned $w$ values. To check this, we also present results obtained for this system size. Our observations, shown in Fig.~\ref{figtwonume}(b), confirm that the system behaves similarly for all studied $w$ values and $\rho_C$ always increases as we increase $b/c$. The theoretical threshold values are $(b/c)^*=4.06$, $9.62$, $16.79$, and $39.89$ for $w=0$, $0.4$, $0.6$, and $0.8$, respectively. Vertical dashed lines mark these values, which are consistent with the $b/c$ values where $\langle \rho_C\rangle$ exceeds 0.5 in our numerical simulations.
The reason why we can link the theoretical $(b/c)^*$ value to the location where $\rho_C$ exceeds 0.5 is the following. It is a well-known result that in the $\delta=0$ limit, under neutral drift, the system eventually terminates in one of the homogeneous absorbing states, and the probability of reaching the full-cooperation state depends on the initial fraction $N_C/N$ of cooperators \cite{cox_ap83,cox_ap86,nowak2004emergence}. For example, our theoretical analysis considers the case of starting with $N_C=1$ cooperative agent, which means the system achieves full cooperation with probability $1/N$ when $\delta=0$; hence, the sign of cooperation success is $\langle \rho_C\rangle>1/N$. Meanwhile, whether the system favors cooperation is independent of the initial state. Therefore, once we deduce the condition of cooperation success, this condition also indicates that, starting from $N_C$ cooperators, the probability of ending in full cooperation satisfies $\langle \rho_C\rangle>N_C/N$ \cite{chen2013sharp}. In our numerical simulation, we initially assign each agent's strategy at random, so $N_C/N\approx 0.5$; therefore, $\langle \rho_C\rangle>0.5$ is the direct sign of the evolution favoring cooperation.
\section{Extension to different graphs and games}\label{extension}
Until this point, we assumed that the interaction and dispersal graphs overlap, where square lattices with the von Neumann neighborhood provided a testable topology. In the following, we relax this strong restriction to check the robustness of our observations and to see how the focal weight changes the system behavior. Moreover, we also consider the model in alternative games.
First, we show the robustness to different graphs. We still keep the basic assumptions: the interaction graph is unweighted, self-loops are excluded, and the joint graphs are transitive. In this way, the values $p^{(0,0)}=1$, $p^{(1,0)}=0$, and $p^{(0,1)}=0$ remain unchanged. Determining $p^{(1,1)}$, however, remains case-dependent. Therefore, the general solution of Eq.~(\ref{calcugrouptwo}) is
\begin{equation}\label{pointgrouptwogeneral}
\left(\frac{b}{c}\right)^*=\frac{(1+w)N-2}{(1-w)Np^{(1,1)}-2},
\end{equation}
where $0\leq p^{(1,1)}\leq 1$ because $p^{(1,1)}$ denotes a probability. The general form of the tipping point $w^\star$ is
\begin{equation}\label{tippinggeneral}
w^\star=1-\frac{2}{Np^{(1,1)}}.
\end{equation}
From Eq.~(\ref{pointgrouptwogeneral}) and Eq.~(\ref{tippinggeneral}), the tipping point $w^\star$ exists between 0 and 1 when $p^{(1,1)}>2/N$. In this case, the threshold $(b/c)^*$ increases with $w$ in $0\leq w< w^\star$ and flips from positive to negative infinity at $w=w^\star$. When $p^{(1,1)}<2/N$, we have $w^\star<0$, and cooperation is never favored.
In sum, the statement holds given $0\leq p^{(1,1)}\leq 1$, which verifies the robustness of the conclusions to dispersal graphs with arbitrary edge weights. The robustness also holds on interaction and dispersal graphs overlapping in arbitrary ways.
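As a consistency check, Eqs.~(\ref{pointgrouptwogeneral}) and (\ref{tippinggeneral}) reduce to the square-lattice results of Eqs.~(\ref{pointgrouptwo}) and (\ref{tipping}) when $p^{(1,1)}=1/k$. The Python sketch below (illustrative names) verifies this:

```python
def bc_threshold_general(N, p11, w):
    # Eq. (pointgrouptwogeneral); p11 is the two-step return probability
    return ((1 + w) * N - 2) / ((1 - w) * N * p11 - 2)

def w_star_general(N, p11):
    # Eq. (tippinggeneral); lies in (0, 1) only when p11 > 2/N
    return 1 - 2 / (N * p11)
```

With $N=36$ and $p^{(1,1)}=1/4$ the general formulas reproduce the lattice threshold and the tipping point $w^\star=7/9$, while $p^{(1,1)}<2/N$ yields a negative $w^\star$.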
Second, we show the robustness to different games. We investigate the conclusions in arbitrary two-player prisoner's dilemmas, depicted by four parameters $R$, $S$, $T$, and $P$, where $T>R>P>S$. In agreement with the general notation, the payoff of a cooperative player is $R$ if the other player cooperates and $S$ if the other player defects. Also, the payoff of a defective player is $T$ if the other player cooperates and $P$ if the other player also defects. In particular, we have $R=b-c$, $S=-c$, $T=b$, $P=0$ for the donation game.
According to the Structure Coefficient Theorem proposed by Tarnita {\it et al.} \cite{tarnita2009strategy}, the condition of evolution favoring cooperation is $\sigma R+S>T+\sigma P$, or
\begin{equation}
\frac{R-P}{T-S}>\frac{1}{\sigma},
\end{equation}
where $\sigma$ is the structure coefficient independent of the payoff values. By substituting the condition (\ref{pointgrouptwogeneral}) we obtained in the donation game, we can determine the structure coefficient $\sigma$,
\begin{equation}
\sigma=\frac{(b/c)^*+1}{(b/c)^*-1}=\frac{1+p^{(1,1)}+w(1-p^{(1,1)})-\frac{4}{N}}{1-p^{(1,1)}+w(1+p^{(1,1)})}.
\end{equation}
When $p^{(1,1)}>1/(N-1)$, $1/\sigma$ increases with $w$, and cooperation is disfavored by increasing $w$. According to the rank $T>R>P>S$, we have $(R-P)/(T-S)<1$, which means cooperation is never favored if $1/\sigma>1$. To ensure $1/\sigma<1$, we have $p^{(1,1)}>2/(N(1-w))>2/N>1/(N-1)$. To sum up, in arbitrary two-player prisoner's dilemmas, cooperation is either disfavored as $w$ increases or unreachable at any $w$.
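The closed form of $\sigma$ can be cross-checked against its definition $\sigma=\left[(b/c)^*+1\right]/\left[(b/c)^*-1\right]$ for arbitrary parameters; a numerical sketch (illustrative names):

```python
def sigma(N, p11, w):
    # Closed form of the structure coefficient given in the text
    return (1 + p11 + w * (1 - p11) - 4 / N) / (1 - p11 + w * (1 + p11))

def sigma_from_threshold(N, p11, w):
    # Recompute sigma from (b/c)* of Eq. (pointgrouptwogeneral)
    bc = ((1 + w) * N - 2) / ((1 - w) * N * p11 - 2)
    return (bc + 1) / (bc - 1)
```

The two expressions agree for any admissible parameters, and $1/\sigma$ indeed grows with $w$ when $p^{(1,1)}>1/(N-1)$.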
\section{Conclusion}\label{conclusion}
It is a frequently used assumption in evolutionary game dynamics that individuals prefer learning those strategies which provide higher fitness. However, driven by cognitive biases, the fitness values one perceives when learning strategies may vary. In this work, we generalize the death-birth learning process by introducing a focal weight, which makes it possible not to completely ignore the focal player's status in the death-birth protocol. More precisely, a higher weight provides extra significance to the fitness of a focal player, which can reduce the frequency of changing strategies for both cooperation and defection. Despite the strategy-neutral character of this extension, we found that the usage of focal weight actually favors defection and hinders cooperation during the evolution. This is supported by the fact that the threshold $(b/c)^*$ of cooperation success increases as we enlarge the focal weight $w$.
Our theoretical analysis revealed a non-trivial tipping point of weight factor $w^\star=1-2k/N$, over which $(b/c)^*$ flips from positive to negative infinity and cooperation becomes unreachable. Importantly, such a tipping point always exists in a finite population when $k<N/2$. To find a simple testable example, we considered a square lattice topology with periodic boundary conditions where MC simulations can be executed. Our numerical calculations confirmed the theoretical predictions and underlined the validity of the results.
Furthermore, we verified the robustness of our observations from two perspectives. First, our conclusions remain valid for dispersal graphs with arbitrary edge weights and when the interaction and dispersal graphs overlap in an arbitrary way. Second, the conclusions do not change in arbitrary two-player prisoner's dilemmas.
Lastly, our results, valid in the weak selection limit, contradict observations obtained for spatial populations under intermediate or strong selection strength. It was generally reported that if we introduce a sort of behavioral inertia, which helps players maintain their strategies longer, then such a modification of the microscopic dynamics can support cooperation significantly \cite{szolnoki_pre09,liu_yk_pa13,szolnoki2018reciprocity,zhang_yl_pre11,chen_xj_ijmpc08,jia_dy_pa18,szolnoki_csf20,chang_sh_pa18}. This phenomenon can be explained by the fact that cooperation and defection spread with significantly different speeds in a structured population. While cooperators advance slowly because they need to build a protective domain, defection can invade fast because it can enjoy the company of akin players. When we introduce inertia, propagation is slowed down in both cases, but in a biased way: defector propagation suffers more, resulting in a cooperator-supporting mechanism. These diverse conclusions provide an example where the evolutionary outcomes of the strong and weak selection limits are not comparable \cite{roca_plr09,fu_pre09b,li_c_pone13,zhong_wc_bs13}.
A.S. was supported by the National Research, Development and Innovation Office (NKFIH) under Grant No. K142948.
\section{Introduction}
There is an increasing concern amongst researchers that machine learning models developed with the best of intentions may exhibit biases, promote inequality, or perform unfairly for unprivileged groups. With the increasing usage of such models, a considerable amount of research has been conducted to recognize and address these issues and their social impact \cite{mehrabi2019survey, sun2019mitigating}. When models show signs of bias, these models are referred to as "unfair".
Sentiment detection is an important building block for multiple applications such as content moderation, product recommendation, misinformation detection and, recently, language generation \cite{park2018reducing, hutchinson2020unintended, raisi2019reduced,wolf2019transformers}. To build these models, large training corpora from varied sources are used. Unfortunately, training data might already contain some bias, and that bias can propagate into the learning phase; thus unprivileged groups get unfairly impacted. While the developers of such models often assess success by measuring accuracy, few have examined the fairness aspect.
Due to the aforementioned popularity, large companies such as Google \footnote{Google Cloud \url{https://cloud.google.com/natural-language}}, Amazon \footnote{Amazon Comprehend \url{https://aws.amazon.com/comprehend/}} and IBM \footnote{IBM Watson \url{https://www.ibm.com/watson}} provide black-box models that can be easily incorporated into any application to provide a sentiment score for a given text. Although sentiment detection plays a significant role in such tasks, potential discriminatory treatments might exist for different populations, and since these providers are perceived as trustworthy, a severe impact could hurt large populations \cite{o2016weapons}. For example, when a model often predicts a text snippet as toxic when a female pronoun is present but fails to do so for the male pronoun, this could amplify stereotypes, disenfranchise certain groups, and yield systemic misogyny (or conversely misandry) \cite{singh2020female}.
An important reason for success in multimedia computing is combining multiple models (often involving different datasets) to achieve better results. This approach has been widely used for combining multiple weak forecast models, such as in weather forecasting. Similarly, in machine learning, "ensemble learning" and "multi-modal fusion" methods show astonishing results in terms of co-learning and accuracy enhancement \cite{mendes2012ensemble, atrey2010multimodal}. Recently, ensemble deep learning models have been utilized for various applications such as image, video, and speech recognition \cite{lee2017ensemble, deng2014ensemble}.
Recognizing the importance of fairness in such applications, multiple researchers have proposed various metrics and bias mitigation methods \cite{mehrabi2019survey, sun2019mitigating, nozza2019unintended, alasadi2019toward}. Many such methods either massage the incoming data (pre-processing approaches) or change the optimization parameters within the white-box machine learning model (in-processing) \cite{calmon2017optimized}.
As both of these are not easily possible in multiple applications (e.g., news sentiment detection using Google API), we focus on a post-processing approach for combining the results from multiple black box APIs (which we also refer to as "modalities" in this work).
In this paper, we examine the fairness aspect of three popular sentiment detection black-box models on crime news headlines in which misogynistic and/or misandristic bias might exist and we propose a method to mitigate this bias by combining the output from different models. Specifically, we examine the sentiment detection across \textbf{"gendered interaction"} in news, such as \emph{"a woman hurts a man in a bus"} vs \emph{"a man hurts a woman in a bus"}. Here, "gendered interaction" indicates that there are two actors both with clearly identified and differing gender, and there is an interaction taking place between them. In such settings, a model that produces a positive score for the first sentence but a negative score for the second is considered biased towards women (perpetrators) and unfair for men (victims) and vice versa. We apply these tests on crime news headlines dataset that has been collected specifically for this study. Experimental results show each of the publicly available APIs has inherent gender bias and also inaccuracies. On the positive side, the proposed "Flexible Fair Regression" approach was found to be useful to ameliorate both fairness and accuracy concerns.
Our main contributions in this paper are:
\begin{itemize}
\item To examine the fairness aspect of publicly available sentiment detection APIs that have been used extensively in various applications. We report that each of these models have inherent gender bias.
\item To propose an optimization method "Flexible Fair Regression" to easily allow balancing between bias and accuracy when combining the outputs from multiple (semi-accurate and semi-biased) black box models.
\item To share the newly created approach and resulting dataset for quantifying bias in "gendered interaction" scenarios
\footnote{\url{https://github.com/abdulazizasz/fairness\_sentiment}}.
\end{itemize}
Note that we consider the use of binary gender as a limitation of this work. The use of gender neutral pronouns and those inclusive of non-binary identities is still not common enough in news headlines and hence the problem of bias with binary gendered pronouns remains an important challenge \cite{badjatiya2019stereotypical, nozza2019unintended}.
The rest of the paper is organized as follows. Section 2 provides an overview of the related work in bias detection and mitigation. Then, Section 3 describes the methods including the bias measurement approach and the proposed bias mitigation strategy. The experimental setup and results are provided in Sections 4 and 5. Finally, in Section 6, a summary and future directions are shared.
\vspace{-10pt}
\section{Related Work}
There is significant research work devoted to fairness in algorithms and they can be divided into two categories: (1) bias measurement, and (2) bias mitigation.
For the first category, a commonly used strategy is to compare the treatment differences between privileged groups \cite{kiritchenko2018examining, rudinger2017social}. Multiple efforts use the approach of "word-swapping", where different sensitive variables are swapped in a fixed context. For example, in \cite{park2018reducing} the authors use a template "\textit{I hate <identity> people}", replacing the sensitive variable with different identities (gay, jewish, african, etc.). This process has been used to measure bias in sentiment detection
\cite{shen2018darling, kiritchenko2018examining}, coreference resolution \cite{rudinger2018gender} and language models \cite{huang2019reducing}. This technique provides a practical approach to examine the treatment for different sentences when a sensitive variable plays an important role in forming the sentence. Another set of metrics that have been widely used for classification tasks are measuring the differences of accuracy, False Positive Rate (FPR), False Negative Rate (FNR) and so on \cite{park2018reducing, dixon2018measuring}. Unfortunately, these metrics are only applicable for classification tasks, whereas in sentiment detection the output value is typically a continuous score, therefore different statistical measures must be utilized to examine unfair treatment.
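The word-swapping probe amounts to a few lines of code; the template and identity list below are purely illustrative:

```python
def swap_variants(template, identities, slot="<identity>"):
    # Generate counterfactual test sentences by swapping the sensitive term.
    return [template.replace(slot, ident) for ident in identities]

sentences = swap_variants("I hate <identity> people.",
                          ["gay", "jewish", "african"])
```

A fair model should then score every variant (near-)identically; large score gaps across identities signal bias.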
Multiple studies have examined unfair treatments in settings in which a single sensitive variable exists in a text, such as "I hate women" or "Jews are bad". Additionally, Rudinger et al. \cite{rudinger2018gender}, in coreference resolution, examine the bias of an inferred pronoun in a text that has two actors. However, none have studied the bias issue in sentiment detection in cases where two different actors are involved in an interaction.
For the second category, bias mitigation strategies can be used in different levels: pre-processing, in-processing and post-processing. Calmon et al., \cite{calmon2017optimized} propose a de-biasing method which uses a probabilistic transformation that edits the features and labels in the data with group fairness. Another pre-processing method presented by Zemel et al., \cite{zemel2013learning} focuses on learning a fair representation technique that finds a latent representation which encodes the data well but obfuscates information about sensitive attributes. Kamishima et al., \cite{kamishima2012fairness} propose an in-processing algorithm known as Prejudice Remover to decrease bias by adding a discrimination–aware regularization term to the learning objective function. Celis et al., \cite{celis2019classification} put forward the idea of a meta fair classifier that takes fairness metric as part of the input and returns a classifier optimized with respect to a fairness metric. Other efforts such as \cite{pleiss2017fairness, hardt2016equality} bring forward the idea of calibrated equalized odds which is a post-processing technique that optimizes over calibrated classifier score.
Past work on multimodal fusion has proposed methods to enhance accuracy by combining multiple modalities at the data, feature, or decision level \cite{mendes2012ensemble, atrey2010multimodal}. These methods are also feasible when dealing with different black-box models that have different levels of accuracy and different levels of bias. Although these approaches have shown great success in the past, they are yet to be studied with the goal of fairness enhancement. Fairness in multimedia computing is a relatively nascent but fast-growing field \cite{alasadi2019toward, singh2020legal}, and this work helps motivate and ground the need for fairness considerations in fusion research.
The approach in this paper is inspired by multi-modal fusion proposals for co-learning with weak learners to enhance accuracy \cite{mendes2012ensemble, atrey2010multimodal, deng2014ensemble, lee2017ensemble}. It also adapts the in-processing technique of bias mitigation via a regularizer to create a post-processing approach that works well with multiple black-box models in the sentiment detection setting.
\section{Methodology}
\subsection{Preliminaries}
We formulate the problem of fair sentiment detection as follows. We have $k$ independent black-box models with sentiment scores $x_k \in [-1, 1] $ and a ground truth score $y \in [-1, 1]$. Besides that, let there be a sensitive variable $S$ that divides the dataset $D = \{x_i, y_i\}^{N}_{i=1}$ into different groups, e.g., $S_{male}$, $S_{female}$. To simplify the setting, we combine the $k$ column vectors of modality scores $\{x_{1}, \ x_{2}, \ \dots, x_{k} \}$ into a matrix $X$.
In such a setting the goal of the algorithm is to minimize the loss as measured via a combination of accuracy error, bias penalty, and over-fitting penalty.
We describe the operationalization of the different terms above in the following subsections.
\subsection{Measuring Bias}
To measure bias/fairness, we examine the mean difference of the scores of each black-box model with respect to a sensitive variable $S$, e.g., for a binary variable $\{S^+, S^-\}$, as follows:
$$ \mathrm{Mean \ Difference} \, (x, k) \ = \frac{\sum\limits_{x_i \in S^+} x^{k}_{i}}{| S^+ |} - \frac{\sum\limits_{x_i \in S^-} x^{k}_{i}}{| S^- |} $$
A fair black-box model will yield a score of zero, meaning the same sentiment score is produced regardless of the sensitive variable. Since we focus on score deviations among different groups, we can use the Mean Absolute Difference/Error (MAE). Other methods proposed in the literature for measuring bias (e.g., correlation and the delta of prediction accuracy \cite{calders2013controlling}) can also be useful candidates in other settings.
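As a rough illustration (our own sketch, not code from the paper), the mean difference and its absolute version can be computed per modality with NumPy; the group labels and toy scores below are hypothetical:

```python
import numpy as np

def mean_difference(scores, groups, pos="m", neg="f"):
    """Average score of group `pos` minus that of group `neg` for one
    black-box model; a value of zero indicates equal average treatment."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    return scores[groups == pos].mean() - scores[groups == neg].mean()

def mean_absolute_difference(scores, groups, pos="m", neg="f"):
    """MAE-style variant: absolute value of the mean difference."""
    return abs(mean_difference(scores, groups, pos, neg))

# Toy scores from one API on four headlines (hypothetical values).
scores = [-0.9, -0.7, -0.5, -0.8]
groups = ["m", "f", "m", "f"]
print(round(mean_difference(scores, groups), 3))  # → 0.05
```

A positive value here would mean headlines with male perpetrators received, on average, a higher (less negative) sentiment score than their female-perpetrator counterparts.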
\subsection{Balancing Accuracy and Fairness}
In this project, we use linear regression to find the best parameters for combining these independent black-box models $X$ with respect to the target scores $y$. The linear regression can be formulated as an optimization problem: find the parameter $w$ that minimizes the following function:
\begin{equation}
\begin{aligned}
& \underset{w}{\textbf{minimize}}
& & MSE(w) = \frac{1}{N} \sum_{i=1}^{N} (w * x_i - y_i)^2 \\
\end{aligned}
\end{equation}
Many variants of linear regression add a regularizer function to the regression, which keeps a check on the number of parameters used in the model \cite{hoerl1970ridge}.
Further, recent efforts have proposed adding a "fairness regularizer" to the regression \cite{berk2017convex}. Here, we adapt that approach to define a regularizer that penalizes the objective when the sentiment scores differ between groups. For a binary variable, the model regularizes the difference between $S^+$ and $S^-$.
In other words, the modified objective function is trying to find the optimal parameter $w$ that minimizes the Mean Squared Error (MSE) along with the minimum bias between different groups. To simplify the objective function, a bias matrix $\Delta$ which contains the sentiment score difference for each modality is calculated. For instance, for a modality $k$ and a binary sensitive variable $S$, the bias vector can be calculated as follows:
$$\delta_{k} = | x_{S^+}^{k} - x_{S^-}^{k} |$$
Similarly, calculating the bias for other modalities and combining them in one matrix $\Delta$ yields:
$$
\Delta =
\begin{bmatrix}
\delta_{1}^{1} & \cdots & \delta_{1}^{k} \\
\vdots & & \vdots \\
\delta_{N}^{1} & \cdots & \delta_{N}^{k}
\end{bmatrix}
$$
Using the $w$ and $\Delta$ the "fairness" penalty function $P$ is:
$$
P(w) = \frac{1}{N} \sum_{i=1}^{N} (w * \delta_i)^2
$$
Now the optimization function for "Flexible Fair Regression" is:
\begin{equation}
\begin{aligned}
& \underset{w}{\textbf{minimize}}
& & L(w) = MSE(w) + \beta P(w) + \lambda \| w \|^2 \\
\end{aligned}
\end{equation}
where $ \| w \|^2$ is the squared $L_2$ norm. Hyperparameters $\lambda$ and $\beta$ control over-fitting and the fairness trade-off, respectively. We call this approach "flexible fair regression" as it supports fairness in regression and allows a system designer to flexibly pick the relative importance and thresholds they want to assign to fairness and accuracy.
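A minimal sketch of how Eq. (2) might be solved (our own illustration; the array shapes and the closed form are our assumptions, not from the paper). Since all three terms are quadratic in $w$, setting the gradient to zero gives a ridge-style linear system:

```python
import numpy as np

def flexible_fair_regression(X, y, Delta, beta=0.002, lam=1e-3):
    """Minimize L(w) = MSE(w) + beta * P(w) + lam * ||w||^2, where
    MSE(w) = (1/N) ||Xw - y||^2 and P(w) = (1/N) ||Delta w||^2.
    Setting the gradient to zero yields the linear system
        (X^T X + beta * Delta^T Delta + N * lam * I) w = X^T y.
    X: (N, k) modality scores; y: (N,) targets; Delta: (N, k) bias matrix."""
    N, k = X.shape
    A = X.T @ X + beta * (Delta.T @ Delta) + N * lam * np.eye(k)
    return np.linalg.solve(A, X.T @ y)
```

With `beta = 0` and `lam = 0` this reduces to ordinary least squares; increasing `beta` pushes the fused scores toward equal treatment of $S^+$ and $S^-$ at some cost in MSE.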
\section{Experimental Setup}
In this section, we describe the data collection, annotation process, baselines, and bias-reduction results.
\begin{figure*}[h]
\centering
\includegraphics[width=1.00\linewidth]{template_process.png}
\caption{Constructing a template, the two versions and getting the APIs scores}
\Description{}
\end{figure*}
\subsection{Dataset}
To construct a dataset, we collected crime news headlines from the Google News API \footnote{https://news.google.com}. We used the API search criteria to retrieve only news headlines that contain abusive verbs (\emph{kill, murder, slap}, etc.) along with at least two different subjects (\emph{man, woman}). Using a carefully designed list of abusive verbs \cite{wiegand2018inducing}, we collected a large number of data points (crime news headlines). Since we tackle the predictive learning problem as linear regression, we need to label the collected dataset with sentiment scores. Using Figure-Eight \footnote{www.figure-eight.com}, we asked 10 annotators to label and score each sentence (template); to avoid bias in the annotation process, we anonymized the subjects in the sentence. Every annotator was presented with an anonymized template (see Fig. 1) and asked to provide two pieces of information following the Valence-Arousal model \cite{russell1980circumplex}: (1) a valence label for the sentence, "Positive" or "Negative", and (2) an arousal score on a scale from 1 to 10. Thus, a sentence with a positive label and an arousal score of 10 has a sentiment score close to $+1$, whereas a sentence with a negative label and an arousal score of 5 has a sentiment score of $-0.5$. This process yielded scores in the range $[-1, 1]$.
After removing inconsistent annotators, we used a seed set of 200 templates and performed gender swapping, so that one version of each sentence has men as perpetrators and women as victims, and the other the reverse. We applied 25 different gender-identity term pairs (e.g., "man-woman", "male-female") provided by \cite{gonen2019lipstick}.
Thus, our corpus contains 10000 news headline sentences. The resulting dataset was then scored using Google, Amazon, and IBM APIs (see Fig. 1). Finally, we split the dataset into training and testing sets in a ratio of 70:30.
A sample of the dataset is shown in Table 1. For each black-box model, we have a sentiment score for the two versions of the template along with the gender and the ground-truth score. From Table 1, we obtain the score for each template by averaging over the two genders, since we aim to find the optimal score for each template regardless of gender (see Table 2).
\begin{table}
\caption{A sample from the dataset}
\label{tab:freq}
\begin{tabular}{cccccc}
\toprule
Sentence & $k_1$ & $k_2$ & $k_3$ & $S$ & $y$ \\
\midrule
\textbf{man hurts woman in ..} & $-0.9$ & $-0.5$ & $0.6$ & $m$ & $-0.7$\\
\textbf{woman hurts man in ..} & $-0.7$ & $-0.8$ & $-0.9$ & $f$ &$-0.7$\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\caption{Training dataset for each template regardless of the gender}
\label{tab:freq}
\begin{tabular}{ccccc}
\toprule
Sentence & $k_1$ & $k_2$ & $k_3$ & $y$ \\
\midrule
\textbf{[S1] hurts [S2] in ..} & $-0.8$ & $-0.65$ & $-0.15$ & $-0.7$\\
\bottomrule
\end{tabular}
\end{table}
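The scoring and gender-averaging steps above can be sketched as follows (our own illustration; the helper names are ours, and the sample numbers are the Table 1 scores):

```python
import numpy as np

def sentiment_score(valence, arousal):
    """Map an annotator's (valence label, 1-10 arousal) pair to [-1, 1]."""
    sign = 1.0 if valence == "Positive" else -1.0
    return sign * arousal / 10.0

# "Negative" with arousal 5 maps to -0.5, as in the text.
print(sentiment_score("Negative", 5))  # → -0.5

# Per-template API scores: average the two gendered versions (Table 1 → Table 2).
male_version = np.array([-0.9, -0.5, 0.6])     # k1, k2, k3 for "man hurts woman"
female_version = np.array([-0.7, -0.8, -0.9])  # k1, k2, k3 for "woman hurts man"
template_scores = (male_version + female_version) / 2
print(template_scores)
```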
\subsection{Baselines}
Since the scores generated by these black-box models are not always accurate, a fusion process is used to increase accuracy by combining different modalities. Related work (e.g., \cite{atrey2010multimodal, mendes2012ensemble}) provides various methods for fusing independent modalities. In this project we experiment with three such methods:
\textbf{Unweighted Average} is the basic fusion process that assumes the independent modalities are equally accurate. The predicted sentiment score is calculated as follows:
$$
\hat{y_i} = \frac{1}{3} ( x_i^{google} + x_i^{amazon} + x_i^{ibm} )
$$
\textbf{Weighted Average} weights each modality based on its accuracy (on the training set):
$$
\hat{y_i} = w_{google} \cdot x_i^{google} + w_{amazon} \cdot x_i^{amazon} + w_{ibm} \cdot x_i^{ibm}
$$
where $\sum_{k=1}^{3} w_k = 1$.
\textbf{Multiple Regression} is similar to our proposed method but without the "fairness" penalty term. In other words, the black-box models' outputs can be treated as features for another learning model.
All of the above methods optimize only for accuracy and do not consider the fairness aspect.
\textbf{Fairness Optimization} is an additional baseline that optimizes (only) for fairness. We weight each modality by its fairness scores in the training data. The "fairness" weight for a modality $k$ is calculated as follows:
\begin{equation}
w^{k} = \sum_{i=1}^{N}\ \mathds{1} \{ | x_{male_i}^{k} - x_{female_i}^{k} | \leq \tau\}\
\end{equation}
Here, if the sentiment scores of the two gendered versions differ by less than $\tau=10\%$, we consider the treatment of that template fair.
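The accuracy-oriented baselines and the fairness weights of Eq. (3) can be sketched as follows (our own illustration; the array shapes and toy values are assumptions):

```python
import numpy as np

def unweighted_average(X):
    """X has one column of scores per modality; average them per row."""
    return X.mean(axis=1)

def weighted_average(X, w):
    """Accuracy-based weights w (normalized to sum to 1), one per modality."""
    w = np.asarray(w, dtype=float)
    return X @ (w / w.sum())

def fairness_weights(X_male, X_female, tau=0.1):
    """Eq. (3): per modality, count templates whose two gendered
    versions differ by at most tau."""
    return (np.abs(X_male - X_female) <= tau).sum(axis=0)

# Two templates, one modality: only the second gap (0.05) is within tau.
print(fairness_weights(np.array([[-0.9], [-0.3]]),
                       np.array([[-0.7], [-0.35]])))  # → [1]
```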
\section{Results And Discussion}
\subsection{Auditing Sentiment Detection APIs}
As a first step, we analyzed the results of the different black-box models (sentiment detection APIs) to see whether they are accurate and whether there is a difference in the results obtained for sentences that are identical except for the genders of the perpetrators and the victims.
To evaluate accuracy (compared to ground truth labels obtained from multiple human labelers) we use Root Mean Squared Error (RMSE); to measure bias we use Mean Absolute Error (MAE). Table 3 shows the accuracy error and bias level for each individual modality. The range of sentiment scores was $[-1, 1]$, and we see that there are noteworthy accuracy and bias issues with each modality. Further, a pairwise t-test comparing the mean sentiment scores across genders yielded statistically significant differences in all three modalities. These issues motivate the need to combine the outputs of the different modalities to improve both accuracy and fairness.
\begin{table}
\caption{Accuracy and Fairness in the original dataset}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
API & Acc. Error & Bias \\
\midrule
Google & $0.5611$ & $0.0590$ \\
Amazon & $0.6939$ & $0.0581$ \\
IBM & $0.7441$ & $0.0545$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Improving Accuracy and Fairness}
\begin{table}
\caption{Mitigation Methods Analysis}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Model Name & Acc. Error & Bias\\
\midrule
Multiple Regression & 0.5362 & 0.0738 \\
Unweighted Average & 0.6302 & 0.0435 \\
Weighted Average & 0.6153 & 0.0447 \\
Fairness Optimization & 0.7051 & 0.0173 \\
\textbf{ Our Method } & 0.6026 & 0.0400 \\
\bottomrule
\end{tabular}
\end{table}
We implemented the Flexible Fair Regression approach (Eq. 2) on the created dataset using Python.
Table 4 summarizes the results. We can easily see the trade-off between accuracy error and bias among these models (lower is better in both cases). Multiple Regression performs well in terms of accuracy (low RMSE) but has to contend with a higher bias value. Both the Weighted Average and Unweighted Average methods yield higher accuracy errors than Multiple Regression but lower levels of bias.
Note that it is also possible to optimize only for fairness (Fairness Optimization), which can reduce the bias to a very low value (close to zero). However, this comes at the price of a high accuracy error (see Fig. 2).
Lastly, our method allows for a flexible trade-off between accuracy and fairness (see Eq. 2). The trade-off depends on the choice of the weight parameter $\beta$, and different values of $\beta$ yield different points on the purple curve in Fig. 3. The axes of the figure are accuracy error and bias level; lower is better for both, so points in the lower left corner are ideal. As shown, all points on the purple curve (our approach with different $\beta$ values) either coincide with other baselines or strictly dominate them (i.e., yield better performance in terms of both accuracy and fairness).
\begin{figure*}[h]
\centering
\includegraphics[width=1\linewidth]{merged.png}
\caption{The accuracy error (RMSE) and the bias score (MAE) are shown for the different baseline methods along with our method. The lower the bar the better. \emph{"UA: Unweighted Average, WA: Weighted Average, FO: Fairness Optimization, ML: Multiple Regression, OM: Our Method"}}
\Description{}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{search1.png}
\caption{We choose the $\beta$ value closest to the "Utopia Point", which is $\beta = 0.002$, in Our Method}
\Description{}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{search2.png}
\caption{Trade-off between accuracy and fairness by manipulating the parameter $\tau$ (e.g., $\tau=10\%$) }
\Description{}
\end{figure}
The Multiple Regression and Fairness Optimization approaches can be considered extreme cases of our flexible fair regression approach, in that they optimize only for accuracy or only for fairness. Any point on the purple curve (varying values of $\beta$) provides a trade-off between these two extremes.
To find a specific suitable candidate for $\beta$ value, we use Figure 3 to jointly consider the achievements of Fairness Optimization (FO) and Multiple Regression (MR). Hence, an ideal solution ("Utopia Point" marked as "X" in Fig. 3; unlikely to be achievable in practice), will yield accuracy error as low as MR and bias as low as FO (see Fig. 3).
Hence, we consider the point closest (in distance) to the "Utopia Point" a suitable candidate for picking the $\beta$ value. In the current work, $\beta = 0.002$ gives the closest point, which yields the results shown in Table 4 and Figure 2. Note that this result is Pareto-optimal in the sense that no other feasible point is lower in \textit{both} accuracy error and bias. This can be seen from the points in Fig. 3 (which also includes points for models that use just one modality) and Table 4.
Another possible approach to jointly optimizing the two factors is to budget a fixed "cost of fairness" \cite{berk2017convex}: losing a portion of accuracy can lead to a gain in fairness. In Figure 4, we illustrate this trade-off; a ${\sim}10\%$ accuracy loss could yield a ${\sim}38\%$ reduction in bias. Hence, a practitioner with an assigned $10\%$ accuracy budget could gain up to $38\%$ in terms of fairness. Other plausible budgets and their impact can also be easily computed using this approach to support such decision making.
\section{Conclusion}
In this work, we deploy a regularized objective function that combines independent black-box models to ensure an accurate and fair learning model for sentiment detection. Since we are dealing with disparate and independent black-box models, a fusion process helps combine the results of multiple sources and build a more robust score for each template regardless of the subjects' genders. The proposed approach yields a family of Pareto-optimal solutions compared to other baseline approaches. Further, our "fairness" penalty function performs well in terms of bias reduction and is more flexible than the other baselines.
An important limitation of this work is its focus on binary genders. Another critical challenge is constructing a large and practical dataset, i.e., templates that cover a sufficient range of contexts in which abusive verbs occur. In this study, we only investigate how sentiment detection APIs deal with crime news headlines.
Additionally, the annotation process might result in biased scores; to mitigate this, we took the average of the scores from different annotators and removed inconsistent annotators. Lastly, since we are solving an optimization problem, convexity of the objective is assumed and a solver is used to find the optimal minimum.
Despite these limitations, this paper marks a significant step toward fairness and accuracy in the sentiment detection literature. The paper advances the fairness literature to consider multiple-actor "gendered interactions", which has use cases in news analysis, abuse detection, and misinformation detection. The public dataset and the proposed flexible approach can allow for fairness in a wide variety of scenarios where semi-accurate and semi-fair black-box models need to be combined to obtain fair yet accurate predictions.
\section*{Acknowledgments}
This material is in part based upon work supported by
the National Science Foundation under Grant SES-1915790.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Let $F$ be a countable field and let $\phi \in F[x]$ have zero constant term.
Given a measure preserving action $T$ of the additive group of $F$ on a probability space $(X,\mathscr{B},\mu)$ and a set $B \in \mathscr{B}$, we will show that, for any $\epsilon > 0$, the set
\begin{equation*}
\{ u \in F : \mu(B \cap T^{\phi(u)}B) \ge \mu(B)^2 - \epsilon \}
\end{equation*}
of strong recurrence times is large, in the sense of being $\ip^*_r$ up to a set of zero Banach density.
(These notions of size are defined below.)
In fact, we prove a more general result regarding strong recurrence for commuting actions of countable fields along polynomial powers.
This strengthens and extends recent results from \cite{MWcountableFields} regarding actions of fields having finite characteristic.
Here are the relevant definitions.
\begin{definition}
Let $G$ be an abelian group.
An \define{IP set} or \define{finite sums set} in $G$ is any subset of $G$ containing a set of the form
\begin{equation*}
\fs(x_1,x_2,\dots) := \bigg\{ \sum_{n \in \alpha} x_n : \emptyset \ne \alpha \subset \mathbb{N}, |\alpha| < \infty \bigg\}
\end{equation*}
for some sequence $n \mapsto x_n$ in $G$.
Given $r \in \mathbb{N}$, an \define{IP$_r$ set} in $G$ is any subset of $G$ containing a set of the form
\begin{equation*}
\fs(x_1,x_2,\dots,x_r) := \bigg\{ \sum_{n \in \alpha} x_n : \emptyset \ne \alpha \subset \{ 1,\dots,r \} \bigg\}
\end{equation*}
for some $x_1,\dots,x_r$ in $G$.
A subset of $G$ is \define{$\ip^*$} if its intersection with every $\ip$ set in $G$ is non-empty, and \define{$\ip^*_r$} if its intersection with every $\ip_r$ set is non-empty.
The term $\ip$ was introduced in \cite{MR531271}, the initials standing for ``idempotence'' or ``infinite-dimensional parallelopiped'' and $\ip^*_r$ sets were introduced in \cite{MR833409}.
The \define{upper Banach density} of a subset $S$ of $G$ is defined by
\begin{equation*}
\upperdens(S) = \sup \left\{ \upperdens_\Phi(S) : \Phi \textup{ a F\o{}lner sequence in } G \right\}
\end{equation*}
where
\begin{equation*}
\upperdens_\Phi(S) = \limsup_{N \to \infty} \frac{|S \cap \Phi_N|}{|\Phi_N|}
\end{equation*}
and a \define{F\o{}lner sequence} is a sequence $N \mapsto \Phi_N$ of finite, non-empty subsets of $G$ such that
\begin{equation*}
\lim_{N \to \infty} \frac{|(g + \Phi_N) \cap \Phi_N|}{|\Phi_N|} = 1
\end{equation*}
for all $g$ in $G$.
Lastly, $S \subset G$ is said to be \define{almost $\ip^*$} (written $\aip^*)$ if it is of the form $A \setminus B$ where $A$ is $\ip^*$ and $\upperdens(B) = 0$, and said to be \define{almost $\ip^*_r$} (written $\aip^*_r$) if it is of the form $A \backslash B$ where $A$ is $\ip^*_r$ and $\upperdens(B) = 0$.
\end{definition}
Although when $G = \mathbb{Z}$ any $\ip$ set with non-zero generators is unbounded, this is not the case in general.
For example, if $G = \mathbb{Q}$ then the $\ip$ set generated by the sequence $n \mapsto 1/n^2$ remains bounded.
To state our result we recall some definitions from \cite{MR2145566}.
Fix a countable field $F$.
By a \define{monomial} we mean a mapping $F^n \to F$ of the form $(x_1,\dots,x_n) \mapsto a x_1^{d_1} \cdots x_n^{d_n}$ for some $a \in F$ and integers $d_1,\dots,d_n \ge 0$ not all zero.
Let $V$ and $W$ be finite-dimensional vector spaces over $F$.
A mapping $F^n \to W$ is a \define{polynomial} if it is a linear combination of vectors with monomial coefficients.
A mapping $V \to W$ is a \define{polynomial} if, in terms of a basis of $V$ over $F$, it is a polynomial mapping $F^n \to W$.
Here is our main result.
\begin{theorem}
\label{thm:fieldsPolyRec}
Let $W$ be a finite-dimensional vector space over a countable field $F$ and let $T$ be an action of the additive group of $W$ on a probability space $(X,\mathscr{B},\mu)$.
For any polynomial $\phi : F^n \to W$, any $B \in \mathscr{B}$ and any $\epsilon > 0$ the set
\begin{equation}
\label{eqn:fieldLargeRec}
\{ u \in F^n : \mu(B \cap T^{\phi(u)} B) > \mu(B)^2 - \epsilon \}
\end{equation}
is $\aip^*_r$ for some $r \in \mathbb{N}$.
\end{theorem}
Our result implies in particular that \eqref{eqn:fieldLargeRec} is syndetic.
In fact, as we will show in Section~\ref{sec:proof}, we have generalized \cite[Corollary~5]{MWcountableFields}, where, in the finite characteristic case, the set \eqref{eqn:fieldLargeRec} is shown to belong to every essential idempotent ultrafilter on $F$.
This latter notion of largeness, introduced in \cite{MR2353901}, lies between syndeticity and $\aip^*_r$.
The conclusion of Theorem~\ref{thm:fieldsPolyRec} is of an additive nature: the notion of being $\aip^*_r$ is only related to the additive structure of $F^n$.
It is natural to ask, when $n = 1$, whether \eqref{eqn:fieldLargeRec} is also large in terms of the multiplicative structure of $F$.
We address this question in Section~\ref{sec:multiplicative}, proving that in fact \eqref{eqn:fieldLargeRec} intersects any multiplicatively central set that has positive upper Banach density.
Multiplicatively central sets are defined in Section~\ref{sec:multiplicative} and upper Banach density is as defined above.
Theorem~\ref{thm:fieldsPolyRec} is proved in Section~\ref{sec:proof}.
In Section~\ref{sec:iprSets} we prove the facts we will need about $\ip^*_r$ sets.
Finally, in Section~\ref{sec:multiplicative} we relate the largeness of the set \eqref{eqn:fieldLargeRec} to the multiplicative structure of $F$ in the case $n = 1$.
We would like to thank R. McCutcheon for communicating to us his result used at the end of Section~\ref{sec:proof}.
\section{Finite IP sets}
\label{sec:iprSets}
Let $\mathscr{F}$ be the collection of all finite, non-empty subsets of $\mathbb{N}$.
Write $\alpha < \beta$ for elements of $\mathscr{F}$ if $\max \alpha < \min \beta$.
A subset of $\mathscr{F}$ is an $\fu$ set if it contains a sequence $\alpha_1 < \alpha_2 < \cdots$ from $\mathscr{F}$ and all finite unions of sets from the sequence.
Write $\mathscr{F}_r$ for all finite, non-empty subsets of $\{ 1,\dots, r\}$.
A subset of $\mathscr{F}_r$ (or of $\mathscr{F}$) is an $\fu_s$ set if it contains sets $\alpha_1 < \cdots < \alpha_s$ from $\mathscr{F}_r$ (or from $\mathscr{F}$) and all finite unions.
For any $\ip_r$ set $A \supset \fs(x_1,\dots,x_r)$ in an abelian group $G$ there is a map $\mathscr{F}_r \to G$ given by $\alpha \mapsto \sum \{x_i : i \in \alpha \}$, and for any $\ip$ set in $G$ there is a map $\mathscr{F} \to G$ defined similarly.
Furstenberg and Katznelson \cite{MR833409} showed that any $\ip_r^*$ set $A$ in $\mathbb{Z}$ satisfies
\begin{equation*}
\liminf_{N \to \infty} \frac{|A \cap \{1 ,\dots, N\}|}{N} \ge \frac{1}{2^{r-1}}
\end{equation*}
so for any $r \in \mathbb{N}$ one can construct an $\ip^*$ set that is not $\ip^*_r$.
The set $k\mathbb{N}$, with $k$ large enough, is one such example.
As the following example shows, by removing well-spread $\ip_r$ sets from $\mathbb{Z}$, it is possible to construct a set that is $\ip^*$ but not $\ip^*_r$ for any $r$.
\begin{example}
Let $A_r$ be the $\ip_r$ set with generators $x_1 = \cdots = x_r = 2^{2^r}$ so that $A_r = \{ i \cdot 2^{2^r} : 1 \le i \le r \}$.
Let $A$ be the union of all the $A_r$.
We claim that $A$ cannot contain an $\ip$ set, from which it follows that $\mathbb{N}\backslash A$ is $\ip^*$.
Since $A$ contains $\ip_r$ sets for arbitrarily large $r$ we also have that $\mathbb{N} \backslash A$ is not $\ip^*_r$ for any $r$.
Suppose that $x_n$ is a sequence generating an $\ip$ set in $A$.
If one can find $x_i \in A_r$ and $x_j \in A_s$ with $r < s$ then $x_j + x_i$ does not belong to $A$ because the gaps in $A_s$ are larger than the largest element in $A_r$.
On the other hand, if all $x_i$ belong to the same $A_r$ then some combination of them is not in $A$ because the gap between $A_r$ and $A_{r+1}$ is too large.
\end{example}
A family $\mathscr{S}$ of subsets of $G$ is said to have the \define{Ramsey property} if $S_1 \cup S_2$ belonging to $\mathscr{S}$ always implies that at least one of $S_1$ or $S_2$ contains a member of $\mathscr{S}$.
It follows from the reformulation of
Hindman's theorem \cite{MR0349574}, stated below, that the collection of all $\ip$ subsets of a group $G$ has the Ramsey property.
A \define{coloring} of a set $A$ is any map $c : A \to \{ 1,\dots, k \}$ for some $k \in \mathbb{N}$.
Given a coloring of $A$, a subset $B$ is then called \define{monochromatic} if $c$ is constant on $B$.
\begin{theorem}[{\cite[Corollary~3.3]{MR0349574}}]
For any coloring of $\mathscr{F}$ one can find $\alpha_1 < \alpha_2 < \cdots$ in $\mathscr{F}$ such that the collection of all finite unions of the sets $\alpha_i$ is monochromatic.
\end{theorem}
Given a family $\mathscr{S}$ of subsets of $G$, the \define{dual family} of $\mathscr{S}$ is the collection $\mathscr{S}^*$ of subsets of $G$ that intersect every member of $\mathscr{S}$ non-emptily.
Taking $\mathscr{S}$ to consist of all $\ip$ sets, one can deduce that the intersection of an $\ip^*$ set with an $\ip$ set contains an $\ip$ set and that the intersection of two $\ip^*$ sets is again $\ip^*$.
The collection of all $\ip_r$ sets does not have the Ramsey property, but there is a suitable replacement that allows one to deduce results about $\ip^*_r$ sets similar to the ones for $\ip^*$ sets mentioned above.
\begin{proposition}
For any $s$ and $k$ in $\mathbb{N}$ there is an $r$ such that any $k$-coloring of any $\ip_r$ set yields a monochromatic $\ip_s$ set.
\begin{proof}
Suppose to the contrary that one can find $s$ and $k$ in $\mathbb{N}$ such that, for any $r$ there is a $k$-coloring of an $\ip_r$ set $A_r$ having no monochromatic $\ip_s$ subset.
This coloring of $A_r$ gives rise to a coloring $c_r$ of $\mathscr{F}_r$ via the canonical map $\mathscr{F}_r \to A_r$. That no $A_r$ contains a monochromatic $\ip_s$ set implies that no $\mathscr{F}_r$ contains a monochromatic $\fu_s$ set. We now use Hindman's theorem to reach a contradiction.
Let $\alpha_i$ be an enumeration of $\mathscr{F}$. We construct a coloring $c : \mathscr{F} \to \{ 1,\dots, k \}$ by induction on $i$. To begin note that $\alpha_1 \in \mathscr{F}_r$ whenever $r > \max \alpha_1$ so we can find a strictly increasing sequence $r(1,n)$ in $\mathbb{N}$ such that $c_{r(1,n)}(\alpha_1)$ takes the same value for all $n$. Put $c(\alpha_1) = c_{r(1,n)}(\alpha_1)$. Now, assuming that we have found a strictly increasing sequence $r(i,n)$ such that, for each $1 \le j \le i$ the color $c_{r(i,n)}(\alpha_j)$ is constant in $n$ and equal to $c(\alpha_j)$, choose a strictly increasing subsequence $r(i+1,n)$ of $r(i,n)$ such that $c_{r(i+1,n)}(\alpha_{i+1})$ is constant and let this value be $c(\alpha_{i+1})$. The colors of $\alpha_1,\dots,\alpha_i$ are unchanged and the induction argument is concluded.
By Hindman's theorem we can find $\beta_1 < \cdots < \beta_s$ in $\mathscr{F}$ such that $B = \fu(\beta_1,\dots,\beta_s)$ is monochromatic, meaning $c$ is constant on $B$.
Choose $i$ such that $B \subset \{ \alpha_1,\dots, \alpha_i \}$ and then choose $n$ so large that $r(i,n) > \max \beta_s$.
It follows that $B \subset \mathscr{F}_{r(i,n)}$ is monochromatic because $c_{r(i,n)}(\beta) = c(\beta)$ for all $\beta \in B$.
Thus $\mathscr{F}_{r(i,n)}$ contains a monochromatic $\fu_s$ set, which is a contradiction.
\end{proof}
\end{proposition}
With this version of partition regularity for $\ip_r$ sets we can deduce some facts about $\ip_r^*$ sets.
\begin{proposition}
Given any $s \in \mathbb{N}$ there is some $r \in \mathbb{N}$ such that any $\ip_s^*$ set intersects any $\ip_r$ set in an $\ip_s$ set.
\begin{proof}
Let $A$ be an $\ip_s^*$ set and choose by the previous proposition some $r$ such that any two-coloring of an $\ip_r$ set yields a monochromatic $\ip_s$ set. Let $B$ be an $\ip_r$ set.
One of $B \cap A$ and $B \backslash A$ contains an $\ip_s$ set.
It cannot be $B \backslash A$ because $A$ is $\ip_s^*$ and disjoint from it.
Thus $A \cap B$ contains an $\ip_s$ set as desired.
\end{proof}
\end{proposition}
\begin{proposition}
\label{prop:ipstarFilter}
Given any $r,s$ in $\mathbb{N}$ there is some $\alpha(r,s) \in \mathbb{N}$ such that if $A$ is $\ip_r^*$ and $B$ is $\ip^*_s$ then $A \cap B$ is $\ip^*_{\alpha(r,s)}$.
\begin{proof}
Let $A$ be $\ip^*_r$ and let $B$ be $\ip^*_s$ with $r \ge s$.
Choose $q$ so large that $A \cap C$ contains an $\ip_r$ set whenever $C$ is an $\ip_q$ set.
This is possible by the previous result.
Since $A \cap C$ contains an $\ip_r$ set and $r \ge s$ the set $(A \cap C) \cap B$ must be non-empty.
Since $C$ was arbitrary $A \cap B$ is an $\ip_q^*$ set.
Put $\alpha(r,s) = q$.
\end{proof}
\end{proposition}
\section{Proof of Theorem~\ref{thm:fieldsPolyRec}}
\label{sec:proof}
First we note that we may assume, by restricting our attention to the sub-$\sigma$-algebra generated by the orbit of $B$, that the probability space $(X,\mathscr{B},\mu)$ is separable.
We begin with a corollary of the Hales-Jewett theorem.
For any $n \in \mathbb{N}$ write $[n] = \{ 1,\dots,n\}$.
Write $\mathcal{P}A$ for the set of all subsets of a set $A$.
Recall that, given $k,m \in \mathbb{N}$, a \define{combinatorial line} in $[k]^{[m]}$ is specified by a partition $U_0 \cup U_1$ of $\{1,\dots,m\}$ with $U_1 \ne \emptyset$ and a function $\varphi : U_0 \to [k]$, and consists of all functions $[m] \to [k]$ that extend $\varphi$ and are constant on $U_1$.
With these definitions we can state the Hales-Jewett theorem.
\begin{theorem}[\cite{MR0143712}]
For every $d,t \in \mathbb{N}$ there is $r = \hj(d,t) \in \mathbb{N}$ such that for any $t$-coloring of $[d]^{[r]}$ one can find a monochromatic combinatorial line.
\end{theorem}
\begin{corollary}
\label{cor:hjSets}
For any $d,t \in \mathbb{N}$ there is $r \in \mathbb{N}$ such that any $t$-coloring
\begin{equation*}
(\mathcal{P}\{1,\dots,r\})^d \to \{1,\dots,t\}
\end{equation*}
contains a monochromatic configuration of the form
\begin{equation}
\label{eqn:hjSets}
\{ (\alpha_1 \cup \eta_1,\dots,\alpha_d \cup \eta _d) : (\eta_1,\dots,\eta_d) \in \{ \emptyset, \gamma \}^d \}
\end{equation}
for some $\gamma,\alpha_1,\dots,\alpha_d \subset \{1,\dots,r\}$ with $\gamma$ non-empty and $\gamma \cap \alpha_i = \emptyset$ for each $1 \le i \le d$.
\end{corollary}
\begin{proof}
Let $r = \hj(2^d,t)$.
Define a map $\psi : [2^d]^{[r]} \to (\mathcal{P}[r])^d$ by declaring $\psi(w) = (\alpha_1,\dots,\alpha_d)$ where $\alpha_i$ consists of those $j \in [r]$ for which the binary expansion of $w(j)-1$ has a 1 in the $i$th position.
Combinatorial lines in $[2^d]^{[r]}$ correspond via this map to configurations of the form \eqref{eqn:hjSets} in $(\mathcal{P}[r])^d$.
\end{proof}
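To make the proof concrete, the following sketch (our own illustration; all names are ours) implements the map $\psi$ for $d = 2$, $r = 3$, checks that it is a bijection, and verifies that a combinatorial line is carried to a configuration of the form \eqref{eqn:hjSets}:

```python
from itertools import product

def psi(w, d):
    """Map a word w : [r] -> [2^d] (1-based values) to a d-tuple of subsets
    of {1,...,r}: position j lands in alpha_i iff bit i of w(j)-1 is set."""
    r = len(w)
    return tuple(frozenset(j + 1 for j in range(r) if (w[j] - 1) >> i & 1)
                 for i in range(d))

d, r = 2, 3
# psi is a bijection [2^d]^[r] -> (P[r])^d: 4^3 = 2^(2*3) distinct images
images = {psi(w, d) for w in product(range(1, 2**d + 1), repeat=r)}
assert len(images) == (2**d) ** r

# A combinatorial line with U0 = {1}, U1 = {2,3}, phi(1) = 2, moving value c
line = []
for c in range(1, 2**d + 1):
    line.append(psi((2, c, c), d))

# Its image has the form { (a_1 u e_1, ..., a_d u e_d) : e in {0, gamma}^d }
gamma = frozenset({2, 3})
alpha = psi((2, 1, 1), d)          # the c = 1 point carries eta = (0,...,0)
expected = {tuple(alpha[i] | eta[i] for i in range(d))
            for eta in product([frozenset(), gamma], repeat=d)}
assert set(line) == expected
```

Here $\gamma = U_1$ and the moving value $c$ of the line sweeps out exactly the $2^d$ choices of $(\eta_1,\dots,\eta_d) \in \{\emptyset,\gamma\}^d$.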
We use the above version of the Hales-Jewett theorem to derive the following topological recurrence result.
Given $n \in \mathbb{N}$ and a ring $R$, by a \define{monomial mapping} from $R^n$ to $R$ we mean any map of the form $(x_1,\dots,x_n) \mapsto ax_1^{d_1} \cdots x_n^{d_n}$ for some $a \in R$ and some $d_1,\dots,d_n \ge 0$ not all zero.
\begin{proposition}[cf {\cite[Theorem~7.7]{MR2757532}}]
\label{lem:iprOnMetric}
Let $R$ be a commutative ring and let $T$ be an action of the additive group of $R$ on a compact metric space $(X,\mathsf{d})$ by isometries.
For any monomial mapping $\phi : R^n \to R$, any $x \in X$ and any $\epsilon > 0$ there is $r \in \mathbb{N}$ such that the set
\begin{equation*}
\{ u \in R^n : \mathsf{d}(T^{\phi(u)} x, x) < \epsilon \}
\end{equation*}
is $\ip^*_r$.
\end{proposition}
\begin{proof}
Write $\phi(x_1,\dots,x_n) = a x_1^{d_1} \cdots x_n^{d_n}$ for some $a \in R$ and some $d_i \ge 0$ not all zero.
Let $d = d_1 + \cdots + d_n$.
Put $e_0 = 0$ and $e_i = d_1 + \cdots + d_i$ for each $1 \le i \le n$.
Fix $x \in X$ and $\epsilon > 0$.
Let $V_1,\dots,V_t$ be a cover of $X$ by balls of radius $\epsilon/2^d$.
Let $r = r(d,t)$ be as in Corollary~\ref{cor:hjSets}.
Fix $u_1,\dots,u_r$ in $R^n$.
Given $\alpha \subset \{ 1,\dots,r \}$ write $u_\alpha$ for $\sum \{ u_i : i \in \alpha \}$ and $u_\alpha(i)$ for the $i$th coordinate of $u_\alpha$.
By choosing for each $(\alpha_1,\dots,\alpha_d) \in (\mathcal{P}\{ 1,\dots,r\})^d$ the minimal $1 \le i \le t$ such that
\begin{equation*}
T^{a u_{\alpha_1}(1) \cdots u_{\alpha_{e_1}}(1) \cdots u_{\alpha_{e_{n-1}+1}}(n) \cdots u_{\alpha_{e_n}}(n)} x \in V_i
\end{equation*}
we obtain via Corollary~\ref{cor:hjSets} sets $\alpha_1,\dots,\alpha_d,\gamma \subset \{ 1,\dots,r \}$ with $\gamma$ non-empty and disjoint from all $\alpha_i$ which, combined with the expansion
\begin{equation*}
a u_\gamma(1)^{d_1} \cdots u_\gamma(n)^{d_n} = a \prod_{k=1}^{n} \prod_{i=e_{k-1}+1}^{e_{k}} \bigl( u_\gamma(k) + u_{\alpha_i}(k) - u_{\alpha_i}(k) \bigr)
\end{equation*}
and the fact that $T$ is an isometry, yields $\mathsf{d}(T^{\phi(u_\gamma)}x,x) < \epsilon$ as desired.
\end{proof}
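The telescoping expansion at the end of the proof only uses that $u_{\gamma \cup \alpha_i}(k) = u_\gamma(k) + u_{\alpha_i}(k)$ when $\gamma$ and $\alpha_i$ are disjoint. A quick numerical sanity check (our own sketch over the integers, for the monomial $3 x_1^2 x_2$; all concrete choices are illustrative):

```python
import random

def u_sum(us, alpha):
    """u_alpha = componentwise sum of u_i over i in alpha (empty sum = 0)."""
    n = len(us[0])
    return tuple(sum(us[i - 1][k] for i in alpha) for k in range(n))

random.seed(0)
n, r = 2, 6
a, d = 3, (2, 1)                   # monomial a * x1^2 * x2, total degree 3
us = [tuple(random.randint(-5, 5) for _ in range(n)) for _ in range(r)]

gamma = {1, 2}
alphas = [{3}, {4, 5}, {6}]        # one alpha_i per factor, disjoint from gamma

lhs = a * u_sum(us, gamma)[0] ** 2 * u_sum(us, gamma)[1]

rhs, i = a, 0
for k in range(n):
    for _ in range(d[k]):
        # u_{gamma u alpha_i}(k) - u_{alpha_i}(k) = u_gamma(k) for disjoint sets
        rhs *= u_sum(us, gamma | alphas[i])[k] - u_sum(us, alphas[i])[k]
        i += 1
assert lhs == rhs
```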
Let $G$ be an abelian group.
Actions $T_1$ and $T_2$ of $G$ are said to \define{commute} if $T_1^g T_2^h = T_2^h T_1^g$ for all $g,h \in G$.
As we now show, iterating the previous result yields a version for commuting actions of rings.
\begin{corollary}
\label{cor:iprPolys}
Let $R$ be a commutative ring and let $T_1,\dots,T_k$ be commuting actions of the additive group of $R$ on a compact metric space $(X,\mathsf{d})$ by isometries.
For any monomial mappings $\phi_1,\dots,\phi_k : R^n \to R$, any $x \in X$ and any $\epsilon > 0$, there is $r \in \mathbb{N}$ such that
\begin{equation}
\label{eqn:metricPolyReturns}
\{ u \in R^n : \mathsf{d}(T_1^{\phi_1(u)} \cdots T_k^{\phi_k(u)} x, x) < \epsilon \}
\end{equation}
is $\ip^*_r$.
\end{corollary}
\begin{proof}
Fix $1 \le i \le k$.
By applying Proposition~\ref{lem:iprOnMetric} to the $R$ action $s \mapsto T_i^s$, we can find $r_i \in \mathbb{N}$ such that
\begin{equation*}
Z_i = \{ u \in R^n : \mathsf{d}(T_i^{\phi_i(u)}x,x) < \epsilon/k \}
\end{equation*}
is $\ip^*_{r_i}$.
By Proposition~\ref{prop:ipstarFilter}, the intersection $Z_1 \cap \cdots \cap Z_k$ is $\ip^*_r$ for some $r \in \mathbb{N}$.
Since the $T_i$ are isometries, it follows that \eqref{eqn:metricPolyReturns} is $\ip^*_r$ as desired.
\end{proof}
Combining the preceding corollary with the following facts from \cite{MR2145566} will lead to a proof of Theorem~\ref{thm:fieldsPolyRec}.
Let $\phi : V \to W$ be a polynomial and let $T$ be an action of $W$ on a probability space $(X,\mathscr{B},\mu)$.
Assume that $\phi V$ spans $W$.
As in \cite{MR2145566}, say that $f$ in $\lp^2(X,\mathscr{B},\mu)$ is \define{weakly mixing} for $(T,\phi)$ if $\dlim_v \langle T^{\phi (v)} f, g \rangle = 0$ for all $g$ in $\lp^2(X,\mathscr{B},\mu)$, where $\dlim$ denotes convergence with respect to the filter of sets whose complements have zero upper Banach density.
This is the same as strong Ces\`{a}ro convergence along every F\o{}lner sequence in $V$.
Call $f \in \lp^2(X,\mathscr{B},\mu)$ \define{compact} for $T$ if $\{ T^{v} f : v \in V \}$ is pre-compact in the norm topology.
Denote by $\mathscr{H}_\mathrm{wm}(T,\phi)$ the closed subspace of $\lp^2(X,\mathscr{B},\mu)$ spanned by functions that are weakly mixing for $(T,\phi)$, and let $\mathscr{H}_\mathrm{c}(T)$ be the closed subspace of $\lp^2(X,\mathscr{B},\mu)$ spanned by functions compact for $T$.
We have $\lp^2(X,\mathscr{B},\mu) = \mathscr{H}_\mathrm{c}(T) \oplus \mathscr{H}_\mathrm{wm}(T,\phi)$ by \cite[Theorem~3.17]{MR2145566}.
\begin{proof}[Proof of Theorem~\ref{thm:fieldsPolyRec}]
Write $\phi = \phi_1 w_1 + \cdots + \phi_k w_k$ where the $\phi_i$ are monomials $F^n \to F$ and the $w_i$ belong to $V$.
Fix $B$ in $\mathscr{B}$ and $\epsilon > 0$.
Let $f = P1_B$ be the orthogonal projection of $1_B$ on $\mathscr{H}_\mathrm{c}(T)$.
Let $\Omega$ be the orbit closure of $f$ in the norm topology under $T$.
Since $f$ is compact, $\Omega$ is a compact metric space.
Applying Corollary~\ref{cor:iprPolys} to the $F$ actions $x \mapsto T^{xw_i}$ and monomials $\phi_i$ for $1 \le i \le k$, we see that
\begin{equation*}
\{ u \in F^n : \nbar f - T^{\phi(u)} f \nbar < \epsilon/2 \}
\end{equation*}
is $\ip^*_r$ for some $r \in \mathbb{N}$.
We have
\begin{equation*}
\langle T^{\phi(u)} 1_B, 1_B \rangle = \langle T^{\phi(u)} f, 1_B \rangle + \langle T^{\phi(u)}(1_B - f), 1_B \rangle
\end{equation*}
so the set
\begin{equation*}
\{ u \in F^n : \langle T^{\phi(u)} 1_B, 1_B \rangle \ge \langle f, 1_B \rangle - \epsilon/2 + \langle T^{\phi(u)}(1_B - f), 1_B \rangle \}
\end{equation*}
is $\ip^*_r$.
Since $1_B - f$ is weakly mixing for $(T,\phi)$ the set
\begin{equation*}
\{ u \in F^n : \langle T^{\phi(u)} 1_B, 1_B \rangle \ge \langle f, 1_B \rangle - \epsilon \}
\end{equation*}
is $\aip^*_r$.
Thus \eqref{eqn:fieldLargeRec} is $\aip^*_r$ by
\begin{equation*}
\langle f, 1_B \rangle = \langle P1_B, P1_B \rangle \langle 1,1\rangle \ge \langle P1_B, 1 \rangle^2 = \mu(B)^2
\end{equation*}
as desired.
\end{proof}
We obtain as a corollary the following result from \cite{MWcountableFields}.
An ultrafilter $\ultra{p}$ on an abelian group $G$ is \define{essential} if it is idempotent and $\upperdens(A) > 0$ for all $A \in \ultra{p}$.
\begin{corollary}[{\cite[Corollary~5]{MWcountableFields}}]
\label{cor:mw}
Let $F$ be a countable field of finite characteristic and let $p : F \to F^n$ be a polynomial mapping.
For any action $T$ of $F^n$ on a probability space $(X,\mathscr{B},\mu)$, any $B$ in $\mathscr{B}$ and any $\epsilon > 0$ the set
\begin{equation}
\label{eqn:mwPolyRec}
\{ x \in F : \mu(B \cap T^{p(x)}B) \ge \mu(B)^2 - \epsilon \}
\end{equation}
belongs to every essential idempotent ultrafilter.
\end{corollary}
\begin{proof}
It follows from the proof of Theorem~\ref{thm:fieldsPolyRec} that \eqref{eqn:mwPolyRec} is of the form $A \setminus N$ where $A$ is $\ip^*_r$ for some $r \in \mathbb{N}$ and $N$ has zero upper Banach density.
Any $\ip^*_r$ subset of an abelian group is $\ip^*$ and therefore belongs to every idempotent ultrafilter, so $A$ certainly belongs to every essential idempotent ultrafilter on $F$.
By the filter property, removing from $A$ a set of zero upper Banach density does not change this fact, because every set in an essential idempotent has positive upper Banach density.
\end{proof}
It was recently shown by R. McCutcheon that there are sets belonging to every essential idempotent ultrafilter that are not $\aip^*$.
Thus our result constitutes a genuine strengthening of Corollary~\ref{cor:mw}.
\section{Multiplicative structure}
\label{sec:multiplicative}
According to Theorem~\ref{thm:fieldsPolyRec} the set \eqref{eqn:fieldLargeRec} is large in terms of the additive structure of $F^n$.
In this section we connect the largeness of \eqref{eqn:fieldLargeRec} when $n = 1$ to the multiplicative structure of $F$ by showing that \eqref{eqn:fieldLargeRec} is almost an $\mc^*$ subset of $F$.
Here $\mc$ stands for \define{multiplicatively central} and a set is $\mc^*$ if its intersection with every multiplicatively central set is non-empty.
To define what a multiplicatively central set is, recall that, given a commutative ring $R$, we can extend the multiplication on $R$ to a binary operation $\conv$ on the set $\beta R$ of all ultrafilters on $R$ by
\begin{equation*}
\ultra{p} \conv \ultra{q} = \{ A \subset R : \{ u \in R : Au^{-1} \in \ultra{p} \} \in \ultra{q} \}
\end{equation*}
for all $\ultra{p},\ultra{q} \in \beta R$.
One can check that this makes $\beta R$ a semigroup.
It is also possible to equip $\beta R$ with a compact, Hausdorff topology with respect to which the binary operation is right continuous.
See \cite{MR2052273} or \cite{MR2893605} for the details of these constructions.
A subset $A$ of $R$ is then called \define{multiplicatively central} or \define{MC} if it belongs to an ultrafilter that is both idempotent and contained in a minimal right ideal of $\beta R$.
The following version of \cite[Theorem~3.5]{MR1305896} relates $\ip_r$ sets in $R$ to multiplicatively central sets.
\begin{proposition}
Let $R$ be a commutative ring and let $A \subset R$ be a multiplicatively central set.
For every $r \in \mathbb{N}$ one can find $x_1,\dots,x_r$ in $R$ such that $\FS(x_1,\dots,x_r) \subset A$.
\end{proposition}
\begin{proof}
Consider the family $T$ of ultrafilters $\ultra{p}$ on $R$ having the property that every set in $\ultra{p}$ contains an $\ip_r$ set for every $r \in \mathbb{N}$.
We claim that $T$ is a two-sided ideal in $\beta R$.
Indeed fix $\ultra{p} \in T$ and $\ultra{q} \in \beta R$.
We need to prove that $\ultra{p} \conv \ultra{q}$ and $\ultra{q} \conv \ultra{p}$ belong to $T$.
For the former, fix $B \in \ultra{p} \conv \ultra{q}$ and $r \in \mathbb{N}$.
We can find $u \in R$ such that $Bu^{-1} \in \ultra{p}$ so $Bu^{-1}$ contains $\FS(x_1,\dots,x_r)$ for some $x_1,\dots,x_r$ in $R$.
This immediately implies that $\FS(x_1u,\dots,x_ru) \subset B$ as desired.
For the latter, fix $B \in \ultra{q} \conv \ultra{p}$ and $r \in \mathbb{N}$.
We can find $x_1,\dots,x_r$ in $R$ such that $\FS(x_1,\dots,x_r) \subset \{ u \in R : Bu^{-1} \in \ultra{q} \}$.
But by the filter property
\begin{equation}
\label{eqn:finiteIntersection}
\bigcap \{ Bu^{-1} : u \in \FS(x_1,\dots,x_r) \} \in \ultra{q}
\end{equation}
and choosing $a$ from this intersection gives $\FS(ax_1,\dots,ax_r) \subset B$.
Our set $A$ is multiplicatively central so it is contained in some idempotent ultrafilter $\ultra{p}$ that belongs to a minimal right ideal $S$.
Since $T$ is also a right ideal $S \subset T$ and $\ultra{p} \in T$ as desired.
\end{proof}
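The two rescaling facts about finite sums used in this proof are easy to check by brute force; the following is a small sketch of our own over the integers:

```python
from itertools import combinations

def FS(xs):
    """All finite sums over non-empty subsets of x_1,...,x_r."""
    out = set()
    for k in range(1, len(xs) + 1):
        for c in combinations(xs, k):
            out.add(sum(c))
    return out

xs = (1, 10, 100)
assert FS(xs) == {1, 10, 100, 11, 101, 110, 111}

# Right multiplication: FS(x_1 u, ..., x_r u) = FS(x_1, ..., x_r) * u,
# as used to handle p * q
u = 7
assert FS(tuple(x * u for x in xs)) == {s * u for s in FS(xs)}

# Left multiplication by a common a gives FS(a x_1, ..., a x_r),
# as used to handle q * p
a = 3
assert FS(tuple(a * x for x in xs)) == {a * s for s in FS(xs)}
```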
Note that it is not possible to prove this way that multiplicatively central sets contain $\ip$ sets, as that would require an infinite intersection in \eqref{eqn:finiteIntersection}.
In fact, as shown in \cite[Theorem~3.6]{MR1305896}, there are multiplicatively central sets in $\mathbb{N}$ that do not contain $\ip$ sets.
Recall that a subset of $R$ is $\mc^*$ if its intersection with every multiplicatively central set is non-empty.
As noted in \cite{MR2757532}, the preceding result implies that every $\ip^*_r$ set is $\mc^*$.
Call a set $\amc^*$ (with A again standing for ``almost'') if it is of the form $A \setminus B$ where $A$ is $\mc^*$ and $B$ has zero upper Banach density in $(R,+)$.
The following result is then an immediate consequence of Theorem~\ref{thm:fieldsPolyRec}.
\begin{theorem}
Let $F$ be a countable field and let $T$ be an action of the additive group of $F$ on a probability space $(X,\mathscr{B},\mu)$.
For any polynomial $\phi \in F[x]$, any $B \in \mathscr{B}$ and any $\epsilon > 0$ the set
\begin{equation}
\{ u \in F : \mu(B \cap T^{\phi(u)} B) > \mu(B)^2 - \epsilon \}
\end{equation}
is $\amc^*$.
\end{theorem}
We conclude by mentioning that all $\amc^*$ sets have positive upper Banach density in $(F,+)$.
This follows from the fact that every $\mc^*$ set belongs to every minimal multiplicative idempotent, and a straightforward generalization of \cite[Theorem~5.6]{MR982232}, which guarantees the existence of a minimal idempotent for $\conv$ all of whose members have positive upper Banach density in $(F,+)$.
\printbibliography
\end{document}
\section{Introduction}
To give a definition of quantum field theories beyond perturbation theory
(Feynman diagrams) they normally have to be regularized by replacing space and
usually also time by a lattice. Then the functional integral becomes a
well-defined object and thus amenable to numerical methods, usually in the
form of stochastically sampling lattice field configurations by Monte Carlo
methods. Most theories of interest contain fermion fields which lead to some
or all integration variables being anticommuting Grassmann `numbers'. In the
standard approach these integrations {---} possibly after introducing
additional Bose fields {---} are Gaussian and are performed exactly. The
result is an effective action of the bosonic fields alone which become coupled
non-locally. The known Monte Carlo techniques to practically simulate such
systems are mostly based on molecular dynamics and the hybrid Monte Carlo idea
(HMC) {\cite{Gottlieb:1987mq}}, {\cite{Duane:1987de}}. These methods have been
improved and optimized rather successfully over the years by a very large
effort of many members of the lattice community. On the other hand
practitioners know that once fermions are decoupled then HMC for locally
coupled Bose fields is not a very efficient algorithm compared to alternatives
like over-relaxation which are then available, not to mention the
(unfortunately few) cases where cluster methods can be applied. This implies a
large penalty for fermions even in cases where their effects are only small.
At small fermion masses the fermionic forces in HMC tend to grow and the step
size of the molecular dynamics trajectories has to be taken small enough. With
this quasi-continuous evolution one then has to be cautious about possible
long autocorrelations. After all, to the best of the author's knowledge, the
ergodicity of HMC has not been formally proven.
Maybe for the aforementioned reasons among others some part of the community
has remained motivated to look for radically different approaches. A rather
natural idea is to look for a representation of fermions as some sort of `sum
over configurations' more similar to the bosons. One of the pioneering papers
developing such ideas is {\cite{hirsch1982mcs}}. There as in numerous
succeeding attempts one starts from an operator formulation of fermions and
inserts intermediate states in the occupation number basis between factors of
the transfer matrix. In this way occupied sites map out an ensemble of
`world-lines' or a gas of loops of fermions on the lattice. Often the
amplitudes that arise oscillate in sign with the danger of leading to an
unmanageable signal to noise ratio, the infamous fermionic sign problem. The
inclusion of gauge fields in this approach poses additional problems.
A somewhat different approach was successful for {---} but also restricted
to {---} QCD at infinite gauge coupling, $\beta = 0$,
{\cite{Rossi:1984cv}}. In the Euclidean path integral with staggered fermions
but no gauge plaquette term the group valued gauge fields can be integrated
out first. The resulting model of locally paired even Grassmann elements has
contributions that can be viewed as a statistical system of explicitly
color-neutral mesons (dimers) and baryon loops. Both these systems and the
world-line gas \`a la {\cite{hirsch1982mcs}} are difficult to simulate
efficiently by local methods due to constraints which conflict with local
deformations of the configurations. In some cases efficient nonlocal updates
could be devised {\cite{Evertz:2000rk}}, {\cite{Adams:2003cca}}.
A `more Euclidean' version of the idea was proposed in
{\cite{Karowski:1984ih}}. These authors started from the determinant of the
integrated-out staggered fermions and tried to stochastically generate its
expansion into cycles. However the restriction to local updates and the
sign problem, even for $D = 2$ in this case, have limited the use of the
method.
In {\cite{Gattringer:1998cd}} (see also {\cite{Scharnhorst:1996gj}}) a loop or
world-line representation was proposed for the partition function of standard
two-dimensional Euclidean Wilson fermions in an external scalar field. They
were mapped on a certain 8-vertex model. Based on it the Gross-Neveu model was
simulated with local updates in {\cite{Gattringer:2007em}}. In
{\cite{Wolff:2007ip}} the same representation was re-derived directly from the
Grassmann integral for charge self-conjugate (Majorana) Wilson fermions. The
mapping between Wilson fermions and a loop-gas could in addition be made
precise also for a finite torus with (anti)periodic boundary conditions. A
cluster algorithm for the loop-gas was developed in {\cite{Wolff:2007ip}}
which produces almost uncorrelated loop configurations at low cost. In the
sequel Willi Rath and the author have tried to compute correlations based on
these configurations {\cite{WRdip}}. The only solution we have found so far
proceeds via the numerical generation of the scalar $\sigma$-field that
usually factorizes the Gross-Neveu interaction. Then the close to singular
Dirac operator in this random scalar field has to be inverted and the CPU time
ends up being spent in a very similar fashion as in HMC.
In this paper, as an alternative approach, we adopt the `worm' algorithm of
Prokof'ev and Svistunov (PS) {\cite{prokofev2001wacci}} to lattice fermions of
the Wilson type. To this end we build on the study of the PS algorithm for the
Ising model carried out as a preparation in {\cite{Wolff:2008km}}. While there
the (untruncated) strong coupling expansion is sampled, the fermion loop-gas
corresponds to the quite similar hopping expansion{\footnote{Because of this
strong similarity, we put this paper into one series with
{\cite{Wolff:2008km}}.}}. We here extend the loop-gas formulation of fermions
on a torus in two ways. We generalize {\cite{Wolff:2007ip}} to include two
spinor field insertions at arbitrary lattice sites. It turns out that the PS
algorithm is ideally suited to keep track of the non-local amplitudes involved
due to Fermi statistics. The second non-trivial extension takes this
construction to Majorana fermions in three Euclidean dimensions. While we
understand why the fermionic sign problem mentioned before is absent in two
dimensions if the system size is large in correlation lengths, the full
problem has to be confronted in three dimensions. We indeed find, for the free
fermions that are implemented numerically in this study, that the PS algorithm
for $D = 2$ is similarly efficient as in the Ising model. While clearly
correct in principle also in $D = 3$ it fails numerically with the present
technique when the continuum limit is approached. We nonetheless find the
three dimensional loop representation theoretically quite interesting. We
think that the free Majorana fermion in $D = 3$ is an excellent study ground
for more clever techniques, for instance cluster improved observables, to
still overcome the sign problem, perhaps along the lines of
{\cite{Chandrasekharan:1999cm}}.
The organization of this paper is as follows. In the next section we set up
our notation for the lattice fermions discussed, followed by section 3, which
introduces dimers that label all possible hopping graphs needed for the PS
simulation. Tools for the simulations are described in section 4. In section 5
we define the kind of observables on the loop ensemble that allow us to make
contact with fermionic two-point functions, followed in section 6 by the
description of numerical results. We end in section 7 with conclusions,
including a brief outline of how interactions can be added. In two appendices
we collect the free fermion results used as
benchmarks and a geometrical discussion of the fermionic phase factors arising
for each closed loop.
\section{Majorana-Wilson lattice fermions}
We start from a standard Wilson-Dirac fermion with the action
\begin{equation}
S_{\tmop{WD}} = a^D \sum_x \overline{\psi} (\gamma_{\mu}
\tilde{\partial}_{\mu} + m - \frac{r}{2} a \partial^{\ast} \partial) \psi .
\label{SDirac}
\end{equation}
We consider a $D$-dimensional standard hypercubic lattice with spacing $a$ in
all directions and either periodic or antiperiodic boundary conditions for
each direction over the respective periodicity length $L_{\mu}$. The boundary
conditions are coded into a vector $\varepsilon_{\mu}$ with components 0,1 by
the condition
\begin{equation}
\psi (x \pm L_{\mu} \hat{\mu}) = (- 1)^{\varepsilon_{\mu}} \psi (x)
\end{equation}
and similarly for $\overline{\psi}$, where $\hat{\mu}$ is a unit vector in the
positive $\mu$ direction.
Unless stated otherwise, the mass $m$ is assumed to be a real $x$-dependent
periodic external field $m (x)$ here. By later integrating over it with a
suitable weight one can, starting from this building block, arrive at
interacting theories like the Gross-Neveu model. The operators $\partial,
\partial^{\ast}, \tilde{\partial}$ are the usual forward, backward, and
symmetrized nearest neighbor differences. The set $\{\gamma_{\mu}, \mu = 0, 1,
\ldots, D - 1\}$ are hermitean Euclidean Dirac matrices. From here on we
shall restrict ourselves to the space-time dimensions $D = 2, 3$ with $2
\times 2$ $\gamma$-matrices in both cases. The Wilson term suppresses the
doublers and from here on we set its coefficient to the convenient value $r =
1$.
The action (\ref{SDirac}) is invariant under charge conjugation for any $m
(x)$. It is hence both possible and natural to split the fermion into two
neutral Majorana components by setting
\begin{equation}
\psi = \frac{1}{\sqrt{2}} (\xi_1 + i \xi_2), \hspace{1em} \overline{\psi} =
\frac{1}{\sqrt{2}} (\xi_1^{\top} - i \xi_2^{\top})\mathcal{C}
\end{equation}
with the charge conjugation matrix $\mathcal{C}$ obeying
\begin{equation}
\text{$\mathcal{C}$} \gamma_{\mu} \text{$\mathcal{C}$}^{- 1} = -
\gamma_{\mu}^{\top} = - \gamma_{\mu}^{\ast}, \quad \text{$\mathcal{C}$} = -
\text{$\mathcal{C}$}^{\top} .
\end{equation}
Inserting this into (\ref{SDirac}) we find two identical contributions for
$\xi_{1, 2}$. In our Majorana reduction we consider only one such component in
the following
\begin{equation}
S = \frac{1}{2} a^D \sum_x \xi^{\top} \mathcal{C}(\gamma_{\mu}
\tilde{\partial}_{\mu} + m - \frac{1}{2} a \partial^{\ast} \partial) \xi .
\end{equation}
Note that the matrix in this quadratic form is antisymmetric. By collecting
diagonal and neighbor terms we can rewrite this action as
\begin{equation}
S = \frac{1}{2} \sum_x (D + m) \xi^{\top} (x)\mathcal{C} \xi (x) -
\sum_{x, \mu} \xi^{\top} (x)\mathcal{C}P ( \hat{\mu}) \xi (x + \hat{\mu})
\label{Majo}
\end{equation}
where we now have adopted lattice units ($a = 1$) and have introduced
projectors
\begin{equation}
P (n) = \frac{1}{2} (1 - n_{\mu} \gamma_{\mu}) \hspace{1em} (n^2 = 1)
\end{equation}
for each lattice direction ($n = \pm \hat{\mu})$. Note that the hopping term
of a Majorana fermion is a function of the {\tmem{unoriented}} link because of
the identity
\begin{equation}
\xi^{\top} (x)\mathcal{C}P ( \hat{\mu}) \xi (x + \hat{\mu}) =
\xi^{\top} (x + \hat{\mu})\mathcal{C}P (- \hat{\mu}) \xi (x) .
\label{nonorient}
\end{equation}
For $D = 2$ the form (\ref{Majo}) coincides with the starting point of
{\cite{Wolff:2007ip}}.
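The projector algebra behind (\ref{nonorient}) is easy to verify numerically. The sketch below is our own; the concrete representation $\gamma_0 = \sigma_1$, $\gamma_1 = \sigma_3$ together with $\mathcal{C}_{12} = 1 = - \mathcal{C}_{21}$ is an illustrative choice:

```python
import numpy as np

# Illustrative hermitean representation: gamma_0 = sigma_1, gamma_1 = sigma_3
g = [np.array([[0., 1.], [1., 0.]]), np.array([[1., 0.], [0., -1.]])]
C = np.array([[0., 1.], [-1., 0.]])        # C_{12} = 1 = -C_{21}

def P(mu, sign):
    """Wilson projector P(n) = (1 - n_mu gamma_mu)/2 for n = sign * mu-hat."""
    return (np.eye(2) - sign * g[mu]) / 2

for mu in range(2):
    assert np.allclose(P(mu, +1) + P(mu, -1), np.eye(2))   # completeness
    for s in (+1, -1):
        p = P(mu, s)
        assert np.allclose(p @ p, p)              # projector ...
        assert np.isclose(np.trace(p), 1.0)       # ... of rank one
        # P(n)^T = C P(-n) C^{-1}: the identity behind the unoriented-link form
        assert np.allclose(p.T, C @ P(mu, -s) @ np.linalg.inv(C))
```

The transposition identity checked last is the one that makes the hopping term a function of the unoriented link.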
To continue we introduce the shorthand notation
\begin{equation}
\overline{\xi} = \xi^{\top} \mathcal{C}.
\end{equation}
We emphasize that for the Majorana fermion this depends on $\xi$ while $\psi,
\overline{\psi}$ were independent Grassmann integration variables. The
partition function is given by
\begin{equation}
Z_0^{^{(\varepsilon)}} = \int D \xi \mathrm{e}^{- S} = \tmop{Pf}
[\mathcal{C}(\gamma_{\mu} \tilde{\partial}_{\mu} + m - \frac{1}{2}
\partial^{\ast} \partial)] \label{Z0eps}
\end{equation}
where the Gaussian integral over Majorana fields has led to a
Pfaffian{\footnote{The order of factors in $D \xi$ is assumed to be such that
this is true without an (irrelevant) extra sign.}} of the antisymmetric
matrix. The result depends on the boundary conditions, of course, which is
exhibited for $Z_0^{(\varepsilon)}$ but left implicit on the right hand side.
In a straight-forward generalization of {\cite{Wolff:2008km}} we now extend
our study to include
\begin{equation}
Z^{(\varepsilon)} (u, v) = \int D \xi \mathrm{e}^{- S} \xi (u) \overline{\xi}
(v), \label{Zuv}
\end{equation}
which is a matrix in spin space. It is closely related to the two point
function{\footnote{The dependence of $G$ on the boundary conditions
$\varepsilon$ is left implicit.}}
\begin{equation}
G (x, y ; m) = \langle \xi (x) \overline{\xi} (y) \rangle =
\frac{Z^{^{(\varepsilon)}} (x, y)}{Z_0^{^{(\varepsilon)}}} .
\end{equation}
As we are considering bilinear fermions in an external field $m (x)$ the
propagator can also be obtained as the solution of a system of linear
equations
\begin{equation}
(\gamma_{\mu} \tilde{\partial}_{\mu} + m - \frac{1}{2} \partial^{\ast}
\partial) G (x, y ; m) = \delta_{x, y} \times 1_{\tmop{spin}} \label{Geq}
\end{equation}
where the Dirac operator acts on $x$.
For constant $m$ such an evaluation can proceed by Fourier expansion and will
serve us as a check below. Otherwise the Pfaffian is a problem similar to the
fermion determinant and methods like HMC are suitable at least for an even
number of flavors {\cite{Korzec:2006hy}}, {\cite{TomPhD}}. Our objective here
is however to develop a simulation method alternative to this approach.
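To illustrate the structures just introduced, the following sketch of our own assembles the free $D = 2$ Wilson--Dirac operator on a small periodic lattice at constant $m$ (the gamma matrices, lattice size, and mass are illustrative choices), checks that multiplying by $\mathcal{C}$ yields an antisymmetric matrix with positive determinant, consistent with a real Pfaffian and $\det = \tmop{Pf}^2$, and solves the linear system (\ref{Geq}) for the propagator:

```python
import numpy as np

L, m = 4, 0.5                      # small periodic lattice, constant mass
g = [np.array([[0., 1.], [1., 0.]]), np.array([[1., 0.], [0., -1.]])]
C = np.array([[0., 1.], [-1., 0.]])
P = lambda mu, s: (np.eye(2) - s * g[mu]) / 2
idx = lambda x, y: (x % L) * L + (y % L)

N = L * L
D = np.zeros((2 * N, 2 * N))       # Wilson-Dirac operator, site-major, spin inner
for x in range(L):
    for y in range(L):
        i = idx(x, y)
        D[2*i:2*i+2, 2*i:2*i+2] += (2 + m) * np.eye(2)   # (D + m) with D = 2
        for mu, (dx, dy) in enumerate([(1, 0), (0, 1)]):
            j = idx(x + dx, y + dy)
            D[2*i:2*i+2, 2*j:2*j+2] -= P(mu, +1)         # hop x -> x + mu-hat
            D[2*j:2*j+2, 2*i:2*i+2] -= P(mu, -1)         # hop x + mu-hat -> x

M = np.kron(np.eye(N), C) @ D      # quadratic form of the Majorana action
assert np.allclose(M, -M.T)        # antisymmetric, so the Pfaffian is defined
assert np.linalg.slogdet(M)[0] > 0 # det M = Pf(M)^2 > 0

src = np.zeros(2 * N)
src[0] = 1.0                       # point source delta_{x,y} in one spin component
G = np.linalg.solve(D, src)        # one column of the propagator G(x, y; m)
assert np.allclose(D @ G, src)
```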
We end this section with the remark that, in contrast to the Ising model,
$Z^{(\varepsilon)} (x, x)$ is not equal to the ordinary partition function.
Instead one may show that for any $m$
\begin{equation}
Z^{(\varepsilon)} (x, x) = \frac{\partial Z_0^{^{(\varepsilon)}}}{\partial m (x)} \times 1_{\tmop{spin}} \label{Sdensity}
\end{equation}
holds. To derive this relation we use that the space of antisymmetric $2
\times 2$ matrices is only one-dimensional, given by multiples of the second
Pauli matrix. Hence in $Z^{(\varepsilon)} (x, x)$ the integral containing
$\xi (x) \xi (x)^{\top}$ must be proportional to $\mathcal{C}^{- 1}$.
\section{Dimer form of Majorana fermions}
We here derive the loop-gas form of the fermion correlation function and
partition function. In principle this may be achieved by using theorems for
the expansion of the Pfaffian together with the sparseness of the matrix
introduced before. This would parallel the approach in
{\cite{Karowski:1984ih}}. Instead we shall extensively manipulate the
representation by a Grassmann integral. This is physically more transparent
and may be seen as deriving the required expansion formulas `on the fly' as
they are needed. The power of Grassmann numbers for such purposes was
emphasized before in {\cite{Samuel:1978zx}}.
\subsection{General structure}
We start from the factorized form
\begin{equation}
Z^{(\varepsilon)} (u, v) = \int D \xi \prod_z \mathrm{e}^{- \frac{1}{2} \varphi
(z) \overline{\xi} (z) \xi (z)} \left[ \prod_{l = \langle x y \rangle}
\mathrm{e}^{\overline{\xi} (x) P ( \widehat{y - x}) \xi (y)} \right] \xi (u)
\overline{\xi} (v)
\end{equation}
with the short hand
\begin{equation}
\varphi (x) = D + m (x) .
\end{equation}
Because each $P$ is a one-dimensional projector and due to the Grassmann
nature of $\xi$ there are only two terms{\footnote{While many of the previous
steps go through also for $D = 4$, a third term would be needed here.}} in the
expansion of each link-factor. It can thus be `dimerized'
\begin{equation}
\mathrm{e}^{\overline{\xi} (x) P ( \widehat{y - x}) \xi (y)} = \sum_{k_l = 0,
1} [ \overline{\xi} (x) P ( \widehat{y - x}) \xi (y)]^{k_l}
\end{equation}
leading to
\begin{equation}
Z^{(\varepsilon)} (u, v) = \sum_{\{k_l \}} \int D \xi \prod_z \mathrm{e}^{-
\frac{1}{2} \varphi (z) \overline{\xi} (z) \xi (z)} \left[ \prod_{l = \langle x y
\rangle} [ \overline{\xi} (x) P ( \widehat{y - x}) \xi (y)]^{k_l} \right]
\xi (u) \overline{\xi} (v) . \label{Zuvk}
\end{equation}
For each configuration $\{k_l \}$ we say that links with $k_l = 1$ carry an
(active) dimer. Associated with each site we have only two Grassmann variables
integrated over. This implies numerous constraints on contributing dimer
configurations:
\begin{itemize}
\item at sites $x \notin \{u, v\}$ there can only be either 0 or 2 dimers
adjacent
\item if $u \not= v$, at these two sites there must be exactly \ 1 dimer
touching
\item at $u = v$ there can be no dimer touching.
\end{itemize}
In the simpler case of $Z_0^{^{(\varepsilon)}}$ the analogous expansion
requires 0 or 2 dimers around all sites. Any dimer configuration that obeys
these conditions and contributes to $Z^{(\varepsilon)} (u, v)$ or to
$Z_0^{^{(\varepsilon)}}$ we call {\tmem{admissible}}.
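The admissibility conditions above are purely local degree constraints and can be encoded in a few lines. The helper below is our own illustration, with a dimer configuration stored as a set of unoriented links:

```python
from collections import Counter

def admissible(dimers, insertions=()):
    """Local dimer constraints: 0 or 2 dimers at ordinary sites, exactly 1 at
    each of u != v, and 0 at u when u = v.
    dimers: links as frozensets of two sites; insertions: () or (u, v)."""
    deg = Counter(site for link in dimers for site in link)
    if not insertions:
        return all(d == 2 for d in deg.values())
    u, v = insertions
    if u == v:
        return deg[u] == 0 and all(d == 2 for d in deg.values())
    return (deg[u] == 1 and deg[v] == 1
            and all(d == 2 for s, d in deg.items() if s not in (u, v)))

# an elementary plaquette loop contributes to Z_0 ...
loop = [frozenset(p) for p in ([(0, 0), (1, 0)], [(1, 0), (1, 1)],
                               [(1, 1), (0, 1)], [(0, 1), (0, 0)])]
assert admissible(loop)
# ... a single dimer is the shortest string for Z(u, v) with neighboring u, v ...
assert admissible([frozenset([(0, 0), (1, 0)])], insertions=((0, 0), (1, 0)))
# ... but has open ends, so it does not contribute to Z_0
assert not admissible([frozenset([(0, 0), (1, 0)])])
```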
As a consequence of these constraints dimers in admissible configurations have
to form chains. These can never backtrack, intersect or overlap. For \ $u
\not= v$ there must be exactly one chain or string connecting $u$ and $v$
that we call $\sigma$. Apart from it all other chains must form a number of
closed loops $\lambda_j$. For contributions to \ $Z_0^{^{(\varepsilon)}}$
there is no string but only loops and the same is true for $Z^{(\varepsilon)}
(u, u)$ with the additional requirement that no loop passes through $u$.
It is to be emphasized that the string (if present) and the loops including
their number is a unique one-to-one representation of an admissible dimer
configuration,
\begin{equation}
\{k_l \} |_{\tmop{admissible}} \longleftrightarrow \sigma \cup
\{\lambda_j, j = 1, 2, \ldots, N_{\lambda} \},
\end{equation}
where both the string and the set of loops can also be empty.
\subsection{Amplitudes}
To evaluate a contribution {\tmem{for fixed admissible }}$\{k_l \}$, we can
reorder freely all Grassmann bilinears like hopping terms of links with $k_l =
1$, the local $\varphi \overline{\xi} \xi$ terms and the integration
measure with two spin components $d \xi_1 d \xi_2$ for each site. In this way
the whole Grassmann integral is factorized into one site integrals that are
carried out with the formula
\begin{equation}
\int \text{$d \xi_1 d \xi_2$} \xi \overline{\xi} = 1_{\tmop{spin}},
\end{equation}
where we have chosen $\mathcal{C}_{12} = 1 = -\mathcal{C}_{21}$. At each
monomer site {---} a site with no dimer adjacent {---} the
integrations are saturated by a site-factor in (\ref{Zuvk}) and contribute a
factor $\varphi (z) = D + m (z)$.
Next we consider $\sigma$ and define $| \sigma |$ to be the number of dimers
one has to cross to walk from $u$ to $v$. During the walk one encounters a
sequence of sites $s_i$ separated by lattice unit vectors $n_i$,
\begin{equation}
\tmop{string} \sigma \leftrightarrow \{ \text{$u = s_0, s_1, s_2, \ldots,
s_{| \sigma |} = v$} \}, \hspace{1em} n_{i + 1} = s_{i + 1} - s_i .
\end{equation}
We notice that we may use (\ref{nonorient}) to our convenience along the path.
Then, after carrying out the integrations belonging to all sites $s_i$, there
emerges a product of projectors
\begin{equation}
V (\sigma) = P (n_1) P (n_2) \cdots P (n_{| \sigma |}) .
\end{equation}
Each closed loop $\lambda_j$ can be labeled in exactly the same way except
that now $s_0 = s_{| \lambda |}$ holds (suppressing for the moment the loop
index $j$). If we denote the sequence of unit shifts now by $\{m_1^{}, m_2,
\ldots, m_{| \lambda |} \}$ , then closed loops contribute a scalar factor
\begin{equation}
w (\lambda_j) = - \tmop{tr} [P (m_1) P (m_2) \cdots P (m_{| \lambda |})] .
\label{wlam}
\end{equation}
The minus sign here is the usual one coming from closed fermion loops.
Technically speaking, upon closing the trace, one pair of $\xi,
\overline{\xi}$ appears in the `wrong' order. The cyclicity of the trace
immediately implies that $w (\lambda_j)$ is independent of where we start
with $m_1$ along the loop. In addition one may use
\begin{equation}
P (n)^{\top} =\mathcal{C} P (- n) \mathcal{C}^{- 1}
\end{equation}
to show also independence of the direction chosen to traverse the loop. Hence
$w (\lambda_j)$ is truly a function of the unoriented loop only.
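As a concrete check of these invariances, the loop weight (\ref{wlam}) can be evaluated numerically for an elementary plaquette. The sketch below is illustrative only: it assumes the projector convention $P (n) = (1 - n_{\mu} \gamma_{\mu}) / 2$ and the $D = 2$ representation $\gamma_0 = \sigma_3$, $\gamma_1 = \sigma_2$, neither of which is fixed by the text, and the function names are ours.

```python
import numpy as np

# Illustrative D = 2 Clifford representation (gamma_0 = sigma_3, gamma_1 = sigma_2);
# the projector convention P(n) = (1 - n.gamma)/2 is an assumption of this sketch.
gam = [np.diag([1.0 + 0j, -1.0]),
       np.array([[0, -1j], [1j, 0]])]

def P(n):
    """Wilson projector for the unit shift n."""
    return (np.eye(2) - sum(c * g for c, g in zip(n, gam))) / 2

def w(loop):
    """w(lambda) = -tr[ P(m_1) P(m_2) ... P(m_|lambda|) ] for a list of unit shifts."""
    prod = np.eye(2, dtype=complex)
    for m in loop:
        prod = prod @ P(m)
    return -np.trace(prod)

# Elementary plaquette loop: four corners, so |w| = 2**(-4/2) = 1/4, phase +1
plaq = [(1, 0), (0, 1), (-1, 0), (0, -1)]
shifted = plaq[1:] + plaq[:1]                          # different starting site
reversed_loop = [(-a, -b) for a, b in reversed(plaq)]  # opposite orientation
print(w(plaq), w(shifted), w(reversed_loop))           # all equal to 0.25
```

The value $w = 1 / 4 = 2^{- 4 / 2}$ with phase $+ 1$ anticipates the general form derived in the next subsection.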
\subsection{Evaluation of spin factors\label{spinfac}}
Using a bra-ket notation in spin space we write the Wilson projectors as
\begin{equation}
  P (m) = | m \rangle \langle m |, \quad \langle m | m \rangle = 1,
  \hspace{1em} m = \pm \hat{\mu} = \pm \hat{0}, \pm \hat{1}
  [, \pm \hat{2} \tmop{if} D = 3] . \label{eigenspinor}
\end{equation}
Now the loop factor is composed of scalar products
\begin{equation}
  w (\lambda_j) = - \langle m_1 | m_2 \rangle \langle m_2 | m_3 \rangle
  \cdots \langle m_{| \lambda | - 1} | m_{| \lambda |} \rangle \langle
  m_{| \lambda |} | m_1 \rangle,
\end{equation}
i.e. factors associated with the sites connecting pairs of links met along the
loop. The modulus of the individual factors is one where successive $m_i$
coincide (straight sections) and $1 / \sqrt{2} = \cos (\pi / 4)$ between
orthogonal links (corners). This is easily seen from a simple example
\begin{equation}
| \langle \hat{0} | \hat{1} \rangle |^2 = \langle \hat{0} |P ( \hat{1}) |
\hat{0} \rangle = \langle \hat{0} | \gamma_0 P ( \hat{1}) \gamma_0 | \hat{0}
\rangle = \langle \hat{0} |P (- \hat{1}) | \hat{0} \rangle = \frac{1}{2}
\end{equation}
and similarly for any other orthogonal pair. The angle $\pi / 4$ is the
typical half-angle familiar from spinor rotations. We thus find
\begin{equation}
w (\lambda_j) = 2^{- C (\lambda_j) / 2} \phi (\lambda_j),
\end{equation}
where $C (\lambda_j)$ is the number of corners around the loop and $\phi
(\lambda_j)$ is a phase.
We now discuss a rather direct way to compute $\phi (\lambda_j)$. While it
gives little geometric insight into its meaning, this derivation will lend
itself to a very direct algorithmic implementation. In appendix \ref{appB} an
alternative more geometrical analysis is presented.
We fix the ambiguous phases of $| \pm \hat{\mu} \rangle$ in a definite way,
knowing that $w$ is independent of this convention. We start from $| \hat{0}
\rangle$ with an arbitrary phase. Next one may demand a maximal number of five
real positive phases
\begin{equation}
\langle \pm \hat{1} | \hat{0} \rangle = \langle - \hat{0} | \hat{1} \rangle
= \langle \pm \hat{2} | \hat{0} \rangle = \frac{1}{\sqrt{2}} .
\end{equation}
This exhausts the free choices, and the remaining phases (one in $D = 2$ and
six more in $D = 3$) can be {\tmem{evaluated}}. One possible way to do so
is to construct all eigenvectors starting from $| \hat{0} \rangle$ with the
help of the projectors:
\begin{equation}
  | \pm \hat{1} \rangle = \sqrt{2} P (\pm \hat{1}) | \hat{0} \rangle,
\end{equation}
\begin{equation}
  | - \hat{0} \rangle = \sqrt{2} P (- \hat{0}) | \hat{1} \rangle = -
  \gamma_1 | \hat{0} \rangle
\end{equation}
and
\begin{equation}
  | \pm \hat{2} \rangle = \sqrt{2} P (\pm \hat{2}) | \hat{0} \rangle .
\end{equation}
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& $| + \hat{0} \rangle$ & $| - \hat{0} \rangle$ & $| + \hat{1} \rangle$
& $| - \hat{1} \rangle$ & $| + \hat{2} \rangle$ & $| - \hat{2}
\rangle$\\
\hline
$\langle + \hat{0} |$ & 1 & - & 1 & 1 & 1 & 1\\
\hline
$\langle - \hat{0} |$ & - & 1 & $1$ & $- 1$ & $i$ & $- i$\\
\hline
$\langle + \hat{1} |$ & 1 & 1 & 1 & - & $z$ & $z^{\ast}$\\
\hline
$\langle - \hat{1} |$ & 1 & $- 1$ & - & 1 & $z^{\ast}$ & $z$\\
\hline
$\langle + \hat{2} |$ & 1 & $- i$ & $z^{\ast}$ & $z$ & 1 & -\\
\hline
$\langle - \hat{2} |$ & 1 & $i$ & $z$ & $z^{\ast}$ & - & 1\\
\hline
\end{tabular}
\caption{Phases in all possible scalar products between
eigenspinors.\label{tab1}}
\end{table}
{\noindent}The implied phases can now be computed by just using the Dirac
algebra and they are collected in table \ref{tab1}. In two dimensions only the
upper left $4 \times 4$ block is relevant, where all phases are real; they
belong to Z(2). In this case the sign for each loop acquires a simple
geometrical interpretation which will be discussed in section \ref{phases}.
In evaluating the phases involving the third dimension we have assumed that
$\gamma_1 \gamma_2 = - i \gamma_0$ holds and set
\begin{equation}
z = \frac{1 + i}{\sqrt{2}} = \mathrm{e}^{i \frac{\pi}{4}} .
\end{equation}
We cannot eliminate the complex phase factors by re-defining the phases of
$\text{$| \pm \hat{2} \rangle$}$. There is another inequivalent irreducible
Dirac representation in $D = 3$ with the complex conjugate phases (for example
from $\gamma_{\mu} \rightarrow - \gamma_{\mu})$. This means that the parity
reflected loop has the opposite phase which is hence a pseudoscalar. The phase
factors for 3-dimensional Wilson fermions are in Z(8) and all values are
actually assumed for relatively short loops if non-planar ones are included.
The group Z(8) is related to the rotation group being reduced to the lattice
symmetries, see appendix \ref{appB}. Simple examples for loops with complex phases
are shown in figure \ref{cloop}. They were extracted from a Monte Carlo
simulation (see below) by `tagging' the phase of configurations and plotting
them.
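Table \ref{tab1} can also be reproduced mechanically. The following sketch assumes the projector convention $P (n) = (1 - n_{\mu} \gamma_{\mu}) / 2$ and the illustrative representation $\gamma_0 = \sigma_3$, $\gamma_1 = \sigma_2$, $\gamma_2 = \sigma_1$ (so that $\gamma_1 \gamma_2 = - i \gamma_0$ holds as assumed above); it builds the eigenspinors by the projector relations and reads off a few table entries.

```python
import numpy as np

# gamma matrices with gamma_1 gamma_2 = -i gamma_0 (sigma_3, sigma_2, sigma_1)
g0 = np.diag([1.0 + 0j, -1.0])
g1 = np.array([[0, -1j], [1j, 0]])
g2 = np.array([[0, 1], [1, 0]], dtype=complex)

def P(n):
    """Assumed projector convention (1 - n.gamma)/2."""
    return (np.eye(2) - n[0] * g0 - n[1] * g1 - n[2] * g2) / 2

ket = {}
ket['+0'] = np.array([0, 1], dtype=complex)          # arbitrary phase choice
ket['+1'] = np.sqrt(2) * P((0, 1, 0)) @ ket['+0']
ket['-1'] = np.sqrt(2) * P((0, -1, 0)) @ ket['+0']
ket['-0'] = np.sqrt(2) * P((-1, 0, 0)) @ ket['+1']
ket['+2'] = np.sqrt(2) * P((0, 0, 1)) @ ket['+0']
ket['-2'] = np.sqrt(2) * P((0, 0, -1)) @ ket['+0']

def phase(a, b):
    """Phase of <a|b> (assumes the overlap is nonzero)."""
    ov = ket[a].conj() @ ket[b]
    return ov / abs(ov)

z = np.exp(1j * np.pi / 4)
print(phase('+1', '+2'))   # z
print(phase('-0', '+2'))   # i
print(phase('-0', '-1'))   # -1
```

The printed phases agree with the corresponding entries of table \ref{tab1}; orthogonal pairs such as $\langle + \hat{0} | - \hat{0} \rangle$ give vanishing overlap, corresponding to the empty entries.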
\begin{figure}[htb]
\centering
\epsfig{file=cloop1.eps,width=6.0cm}
\qquad
\epsfig{file=cloop2.eps,width=3.0cm}
\caption{Closed fermion loops in $D = 3$ with phase $\exp (i \pi / 4)$
(left) and $\exp (i \pi / 2)$ (right).\label{cloop}}
\end{figure}
The spin factor for the open string from $u$ to $v$ is given by
\begin{equation}
  V (\sigma) = |n_1 \rangle \langle n_1 | n_2 \rangle \cdots \langle n_{|
  \sigma | - 1} | n_{| \sigma |} \rangle \langle n_{| \sigma |} |.
\end{equation}
For the leftmost ket and the rightmost bra we introduce the notation
\begin{equation}
|n_1 \rangle = |n (u) \rangle, \hspace{1em} \langle n_{| \sigma |} | =
\langle n (v) | \hspace{1em} (u \not= v)
\end{equation}
such that $n (u)$ is the unit vector pointing {\tmem{out}} of $u$ in the
direction of the unique adjacent dimer $k_l = 1$ while $n (v)$ is the
corresponding unit vector pointing {\tmem{toward}} $v$. Note that in principle
we should write $n (u ; k)$ and $n (v ; k)$ and both vectors are undefined if
$u = v$ holds. The scalar factors have again a modulus $2^{- 1 / 2}$ for each
corner and a phase $\phi (\sigma)$ that may be constructed from table
\ref{tab1},
\begin{equation}
  V (\sigma) = 2^{- C (\sigma) / 2} \phi (\sigma) \; |n (u) \rangle \langle n (v)
|.
\end{equation}
We re-emphasize that all objects discussed above including the number of
corners, the string and loop decomposition and the various phases are
(nonlocal) functions of the $k_l$ in an admissible configuration. It would
however clutter our notation too much to always exhibit this explicitly.
\subsection{Boundary conditions\label{bcfac}}
More phase factors can arise from boundary conditions if the string or a loop
winds around the torus an odd number of times in an antiperiodic direction. We
adopt the convention to label the points on the torus by coordinates $x_{\mu}
= 0, 1, \ldots, L_{\mu} - 1$ and distinguish a $(D-1)$-dimensional sheet of
`boundary' links{\footnote{Of course, the torus has no boundary, hence the
quotes.}} for each direction as follows:
\begin{equation}
  l \tmop{is} \tmop{a} \tmop{boundary} \tmop{link} \tmop{in}
  \tmop{direction} \mu \leftrightarrow l = \langle x, x + \hat{\mu} \rangle
  \tmop{with} x_{\mu} = L_{\mu} - 1 .
\end{equation}
For the string $\sigma$ and for each loop $\lambda_j$ we introduce parities
$e_{\mu} (\sigma$) and $e_{\mu} (\lambda_j)$ defined by
\begin{equation}
e_{\mu} (\sigma) = \left\{ \begin{array}{ll}
1 & \tmop{if} \sigma \tmop{contains} \tmop{an} \tmop{odd} \tmop{number}
\tmop{of} \mu - \tmop{boundary} \tmop{links}\\
0 & \tmop{else}
\end{array} \right. \label{blink}
\end{equation}
and for the closed loops $e_{\mu} (\lambda_j)$ is completely analogous. The
overall sign from the boundary conditions is now given by
\begin{equation}
\tmop{sign} = (- 1)^{\varepsilon \cdot \overline{e}} \hspace{1em}
\tmop{with} \hspace{1em} \overline{e}_{\mu} = e_{\mu} (\sigma) + \sum_{j =
1}^{N_{\lambda}} e_{\mu} (\lambda_j) \hspace{1em} (\tmop{mod} 2)
\end{equation}
with the scalar product of the $D$-vectors $\varepsilon_{\mu}$ and
$\overline{e}_{\mu}$ in the exponent.
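A minimal sketch of this bookkeeping, with the walk given explicitly as its ordered list of visited sites (data layout and function names are ours, and $L_{\mu} > 2$ is assumed so that wrapping steps are unambiguous):

```python
# Winding parities e_mu and the boundary sign for a walk on an L_0 x L_1 torus;
# the walk is an ordered list of visited sites (closed loop if first == last).
def boundary_parities(sites, L):
    """Count, mod 2, the mu-boundary links (x_mu = L_mu - 1) crossed by the walk."""
    D = len(L)
    e = [0] * D
    for a, b in zip(sites, sites[1:]):
        for mu in range(D):
            crossed = ((a[mu] == L[mu] - 1 and b[mu] == 0) or
                       (b[mu] == L[mu] - 1 and a[mu] == 0))
            if crossed:
                e[mu] ^= 1
    return e

def bc_sign(e, eps):
    """(-1)^{eps . e} from the boundary conditions."""
    return (-1) ** sum(ei * pi for ei, pi in zip(e, eps))

L = (4, 4)
# a loop winding once around direction 0 at fixed x_1 = 2
loop = [(x, 2) for x in range(4)] + [(0, 2)]
e = boundary_parities(loop, L)
print(e, bc_sign(e, (1, 0)), bc_sign(e, (0, 0)))   # [1, 0] -1 1
```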
All contributions to the amplitude of an admissible dimer configuration have
now been identified and will be combined in the next section to represent
fermionic quantities in an ensemble summing and ultimately sampling such
configurations.
\section{Dimer partition function and worm algorithm}
\subsection{Dimer partition function}
Next we formally write down the characteristic function $\Theta (k ; u, v)$
that is unity for admissible configurations and zero for all others. In
principle it has been defined before in words. As a building block we use
\begin{equation}
d (k ; x) = \sum_{l, \partial l \ni x} k_l
\end{equation}
which counts the number of dimers adjacent at $x$. Then we define
\begin{eqnarray}
\Theta (k ; u, v) & = & \delta_{d (k ; u), 1} \delta_{d (k ; v), 1}
  \prod_{x \notin \{u, v\}} \left( \delta_{d (k ; x), 0} + \delta_{d (k ;
x), 2} \right) \hspace{1em} \tmop{for} \; u \not= v \\
\Theta (k ; u, v) & = & \prod_x \left( \delta_{d (k ; x), 0} + \delta_{d (k
; x), 2} \right) \hspace{1em} \tmop{for} \; u = v.
\end{eqnarray}
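A direct transcription of $d (k ; x)$ and $\Theta (k ; u, v)$ reads as follows; the data layout (dimers stored as $k [(x, \mu)] \in \{0, 1\}$ for the link $\langle x, x + \hat{\mu} \rangle$) is our own choice:

```python
from itertools import product

def d(k, x, L):
    """Number of dimers adjacent to site x on the periodic lattice."""
    D = len(L)
    total = 0
    for mu in range(D):
        total += k.get((x, mu), 0)                               # outgoing link
        xm = tuple((x[i] - (i == mu)) % L[i] for i in range(D))
        total += k.get((xm, mu), 0)                              # incoming link
    return total

def Theta(k, u, v, L):
    """1 for admissible configurations, 0 otherwise."""
    for x in product(*map(range, L)):
        dx = d(k, x, L)
        if u != v and x in (u, v):
            if dx != 1:
                return 0
        elif dx not in (0, 2):
            return 0
    return 1

L = (4, 4)
k = {((0, 0), 0): 1}                 # one dimer between (0,0) and (1,0)
print(Theta(k, (0, 0), (1, 0), L))   # 1: open string of length one
print(Theta({}, (0, 0), (0, 0), L))  # 1: empty configuration with u = v
print(Theta(k, (0, 0), (0, 0), L))   # 0: dangling dimer not allowed for u = v
```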
Note that the constraint enforced for $u = v$ here is the one for
contributions to $Z_0^{(\varepsilon)}$ rather than the more restrictive one
for contributions to $Z^{(\varepsilon)} (u, u)$. We now consider the following
partition function
\begin{equation}
\mathcal{Z}= \sum_{u, v, \{k_l \}} \frac{\Theta (k ; u, v)}{\rho^{} (u, v)}
2^{- \overline{C} / 2} \prod_{x, d (k ; x) = 0} \varphi (x) . \label{Zdim}
\end{equation}
Here $\overline{C}$ is the total number of corners
\begin{equation}
\overline{C} = C (\sigma) + \sum_{j = 1}^{N_{\lambda}} C (\lambda_j),
\end{equation}
$\rho$ is an arbitrary symmetric strictly positive lattice-periodic function,
similar to the one in {\cite{Wolff:2008km}}. The product is the weight from all
monomer sites and we here restrict ourselves to
\begin{equation}
  \varphi (x) = D + m (x) > 0
\end{equation}
guaranteeing the positivity of the overall weight. Expectation values of
observables $A (k ; u, v)$ in this ensemble are defined by
\begin{equation}
\left\langle \left\langle A \right\rangle \right\rangle =
\frac{1}{\mathcal{Z}} \sum_{u, v, \{k_l \}} A (k ; u, v) \frac{\Theta (k ;
u, v)}{\rho^{} (u, v)} 2^{- \overline{C} / 2} \prod_{x, d (k ; x) = 0}
\varphi (x) . \label{primobs}
\end{equation}
Observables related to the Majorana fermions discussed before can be written
as ratios of such expectation values. This will be discussed in the next
section after introducing the simulation algorithm for (\ref{Zdim}).
\subsection{Prokof'ev-Svistunov worm algorithm}
The simulation of the dimer ensemble can be carried out with the worm
algorithm of PS {\cite{prokofev2001wacci}}. It is very similar to the
algorithm described in {\cite{Wolff:2008km}} and we can be brief here about
details. The main difference to the Ising case is that no more than two dimers
may touch at a site and that there is a weight $1 / \sqrt{2}$ for corners which
induces a kind of stiffness (a tendency to be straight) of the chains.
We briefly pause here to comment on the notion of the hopping parameter
expansion in the title of the paper. The factors $\varphi (x)$ appearing for
monomers could be rescaled to unity by absorbing them into $\xi (x)$ early on.
Then each dimer $k_{\langle x y \rangle}$ would be accompanied by a factor
$[\varphi (x) \varphi (y)]^{- 1 / 2}$. For constant $m$ this would equal $[D +
m]^{- 1} = 2 \kappa$ with the conventional hopping parameter $\kappa$. Thus
$2 \kappa$ is the strict analog of the Ising strong coupling expansion
parameter $\tan \beta$ in {\cite{Wolff:2008km}}. We prefer however to stay
with the unrescaled form which is advantageous for the introduction of
interaction via $m (x)$.
An update microstep of the PS algorithm is now a succession of steps I and II
applied to admissible configurations. In step I we make a Metropolis decision
on a proposal where we pick one of the $2 D$ nearest neighbors of $v$ with
equal probability and call it $v'$ and the connecting link $l$. The proposed
move changes $v \rightarrow v'$, flipping at the same time $k_l \rightarrow
k'_l = 1 - k_l$. It brings us from the global configuration $k$ to
configuration $k'$ (differing at exactly one link). Note that $k'$ may not be
admissible, in which case the move will be rejected. We first form an
auxiliary quantity $q$, the ratio of amplitudes after and before the move. We
have to distinguish a number of cases and collect values of $q$ in table
\ref{tabq}. We have written $n (u')$ as a shorthand for what should be $n (u ;
k')$ {\tmem{after}} the move, although we did not move $u$ here. The allowed
moves are illustrated in figure \ref{movefig}. Parts a), b), c) refer to lines
1, 3, 5 of the table. The lines below those refer to the reverse changes. They
correspond to the same graphs read from right to left with the arrow reversed
and interchanged $v \leftrightarrow v'$. The directions of active dimers not
participating in the present update are examples only and can also point
differently.
\begin{figure}[htb]
\centering
\epsfig{file=movea.eps,width=0.6\textwidth}\\[1ex]
\epsfig{file=moveb.eps,width=0.6\textwidth}\\[1ex]
\epsfig{file=movec.eps,width=0.6\textwidth}
\caption{Pictorial representation of elementary moves in the PS algorithm.
Solid lines are active dimers (value one).\label{movefig}}
\end{figure}
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
    $d (k ; v)$ & $d (k ; v')$ & $d (k' ; v)$ & $d (k' ; v')$ & $q$\\
\hline
0 & 0 & 1 & 1 & $[\varphi (v) \varphi (v')]^{- 1}$\\
\hline
1 & 1 & 0 & 0 & $\varphi (v) \varphi (v')$\\
\hline
2 & 2 & 1 & 1 & $- [\langle n (v') |v - v' \rangle \langle v - v' |n
(u') \rangle]^{- 1}$\\
\hline
1 & 1 & 2 & 2 & $- \langle n (v) |v' - v \rangle \langle v' - v|n (u)
\rangle$\\
\hline
1 & 0 & 2 & 1 & $\langle n (v) |v' - v \rangle [\varphi (v')]^{- 1}$\\
\hline
1 & 2 & 0 & 1 & $\varphi (v) [\langle n (v') |v - v' \rangle]^{- 1}$\\
\hline
\end{tabular}
\caption{Entries in the first four columns specify the condition for
possible moves, under which the amplitude gets multiplied by $q$ (not including
possible signs from antiperiodic boundary conditions).\label{tabq}}
\end{table}
{\noindent}In all other cases not covered here $q$ is set to zero. In lines
3 and 4 of the table a sign is included for changing the number of fermion
loops $N_{\lambda}$ by one. Finally the modulus of $q$ is used in the
acceptance probability
\begin{equation}
p_{\tmop{acc}} = \min \left( 1, \frac{\rho (u, v)}{\rho (u, v')} | q|
\right)
\end{equation}
while the phase changes will be considered in the next section.
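The moduli $|q|$ of table \ref{tabq} can be transcribed schematically as below; signs and phases are tracked separately as described in section \ref{phases}. All geometric arguments are assumed precomputed from the configuration, and the names and layout are ours:

```python
SQ2 = 2 ** -0.5

def sp_mod(n, m):
    """|<n|m>| for lattice unit vectors: 1 (parallel), 0 (antiparallel),
    1/sqrt(2) (orthogonal, i.e. one corner)."""
    if n == m:
        return 1.0
    if all(a == -b for a, b in zip(n, m)):
        return 0.0
    return SQ2

def q_modulus(dv, dvp, dv_new, dvp_new, phi_v, phi_vp, step,
              n_v=None, n_vp=None, n_u=None, n_up=None):
    """|q| for the six allowed cases of the move table; step is the unit
    vector v' - v. Dimer directions n_v, n_vp (out of v, v' before the move)
    and n_u, n_up (out of u before/after) are assumed precomputed; the
    denominators are nonzero for configurations of nonzero weight."""
    case = (dv, dvp, dv_new, dvp_new)
    if case == (0, 0, 1, 1):                       # line 1 of the table
        return 1.0 / (phi_v * phi_vp)
    if case == (1, 1, 0, 0):                       # line 2
        return phi_v * phi_vp
    if case == (2, 2, 1, 1):                       # line 3
        return 1.0 / (sp_mod(n_vp, step) * sp_mod(step, n_up))
    if case == (1, 1, 2, 2):                       # line 4
        return sp_mod(n_v, step) * sp_mod(step, n_u)
    if case == (1, 0, 2, 1):                       # line 5
        return sp_mod(n_v, step) / phi_vp
    if case == (1, 2, 0, 1):                       # line 6
        return phi_v / sp_mod(n_vp, step)
    return 0.0                                     # all other proposals rejected

def p_acc(q, rho_uv, rho_uvp):
    """Metropolis acceptance probability, using |q| and the bias rho."""
    return min(1.0, rho_uv / rho_uvp * abs(q))
```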
The type II move is as follows. If we encounter a configuration $u, v, \{k_l
\}$ with $u = v$ we `kick' $u = v$ together to a randomly chosen other lattice
site with unchanged $\{k_l \}$ with the probability $0 < p_{\tmop{kick}}
\leqslant 1$. For $u \not= v$ we do nothing in this step, which is the
dominant case. A difference with respect to the Ising case is that while there
also $p_{\tmop{kick}} = 0$ (absence of step II) yields an ergodic algorithm
{\cite{Deng:2007jq}}, this is not so here. For the fermions the jumps are
required to move between different connected components.
Moving only $v$ together with steps II constitutes a correct Monte Carlo
algorithm. We nevertheless found it advantageous to also move $u$ in a
completely analogous fashion. We thus now call the sequence $\mathrm{I}_u -
\tmop{II} - \mathrm{I}_v - \tmop{II}$ a microstep and call $N_x / 2$ microsteps an
iteration if we have $N_x$ lattice sites.
We found the choice of $p_{\tmop{kick}} \in [0.3, 1]$ not critical and use
$p_{\tmop{kick}} = 0.7$ in the following after a few quick experiments.
\section{Fermionic phase and spin factors\label{phases}}
\subsection{Formulae for both $D = 2$ and $D = 3$}
The configurations of the dimer ensemble just discussed correspond to the set
of graphs of the hopping parameter expansion of Majorana fermions. Each
admissible configuration contributes to $Z^{(\varepsilon)}_0$ or to
$Z^{(\varepsilon)} (u, v)$ with a certain amplitude and in the second case
also with a spin matrix. The moduli of the amplitudes have been incorporated
into the generation of configurations. The phases of the amplitudes and the
spin matrices will be taken into account now as observables evaluated as in
(\ref{primobs}).
We first combine all phases discussed in sections \ref{spinfac} and \ref{bcfac}
into the total phase
\begin{equation}
\Phi^{(\varepsilon)} (k) = (- 1)^{\varepsilon \cdot \overline{e}} \phi
(\sigma) \prod_{j = 1}^{N_{\lambda}} \phi (\lambda_j) .
\end{equation}
For $u = v$ we set $\phi (\sigma) = 1$; there is no string, only loops.
As mentioned before for $D = 2$ the phase $\Phi$ is just a sign while for $D =
3$ it is an element of Z(8). We now have the connection
\begin{equation}
V Z^{(\varepsilon)}_0 =\mathcal{Z} \left\langle \left\langle \rho (u, u)
\delta_{u, v} \Phi^{(\varepsilon)} (k) \right\rangle \right\rangle
\label{Z0dim}
\end{equation}
with the volume
\begin{equation}
V = \prod_{\mu = 0}^{D - 1} L_{\mu} .
\end{equation}
If we define a spin matrix
\begin{equation}
\mathcal{S}(k ; u, v) = \left\{ \begin{array}{lll}
|n (u) \rangle \langle n (v) | & \tmop{if} & u \not= v\\
\delta_{d (k ; u), 0} [\varphi (u)]^{- 1} 1_{\tmop{spin}} & \tmop{if} & u
= v
\end{array} \right.
\end{equation}
the cases with insertions may be uniformly written as
\begin{equation}
Z^{(\varepsilon)} (x, y) = \rho (x, y)\mathcal{Z} \left\langle \left\langle
\delta_{u, x} \delta_{v, y} \Phi^{(\varepsilon)} (k)\mathcal{S}(k ; u, v)
\right\rangle \right\rangle . \label{Zxy}
\end{equation}
At coinciding arguments $x = y$ the Grassmann integrations are saturated by
the insertion alone, which requires a monomer site with its usual weight
factor to be canceled. In this case one could in principle also relax the
constraint in (\ref{Zxy}) to obtain
\begin{equation}
V Z^{(\varepsilon)} (x, x) =\mathcal{Z} \left\langle \left\langle \rho (u,
u) \delta_{u, v} \Phi^{(\varepsilon)} (k)\mathcal{S}(k ; x, x) \right\rangle
\right\rangle .
\end{equation}
For the fermion correlation function the connection is
\begin{equation}
G (x, y ; m) = \frac{\rho (x, y) \left\langle \left\langle \delta_{u, x}
\delta_{v, y} \Phi^{(\varepsilon)} (k)\mathcal{S}(k ; u, v) \right\rangle
\right\rangle}{(1 / V) \left\langle \left\langle \rho (u, u) \delta_{u, v}
\Phi^{(\varepsilon)} (k) \right\rangle \right\rangle} . \label{Gxy}
\end{equation}
An alternative derivation of the result for coinciding points starts from the
observation
\begin{equation}
- \frac{1}{2} \left\langle \overline{\xi} \xi (x) \right\rangle =
\frac{\partial}{\partial m (x)} \ln Z^{(\varepsilon)}_0 = \frac{1}{\varphi
(x)} \frac{\left\langle \left\langle \rho (u, u) \delta_{u, v}
\Phi^{(\varepsilon)} (k) \delta_{d (k ; x), 0} \right\rangle
\right\rangle}{\left\langle \left\langle \rho (u, u) \delta_{u, v}
\Phi^{(\varepsilon)} (k) \right\rangle \right\rangle}
\end{equation}
and uses (\ref{Z0dim}) and (\ref{Sdensity}).
From the structure of the contributions in the dimer ensemble we may conclude
that the right hand side of (\ref{Gxy}) is rational in the external field $m
(x)$. The denominator has total degree $V$, the numerator $V - 1 - | \sigma
|_{\min}$. Here $| \sigma |_{\min}$ is the minimal number of links to connect
$x$ and $y$ by a string. The degree in each individual $m (x)$ is only linear
both in the numerator and denominator.
The above formulae simplify if translation invariance holds, $m (x) \equiv m$,
$G (x, y ; m)$ $ \rightarrow G (x - y)$, where we also restrict $\rho (x, y) =
\rho (x - y)$ and normalize $\rho (0) = 1$. We then find
\begin{equation}
G (z) = \rho (z) \frac{ \left\langle \left\langle \delta^{(\varepsilon)}_{u
- v, z} \Phi^{(\varepsilon)} (k)\mathcal{S}(k ; u, v) \right\rangle
\right\rangle}{\left\langle \left\langle \delta_{u, v} \Phi^{(\varepsilon)}
(k) \right\rangle \right\rangle} .
\end{equation}
We recognize close similarities with the Ising correlation in
{\cite{Wolff:2008km}} with the novelty of averaging the phase factor and the
spin matrices. Note that the delta function $\delta^{(\varepsilon)}$ in the
numerator needs to have the same antiperiodicity as the fields $\xi$.
From (\ref{Z0dim}) one may now trivially read off that
\begin{equation}
\frac{Z^{(\varepsilon)}_0}{Z^{(\varepsilon')}_0} = \frac{\left\langle
\left\langle \delta_{u, v} \Phi^{(\varepsilon)} (k) \right\rangle
\right\rangle}{\left\langle \left\langle \delta_{u, v} \Phi^{(\varepsilon')}
(k) \right\rangle \right\rangle} \label{Zrat}
\end{equation}
allows one to measure the change in free energy for different boundary conditions.
This type of quantity is theoretically nice, because it is expected to
possess a continuum limit in a finite volume. For the massless all-periodic
case $\varepsilon_{\mu} \equiv 0$, the partition function
$Z_0^{(\varepsilon)}$ vanishes at $m = 0$ because the matrix under the
Pfaffian then has two exact zero modes. The corresponding phasefactor then
averages to zero exactly.
\subsection{$D = 2$ specialties}
Fermions in two Euclidean or one space dimension are simpler and in a way
untypical for the true problem related to the oscillating phase $\Phi$. In the
Euclidean field theory formulation this is seen by the phases from fermion
loops and from spin `essentially canceling' in $D = 2$ (only). In our
realization this is seen as follows. Minus signs appear only at two types of
corners, namely $\langle - \hat{0} | - \hat{1} \rangle$ or $\langle - \hat{1}
| - \hat{0} \rangle$. By drawing closed loops with the intersection properties
relevant here on a planar torus it is not difficult to see that
\begin{itemize}
\item loops winding around the torus in one or both dimensions receive an
even number of such minus signs
\item loops that close trivially and do not wind around the torus receive an
odd number of minus signs from spin phases.
\end{itemize}
In {\cite{Wolff:2007ip}} a more detailed discussion of this and some
illuminating figures with examples can be found. Winding around the torus can
be read off from the crossing of `boundary links' (\ref{blink}). Thus the result
for each closed loop on the two-dimensional torus can be summarized in our
notation as
\begin{equation}
\phi (\lambda_j) = \left\{ \begin{array}{lll}
+ 1 & \tmop{if} & e_{\mu} (\lambda_j) = (0, 0)\\
- 1 & \tmop{else} &
\end{array} \right. \hspace{1em} (D = 2 \tmop{only}) .
\end{equation}
Negative signs {---} remember that the Fermi loop sign has been included
in $\phi$ {---} only come from topologically nontrivial loops that cannot
be contracted to the trivial loop by series of plaquette moves
{\cite{Gattringer:2007em}}, {\cite{Wolff:2007ip}}. The total phase for $u = v$
configurations can now be given as
\begin{equation}
\Phi^{(\varepsilon)} (k) = (- 1)^{\varepsilon \cdot \overline{e} +
\delta_{\overline{e}, (0, 0)} + 1} \hspace{1em} [\tmop{for} u = v, \Theta (k
; u, u) = 1] . \label{Phi2D}
\end{equation}
It depends on $k$ {\tmem{only via}} the topology variable
$\overline{e}_{\mu}$. Using Fourier transformation on Z(2)
[$\sum_{\varepsilon} (- 1)^{\varepsilon \cdot (e - e')} = 4 \delta_{e, e'}$]
one may show the identity
\begin{equation}
1 \equiv \sum_{\varepsilon} z (\varepsilon) \Phi^{(\varepsilon)} (k),
\hspace{1em} z (\varepsilon) = \frac{1}{2} (- 1)^{\delta_{\varepsilon, (0,
0)}}
\end{equation}
for this case. This in turn implies for the average monomer density (with no
phases)
\begin{equation}
\overline{K} = \frac{1}{V} \frac{\left\langle \left\langle \delta_{u, v}
\sum_x \delta_{d (k ; x), 0} \right\rangle \right\rangle}{ \left\langle
\left\langle \delta_{u, v} \right\rangle \right\rangle} \label{Kbar}
\end{equation}
the exact result (for free fermions)
\begin{equation}
\overline{K} = \frac{2 + m}{V} \frac{\partial}{\partial m} \ln
\overline{Z}_0
\end{equation}
with the partition function
\begin{equation}
\overline{Z}_0 = \sum_{\varepsilon} z (\varepsilon) Z^{(\varepsilon)}_0
\end{equation}
summed over boundary conditions with amplitudes $z (\varepsilon)$. Similarly
from (\ref{Zrat}) we may deduce now
\begin{equation}
\frac{Z^{(\varepsilon)}_0}{\overline{Z}_0} = \frac{\left\langle \left\langle
\delta_{u, v} \Phi^{(\varepsilon)} (k) \right\rangle
\right\rangle}{\left\langle \left\langle \delta_{u, v} \right\rangle
\right\rangle} . \label{Phibar}
\end{equation}
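The Z(2) Fourier identity above, together with (\ref{Phi2D}), can be checked by brute force over the four winding sectors (variable names are ours):

```python
from itertools import product

# Check of the identity 1 = sum_eps z(eps) Phi^(eps)(k) in D = 2, where Phi
# depends on k only through the winding parities ebar.
def Phi(eps, ebar):
    """(Phi2D): (-1)^{eps.ebar + delta_{ebar,(0,0)} + 1}."""
    expo = eps[0] * ebar[0] + eps[1] * ebar[1]
    expo += 1 if ebar == (0, 0) else 0
    return (-1) ** (expo + 1)

def z(eps):
    """Amplitudes z(eps) = (1/2)(-1)^{delta_{eps,(0,0)}}."""
    return 0.5 * (-1) ** (1 if eps == (0, 0) else 0)

for ebar in product((0, 1), repeat=2):
    total = sum(z(eps) * Phi(eps, ebar) for eps in product((0, 1), repeat=2))
    print(ebar, total)   # total is 1.0 in every winding sector
```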
\section{Numerical experiments}
In this section we test our simulation method for the case of free fermions on
various lattice sizes and (constant) $m$ values in both $D = 2$ and 3. While
of course no numerical simulations are really needed here, we nonetheless
think that for our loop-gas representation this is not an untypical case, also
in view of later interacting applications. Thus the advance knowledge of the
results here is just an advantage for precision testing.
To extract fermionic quantities from simulations we must have the phase
$\Phi^{(\varepsilon)} (k)$ available for each sampled configuration for the
desired boundary conditions. For each admissible configuration with $\Theta (k
; u, v) = 1$ it can be constructed by tracing the string and all loops at the
cost of order $V$ operations. It is however easier to update its value
together with the configurations. In fact this is even necessary to measure
efficiently between microsteps as discussed in {\cite{Wolff:2008km}}. To that
end we assume $\Phi^{(\varepsilon)} (k)$ to be known for the start
configuration. We always took the trivial $u = v = 0, k_l \equiv 0$ with
\Phi^{(\varepsilon)} (k) = 1$. Then, whenever an update proposal of type
$\mathrm{I}_v$ of the PS algorithm is accepted, we change
\begin{equation}
\Phi^{(\varepsilon)} (k) \rightarrow \Phi^{(\varepsilon)} (k') =
\Phi^{(\varepsilon)} (k) \times \frac{q}{|q|} \times \eta (\langle v v'
\rangle, \varepsilon)
\end{equation}
and similarly for $\mathrm{I}_u$. Here $q$ is given in table \ref{tabq} and the
additional factor
\begin{equation}
\eta (\langle v v' \rangle, \varepsilon) = \left\{ \begin{array}{lll}
      - 1 & \tmop{if} & \langle v v' \rangle \tmop{is} \tmop{a} \mu -
      \tmop{boundary} \tmop{link} \tmop{and} \varepsilon_{\mu} = 1\\
+ 1 & \tmop{else} &
\end{array} \right.
\end{equation}
takes into account the boundary conditions (see (\ref{blink})). Needless to
say, one may also keep track of $\Phi^{(\varepsilon)}$ for several boundary
conditions in the same run, as the updates do not depend on them.
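This running update of the phase can be sketched as follows, with the flipped link specified by its lower endpoint $x$ and direction $\mu$ (function and argument names are ours):

```python
def eta(x, mu, eps, L):
    """-1 iff the link <x, x + mu-hat> is a mu-boundary link (x_mu = L_mu - 1)
    and the mu direction is antiperiodic (eps_mu = 1)."""
    return -1 if (x[mu] == L[mu] - 1 and eps[mu] == 1) else 1

def update_phase(phi, q, x, mu, eps, L):
    """New value of Phi^(eps) after an accepted move with amplitude ratio q."""
    return phi * (q / abs(q)) * eta(x, mu, eps, L)

L, eps = (4, 4), (1, 0)
print(eta((3, 2), 0, eps, L), eta((2, 2), 0, eps, L), eta((3, 2), 1, eps, L))  # -1 1 1
```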
The spin matrix is easy to construct at any time from the single dimers
adjacent to $u$ and $v \not= u$ and is trivial for the coinciding case. In
practice we measure correlations contracted with some Dirac matrix $\Gamma$
which leads to
\begin{equation}
- \left\langle \overline{\xi} (0) \Gamma \xi (z) \right\rangle = \rho (z)
\frac{ \left\langle \left\langle \delta^{(\varepsilon)}_{u - v, z}
\Phi^{(\varepsilon)} (k) \tmop{tr} [\mathcal{S}(k ; u, v) \Gamma]
\right\rangle \right\rangle}{\left\langle \left\langle \delta_{u, v}
\Phi^{(\varepsilon)} (k) \right\rangle \right\rangle} . \label{kGammara}
\end{equation}
We want to further specialize to zero spatial momentum as discussed in
appendix \ref{appA},
\begin{equation}
k_{\Gamma} (z_0) = - \sum_{z_k} \left\langle \overline{\xi} (0) \Gamma \xi
(z) \right\rangle \propto \rho (z_0) \left\langle \left\langle
\delta^{(\varepsilon)}_{u_0 - v_0, z_0} \Phi^{(\varepsilon)} (k) \tmop{tr}
[\mathcal{S}(k ; u, v) \Gamma] \right\rangle \right\rangle . \label{kGamma}
\end{equation}
We took $\rho$ to only depend on time and dropped the denominator. For
symmetry reasons only $\Gamma = 1, \gamma_0$ (also labeled as $S, V$) leading
to the scalar and vector correlations $k_S, k_V$ are nontrivial. During the
simulation we simply add the corresponding amplitudes into bins for each
separation $z_0$ and then end up with correlations measured for all distances.
Pre-computed tables are heavily used to speed up and they lead to a very
simple code. In the simulations reported below we have observed Metropolis
acceptance rates close to 50\% for $D = 2$ and 30\% for $D = 3$. This
is close to the amplitude change by $1 / (D + m)$ when the worm `eats' a
monomer which it has to do to grow. In equilibrium also the other processes
are important, but this one seems to set the scale.
All error estimates below are derived with the method and tools detailed in
{\cite{Wolff:2003sm}}. In particular the definition of integrated
autocorrelation times $\tau_{\tmop{int}}$ employed here can be found there; see also
remarks in {\cite{Wolff:2008km}}. Due to time series of length $10^6$ and more
the convolution step in \tmtexttt{UWerr}, eq. (31) in {\cite{Wolff:2003sm}},
became a bit slow for online data analysis. We therefore tailored a special
version \tmtexttt{UWerr\_fft} which accelerates this step by using the fast
Fourier transform. It is available on the web under
\tmtexttt{www.physik.hu-berlin.de/com/ALPHAsoft}.
\begin{figure}[htb]
\centering
\epsfig{file=c64m0.eps,width=0.45\textwidth}
\qquad
\epsfig{file=c64z5.eps,width=0.45\textwidth}
\caption{Two typical configurations on a $64^2$ lattice at criticality ($m =
  0$, left panel) and with correlation length $64 / 5$ ($m = 0.0812 \ldots$,
  right panel). The string $\sigma$ is given by the fat (red) line, the other
  lines are loops $\lambda_j$. Readers are asked to identify left-right and
top-bottom edges in their mind.\label{D2conf}}
\end{figure}
\subsection{$D = 2$, physically large volume}
For tests in this subsection we chose a mass such that $\omega L = 5$ holds
with the pole mass $\omega = \ln (1 + m)$. The zero momentum timeslice
correlations (\ref{kGamma}) then fall off exactly with $\exp (- \omega x_0)$,
modified to cosh or sinh due to time periodicity, see appendix \ref{appA}. A
typical configuration is visualized by the right picture in figure
\ref{D2conf}. We see that for fermions, in contrast to the Ising model, there
really is a unique `worm', which moves by the updates{\footnote{The poor unoriented
Majorana worm has however no distinction between head and tail!}}. In all
simulations the complete zero momentum two-point functions at all separations
were consistent within errors with the exact results. We routinely checked
plots of the deviation in units of the estimated error against $x_0$ which are
order one, occasionally straddling $\pm 2$. In addition diagnostic
quantities like (\ref{Kbar}) and (\ref{Phibar}) were monitored. Because in a
large box (compared to the inverse mass) few configurations wind around the
torus we find no significant difference between periodic and antiperiodic
boundary conditions. In the example to follow we measured (\ref{Phibar}) and
obtained
\[ \frac{Z^{(0, 0)}_0}{\overline{Z}_0} = 0.9744 (5), \hspace{1em} \frac{Z^{(1,
0)}_0}{\overline{Z}_0} = 0.9745 (5) \]
in agreement with the exact answer.
\begin{figure}[htb]
\centering
\epsfig{file=64128n0.eps,width=0.9\textwidth}
\caption{Correlation function $k_V$ and the effective mass derived from it.
Errorbars are one sigma high. \label{fig2}}
\end{figure}
As an impression for the reader we show in figure \ref{fig2} results for the
vector correlation $k_V$ on a lattice $L = 64, T = 2 L$ with
$\varepsilon_{\mu} = (1, 0)$ after accumulating $10^7$ iterations (steps per
site). The correlation length $\omega^{- 1}$ is hence about 13 lattice
spacings. We have chosen the bias $\rho$ as
\begin{equation}
\rho (t) \propto \cosh [\omega (T / 2 - t)],
\end{equation}
which leads to a population of timeslices $\langle \langle \delta_{u_0 - v_0,
t} \rangle \rangle$ approximately flat in $t$. We refer the reader to the
discussion in {\cite{Wolff:2008km}}, which can be taken over essentially
without change. The upper panel shows the correlation $k_V (x_0)$ itself
normalized by its exact value. The growth of the errors from left to right is
due to an increase of the integrated correlation time $\tau_{\tmop{int}}$ from
about 0.6 iterations at short distances to about 15 iterations at $x_0 = T /
2$. In the lower panel we give the effective mass as a function of distance by
matching subsequent timeslices to
\begin{equation}
\frac{k_V (x_0 + 1)}{k_V (x_0)} = \frac{\cosh (m_{\tmop{eff}} (T / 2 - x_0 - 1))}{\cosh (m_{\tmop{eff}} (T / 2 - x_0))}, \hspace{1em} 0 < m_{\tmop{eff}}
\equiv m_{\tmop{eff}} (x_0 + 1 / 2) . \label{meff}
\end{equation}
Here errors appear (apart from $x_0$ very close to $T / 2$) to be independent
of the separation in agreement with the observed autocorrelations
$\tau_{\tmop{int}} \approx 0.5$ for all $x_0$. The longer autocorrelations
observed in $k_V$ apparently cancel in the ratio. From the fluctuations in
figure \ref{fig2} we conclude qualitatively that statistical fluctuations at
neighboring time separations are strongly correlated in $k_V$, but much less
so in $m_{\tmop{eff}}$. In a run with $\rho \equiv 1$ the growth of
$\tau_{\tmop{int}}$ for $k_V$ does not occur. Its error however grows in a
similar way due to the larger variance coming from fewer data at large
separation $u - v$ (fewer `long worms') when no bias $\rho$ is applied. The
more interesting effective mass is more accurate with the bias used for the
figure.
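For completeness we note that eq. (\ref{meff}) is an implicit equation for $m_{\tmop{eff}}$ and has to be solved numerically; since the cosh ratio is monotone in the mass, bisection suffices. The following sketch is illustrative only (function and variable names are not taken from the simulation code):

```python
import math

def effective_mass(kv, x0, T, lo=1e-8, hi=5.0, tol=1e-12):
    """Solve kv[x0+1]/kv[x0] = cosh(m (T/2 - x0 - 1)) / cosh(m (T/2 - x0))
    for m by bisection; the ratio is monotonically decreasing in m."""
    target = kv[x0 + 1] / kv[x0]

    def f(m):
        return math.cosh(m * (T / 2 - x0 - 1)) / math.cosh(m * (T / 2 - x0)) - target

    a, b = lo, hi
    fa = f(a)
    m = 0.5 * (a + b)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < tol:
            break
        if (fa < 0) == (fm < 0):
            a, fa = m, fm  # root lies in [m, b]
        else:
            b = m          # root lies in [a, m]
    return m
```

Applied to exact free-field data $k_V(t)\propto\cosh(\omega(T/2-t))$ this reproduces $\omega$ to machine precision.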
\subsection{$D = 2$, physically small volume}
We now simulate at the critical point which in the free case is known to occur
at $m = 0$. We recall that due to the infrared regulator given by the small
{\tmem{inverse}} temperature $T$ with antiperiodic boundary conditions
$\varepsilon_0 = 1$ this is physically well-defined. Such applications are of
interest in interacting theories to study nonperturbative renormalization
using the universal finite volume continuum limit. To this end we report
measurements of $k_S (T / 4)$ and $k_V (T / 2)$ with $\varepsilon = (1, 0)$.
Further motivation for the study of these objects can be found in appendix
\ref{appA} and refs. {\cite{Korzec:2006hy}}, {\cite{TomPhD}}. In table
\ref{tab3} we compile our results from performing $10^6$ iterations at each of
the lattice sizes. Again $\tau_{\tmop{int}}$ are given in iterations.
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$T = L$ & $k_S (T / 4)$ & $\tau_{\tmop{int}, k_S (T / 4)}$ & $k_V (T / 2)$
& $\tau_{\tmop{int}, k_V (T / 2)}$\\
\hline
16 & 0.010(7) & 1.85(8) & 0.997(10) & 1.11(2)\\
\hline
32 & -0.011(15) & 3.6(4) & 1.007(16) & 1.43(4)\\
\hline
64 & 0.011(25) & 5.0(5) & 0.943(29) & 2.68(14)\\
\hline
\end{tabular}
\caption{Results at the critical point $m = 0$. The exact values for all $L$
are $k_S (T / 4) = 0$ due to chiral symmetry and $k_V (T / 2) = 1$
corresponding to canonical field normalization.\label{tab3}}
\end{table}
\noindent Here the topology and the sign $\Phi^{(1, 0)}$ fluctuate, but we
can achieve a percent accuracy with the given statistics, which could be
enlarged.
\begin{figure}[htb]
\centering
\epsfig{file=c12z6D3.eps,width=0.6\textwidth}
\caption{A typical configuration on $24 \times 12^2$ with correlation length
2 ($m = 0.6487 \ldots)$.\label{D3conf}}
\end{figure}
\subsection{$D = 3$, sign problem}
It is trivial to adapt the code from two to three dimensions. On small
lattices $T, L = 4, 6$ we performed similar validation tests as before with
completely accurate and satisfactory results. Note that the formulae for $k_S,
k_V$ in appendix \ref{appA} are equally valid for $D = 2, 3$. It turns out,
however, that as the volume is increased or the mass is lowered, the sign
fluctuations abruptly become so violent that no signal is
left in (\ref{kGammara}), and also for ratios of correlations as in
(\ref{meff}) all estimates yield `$0 / 0$' within errors: the sign problem.
For large enough mass loops remain small and predominantly planar. Such loops
are as in two dimensions with phase one. For a demonstration we show in table
\ref{Res3D} results for two cases just before trouble strikes. In figure
\ref{D3conf} the last configuration of our run at $\omega L = 6$ is shown. For
$\omega L = 4$ and the same lattice size no meaningful results can be obtained
anymore. Integrated autocorrelation times were close to $1 / 2$ for all the
quantities studied. Although the observables in the dimer ensemble are
complex, the averages of the imaginary parts vanish within errors as they have
to, since they are parity odd. This was first checked and then used before
forming quotients. A bias was not used here, $\rho \equiv 1$.
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
$\omega L$ & {\large $\frac{\langle \langle \delta_{u, v} \Phi^{(0, 0, 0)} \rangle
\rangle}{\langle \langle \delta_{u, v} \rangle \rangle}$} &{\large $\frac{\langle
\langle \delta_{u, v} \Phi^{(1, 0, 0)} \rangle \rangle}{\langle \langle
\delta_{u, v} \rangle \rangle}$} & $m_{\tmop{eff}} L (3.5)$ & $M /
\omega$\\
\hline
6 & 0.1344(5) & 0.1343(5) & 6.04(5) & 0.51\\
\hline
5 & 0.0168(6) & 0.0164(6) & 4.2(10) & 0.27\\
\hline
\end{tabular}
\caption{Simulation results from $10^7$ iterations on $24 \times 12^2$
lattices.\label{Res3D}}
\end{table}
In the last column we report a mass $M$ which was extracted from the time
slice correlation $\langle \langle \delta_{u_0 - v_0, t} \rangle \rangle$
without phase factors and using the time periodic $\delta$-function. It also
shows a mass-plateau and, as we see, it is smaller than the physical mass. Due
to interference effects the spinor correlation decays faster than this
`geometric' one. From this observation one could think that the loops and
strings have the `wrong' size. We tried to generate them with a modified mass
parameter $m + \Delta m$ such that $M \approx \omega$ is achieved and then
reweighted the observables to the true mass, which is easy if the total
monomer number is available. We find however that this simple idea does not
improve the sign problem.
\section{Conclusions}
We have formulated the `worm' algorithm of Prokof'ev and Svistunov for lattice
fermions of the Wilson type with Wilson parameter $r = 1$. As in the Ising
model it estimates stochastically by the Monte Carlo method the untruncated
hopping parameter expansion of the partition function together with the graphs
needed for the full two-point function, which can thus be computed. The PS
algorithm very naturally lends itself to easily keep track of all phase
factors and spin matrices that appear in the expansion. In two space time
dimensions the contributions of all graphs are positive up to finite size
effects and simulations are as efficient as in the Ising model. The all
order hopping expansion is also worked out and numerically tested in three
dimensions. Here the weights of fermion loops acquire complex phase factors
and, for small mass and/or large volume lead to numerically uncontrollable
fluctuations. The very sharp borderline was found around correlation length
two for a $24 \times 12^2$ lattice. In particular, the continuum limit cannot
be approached. Clearly here the method has to be complemented for instance by
an improved estimator which sums some part of the contributions analytically
as in cluster methods. No such method is known at present for the system at
hand.
For two dimensional fermions we now plan to add the interaction of the
Gross-Neveu model. For the O($N$) invariant model the Majorana fermion
discussed here then has to be replicated $N$-fold. For each multi-dimer
configuration there are now between $K (x) = 0$ and $K (x) = N$ monomers at each
site. The four fermion interaction can be enforced by integrating over the
common external field $m (x)$ site by site with the appropriate Gaussian
weight yielding a $K (x)$-dependent total weight as already discussed in
{\cite{Wolff:2007ip}}. In this way a coupling between the $N$ `flavors'
arises. The worm head and tail $u$ and $v$ now refer to one of the flavors.
When $u = v$ is reached with random re-location also a new flavor-index is
chosen randomly. Further details still have to be worked out.
\tmtextbf{Acknowledgments}. I would like to thank Oliver B\"ar, Rainer Sommer and Willi Rath
for discussions. Part of this work was carried out during a one month visit to
UCSD (San Diego) and I wish to thank Julius Kuti and the whole high energy
physics group for making my stay a most pleasant experience. I would like to
thank the Deutsche Forschungsgemeinschaft for support in the framework of SFB
Transregio 9.
\section{Introduction}
The class of bihamiltonian integrable hierarchies which possess hydrodynamic limits plays an important role in the study of Gromov--Witten invariants,
2D topological field theory, and other research fields of mathematical physics. In \cite{DZ-NF} the first- and third-named authors of the present paper
initiated a program of classifying deformations of bihamiltonian integrable hierarchies of hydrodynamic type under the so-called Miura type transformations.
They introduced the notion of bihamiltonian cohomologies of a bihamiltonian structure and converted the classification problem into the computation of
these cohomology groups. The first two bihamiltonian cohomologies for semisimple bihamiltonian structures of hydrodynamic type were calculated in
\cite{DLZ-1, LZ-1}, and it was proved that the infinitesimal deformations of a semisimple bihamiltonian structure of hydrodynamic type are parametrized
by a set of smooth functions of one variable. For a given deformation of a semisimple bihamiltonian structure of hydrodynamic type these functions
$c_1(u^1),\dots, c_n(u^n)$ can be calculated by an explicit formula represented in terms of the canonical coordinates $u^1,\dots, u^n$ of the bihamiltonian
structure. These functions are invariant under the Miura type transformations, due to this reason they are called the central invariants of the deformed
bihamiltonian structure.
In \cite{BCIH-I}, the second- and third-named authors of the present paper continued the study of the above mentioned classification problem.
They reformulated the notion of infinite dimensional Hamiltonian structures in terms of the infinite jet space of a super manifold, and provided
a framework of infinite dimensional Hamiltonian structures which is convenient for the study of properties of Hamiltonian and bihamiltonian
cohomologies. One of the main results which is crucial for the computation of bihamiltonian cohomologies is given by Lemma 3.7 of \cite{BCIH-I}.
It reduces the computation of the bihamiltonian cohomologies to the computations of cohomology groups of a bicomplex on the space of differential
polynomials, instead of on the space of local functionals. Based on this result, they computed the third bihamiltonian cohomology group of the
bihamiltonian structure of the dispersionless KdV hierarchy, and showed that any infinitesimal deformation of this bihamiltonian structure can be
extended to a full deformation.
In \cite{CPS-2}, Carlet, Posthuma and Shadrin completed the computation of the third bihamiltonian cohomology group for a general semisimple
bihamiltonian structure of hydrodynamic type based on the results of \cite{BCIH-I}. Their result confirms the validity of the conjecture of \cite{BCIH-I}
that any infinitesimal deformation of a semisimple bihamiltonian structure of hydrodynamic type can be extended to a full deformation, i.e. for any given
smooth functions $c_i(u^i)\ (i=1, \dots, n)$, there exists a deformation of the corresponding semisimple bihamiltonian structure of hydrodynamic type such
that its central invariants are given by $c_i(u^i)\ (i=1, \dots, n)$.
This paper is a continuation of \cite{BCIH-I}. We are to give a detailed study of properties of the integrable hierarchies associated with a special class of
semisimple bihamiltonian structures of hydrodynamic type and their deformations, which are called \emph{flat exact semisimple bihamiltonian structures
of hydrodynamic type}. One of their most important properties is the existence of tau structures for the associated integrable hierarchies and their
deformations with constant central invariants.
For a hierarchy of Hamiltonian evolutionary PDEs, a tau structure is a suitable choice of the densities of the Hamiltonians satisfying certain conditions
which enables one to define a function, called the tau function, for solutions of the hierarchy of evolutionary PDEs, as it is defined in \cite{DZ-NF}.
The notion of tau functions was first introduced by M.~Sato \cite{Sato} for solutions to the KP equation and by Jimbo, Miwa and Ueno for a class of
monodromy preserving deformation equations of linear ODEs with rational coefficients \cite{JMU-1, JM-2, JM-3} at the beginning of the 1980s.
It was also adapted to soliton equations that can be represented as equations of isospectral deformations of certain linear spectral problems
or as Hamiltonian systems, and has played crucial role in the study of relations of soliton equations with infinite dimensional Lie algebras \cite{DKJM, KW},
and with the geometry of infinite dimensional Grassmannians \cite{SS, SW}. The importance of the notion of tau functions of soliton equations is manifested
by the discovery of the fact that the tau function of a particular solution of the KdV hierarchy is a partition function of 2D gravity, see \cite{Wi, Ko} for
details. In \cite{DZ-NF}, the first- and the third-named authors introduced the notion of tau structures for the class of bihamiltonian integrable hierarchies
possessing hydrodynamic limits, and constructed the so-called topological deformations of the principal hierarchy of a semisimple Frobenius manifold by
using properties of the associated tau functions. On the other hand, not all bihamiltonian integrable hierarchies possess tau structures. In this paper we
introduce the notion of \emph{flat exact} bihamiltonian structure, and study the classification of the associated tau structures. It turns out that this notion
is an appropriate generalization of semisimple conformal Frobenius manifolds when considering the associated integrable hierarchies and their tau
structures. One can further consider the deformations of a flat exact semisimple bihamiltonian structure of hydrodynamic type which possess tau structures.
It is known that the central invariants of such deformations must be constant \cite{yz}. We show that deformations with constant central invariants of
a flat exact semisimple bihamiltonian structure of hydrodynamic type indeed possess tau structures, and we also give a classification theorem for the
associated tau structures.
The paper is arranged as follows. In Sec.\,\ref{sec-1} we introduce the notion of flat exact semisimple bihamiltonian structures of hydrodynamic type and
present the main results. In Sec.\,\ref{sec-2} we study the relations between flat exact semisimple bihamiltonian structures of hydrodynamic type and
semisimple Frobenius manifolds, and give a proof of Theorem \ref{mainthm00}. In Sec.\,\ref{sec-3} we construct the principal hierarchy for a flat exact
semisimple bihamiltonian structure of hydrodynamic type and show the existence of a tau structure. In Sec.\,\ref{sec-4} we consider properties of
deformations of the principal hierarchies which possess tau structures and the Galilean symmetry, and then in Sec.\,\ref{sec-5} we prove the existence of
deformations of the principal hierarchy of a flat exact bihamiltonian structure of hydrodynamic type, which are bihamiltonian integrable hierarchies
possessing tau structures and the Galilean symmetry, and we prove Theorem \ref{main-thm}. Sec.\,\ref{sec-7} is a conclusion. In the Appendix, we prove some properties of
semi-Hamiltonian integrable hierarchies, some of which are used in the proof of the uniqueness theorem given in Sec.\,\ref{sec-4}.
\section{Some notions and the main results }\label{sec-1}
The class of systems of hydrodynamic type on the infinite jet space of an $n$-dimensional manifold $M$ consists of systems of $n$ first order quasilinear
partial differential equations (PDEs)
\begin{equation}\label{sht00}
v^\alpha_t = \sum_{\beta=1}^n A_\beta^\alpha(v) v^\beta_x, \quad \alpha=1, \dots, n, \quad v=\left( v^1, \dots, v^n\right)\in M.
\end{equation}
Here $A^\alpha_\beta(v)$ is a section of the bundle $TM \otimes T^*M$. For the subclass of Hamiltonian systems of hydrodynamic type the r.h.s. of
\eqref{sht00} admits a representation
\begin{equation}\label{hsht00}
v^\alpha_t = P^{\alpha\beta} \frac{\partial h(v)}{\partial v^\beta}.
\end{equation}
Here the smooth function $h(v)$ is the density of the Hamiltonian
\[H=\int h(v)\, dx\]
and
\begin{equation}\label{pbht00}
P^{\alpha\beta}=g^{\alpha\beta}(v) \partial_x +\Gamma^{\alpha\beta}_\gamma(v) v^\gamma_x
\end{equation}
is the operator of a \emph{Poisson bracket of hydrodynamic type}. As was observed in \cite{DN83} such operators satisfying the \emph{nondegeneracy condition}
\begin{equation}
\det \left( g^{\alpha\beta}(v)\right)\neq 0 \label{nondegeneracy}
\end{equation}
correspond to flat metrics (Riemannian or pseudo-Riemannian)
\[ds^2 =g_{\alpha\beta}(v) dv^\alpha dv^\beta\]
on the manifold $M$. Namely,
\[g^{\alpha\beta}(v)=\left( g_{\alpha\beta}(v)\right)^{-1}\]
is the corresponding inner product on $T^*M$, the coefficients $\Gamma^{\alpha\beta}_\gamma(v)$ are the contravariant components of the Levi-Civita
connection for the metric. In the present paper it will be assumed that all Poisson brackets of hydrodynamic type satisfy the nondegeneracy condition
\eqref{nondegeneracy}.
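To illustrate how the coefficients in \eqref{pbht00} arise from a metric, the following sympy sketch (a one-component toy case chosen for illustration, not tied to any specific structure of this paper) recovers the operator $u\,\partial_x+\frac12 u_x$ from the metric $g^{11}=u$:

```python
import sympy as sp

u = sp.symbols('u', positive=True)

# one-component toy metric g^{11} = u (the second Poisson structure of
# the dispersionless KdV equation has this form)
g_up = sp.Matrix([[u]])   # g^{alpha beta}
g_dn = g_up.inv()         # g_{alpha beta} = 1/u

# Levi-Civita connection in one dimension: Gamma^1_{11} = (1/2) g^{11} dg_{11}/du
Gamma_lc = sp.Rational(1, 2) * g_up[0, 0] * sp.diff(g_dn[0, 0], u)

# contravariant component entering g^{11} d_x + Gamma^{11}_1 u_x:
# Gamma^{11}_1 = -g^{11} Gamma^1_{11}
Gamma_contra = sp.simplify(-g_up[0, 0] * Gamma_lc)
# Gamma_contra equals 1/2, i.e. P = u d_x + (1/2) u_x
```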
A bihamiltonian structure of hydrodynamic type is a pair $(P_1, P_2)$ of operators of the form \eqref{pbht00} such that an arbitrary linear combination
$\lambda_1 P_1+\lambda_2 P_2$ is again the operator of a Poisson bracket. They correspond to pairs of flat metrics $g_1^{\alpha\beta}(v)$,
$g^{\alpha\beta}_2(v)$ on $M$ satisfying certain compatibility condition (see below for the details). The bihamiltonian structure of hydrodynamic type is
called \emph{semisimple} if the roots $u^1(v)$, \dots, $u^n(v)$ of the characteristic equation
\begin{equation}\label{char00}
\det \left( g^{\alpha\beta}_2(v)-u \,g^{\alpha\beta}_1(v)\right)=0
\end{equation}
are pairwise distinct and are not constant for a generic point $v\in M$.
According to Ferapontov's theorem \cite{Fera}, these roots can serve as local coordinates
of the manifold $M$, which are called the canonical coordinates of the bihamiltonian structure $(P_1, P_2)$.
We assume in this paper that $D$ is a sufficiently small domain on $M$ such that $(u^1, \dots, u^n)$ is the local
coordinate system on $D$. In the canonical coordinates the two metrics have diagonal forms
\begin{equation}\label{zh-10-30}
g_1^{ij}(u)=f^i(u)\delta^{ij}, \quad g_2^{ij}(u)=u^i\,f^i(u)\delta^{ij}.
\end{equation}
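Numerically, the roots of the characteristic equation \eqref{char00} are the generalized eigenvalues of the pair of Gram matrices; a minimal sketch (names are illustrative) is:

```python
import numpy as np

def canonical_coordinates(g1, g2):
    """Roots of det(g2 - u * g1) = 0, i.e. generalized eigenvalues of (g2, g1);
    assumes g1 is nondegenerate."""
    return np.sort(np.linalg.eigvals(np.linalg.solve(g1, g2)).real)

# consistency check with the diagonal forms above: g1 = diag(f_i), g2 = diag(u^i f_i)
f = np.array([2.0, 3.0, 5.0])
u = np.array([0.5, 1.5, 4.0])
g1, g2 = np.diag(f), np.diag(u * f)
# canonical_coordinates(g1, g2) returns the u^i; the result is unchanged
# under a change of coordinates g -> A g A^T, as expected of the roots
```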
We will need to use the notion of rotation coefficients of the metric $g_1$
which are defined by the following formulae:
\begin{equation}\label{zh-11-1}
\gamma_{ij}(u)=\frac{1}{2\sqrt{f_i f_j}} \frac{\partial f_i}{\partial u^j},\quad i\ne j
\end{equation}
with $f_i=\frac{1}{f^i}$. We also define $\gamma_{ii}=0$.
\begin{dfn}[cf. \cite{DZ-NF}]\label{zh-11-2}
The semisimple bihamiltonian structure $(P_1, P_2)$ is called reducible at $u\in M$
if there exists a partition of the set $\{1, 2, \dots, n\}$ into the union of
two nonempty nonintersecting sets $I$ and $J$ such that
\[
\gamma_{ij}(u)= 0,\quad \forall i\in I,\ \forall j\in J.\]
$(P_1, P_2)$ is called irreducible on a certain domain $D\subset M$, if it is not reducible at any point $u\in D$.
\end{dfn}
The main goal of the present paper is to introduce tau-functions of bihamiltonian systems of hydrodynamic type and of their dispersive deformations.
This will be done under the following additional assumption.
\begin{dfn}\label{zh-10-31}
The bihamiltonian structure $(P_1, P_2)$ of hydrodynamic type is called \emph{exact} if there exists a vector field $Z\in Vect\left( M\right)$ such that
\begin{equation}\label{cond-exact}
[Z, P_1]=0, \quad [Z, P_2]=P_1.
\end{equation}
Here $[\ \,, \ ]$ is the infinite-dimensional analogue of the Schouten--Nijenhuis bracket
(see the next section and \cite{BCIH-I} for details of the definition).
It is called \emph{flat exact} if the vector field $Z$ is flat with respect to the metric associated with the Hamiltonian structure $P_1$.
\end{dfn}
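A simple one-component illustration is the bihamiltonian structure of the dispersionless KdV equation (cf. \cite{BCIH-I}):
\[
P_1=\frac{\partial}{\partial x},\qquad P_2=u\,\frac{\partial}{\partial x}+\frac12\,u_x,\qquad Z=\frac{\partial}{\partial u}.
\]
With the conventions of \cite{BCIH-I} the bracket $[Z,\,\cdot\,]$ is the Lie derivative along $Z$, which here amounts to the shift $u\mapsto u+s$, so
\[
[Z,P_1]=0,\qquad
[Z,P_2]=\left.\frac{d}{ds}\right|_{s=0}\left((u+s)\frac{\partial}{\partial x}+\frac12\,u_x\right)=\frac{\partial}{\partial x}=P_1,
\]
and $Z$ is flat with respect to the constant metric of $P_1$, so this bihamiltonian structure is flat exact.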
\begin{emp} \label{exam-frob} Let $\left( M, \cdot\,, \eta, e, E\right)$ be a Frobenius manifold. Then the pair of metrics
\begin{equation}\label{pencil-frob}
\begin{aligned}
& g_1^{\alpha\beta}(v)=\langle dv^\alpha, dv^\beta\rangle= \eta^{\alpha\beta},\\
& g_2^{\alpha\beta}(v)=(dv^\alpha, dv^\beta)=i_E \left( dv^\alpha\cdot dv^\beta\right)=:g^{\alpha\beta}(v)
\end{aligned}
\end{equation}
on $T^*M$ defines a flat exact bihamiltonian structure with $Z=e$, see \cite{Du-1} for the details. For a semisimple Frobenius manifold the resulting
bihamiltonian structure will be semisimple. Roots of the characteristic equation \eqref{char00} coincide with the canonical coordinates on the Frobenius
manifold.
\end{emp}
More bihamiltonian structures can be obtained from those of Example \ref{exam-frob} by a Legendre-type transformation
\cite{Du-1, XZ}
\begin{equation}\label{legen01}
\hat v_\alpha =b^\gamma\frac{\partial^2 F(v)}{\partial v^\gamma\partial v^\alpha}, \quad \hat v^\alpha=\eta^{\alpha\beta}\hat v_\beta.
\end{equation}
Here $F(v)$ is the potential of the Frobenius manifold and $b=b^\gamma\frac{\partial}{\partial v^\gamma}$ is a flat invertible vector field on it.
The new metrics on $T^*M$ by definition have the \emph{same} Gram matrices in the new coordinates
\begin{equation}\label{legen02}
\langle d\hat v^\alpha, d\hat v^\beta\rangle =\eta^{\alpha\beta}, \quad \left( d\hat v^\alpha, d\hat v^\beta\right) =g^{\alpha\beta}(v).
\end{equation}
Recall that applying the transformation \eqref{legen01} to $F(v)$ one obtains a new solution $\hat F(\hat v)$ to the WDVV associativity equations defined from
\begin{equation}\label{legen03}
\frac{\partial^2 \hat{F}(\hat{v})}{\partial\hat{v}^\alpha\partial\hat{v}^\beta}=\frac{\partial^2 F(v)}{\partial{v}^\alpha\partial{v}^\beta}.
\end{equation}
The new unit vector field is given by
\begin{equation}\label{legen04}
\hat e =b^\gamma\frac{\partial}{\partial\hat v^\gamma}.
\end{equation}
The new solution to the WDVV associativity equations defines on $M$ another Frobenius manifold structure if the vector $b=b^\gamma\frac{\partial}{\partial v^\gamma}$ satisfies
\[\left[ b, E\right] = \lambda\cdot b\]
for some $\lambda\in\mathbb C$. Otherwise the quasihomogeneity axiom does not hold true.
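As a concrete sanity check of \eqref{legen01}--\eqref{legen02} one may take the two-dimensional potential $F=\frac12 (v^1)^2v^2+e^{v^2}$ (the quantum cohomology of $\mathbb{P}^1$, a standard example) and $b=\partial/\partial v^2$; the following sympy sketch computes the new coordinates:

```python
import sympy as sp

v1, v2 = sp.symbols('v1 v2')
F = sp.Rational(1, 2) * v1**2 * v2 + sp.exp(v2)  # CP^1 potential (standard example)
eta = sp.Matrix([[0, 1], [1, 0]])                # eta_{ab} = d^3 F / dv^1 dv^a dv^b
b = sp.Matrix([0, 1])                            # the flat vector field b = d/dv^2

hess = sp.hessian(F, (v1, v2))
assert sp.diff(hess, v1) == eta                  # consistency: F_{,1ab} = eta_{ab}

vhat_low = hess * b                              # hat v_alpha = b^gamma F_{,gamma alpha}
vhat = eta.inv() * vhat_low                      # raise the index with eta
# vhat = (e^{v2}, v1): an invertible change of coordinates wherever e^{v2} != 0
```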
\begin{thm} \label{mainthm00}
For an arbitrary Frobenius manifold $M$ the pair of flat metrics obtained from \eqref{pencil-frob} by a transformation of the form
\eqref{legen01}--\eqref{legen02} defines on $M$ a flat exact bihamiltonian structure of hydrodynamic type. Conversely, any irreducible
flat exact semisimple bihamiltonian structure of hydrodynamic type can be obtained in this way.
\end{thm}
Now we can describe a tau-symmetric bihamiltonian hierarchy associated with a flat exact semisimple bihamiltonian structure $(P_1, P_2; Z)$ of
hydrodynamic type. Let us choose a system of flat coordinates $\left( v^1, \dots, v^n\right)$ for the first metric. So the operator $P_1$ has the form
\[P_1^{\alpha\beta}=\eta^{\alpha\beta}\frac{\partial}{\partial x}\]
for a constant symmetric nondegenerate matrix $\eta^{\alpha\beta}=g_1^{\alpha\beta}$.
It is convenient to normalize the choice of flat coordinates by the requirement
\[Z=\frac{\partial}{\partial v^1}.\]
We are looking for an infinite family of systems of first order quasilinear evolutionary PDEs of the form \eqref{sht00}
satisfying certain additional conditions. The systems of the form \eqref{sht00} will be labeled by pairs of indices $(\alpha, p)$, $\alpha=1, \dots, n$, $p\geq 0$.
Same labels will be used for the corresponding time variables $t=t^{\alpha, p}$. The conditions to be imposed are as follows.
1. All the systems under consideration are \emph{bihamiltonian} PDEs w.r.t. $(P_1, P_2)$. This implies pairwise commutativity of the flows \cite{DLZ-1}
\begin{equation}\label{comm00}
\frac{\partial}{\partial t^{\alpha,p}}\frac{\partial v^\gamma}{\partial t^{\beta,q}}=\frac{\partial}{\partial t^{\beta,q}}\frac{\partial v^\gamma}{\partial t^{\alpha,p}}.
\end{equation}
2. Denote
\begin{equation}\label{hamilt00}
H_{\alpha,p}=\int h_{\alpha, p}(v)\, dx
\end{equation}
the Hamiltonian of the $(\alpha,p)$-flow with respect to the first Poisson bracket,
\begin{equation}\label{hamilt01}
\frac{\partial v^\gamma}{\partial t^{\alpha,p}}=\eta^{\gamma\lambda}\frac{\partial}{\partial x} \frac{\delta H_{\alpha,p}}{\delta v^\lambda(x)} \equiv \eta^{\gamma\lambda}\frac{\partial}{\partial x} \frac{\partial h_{\alpha,p}(v)}{\partial v^\lambda}.
\end{equation}
The Hamiltonian densities satisfy the following \emph{recursion}\footnote{This recursion acts in the opposite direction with respect to the
bihamiltonian one - see eq. \eqref{bi-recursion00} below.}
\begin{equation}\label{recur00}
\frac{\partial}{\partial v^1} h_{\alpha, p}(v) = h_{\alpha,p-1}(v), \quad \alpha=1, \dots, n, \quad p\geq 0
\end{equation}
(recall that $\frac{\partial}{\partial v^1} =Z$) where we denote
\begin{equation}\label{casi00}
h_{\alpha, -1}(v)=v_\alpha\equiv\eta_{\alpha\beta}v^\beta, \quad \alpha=1, \dots, n.
\end{equation}
Observe that the functionals $H_{\alpha,-1}=\int h_{\alpha,-1}(v)\, dx$ span the space of Casimirs of the first Poisson bracket.
3. Normalization
\begin{equation}\label{normalize01}
\frac{\partial}{\partial t^{1,0}}=\frac{\partial}{\partial x}.
\end{equation}
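In the scalar case $n=1$ (so that $\eta=1$ and $Z=\partial/\partial v$) the recursion \eqref{recur00} with $h_{-1}=v$ can be solved at once, and conditions 1--3 reproduce the flows of the dispersionless KdV hierarchy; a sympy sketch (integration constants are set to zero for simplicity):

```python
import sympy as sp

v = sp.symbols('v')

def densities(pmax):
    """Solve dh_p/dv = h_{p-1} starting from the Casimir h_{-1} = v,
    fixing all integration constants to zero; returns [h_{-1}, h_0, ..., h_pmax]."""
    h = [v]
    for _ in range(pmax + 1):
        h.append(sp.integrate(h[-1], v))
    return h

h = densities(3)          # h[k] corresponds to h_{k-1} = v^{k+1}/(k+1)!
# the flows (hamilt01) read v_t = d_x(dh_p/dv) = d_x h_{p-1}:
# p = 0 gives v_t = v_x (the normalization (normalize01)),
# p = 1 gives v_t = v v_x, the dispersionless KdV (Riemann) equation
velocities = [sp.diff(hp, v) for hp in h[1:]]
```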
\begin{prp}\label{prp-25}
Integrable hierarchies of the above form satisfy the \emph{tau-symmetry} condition
\begin{equation}\label{tau-sym00}
\frac{\partial h_{\alpha, p-1}}{\partial t^{\beta,q}}= \frac{\partial h_{\beta,q-1}}{\partial t^{\alpha, p}},\quad \forall ~ \alpha, \, \beta=1, \dots, n, \quad \forall~ p, \, q\geq 0.
\end{equation}
Moreover, this integrable hierarchy is invariant with respect to the Galilean symmetry
\begin{align}
&\frac{\partial v}{\partial s} =Z(v)+\sum_{p\geq 1} t^{\alpha, p}\frac{\partial v}{\partial t^{\alpha, p-1}},\label{galileo}\\
&\left[ \frac{\partial}{\partial s}, \frac{\partial}{\partial t^{\alpha,p}}\right]=0, \quad \forall \alpha=1, \dots, n, \quad p\geq 0.\nonumber
\end{align}
\end{prp}
\begin{dfn}\label{dfn-cali}
A choice of the Hamiltonian densities $h_{\alpha, p}(v)$, $\alpha=1, \dots, n$, $p\geq -1$ satisfying the above conditions is called a \emph{calibration}
of the flat exact bihamiltonian structure $(P_1, P_2; Z)$ of hydrodynamic type. The integrable hierarchy \eqref{hamilt01} is called the
\emph{principal hierarchy} of $(P_1, P_2; Z)$ associated with the given calibration.
\end{dfn}
\begin{emp} Let $\left( M, \,\cdot\,, \eta, e, E\right)$ be a Frobenius manifold. Denote
\[\left(\theta_1(v; z), \dots, \theta_n(v; z)\right) z^\mu z^R\]
with
\begin{align}
&\theta_\alpha(v; z) =\sum_{p=0}^\infty \theta_{\alpha,p}(v) z^p, \quad \alpha=1, \dots, n\label{levelt00}
\end{align}
a Levelt basis of deformed flat coordinates \cite{Du-3}. Here the matrices
\[\mu={\rm diag}(\mu_1, \dots, \mu_n), \quad R=R_1+\dots,\quad [\mu, R_k] =k\, R_k\]
constitute a part of the spectrum of the Frobenius manifold, see details in \cite{Du-3}. Then
\begin{equation}\label{levelt01}
h_{\alpha,p}(v) =\frac{\partial \theta_{\alpha,p+2}(v)}{\partial v^1}, \quad \alpha=1, \dots, n, \quad p\geq -1
\end{equation}
is a calibration of the flat exact bihamiltonian structure associated with the metrics \eqref{pencil-frob} on the Frobenius manifold. In this case the family of
pairwise commuting bihamiltonian PDEs \eqref{hamilt01} is called the \emph{principal hierarchy} associated with the Frobenius manifold. With this choice
of the calibration the Hamiltonians \eqref{hamilt00}, \eqref{levelt01} satisfy the bihamilonian recursion relation
\begin{equation}\label{bi-recursion00}
\{ \, .\,, H_{\beta,q-1}\}_2=
(q+\frac12+\mu_\beta)\{ \, . \,, H_{\beta,q}\}_1+
\sum_{k=1}^{q-1} (R_{q-k})^\gamma_\beta
\{ \, . \,, H_{\gamma,k}\}_1.
\end{equation}
Other calibrations can be obtained by taking constant linear combinations and shifts
\begin{align}
& \tilde\theta_\alpha(v; z) =\theta_\beta(v; z) C_\alpha^\beta(z)+\theta_{\alpha}^0(z), \quad \alpha =1, \dots, n \label{basis-change00} \\
& C(z)=\left( C_\alpha^\beta(z)\right) =\mathbf{1}+C_1 z+C_2 z^2+\dots, \quad C^T(-z) C(z)=\mathbf{1} \nonumber\\
& \theta_\alpha^0(z)=\sum_{p\geq 0}\theta_{\alpha,p}^0 z^p, \quad \theta_{\alpha,p}^0\in \mathbb C. \nonumber
\end{align}
\end{emp}
For the flat exact bihamiltonian structure obtained from \eqref{pencil-frob} by a Legendre-type transformation \eqref{legen01}--\eqref{legen04}
one can choose a calibration by introducing functions $\hat \theta_{\alpha,p}\left( \hat v\right)$ defined by
\begin{equation}\label{legen05}
\frac{\partial \hat\theta_\alpha\left(\hat v; z\right)}{\partial \hat v^\beta}=\frac{\partial \theta_\alpha(v; z)}{\partial v^\beta},\quad \forall\, \alpha, \, \beta=1, \dots, n.
\end{equation}
Remarkably in this case the new Hamiltonians satisfy the \emph{same} bihamiltonian recursion \eqref{bi-recursion00}.
Other calibrations can be obtained by transformations of the form \eqref{basis-change00}.
\begin{prp}
For a flat exact bihamiltonian structure of hydrodynamic type obtained from a Frobenius manifold by a Legendre-type transformation \eqref{legen01}--\eqref{legen04} the construction \eqref{legen05} and \eqref{levelt01} defines a calibration. Any calibration can be obtained in this way up to the transformation \eqref{basis-change00}.
\end{prp}
The properties of a calibration, in particular the tau-symmetry property
\eqref{tau-sym00}, of a flat exact semisimple bihamiltonian structure of hydrodynamic type $(P_1, P_2; Z)$ enable us to define a tau structure and tau functions for it and the associated principal hierarchy \eqref{hamilt01}, see Definitions \ref{zh-12-2} and \ref{zh-01-22f} in Section \ref{sec-3}.
One of the main purposes of the present paper is to study the existence and properties of tau structures for deformations of the bihamiltonian structure $(P_1, P_2; Z)$ and the principal hierarchy.
Let $c_i(u^i)\ (i=1, \dots, n)$ be a collection of arbitrary smooth functions. Carlet, Posthuma, and Shadrin showed that there exists a deformation
$(\tilde{P}_1, \tilde{P}_2)$ of $(P_1, P_2)$ such that its central invariants are given by $c_i(u^i)\ (i=1, \dots, n)$ \cite{CPS-3}.
By using the triviality of the second bihamiltonian cohomology, one can show that there also exists a unique deformation of the principal hierarchy
of $(P_1, P_2)$ such that all its members are bihamiltonian vector fields of $(\tilde{P}_1, \tilde{P}_2)$ (see Sec.\,\ref{sec-5}). The deformed
integrable hierarchy usually does not possess a tau structure unless the central invariants are constant (first observed in \cite{yz}). On the other hand,
it is shown by Falqui and Lorenzoni in \cite{FL} that, if $c_i(u^i)\ (i=1, \dots, n)$ are constants, one can choose the representative $(\tilde{P}_1, \tilde{P}_2)$
such that they still satisfy the exactness condition, that is
\[[Z, \tilde{P}_1]=0, \quad [Z, \tilde{P}_2]=\tilde{P}_1.\]
With such a pair $(\tilde{P}_1, \tilde{P}_2)$ in hand, we can ask the following questions:
\begin{enumerate}
\item Does the deformed integrable hierarchy have tau structures?
\item If it does, how many of them?
\end{enumerate}
The following theorem is the main result of the present paper, which answers the above questions.
\begin{thm}\label{main-thm}
Let $(P_1, P_2; Z)$ be a flat exact semisimple bihamiltonian structure of hydrodynamic type which satisfies the irreducibility condition. We fix a calibration
$\{h_{\alpha,p}\,|\,\alpha=1,\dots,n;\, p=0,1,2,\dots\}$ of the bihamiltonian structure
$(P_1, P_2; Z)$.
Then the following statements hold true:
\begin{itemize}
\item[i)] For any deformation $(\tilde{P}_1, \tilde{P}_2; \tilde{Z})$ of $(P_1, P_2; Z)$ with constant central invariants, there exists a deformation
$\{\tilde{h}_{\alpha, p}\}$ of the Hamiltonian densities $\{h_{\alpha, p}\}$ such that the corresponding Hamiltonian vector fields $\tilde{X}_{\alpha, p}$
yield a deformation of the principal hierarchy which is a bihamiltonian integrable hierarchy possessing a tau structure and the Galilean symmetry.
\item[ii)] Let $(\hat{P}_1, \hat{P}_2; \hat{Z})$ be another deformation of $(P_1, P_2; Z)$
with the same central invariants as $(\tilde{P}_1, \tilde{P}_2; \tilde{Z})$, and let $\{\hat{h}_{\alpha, p}\}$ be the corresponding tau-symmetric
deformation of the Hamiltonian densities. Then the logarithm of the tau function for $\{\hat{h}_{\alpha, p}\}$ can be obtained from the one for
$\{\tilde{h}_{\alpha, p}\}$ by adding a differential polynomial.
\end{itemize}
\end{thm}
\section{Flat exact semisimple bihamiltonian structures and Frobenius manifolds}\label{sec-2}
Let $M$ be a smooth manifold of dimension $n$. Denote by $\hat{M}$ the super manifold of
dimension $(n\mid n)$ obtained from the cotangent bundle of $M$ by reversing the parity of the
fibers. Suppose $U$ is a local coordinate chart on $M$ with coordinates $(u^1, \dots, u^n)$,
then
\[\theta_i=\frac{\partial}{\partial u^i},\quad i=1, \dots, n\]
can be regarded as local coordinates on the corresponding local chart $\hat{U}$ on $\hat{M}$.
Note that the $\theta_i$ are odd (super) variables; they satisfy the anticommutation law:
\[\theta_i\theta_j+\theta_j\theta_i=0.\]
Let $J^\infty(M)$ and $J^\infty(\hat{M})$ be the infinite jet spaces of $M$ and $\hat{M}$,
which are just the projective limits of the corresponding finite jet bundles. There is a natural local
chart $\hat{U}^\infty$ over $\hat{U}$ with local coordinates
\[\{u^{i,s}, \theta_i^s \mid i=1, \dots, n; s=0, 1, 2, \dots\}.\]
See \cite{BCIH-I} for more details. Denote by $\hat{\A}$ the space of differential polynomials on $\hat{M}$.
Locally, we can regard $\hat{\A}$ as
\[C^\infty(\hat{U})[[u^{i,s}, \theta_i^s \mid i=1, \dots, n; s=1, 2, \dots]].\]
The differential polynomial algebra $\mathcal{A}$ on $M$ can be defined similarly as a subalgebra of $\hat{\A}$.
There is a globally defined derivation on $J^\infty(\hat{M})$
\begin{equation}\label{zh-12-1}
\partial=\sum_{i=1}^n\sum_{s\ge0}\left(u^{i,s+1}\frac{\partial}{\partial u^{i,s}}+\theta_i^{s+1}\frac{\partial}{\partial \theta_i^s}\right).
\end{equation}
Its cokernel $\hat{\F}=\hat{\A}/\partial\hat{\A}$ is called the space of local functionals. Denote the projection $\hat{\A}\to\hat{\F}$ by $\int$. We can also define $\mathcal{F}=\mathcal{A}/\partial\mathcal{A}$, whose elements
are called local functionals on $M$.
There are two useful gradations on $\hat{\A}$: the standard gradation
$$
\deg u^{i,s}=\deg \theta_i^s=s
$$
and the super gradation
$$
\deg \theta_i^s=1, \quad \deg u^{i,s}=0.
$$
They yield the decompositions
\[\hat{\A}=\bigoplus_{d\ge 0}\hat{\A}_d=\bigoplus_{p\ge 0}\hat{\A}^p.\]
We denote $\hat{\A}^p_d=\hat{\A}_d\cap \hat{\A}^p$.
In particular, $\mathcal{A}=\hat{\A}^0$, $\mathcal{A}_d=\hat{\A}^0_d$. The derivation $\partial$ has the property $\partial (\hat{\A}^p_d) \subseteq \hat{\A}^p_{d+1}$, hence it induces the same degrees on $\hat{\F}$, so we also have the homogeneous components $\hat{\F}_d$, $\hat{\F}^p$, $\hat{\F}_d^p$,
and the ones for $\mathcal{F}=\hat{\F}^0$. The reader can refer to \cite{BCIH-I} for details of the definitions of these notations.
There is a graded Lie algebra structure on $\hat{\F}$, whose bracket operation is given by
\[[P, Q]=\int\left(\frac{\delta P}{\delta \theta_i}\frac{\delta Q}{\delta u^i}+(-1)^p\frac{\delta P}{\delta u^i}\frac{\delta Q}{\delta \theta_i}\right),\]
where $P\in\hat{\F}^p$, $Q\in \hat{\F}^q$. This bracket is called the Schouten--Nijenhuis bracket on $J^\infty(M)$.
A Hamiltonian structure is defined as an element $P\in\hat{\F}^2$ satisfying $[P, P]=0$. For example,
the operator \eqref{pbht00} corresponds to an element $P\in\hat{\F}^2_1$ of the form
\[P=\frac12\int\left(g^{ij}(u)\theta_i\theta_j^1+\Gamma^{ij}_{k}(u)u^{k,1}\theta_i\theta_j\right).\]
The fact that $P$ is a Hamiltonian operator is equivalent to the condition $[P, P]=0$.
A bihamiltonian structure of hydrodynamic type can be given by a pair of Hamiltonian structures of hydrodynamic type
$(P_1, P_2)$ satisfying the additional condition $[P_1, P_2]=0$. Denote by $g_1, g_2$ the flat metrics associated with the Hamiltonian structures $P_1, P_2$.
In what follows, we will assume that $(P_1, P_2)$ is semisimple with a fixed system of canonical coordinates $u^1, \dots, u^n$, in which the two flat metrics take the
diagonal form \eqref{zh-10-30}, and their contravariant Christoffel coefficients have the following expressions:
\begin{align}
\Gamma^{ij}_{k}&=\frac12\frac{\partial f^i}{\partial u^k}\delta^{ij}+
\frac12\frac{f^i}{f^j}\frac{\partial f^j}{\partial u^i}\delta^{jk}-\frac12\frac{f^j}{f^i}\frac{\partial f^i}{\partial u^j}\delta^{ik},\label{gamma-1}\\
\hat\Gamma^{ij}_{k}&=\frac12\frac{\partial (u^i f^i)}{\partial u^k}\delta^{ij}+
\frac12\frac{u^i f^i}{f^j}\frac{\partial f^j}{\partial u^i}\delta^{jk}-\frac12\frac{u^j f^j}{f^i}\frac{\partial f^i}{\partial u^j}\delta^{ik}\label{gamma-2}.
\end{align}
The diagonal entries $f^i$ satisfy certain non-linear differential equations which are equivalent to the flatness of $g_1$, $g_2$ and the condition
$[P_1, P_2]=0$. See the appendix of \cite{DLZ-1} for details. We denote by $\nabla$, $\hat{\nabla}$ the Levi-Civita connections of the metrics $g_1$, $g_2$
respectively.
We also assume henceforth that the semisimple bihamiltonian structure of hydrodynamic type $(P_1, P_2)$
is flat exact (see Definition \ref{zh-10-31}), and the corresponding vector field is given by $Z\in\hat{\F}^1$. We will denote this exact
bihamiltonian structure by $(P_1, P_2; Z)$.
\begin{lem} \label{lem-24}
If $Z\in\hat{\F}^1$ satisfies the condition \eqref{cond-exact}, then it has the following form:
\[Z=\int \left(\sum_{i=1}^n \theta_i\right)+X,\]
where $X$ is a bihamiltonian vector field of $(P_1, P_2)$.
\end{lem}
\begin{prf}
We first decompose $Z\in \hat{\F}^1$ into the sum of homogeneous components:
\[Z=Z_0+Z_1+Z_2+\cdots, \quad \mbox{where } Z_k\in\hat{\F}^1_k.\]
It is proved in \cite{FL} that $Z_0$ must take the form
\begin{equation}
Z_0=\int \left(\sum_{i=1}^n \theta_i\right). \label{eq-Z0}
\end{equation}
Then $X:=Z-Z_0$ satisfies $[X, P_1]=[X, P_2]=0$, so it is a bihamiltonian vector field of $(P_1, P_2)$.
\end{prf}
The $X$-part of $Z$ plays no role in what follows, so it can be safely omitted.
We thus take $Z=Z_0$, and call it the unit vector field of $(P_1, P_2)$.
According to the convention used in \cite{BCIH-I}, this $Z$ corresponds to a vector field on $M$ given by
\[D_Z=\sum_{i=1}^n \frac{\partial}{\partial u^i}\]
(see Definition 2.2 and Equation (2.5) of \cite{BCIH-I}).
It is also proved in \cite{FL} that if \eqref{cond-exact} holds true then
\begin{equation}\label{w-10}
D_Z(f^i)=\sum_{k=1}^n \frac{\partial f^i}{\partial u^k}=0,\quad i=1, \dots, n.
\end{equation}
Note that the flatness of the vector field $Z$ (or, equivalently, $D_Z$) given in Definition \ref{zh-10-31} can be
represented as
\begin{equation}\label{cond-flat}
\nabla D_Z=0.
\end{equation}
\begin{lem} \label{lem-25}
$D_Z$ is flat if and only if $f_i:=(f^i)^{-1}\ (i=1, \dots, n)$ satisfy the following Egoroff conditions:
\begin{equation}\label{jw-5-2}
\frac{\partial f_i}{\partial u^j}=\frac{\partial f_j}{\partial u^i}, \quad \forall\,1\le i,j \le n.
\end{equation}
\end{lem}
\begin{prf}
The components of $D_Z$ read $Z^j=1$, so we have
\begin{equation}\label{zh-07}
0=\nabla^i Z^j=g_1^{ik}\frac{\partial Z^j}{\partial u^k}-\Gamma^{ij}_{k} Z^k=-\sum_{k=1}^n \Gamma^{ij}_{k}.
\end{equation}
By using \eqref{gamma-1} and \eqref{w-10}, the lemma can be easily proved.
\end{prf}
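To make Lemma \ref{lem-25} concrete, one can run a quick symbolic sanity check on hypothetical $n=2$ data: take $f_1=e^{u^1-u^2}$ and $f_2=2-e^{u^1-u^2}$, so that each $f^i=(f_i)^{-1}$ depends only on $u^1-u^2$ and $f_1+f_2$ is constant. This choice is not derived from any bihamiltonian structure discussed in the paper; it serves only to illustrate the condition \eqref{w-10}, the Egoroff condition \eqref{jw-5-2}, and the resulting symmetry of the rotation coefficients $\gamma_{ij}=(\partial_{u^j}\sqrt{f_i})/\sqrt{f_j}$.

```python
import sympy as sp

u1, u2 = sp.symbols('u1 u2', real=True)
x = u1 - u2

# Hypothetical n=2 data: f_i = 1/f^i chosen to depend only on u1 - u2,
# with f_1 + f_2 = 2 so that the Egoroff condition holds.
f1_low = sp.exp(x)           # f_1
f2_low = 2 - sp.exp(x)       # f_2
f1, f2 = 1/f1_low, 1/f2_low  # f^1, f^2

DZ = lambda h: sp.diff(h, u1) + sp.diff(h, u2)

# Condition (w-10): D_Z(f^i) = 0
assert sp.simplify(DZ(f1)) == 0 and sp.simplify(DZ(f2)) == 0

# Egoroff condition (jw-5-2): d f_1 / d u^2 = d f_2 / d u^1
assert sp.simplify(sp.diff(f1_low, u2) - sp.diff(f2_low, u1)) == 0

# Symmetry of the rotation coefficients gamma_ij = (d_j sqrt(f_i)) / sqrt(f_j)
psi1 = sp.exp(x/2)           # sqrt(f_1)
psi2 = sp.sqrt(f2_low)       # sqrt(f_2)
g12 = sp.diff(psi1, u2)/psi2
g21 = sp.diff(psi2, u1)/psi1
assert sp.simplify(g12 - g21) == 0
```

The same computation goes through for any pair $f_i=\phi_i(u^1-u^2)$ with $\phi_1+\phi_2$ constant.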
The above lemma implies that, if $Z$ is flat, then $\gamma_{ij}=\gamma_{ji}$ (see \eqref{zh-11-1}).
In this case, the conditions that $(P_1, P_2)$ is a bihamiltonian structure
are equivalent to the following equations for $\gamma$ (see the appendix of \cite{DLZ-1}):
\begin{align}
&\frac{\partial \gamma_{ij}}{\partial u^k}=\gamma_{ik}\gamma_{jk}, \quad \mbox{for distinct } i, j, k, \label{gamma-cond-1}\\
&\sum_{k=1}^n \frac{\partial \gamma_{ij}}{\partial u^k}=0, \label{gamma-cond-2}\\
&\sum_{k=1}^n u^k \frac{\partial \gamma_{ij}}{\partial u^k}=-\gamma_{ij}. \label{gamma-cond-3}
\end{align}
The condition \eqref{gamma-cond-2} is actually $D_Z(\gamma_{ij})=0$. If we introduce the Euler vector field
\begin{equation}
E=\sum_{k=1}^n u^k \frac{\partial}{\partial u^k}, \label{euler-vf}
\end{equation}
then the condition \eqref{gamma-cond-3} is $E(\gamma_{ij})=-\gamma_{ij}$, that is, $\gamma_{ij}$ has degree $-1$ if we adopt $\deg u^i=1$.
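As a small illustration of the constraints \eqref{gamma-cond-2} and \eqref{gamma-cond-3}, consider the hypothetical $n=2$ rotation coefficient $\gamma_{12}=\gamma_{21}=c/(u^1-u^2)$; for $n=2$ the condition \eqref{gamma-cond-1} is vacuous, since it requires three distinct indices. A sympy sketch:

```python
import sympy as sp

u1, u2, c = sp.symbols('u1 u2 c')

# Hypothetical n=2 rotation coefficient gamma_12 = gamma_21 = c/(u1 - u2)
gamma = c / (u1 - u2)

# (gamma-cond-2): D_Z(gamma) = sum_k d gamma / d u^k = 0
assert sp.simplify(sp.diff(gamma, u1) + sp.diff(gamma, u2)) == 0

# (gamma-cond-3): E(gamma) = -gamma, i.e. gamma is homogeneous of degree -1
assert sp.simplify(u1*sp.diff(gamma, u1) + u2*sp.diff(gamma, u2) + gamma) == 0
```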
Consider the linear system
\begin{align}
&\frac{\partial\psi_j}{\partial u^i}=\gamma_{ji} \psi_i,\quad i\ne j,\label{w-11}\\
&\frac{\partial\psi_i}{\partial u^i}=-\sum_{k\ne i} \gamma_{ki} \psi_k.\label{w-12}
\end{align}
The above conditions for $\gamma_{ij}$ ensure the compatibility of this linear system, so its solution space $\S$ has dimension $n$,
and we can find a fundamental system of solutions
\begin{equation}\label{zh-11-5}
\Psi_\alpha=(\psi_{1\alpha}(u),\dots, \psi_{n\alpha}(u))^T, \quad \alpha=1,\dots,n,
\end{equation}
which form a basis of $\S$.
\begin{lem}\label{zh-11-8}
Let $\psi=(\psi_1,\dots, \psi_n)$ be a nontrivial solution of the linear system \eqref{w-11}, \eqref{w-12} on the domain $D$, that is, there exist
$i\in \{1, \dots, n\}$ and $u \in D$ such that $\psi_i(u)\ne 0$.
Assume that the rotation coefficients $\gamma_{ij}$
satisfy the irreducibility condition given in Definition \ref{zh-11-2}, then there exists
$u_0\in D$ such that for each $i\in\{1, \dots, n\}$, $\psi_i(u_0)\ne 0$.
\end{lem}
\begin{prf}
For any subset $S\subseteq \{1, \dots, n\}$, define $\phi_S=\prod_{i \in S}\psi_i$.
Assume that $\phi_{\{1, \dots, n\}}=0$ on the domain $D$;
we will show that $\psi$ is a trivial solution, that is, $\phi_{\{i\}}=0$ on $D$ for each
$i=1, \dots, n$. To this end, we prove by downward induction on the size of $S$ that
$\phi_S=0$ on $D$ for any $S\subseteq \{1, \dots, n\}$. We already know that if $\# S=n$,
then $\phi_S=0$. Assume that for some $k\le n$ and any $S\subseteq \{1, \dots, n\}$
with $\#S=k$ we have $\phi_S(u)=0$ for all $u\in D$.
For $T\subseteq \{1, \dots, n\}$ with $\# T=k-1$,
and any given $u\in D$, we can find $i\in T$, and $j \notin T$ such that
$\gamma_{ij}(u)\ne 0$ because of the irreducibility condition. Without loss of generality we can assume that $\psi_i(u)\ne 0$, since otherwise $\phi_T(u)=0$ holds trivially. Take $S=T\cup \{j\}$, then consider $\frac{\partial \phi_S}{\partial u^i}$:
\[0=\frac{\partial \phi_S}{\partial u^i}=\sum_{k\in S}\phi_{S-\{k\}}\frac{\partial \psi_k}{\partial u^i}
=\sum_{k\in S, k\ne i}\phi_{S-\{i, k\}}\gamma_{ik}(\psi_i^2-\psi_k^2),\]
so we have
\[\phi_T\frac{\partial \phi_S}{\partial u^i}=\gamma_{ij}\phi_T^2\psi_i=0.\]
Since $\gamma_{ij}(u)\ne 0$, we have $\phi_T^2\psi_i=0$, which implies
$\phi_T=0$.
\end{prf}
We assume that $\gamma_{ij}$ is irreducible from now on, and shrink $D$ (if necessary) such that $D$ is contractible,
and $\psi_{i1}\ne 0$ on $D$ for each $i=1, \dots, n$.
\begin{lem}We have the following facts:
\begin{itemize}
\item[i)] Define \[\eta_{\alpha\beta}=\sum_{i=1}^n \psi_{i\alpha}\psi_{i\beta},\]
then $(\eta_{\alpha\beta})$ is a constant symmetric non-degenerate matrix. We denote its inverse matrix by $(\eta^{\alpha\beta})$.
\item[ii)] For each $\alpha=1,\dots, n$, the 1-form
\[\omega_\alpha=\sum_{i=1}^n \psi_{i\alpha} \psi_{i1} d u^i\]
is closed, so there exist smooth functions $v_\alpha$ such that $\omega_\alpha=d v_\alpha$.
Denote $v^\alpha=\eta^{\alpha\beta}v_\beta$, then $(v^1, \dots, v^n)$ can serve as a local coordinate system on $D$. In this local coordinate system we have
\[D_Z=\frac{\partial}{\partial v^1}.\]
\item[iii)] Define the functions \[c_{\alpha\beta\gamma}=\sum_{i=1}^n \frac{\psi_{i\alpha}\psi_{i\beta}\psi_{i\gamma}}{\psi_{i1}},\]
then $c_{\alpha\beta\gamma}$ are symmetric with respect to the three indices and satisfy the following conditions:
\begin{align}
&c_{1\alpha\beta}=\eta_{\alpha\beta},\label{cabc-cond-1}\\
& c_{\alpha\beta\xi}\eta^{\xi\zeta} c_{\zeta\gamma\delta}=c_{\delta\beta\xi}\eta^{\xi\zeta} c_{\zeta\gamma\alpha},\label{cabc-cond-2}\\
&\frac{\partial c_{\alpha\beta\gamma}}{\partial v^\xi}= \frac{\partial c_{\xi\beta\gamma}}{\partial v^\alpha}. \label{cabc-cond-3}
\end{align}
\end{itemize}
\end{lem}
\begin{prf}
The items i), ii) and the condition \eqref{cabc-cond-1} are easy, so we omit their proofs. The condition \eqref{cabc-cond-2} follows from the identity
$\psi_{i\xi}\eta^{\xi\zeta}\psi_{j\zeta}=\delta_{ij}$. The condition \eqref{cabc-cond-3} can be proved by the chain rule and the following identities
\begin{equation}
\frac{\partial v^\alpha}{\partial u^i}=\psi_i^\alpha\psi_{i1}, \quad \frac{\partial u^i}{\partial v^\alpha}=\frac{\psi_{i\alpha}}{\psi_{i1}},
\label{jacobian-uv}
\end{equation}
where $\psi_i^\alpha=\eta^{\alpha\beta}\psi_{i\beta}$.
\end{prf}
The above lemma immediately implies the following corollary.
\begin{cor}\label{cor-potential}
There exists a smooth function $F(v)$ on $D$ such that
\[c_{\alpha\beta\gamma}=\frac{\partial^3 F}{\partial v^\alpha\partial v^\beta\partial v^\gamma},\]
and it gives the potential of a Frobenius manifold structure (without the quasi-homogeneity condition) on $D$.
\end{cor}
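As an illustration of Corollary \ref{cor-potential}, one can check the conditions \eqref{cabc-cond-1} and \eqref{cabc-cond-2} symbolically for the classical two-dimensional potential $F=\frac12 (v^1)^2 v^2+e^{v^2}$; this example is not produced by the construction above and serves only as a sanity check:

```python
import sympy as sp

v1, v2 = sp.symbols('v1 v2')
V = [v1, v2]
# Classical 2D potential, used here only as a toy example
F = sp.Rational(1, 2)*v1**2*v2 + sp.exp(v2)

c = lambda a, b, g: sp.diff(F, V[a], V[b], V[g])  # c_{(a+1)(b+1)(g+1)}

# (cabc-cond-1): eta_ab = c_1ab; here eta is the constant antidiagonal matrix
eta = sp.Matrix(2, 2, lambda a, b: c(0, a, b))
assert eta == sp.Matrix([[0, 1], [1, 0]])
eta_inv = eta.inv()

# (cabc-cond-2): the associativity (WDVV) equations
for a in range(2):
    for b in range(2):
        for g in range(2):
            for d in range(2):
                lhs = sum(c(a, b, x)*eta_inv[x, z]*c(z, g, d)
                          for x in range(2) for z in range(2))
                rhs = sum(c(d, b, x)*eta_inv[x, z]*c(z, g, a)
                          for x in range(2) for z in range(2))
                assert sp.simplify(lhs - rhs) == 0
```

For $n=2$ the associativity equations hold for any potential; the check becomes nontrivial starting from $n=3$.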
By using \eqref{jacobian-uv} we have
\[\frac{\partial}{\partial u^i}\circ \frac{\partial}{\partial u^j}=c_{\alpha\beta}^\gamma\frac{\partial v^\alpha}{\partial u^i}\frac{\partial v^\beta}{\partial u^j}
\frac{\partial u^k}{\partial v^\gamma}\frac{\partial}{\partial u^k}=\delta_{ij}\frac{\partial}{\partial u^i},\]
so $u^1, \dots, u^n$ are the canonical coordinates of this Frobenius manifold. Then its first metric reads
\[\langle d u^i, d u^j\rangle_1=\eta^{\alpha\beta}\frac{\partial u^i}{\partial v^\alpha}\frac{\partial u^j}{\partial v^\beta}
=\delta_{ij} \psi_{i1}^{-2},\]
which is in general not equal to the original metric $g_1$ associated to the first
Hamiltonian structure $P_1$. Though this Frobenius manifold may not be quasi-homogeneous, we can still define its
second metric as follows:
\[\langle d u^i, d u^j\rangle_2=\delta_{ij} u^i \psi_{i1}^{-2}.\]
The two metrics $\langle\ \,,\ \rangle_1$ and $\langle\ \,,\ \rangle_2$ are compatible, since they have the same rotation coefficients with
the original $g_1$, $g_2$ associated to the bihamiltonian structure $(P_1, P_2)$.
The above Frobenius manifold structure depends on the choice of the solution $\Psi_1$ of the linear system \eqref{w-11}, \eqref{w-12}. It is easy to see that
\[\psi_{i1}=f_i^{\frac12}=(f^i)^{-\frac12},\quad i=1,\dots,n\] give a solution to the linear system \eqref{w-11}, \eqref{w-12}.
If we choose it as $\Psi_1$, then the two metrics $\langle\ \,,\ \rangle_1$ and $\langle\ \,,\ \rangle_2$ coincide with $g_1$, $g_2$,
so we call the corresponding Frobenius manifold structure the {\em canonical one} associated to $(P_1, P_2; Z)$.
There are also other choices for $\Psi_1$ such that the corresponding Frobenius manifold is quasi-homogeneous.
By using the identity \eqref{gamma-cond-3}, one can show that the Euler vector field $E$ defined by \eqref{euler-vf} acts on the solution space $\S$ as a linear transformation.
Suppose we are working in the complex manifold case; then $E$ has at least one eigenvector in $\S$. We denote this eigenvector by $\Psi_1$
and denote its eigenvalue by $\mu_1$, and then choose the remaining basis vectors $\Psi_2, \dots, \Psi_n$ such that the matrix of $E$ takes the Jordan normal form,
that is, there exist $\mu_\alpha\in\mathbb{C}$ and $p_\alpha=0$ or $1$, such that
\[E(\Psi_\alpha)=\mu_\alpha \Psi_\alpha+p_{\alpha-1}\Psi_{\alpha-1}.\]
\begin{lem}
The Frobenius manifold structure corresponding to the above $\Psi_1$ is quasi-homogeneous with the Euler vector field $E$
and the charge $d=-2\mu_1$.
\end{lem}
\begin{prf}
The trivial identity $E(\eta_{\alpha\beta})=0$ implies that
\[\left(\mu_{\alpha}\eta_{\alpha\beta}+p_{\alpha-1}\eta_{(\alpha-1)\beta}\right)
+\left(\mu_{\beta}\eta_{\alpha\beta}+p_{\beta-1}\eta_{\alpha(\beta-1)}\right)=0.\]
Denote by $L_E$ the Lie derivative with respect to $E$, then the identity $L_E \omega_\alpha=d E(v_\alpha)$ implies
\[d E(v_\alpha)=\left(\mu_\alpha+\mu_1+1\right)dv_\alpha+p_{\alpha-1}d v_{\alpha-1},\]
so there exist some constants $r_\alpha\in\mathbb{C}$ such that
\[E(v_\alpha)=\left(\mu_\alpha+\mu_1+1\right)v_\alpha+p_{\alpha-1}v_{\alpha-1}+r_\alpha.\]
On the other hand, we have
\begin{align*}
E(c_{\alpha\beta\gamma})=&\left(\mu_\alpha+\mu_\beta+\mu_\gamma-\mu_1\right)c_{\alpha\beta\gamma}\\
&\quad+p_{\alpha-1}c_{(\alpha-1)\beta\gamma}+p_{\beta-1}c_{\alpha(\beta-1)\gamma}+p_{\gamma-1}c_{\alpha\beta(\gamma-1)}.
\end{align*}
By using the above identities, one can show that
\[\frac{\partial^3}{\partial v^\alpha\partial v^\beta \partial v^\gamma}\left(E(F)-(3+2\mu_1) F\right)=0\quad \mbox{for all}\quad \alpha,\,\beta,\,\gamma,\]
which gives the quasi-homogeneity condition for $F$.
\end{prf}
For each eigenvector $\Psi_1$ of $E$, one can construct a quasi-homogeneous Frobenius manifold. All these Frobenius manifolds (including the
canonical one) are related by Legendre transformations (see \cite{Du-1}).
To see this, let us denote by $F(v)=F(v^1,\dots, v^n)$ and $\tilde{F}(\tilde{v})=\tilde{F}(\tilde{v}^1,\dots, \tilde{v}^n)$ the Frobenius manifold potentials
constructed above starting from the fundamental solutions
$(\Psi_1,\dots, \Psi_n)$ and $(\tilde{\Psi}_1,\dots, \tilde{\Psi}_n)$
of the linear system \eqref{w-11}, \eqref{w-12}. These two fundamental solutions are related by a non-degenerate constant matrix $A=(a^\alpha_\beta)$ by the formula
\[ (\Psi_1,\dots, \Psi_n)=(\tilde\Psi_1,\dots, \tilde\Psi_n) A.\]
Introduce the new coordinates
\[\begin{pmatrix} \hat{v}^1 \\ \vdots \\ \hat{v}^n\end{pmatrix}=A \begin{pmatrix} v^1 \\ \vdots \\ v^n\end{pmatrix}\]
and denote
\[ \hat{F}(\hat{v})=\hat{F}(\hat{v}^1,\dots, \hat{v}^n):=F(v).\]
Then it is easy to verify that
\[\hat{v}^\alpha=\tilde{\eta}^{\alpha\beta} a^\gamma_1 \frac{\partial^2\tilde{F}(\tilde{v})}{\partial \tilde{v}^\beta\partial\tilde{v}^\gamma},\quad
\frac{\partial^2\hat{F}(\hat{v})}{\partial\hat{v}^\alpha\partial\hat{v}^\beta}=\frac{\partial^2\tilde{F}(\tilde{v})}{\partial\tilde{v}^\alpha\partial\tilde{v}^\beta},\]
and in the $\hat{v}^1,\dots, \hat{v}^n$ coordinates the metrics $g_1, g_2$
have the expressions
\[\frac{\partial \hat{v}^\alpha}{\partial u^i} g_1^{ij}(u) \frac{\partial \hat{v}^\beta}{\partial u^j}
=\tilde{\eta}^{\alpha\beta}
,\quad
\frac{\partial \hat{v}^\alpha}{\partial u^i} g_2^{ij}(u) \frac{\partial \hat{v}^\beta}{\partial u^j}
=\tilde{g}^{\alpha\beta}(\tilde{v}).\]
\noindent{\em{Proof of Theorem \ref{mainthm00}\,}}
The first part of the theorem
follows from the results of \cite{XZ}, and the second part of the theorem is proved by the arguments given above. The theorem is proved.\hfill{$\Box$}
\section{The principal hierarchy and its tau structure}\label{sec-3}
Let $(P_1, P_2; Z)$ be a flat exact bihamiltonian structure.
Denote
\begin{align*}
&d_a=\mathrm{ad}_{P_a}:\hat{\F}\to\hat{\F},\quad a=1, 2,\\
&\delta=\mathrm{ad}_Z:\hat{\F}\to\hat{\F}.
\end{align*}
\begin{dfn}\mbox{}
\begin{itemize}
\item[i)] Define $\H:=\mathrm{Ker}(d_2\circ d_1)\cap \hat{\F}^0$, whose elements are called bihamiltonian conserved quantities.
\item[ii)] Define $\mathcal{X}:=\mathrm{Ker}(d_1)\cap\mathrm{Ker}(d_2)\cap \hat{\F}^1$, whose elements are called bihamiltonian vector fields.
\end{itemize}
\end{dfn}
Note that the space $\mathcal{X}$ is actually the bihamiltonian cohomology $BH^1(\hat{\F}, P_1, P_2)$, see \cite{BCIH-I}.
\begin{lem}\label{lem-H0X1}
$\H\subset \hat{\F}^0_0$, and $\mathcal{X}\subset \hat{\F}^1_1$.
\end{lem}
\begin{prf}
If $[P_2, [P_1, H]]=0$, then there exists $K\in \hat{\F}^0$ such that $[P_1, H]=[P_2, K]$. By using Lemma 4.1 of \cite{DLZ-1},
we know that $H\in \hat{\F}^0_0$.
If $X=\int(X^\alpha\theta_\alpha)\in\hat{\F}^1_0$ satisfies $[P_1,X]=[P_2, X]=0$, then we have
\[\nabla_j X^i=0,\quad \hat{\nabla}_j X^i=0.\]
Recall that $\nabla$, $\hat{\nabla}$ are the Levi-Civita connections of the metrics $g_1$, $g_2$ associated
with $P_1, P_2$ respectively,
\begin{equation}\label{zh-12-30a}
\nabla_i=\nabla_{\frac{\partial}{\partial u^i}},\quad \hat{\nabla}_i=\hat{\nabla}_{\frac{\partial}{\partial u^i}}
\end{equation}
and $u^1, \dots, u^n$ are the canonical coordinates of $(P_1, P_2)$.
It follows from the explicit expressions of $g_a^{ij}$, $\Gamma^{ij}_{k,a}$ that $X^i=0$ and so we have
$BH^1_0(\hat{\F}, P_1, P_2)\cong 0$.
On the other hand, Lemma 4.1 of \cite{DLZ-1}
implies that $BH^1_{\ge2}(\hat{\F}, P_1, P_2)\cong 0$, so consequently $\mathcal{X}=BH^1_1(\hat{\F}, P_1, P_2)$.
The lemma is proved.
\end{prf}
\begin{cor}
\begin{itemize}
\item[i)] For any $X, Y \in \mathcal{X}$, we have $[X, Y]=0$;
\item[ii)] For any $X\in \mathcal{X}$, $H\in \H$, we have $[X, H]=0$;
\item[iii)] For any $H, K \in \H$, we have $\{H, K\}_{P_1}:=[[P_1, H], K]=0$.
\end{itemize}
\end{cor}
\begin{prf}
i) If $X, Y \in \mathcal{X}$, then the above lemma shows that $\deg X=\deg Y=1$, so $\deg [X, Y]=2$. But we also have $[X, Y]\in \mathcal{X}$, so $[X, Y]=0$.
ii) If $X\in \mathcal{X}$, $H\in \H$, then $K=[X, H]\in \H$. But $\deg X=1$, $\deg H=0$, so $\deg K=1$, which implies $K=0$.
iii) Take $X=[P_1, H]$, then by applying ii) we obtain $\{H, K\}_{P_1}=0$.
\end{prf}
\begin{lem} \label{lem-23}
We have the following isomorphism
\begin{equation}\label{w-1}
\mathcal{X}\cong \H/\mathcal{V},
\end{equation}
where $\mathcal{V}=\mathrm{Ker}(d_1)\cap \hat{\F}^0$ is the space of Casimirs of $P_1$. Moreover, the following statements hold true:
\begin{itemize}
\item[i)] A local functional $H\in\hat{\F}^0$ is a bihamiltonian conserved quantity if and only if one can choose its density $h$ so that $h\in \mathcal{A}_0$
and satisfies the condition
\begin{equation}\label{w-2}
\nabla_i\nabla_jh=0,\quad i\ne j,
\end{equation}
where $\nabla_i=\nabla_{\frac{\partial}{\partial u^i}}$ are defined as in \eqref{zh-12-30a}.
\item[ii)] A vector field $X\in \hat{\F}^1$ is a bihamiltonian vector field if and only if it has the following form
\[X=\int\sum_{i=1}^n A^i(u)u^{i,1}\theta_i,\]
where $A^i(u)$ satisfy the following equations:
\begin{equation}
\frac{\partial A^i}{\partial u^j}=\Gamma^i_{ij}\left(A^j-A^i\right), \quad \textrm{for } j\ne i, \label{de-for-X}
\end{equation}
where $\Gamma^i_{ij}$ are the Christoffel coefficients of the Levi-Civita connection of $g_1$.
\end{itemize}
\end{lem}
\begin{prf}
Consider the map $\phi=d_1|_{\H}:\H\to \mathcal{X}$. It is easy to see that $\phi$ is well-defined, and $\mathrm{Ker}(\phi)=\mathcal{V}$. Note that
\[H^1_{\ge 1}(\hat{\F}, P_a)\cong 0, \quad a=1,2,\]
so for a given $X\in BH^1_{\ge1}(\hat{\F}, P_1, P_2)$, there exist $H, G\in\mathcal{F}$ such that
\[X=[P_1, H]=[P_2, G].\]
From the second equality we also know that $H\in\H$. So the map $\phi$ is surjective, and it induces the isomorphism \eqref{w-1}.
Let $H\in\H$, then it yields a bihamiltonian vector field $X=[P_1, H]$. According to Lemma \ref{lem-H0X1}, $H\in\mathcal{F}_0$, $X\in\hat{\F}^1_1$.
So we can choose the density $h$ of $H=\int(h)$ such that $h\in \mathcal{A}_0$, and
\[X=\int(X^i_ju^{j,1}\theta_i),\]
where $X^i_j=-\nabla^i\nabla_jh$, and $\nabla^i=g^{ik}_1 \nabla_k$. The conditions $[P_1, X]=0$
and $[P_2, X]=0$ read
\begin{align}
& g_1^{ij}X^k_j=g_1^{kj}X^i_j,\quad \nabla_kX^i_j=\nabla_jX^i_k, \label{X-cond-1}\\
& g_2^{ij}X^k_j=g_2^{kj}X^i_j,\quad \hat{\nabla}_kX^i_j=\hat{\nabla}_jX^i_k. \label{X-cond-2}
\end{align}
The diagonal form \eqref{zh-10-30} of $g_1$ and $g_2$ and the first equations of \eqref{X-cond-1} and \eqref{X-cond-2} imply that
\[(u^i-u^j) f^j X^i_j=0,\]
so $X^i_j$ is diagonal. Then the second equation of \eqref{X-cond-1} gives the desired equation \eqref{de-for-X}.
Let $\hat{\Gamma}^i_{ij}$ be the Christoffel coefficients of the Levi-Civita connection of $g_2$, then one can show that for $i\ne j$
\[\hat{\Gamma}^i_{ij}=\Gamma^i_{ij}=\frac{1}{2f_i}\frac{\partial f_i}{\partial u^j},\]
so the second equation of \eqref{X-cond-2} also gives \eqref{de-for-X}. The lemma is proved.
\end{prf}
\begin{lem}\label{lem-zh-3-3}
We have $\delta(\H)\subseteq \H$. Denote $\varphi=\delta|_{\H}:\H\to\H$, then $\varphi$ is surjective and $\dim\mathrm{Ker}(\varphi)=n$.
\end{lem}
\begin{prf}
Let $H\in\H$, so we have $[P_2, [P_1, H]]=0$. From the graded Jacobi identity it follows that
\begin{align*}
[P_2, [P_1, [Z, H]]]=&[P_2, -[[H, P_1],Z]-[[P_1, Z],H]]\\
=& [[P_2,Z],[P_1, H]]+[[P_2, [P_1, H]],Z]=[P_1, [P_1, H]]=0,
\end{align*}
so we have $\delta(\H)\subseteq \H$.
Suppose $H=\int(h)\in\H$, then from Lemma \ref{lem-23} it follows that the density $h$ can be chosen to belong to $\mathcal{A}_0$ and
$\nabla_i\nabla_j h=0$ for $i\ne j$. If $\varphi(H)=0$, then
\[\sum_{i=1}^n\nabla_i h=0,\]
so we have $\nabla_i\nabla_j h=0$ for any $i, j$, i.e.\ $H\in\mathcal{V}$. Thus $h$ can be represented as
\[h=\sum_{\alpha=1}^n c_\alpha v^\alpha+c_0,\]
where $c_0, c_1, \dots, c_n$ are some constants, and $v^\alpha$ are the flat coordinates of $g_1$.
From the condition
$D_Z=\frac{\partial}{\partial v^1}$ it follows that $c_1=0$, so $\dim\mathrm{Ker}(\varphi)=n$.
To prove that $\varphi$ is surjective, we need to show that for any $g\in\mathcal{A}_0$ satisfying $\nabla_i\nabla_j g=0\ (i\ne j)$, there exists $h\in\mathcal{A}_0$ such that
\begin{equation}\label{zh-08}
\nabla_i\nabla_j h=0\ (i\ne j),\quad \sum_{i=1}^n\nabla_i h=g.
\end{equation}
Denote $\xi_j=\nabla_jh$, then by using the identity \eqref{zh-07}
we know that the above equations imply that
\begin{equation}\label{w-5}
\nabla_i\xi_j=\left\{\begin{array}{cc}0, & i\ne j; \\ \nabla_i g, & i=j.\end{array}\right.
\end{equation}
Let us first prove that the functions $\zeta_{ij}$ defined by the right-hand side of \eqref{w-5}
satisfy the equalities
\begin{equation}
\nabla_k\zeta_{ij}=\nabla_i\zeta_{kj}. \label{nabla-zeta}
\end{equation}
Denote by $\Gamma_{ij}^k$ the Christoffel coefficients of the first metric, then we have
\begin{align*}
\nabla_k\zeta_{ij} &= \zeta_{ij,k}-\Gamma^{\alpha}_{ki}\zeta_{\alpha j}-\Gamma^{\alpha}_{kj}\zeta_{i\alpha}\\
&= \zeta_{ij,k}-\Gamma^j_{ki}\zeta_{jj}-\Gamma^i_{kj}\zeta_{ii}.
\end{align*}
Here summation over the repeated upper and lower \emph{Greek} indices is assumed. Note that we do not sum over the repeated \emph{Latin} indices.
Since $\Gamma_{ki}^j=\Gamma_{ik}^j$, in order to prove the identity \eqref{nabla-zeta} we only need to show that
\[\zeta_{ij,k}-\Gamma^i_{kj}\zeta_{ii}=\zeta_{kj,i}-\Gamma^k_{ij}\zeta_{kk}.\]
When $i=j=k$ or $i, j, k$ are distinct, the above equation holds true trivially, so we only need to consider the case when $i=j$ and $i\ne k$.
In this case, the above equation becomes
\[\left(\nabla_i g\right)_{,k}-\Gamma^i_{ki}\nabla_i g+\Gamma^k_{ii}\nabla_k g=0.\]
On the other hand, the function $g$ satisfies $\nabla_k\nabla_i g=0\ (k\ne i)$, which implies
\[\left(\nabla_i g\right)_{,k}=\Gamma^k_{ki}\nabla_k g+\Gamma^i_{ki}\nabla_i g,\]
here we used the fact that $\Gamma_{ij}^k=0$ if $i, j, k$ are distinct. So we only need to show
\[\Gamma^k_{ki}+\Gamma^k_{ii}=0,\quad i\ne k,\]
which is equivalent to the flatness condition \eqref{jw-5-2}.
The equalities given in \eqref{nabla-zeta} imply that there exist solutions $\xi_1,\dots, \xi_n$ of the equations \eqref{w-5}. Since $\zeta_{ij}$ are
symmetric with respect to the indices $i, j$, we can find a function $h\in\mathcal{A}_0$ so that
$\xi_i=\nabla_i h$. It follows from \eqref{zh-07} and \eqref{w-5} that $\sum_{i=1}^n \nabla_i h-g$ is a constant, thus by adjusting the function $h$
by adding $c\, v^1$
for a certain constant $c$ we prove the existence of $h\in\mathcal{A}_0$ satisfying the equations
given in \eqref{zh-08}. The lemma is proved.
\end{prf}
The space $\H$ is too big, so we restrict our interest to a ``dense'' (in a certain sense) subspace of $\H$.
\begin{dfn}
Define $\H^{(-1)}=\mathcal{V}$, $\H^{(p)}=\varphi^{-1}\left(\H^{(p-1)}\right)$, and
\[\H^{(\infty)}=\bigcup_{p\ge -1}\H^{(p)}.\]
\end{dfn}
\begin{rmk}
The action of $\varphi$ is just $\frac{\partial}{\partial v^1}$, so the densities of the elements of $\H^{(\infty)}$ are polynomial in the indeterminate $v^1$.
This space is indeed dense in the space of smooth functions of $v^1$ with respect to an appropriate topology.
\end{rmk}
It is easy to see that $\delta(\mathcal{V})\subseteq \mathcal{V}$, so
\[\mathcal{V}=\H^{(-1)}\subseteq \H^{(0)}\subseteq \cdots\subseteq \H^{(\infty)}.\]
Note that $\dim \H^{(-1)}=n+1$, and
\[\dim \H^{(p)}=\dim\H^{(p-1)}+\dim\mathrm{Ker} (\varphi)=\dim\H^{(p-1)}+n,\]
so we have $\dim \H^{(p)}=n(p+2)+1$.
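The dimension count can be sanity-checked numerically against the recursion (the value $n=3$ below is an arbitrary choice):

```python
# Check dim H^(p) = n(p+2)+1 against the recursion
# dim H^(p) = dim H^(p-1) + n, starting from dim H^(-1) = n + 1.
n = 3
dim = n + 1  # dim H^(-1)
for p in range(0, 10):
    dim += n
    assert dim == n*(p + 2) + 1
```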
Suppose the collection of functions
\[\{h_{\alpha,p}\in\mathcal{A}_0\mid \alpha=1, \dots, n;\ p=0, 1, 2, \dots\}\]
is a calibration of $(P_1, P_2; Z)$ (see Definition \ref{dfn-cali}).
Then it is easy to see that $h_{\alpha, p}\in \H^{(p)}$, and when $p\ge0$, they form a basis of $\H^{(p)}/\H^{(p-1)}$.
When $p=-1$, $\H^{(-1)}=\mathcal{V}$ contains not only $h_{\alpha, -1}=v_{\alpha}$ but also the trivial functional $\int (1)$,
and together they form a basis of $\H^{(-1)}$. Let us rephrase the conditions that must be satisfied by the functions $h_{\alpha,p}$ of a calibration as follows:
\begin{flalign}
\qquad & 1.\quad H_{\alpha,p}=\int(h_{\alpha, p})\in\H,& \label{calib-1} \\
\qquad & 2.\quad h_{\alpha, -1}=v_{\alpha},\quad D_Z(h_{\alpha, p})=h_{\alpha, p-1}\ (p\ge 1),& \label{calib-2} \\
\qquad & 3.\quad \mbox{The normalization condition \eqref{normalize01}}.& \label{calib-3}
\end{flalign}
Now let us construct a calibration for the canonical Frobenius manifold structure $F(v)$ of $(P_1, P_2; Z)$.
Following the construction of \cite{Du-1}, we first define the functions
\[\theta_{\alpha,0}(v)=v_\alpha, \quad \theta_{\alpha,1}(v)=\frac{\partial F(v)}{\partial v^\alpha},\quad \alpha=1,\dots, n,\]
where $F$ is introduced in Corollary \ref{cor-potential}.
By adding to the function $F(v)$ a certain quadratic term in $v^1,\dots, v^n$, if needed, we can assume that
\[ \frac{\partial^2 F(v)}{\partial v^1 \partial v^\alpha}=v_\alpha .\]
Thus we have the following relation:
\[ D_Z\theta_{\alpha,1}=\frac{\partial\theta_{\alpha,1}}{\partial v^1}=\theta_{\alpha,0}.\]
The functions $\theta_{\alpha,p}(v)$ for $p\ge 2$ can be defined recursively by using the following relations:
\begin{equation}\label{theta-recur}
\frac{\partial^2\theta_{\gamma,p+1}(v)}{\partial v^\alpha\partial v^\beta}=c_{\alpha\beta\xi} \eta^{\xi\zeta} \frac{\partial\theta_{\gamma,p}(v)}{\partial v^\zeta},\quad \alpha, \beta, \gamma=1,\dots, n.
\end{equation}
The existence of solutions of these recursion relations is ensured by the associativity conditions \eqref{cabc-cond-2} and the integrability conditions \eqref{cabc-cond-3}.
We can require, as it is done in \cite{Du-1}, that these functions also satisfy the following normalization conditions
\[ \frac{\partial\theta_\alpha(v; z)}{\partial v^\xi} \eta^{\xi\zeta}\frac{\partial\theta_{\beta}(v; -z)}{\partial v^\zeta}=\eta_{\alpha\beta},\quad
\alpha, \beta=1,\dots, n.\]
Here $\theta_{\alpha}(v; z)=\sum_{p\ge 0} \theta_{\alpha,p}(v) z^p$. Now we define the functions $h_{\alpha,p}(v)$ so that
their generating functions $h_\alpha(v; z)=\sum_{p\ge -1} h_{\alpha,p}(v) z^{p+1}$ satisfy the following defining relation
\begin{equation}
h_{\alpha}(v; z)=\frac1{z} \frac{\partial\theta_{\alpha}(v; z)}{\partial v^1} -\frac{1}{z} \eta_{\alpha 1}. \label{dfn-h}
\end{equation}
Moreover, these functions also satisfy the normalization condition
\[ \frac{\partial h_\alpha(v; z)}{\partial v^\xi} \eta^{\xi\zeta}\frac{\partial h_{\beta}(v; -z)}{\partial v^\zeta}=\eta_{\alpha\beta},\quad
\alpha, \beta=1,\dots, n.\]
By adding, if needed, a certain term linear in $v^1, \dots, v^n$ to the function $F(v)$, we also have the relations
\begin{equation}\label{zh-09}
h_{\alpha, 0}(v)=\frac{\partial F(v)}{\partial v^\alpha},\quad \alpha=1,\dots, n.
\end{equation}
For the above constructed functions $\{h_{\alpha, p}\}$, denote $H_{\alpha, p}=\int h_{\alpha, p}$, and define
\begin{equation}\label{w-8}
X_{\alpha, p}=-[P_1, H_{\alpha, p}]=\int \left(\eta^{\gamma\lambda}\partial\left(\frac{\delta H_{\alpha, p}}{\delta v^\lambda}\right)\theta_\gamma\right), \quad p\ge 0.
\end{equation}
Then the associated evolutionary vector field $D_{X_{\alpha, p}}$ (see Definition 2.2 and Equation (2.5) of \cite{BCIH-I} for details)
corresponds to the system of first order quasilinear evolutionary PDEs \eqref{hamilt01}
\begin{equation}\label{jw-5-3}
\frac{\partial v^\gamma}{\partial t^{\alpha, p}}=D_{X_{\alpha, p}}(v^\gamma),\quad \alpha=1,\dots, n,\, p\ge 0.
\end{equation}
\begin{lem}\label{lem-cali}
The functions $h_{\alpha,p}$ and the associated local functionals that we constructed above have the following properties:
\begin{itemize}
\item[i)] $H_{\alpha,p}=\int(h_{\alpha, p})\in\H^{(p)}$,
\item[ii)] $h_{\alpha, -1}=v_{\alpha}$, $D_Z(h_{\alpha, p})=h_{\alpha, p-1}\ (p\ge 0)$.
\end{itemize}
\end{lem}
\begin{prf}
According to the definition \eqref{dfn-h} of $h_{\gamma,p}$, we have
\[h_{\gamma, p}=\frac{\partial \theta_{\gamma, p+2}}{\partial v^1}=\sum_{k=1}^n \frac{\partial \theta_{\gamma, p+2}}{\partial u^k}.\]
We only need to prove that $H_{\gamma,p}=\int(h_{\gamma, p})\in\H$, that is $\nabla_i\nabla_j h_{\gamma, p}=0$ for $i\ne j$. The other properties are easy to verify.
The condition $\nabla_i\nabla_j h_{\gamma, p}=0$ for $i\ne j$ reads
\[\frac{\partial^2 h_{\gamma, p}}{\partial u^i\partial u^j}=\sum_{l=1}^n\Gamma_{ij}^l\frac{\partial h_{\gamma, p}}{\partial u^l},\]
which is equivalent to
\begin{equation}
\frac{\partial^2 \theta_{\gamma, p+1}}{\partial u^i\partial u^j}=\sum_{k, l=1}^n\Gamma_{ij}^l\frac{\partial^2 \theta_{\gamma, p+2}}{\partial u^k \partial u^l}.\label{iden-to-prove}
\end{equation}
The recursion relation \eqref{theta-recur} of $\theta_{\alpha, p}$ has the following form in the canonical coordinates:
\[\frac{\partial^2 \theta_{\gamma, p+1}}{\partial u^i \partial u^j}=\delta_{ij}\frac{\partial \theta_{\gamma, p}}{\partial u^i}+\frac{\partial (\psi_i^\alpha\psi_{i1})}{\partial u^j}\frac{\partial \theta_{\gamma, p+1}}{\partial v^\alpha}.\]
Note that $i\ne j$ in the identity \eqref{iden-to-prove}, so its left hand side reads
\[\frac{\partial (\psi_i^\alpha\psi_{i1})}{\partial u^j}\frac{\partial \theta_{\gamma, p+1}}{\partial v^\alpha}
=\gamma_{ij}\left(\psi_{i1}\psi_j^\alpha+\psi_{j1}\psi_i^\alpha\right)\frac{\partial \theta_{\gamma, p+1}}{\partial v^\alpha}.\]
The right hand side of \eqref{iden-to-prove} then reads
\begin{equation}
\sum_{k, l=1}^n\Gamma_{ij}^l\frac{\partial^2 \theta_{\gamma, p+2}}{\partial u^k \partial u^l}
=\sum_{k, l=1}^n\Gamma_{ij}^l\left(\delta_{kl}\frac{\partial \theta_{\gamma, p+2}}{\partial u^k}
+\frac{\partial (\psi_k^\alpha\psi_{k1})}{\partial u^l}\frac{\partial \theta_{\gamma, p+2}}{\partial v^\alpha}\right). \label{iden-left-hand}
\end{equation}
Note that
\[\sum_{k=1}^n \psi_k^\alpha\psi_{k1}=\delta^\alpha_1\]
is a constant, so the second summation in \eqref{iden-left-hand} vanishes. In the first summation, we have
\[\Gamma_{ij}^l=\gamma_{ij}\left(\delta_{il}\frac{\psi_{j1}}{\psi_{i1}}+\delta_{jl}\frac{\psi_{i1}}{\psi_{j1}}\right), \quad \mbox{for } i\ne j,\]
and
\[\frac{\partial \theta_{\gamma, p+2}}{\partial u^k}=\frac{\partial v^\alpha}{\partial u^k}\frac{\partial \theta_{\gamma, p+2}}{\partial v^\alpha}
=\psi^\alpha_k\psi_{k1}\frac{\partial \theta_{\gamma, p+2}}{\partial v^\alpha},\]
which leads to the identity \eqref{iden-to-prove}. The lemma is proved.
\end{prf}
\begin{lem} \label{lem-trans}
The first flow $\frac{\partial}{\partial t^{1,0}}$ is given by the
translation along the spatial variable $x$, i.e.
\[\frac{\partial}{\partial t^{1,0}}=\partial.\]
\end{lem}
\begin{prf}From our definition \eqref{w-8}, \eqref{jw-5-3} of the evolutionary vector fields we have
\begin{align*}
&\frac{\partial v^\alpha}{\partial t^{1,0}}=\eta^{\alpha\beta}\partial\frac{\partial h_{1,0}}{\partial v^\beta}=\eta^{\alpha\beta}\partial\frac{\partial^2 \theta_{1,2}}{\partial v^\beta \partial v^1}\\
=&\eta^{\alpha\beta}\partial\frac{\partial \theta_{1,1}}{\partial v^\beta}=\eta^{\alpha\beta}\frac{\partial^2 \theta_{1,1}}{\partial v^\beta \partial v^{\gamma}}v^{\gamma}_x=v^{\alpha}_x.
\end{align*}
Here we use the recursion relation \eqref{theta-recur}. The lemma is proved.
\end{prf}
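The computation in the proof can be checked directly in the same toy $n=1$ setting ($\eta=1$, $F=v^3/6$; an assumption made only for illustration), where $h_{1,0}=v^2/2$.

```python
import sympy as sp

# Check of the lemma in the toy n = 1 setting (eta = 1, F = v^3/6,
# an illustrative assumption), where h_{1,0} = v^2/2.
x = sp.symbols('x')
v = sp.Function('v')(x)

h10 = v**2 / 2
# the t^{1,0}-flow: dv/dt^{1,0} = eta * d/dx (dh_{1,0}/dv)
flow = sp.diff(h10.diff(v), x)
assert flow == sp.diff(v, x)   # the first flow is x-translation
```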
From Lemma \ref{lem-cali} and Lemma \ref{lem-trans} we have the following proposition.
\begin{prp}\label{whatever}
The collection of functions
\[\{h_{\alpha, p}(v)\, |\, \alpha=1,\dots, n; p=0, 1,2, \dots\}\]
that we constructed above is a calibration of the flat exact bihamiltonian structure $(P_1, P_2; Z)$.
\end{prp}
In the next section, we will use some results proved in the Appendix, which require that there exists a bihamiltonian vector field
\[X=\int \left(\sum_{i=1}^n A^i(u)u^{i,1}\theta_i\right)\in\mathcal{X}\]
such that for all $i=1, \dots, n$, and for some $u \in D$,
\[\frac{\partial}{\partial u^i}A^i(u)\ne 0.\]
In this case, $X$ is called \emph{nondegenerate}.
\begin{lem}
If the bihamiltonian vector field $X$ is nondegenerate, then $A^i(u)\ne A^j(u)$ for all $i\ne j$ and for some $u\in D$.
\end{lem}
\begin{prf}
According to \eqref{de-for-X}, if $A^i(u)=A^j(u)$ for some $i\ne j$ and all $u\in D$, then
\[\frac{\partial A^i}{\partial u^i}=\frac{\partial A^j}{\partial u^i}=\Gamma^j_{ji}\left(A^i-A^j\right)=0.\]
The lemma is proved.
\end{prf}
By shrinking the domain $D$, the nondegeneracy condition for $X$ and the result of the above lemma can be modified
to ``for all $u\in D$'' instead of ``for some $u\in D$''.
\begin{lem}\label{new-lemma}
\mbox{}
\begin{itemize}
\item[i)] When $n=1$, the bihamiltonian vector fields $X_{1,p}\ (p>0)$ are always nondegenerate.
\item[ii)] When $n\ge2$, if the bihamiltonian structure $(P_1,P_2)$ is irreducible, then there exists a nondegenerate bihamiltonian vector field $X$ satisfying $[Z, X]=0$.
\end{itemize}
\end{lem}
\begin{prf}
We rewrite the bihamiltonian vector field $X_{\alpha, p}$ defined by \eqref{w-8} in the form
\[X_{\alpha, p}=\int\left(\sum_{i=1}^n A^i_{\alpha, p}(u) u^{i,1}\theta_i\right),\]
then $A^i_{\alpha, p}$ satisfy the following equations:
\begin{align*}
&\frac{\partial A^i_{\alpha, p}}{\partial u^j}=\Gamma^i_{ij}\left(A^j_{\alpha, p}-A^i_{\alpha, p}\right), \quad \mbox{for } j\ne i, \\
&\frac{\partial A^i_{\alpha, p}}{\partial u^i}=-\sum_{j\ne i}\frac{\partial A^i_{\alpha, p}}{\partial u^j}+A^i_{\alpha, p-1}, \quad A^i_{1, 0}=1.
\end{align*}
When $n=1$, we have $A^i_{1, p}=\frac{(u^1)^p}{p!}$, so $X_{1,p}\ (p>0)$ are always nondegenerate.
When $n\ge2$, a bihamiltonian vector field $X=\int\left(\sum_{i=1}^n A^i(u) u^{i,1}\theta_i\right)$ satisfying $[Z, X]=0$ is characterized by the following
equation
\begin{align}
&\frac{\partial A^i}{\partial u^j}=\Gamma^i_{ij}\left(A^j-A^i\right), \quad \mbox{for } j\ne i, \label{eq-AA-1}\\
&\frac{\partial A^i}{\partial u^i}=-\sum_{j\ne i}\frac{\partial A^i}{\partial u^j}. \label{eq-AA-2}
\end{align}
The solution space of this system has dimension $n$. Suppose $X$ is degenerate, that is, there exists $i_0\in \{1, \dots, n\}$ such that
\[0\equiv\frac{\partial A^{i_0}}{\partial u^{i_0}}=-\sum_{j\ne i_0}\Gamma^{i_0}_{i_0j}\left(A^j-A^{i_0}\right)=-\sum_{j=1}^n\Gamma^{i_0}_{i_0j}A^j.\]
Since $(P_1, P_2)$ is irreducible, there exists $j_0\in \{1, \dots, n\}$ with $j_0\ne i_0$ such that $\Gamma^{i_0}_{i_0j_0}(u)\ne 0$ for some $u\in D$, so from the above
equation we have
\[A^{j_0}=-\frac{1}{\Gamma^{i_0}_{i_0j_0}}\sum_{k\ne j_0}\Gamma^{i_0}_{i_0k}A^k.\]
Substituting this expression of $A^{j_0}$ into \eqref{eq-AA-1} and \eqref{eq-AA-2}, we obtain a new linear homogeneous system with unknowns $A^k\ (k\ne j_0)$. The dimension of the solution space
of this new system is at most $n-1$, so not all solutions of \eqref{eq-AA-1} and \eqref{eq-AA-2} are degenerate. The lemma is proved.
\end{prf}
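For $n=1$ the closed form $A^i_{1,p}=(u^1)^p/p!$ used in the proof can be checked mechanically; the sketch below assumes the integration constants in the recursion are set to zero.

```python
import sympy as sp

# n = 1 case of the proof: the recursion reduces to dA_p/du = A_{p-1}
# with A_0 = 1 (the off-diagonal terms are absent), giving A_p = u^p/p!.
# Setting the integration constants to zero is our choice here.
u = sp.symbols('u')
A = [sp.Integer(1)]
for p in range(1, 6):
    A.append(sp.integrate(A[p - 1], u))

for p in range(6):
    assert A[p] == u**p / sp.factorial(p)
for p in range(1, 6):
    assert sp.diff(A[p], u) != 0   # X_{1,p} is nondegenerate for p > 0
```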
\vskip 1em
Let us proceed to prove Proposition \ref{prp-25} which
shows that the functions $h_{\alpha,p}$ of a calibration of $(P_1, P_2; Z)$
satisfy the tau symmetry condition, and the associated principal hierarchy \eqref{hamilt01} possesses Galilean symmetry.
\vskip 1em
\begin{prfn}{Proposition \ref{prp-25}}
By using the chain rule and the properties of $\{h_{\alpha, p}\}$, we have
\begin{align*}
& \frac{\partial h_{\alpha, p-1}}{\partial t^{\beta, q}}=\sum_{i=1}^n \frac{\partial u^i}{\partial t^{\beta,q}}\frac{\partial h_{\alpha, p-1}}{\partial u^i}\\
=& \sum_{i=1}^n \left(\sum_{j=1}^n \nabla^i \nabla_j h_{\beta, q} u^{j,1}\right)
\left(\sum_{k=1}^n \frac{\partial^2 h_{\alpha, p}}{\partial u^i \partial u^k}\right)\\
=& \sum_{i=1}^n \left(f^i \nabla_i \nabla_i h_{\beta, q} u^{i,1}\right)
\left(\sum_{k=1}^n \frac{\partial^2 h_{\alpha, p}}{\partial u^i \partial u^k}\right).
\end{align*}
Note that the flatness of $Z$ implies the identity \eqref{zh-07},
so we have
\[\sum_{k=1}^n \frac{\partial^2 h_{\alpha, p}}{\partial u^i \partial u^k}=\sum_{k=1}^n \nabla_i\nabla_k h_{\alpha, p}
=\nabla_i\nabla_i h_{\alpha, p}.\]
Therefore,
\[\frac{\partial h_{\alpha, p-1}}{\partial t^{\beta, q}}=\sum_{i=1}^n f^i\left(\nabla_i \nabla_i h_{\beta, q}\right)
\left(\nabla_i\nabla_i h_{\alpha, p}\right) u^{i,1}=\frac{\partial h_{\beta, q-1}}{\partial t^{\alpha, p}}.\]
Next we show that for any $\gamma=1, \dots, n$,
\[\frac{\partial}{\partial t^{\alpha, p}}\frac{\partial v^\gamma}{\partial s}=\frac{\partial}{\partial s}\frac{\partial v^\gamma}{\partial t^{\alpha, p}}.\]
The left hand side reads
\[\frac{\partial}{\partial t^{\alpha, p}}\frac{\partial v^\gamma}{\partial s}=\frac{\partial v^\gamma}{\partial t^{\alpha, p-1}}
+\sum_{\beta, q}t^{\beta, q}\frac{\partial^2 v^\gamma}{\partial t^{\alpha, p}\partial t^{\beta, q-1}}.\]
Note that $\frac{\partial v^\gamma}{\partial t^{\alpha, p}}$ only depends on $v^\mu$ and $v^\mu_x$, so we have
\begin{align*}
&\frac{\partial}{\partial s}\frac{\partial v^\gamma}{\partial t^{\alpha, p}}=\frac{\partial v^\mu}{\partial s}\frac{\partial}{\partial v^\mu}\left(\frac{\partial v^\gamma}{\partial t^{\alpha, p}}\right)
+\frac{\partial v^\mu_x}{\partial s}\frac{\partial}{\partial v^\mu_x}\left(\frac{\partial v^\gamma}{\partial t^{\alpha, p}}\right)\\
=&\left(\delta^\mu_1+\sum_{\beta, q}t^{\beta, q}\frac{\partial v^\mu}{\partial t^{\beta, q-1}}\right)\frac{\partial}{\partial v^\mu}
\left(\frac{\partial v^\gamma}{\partial t^{\alpha, p}}\right)\\
&\quad+\left(\sum_{\beta, q}t^{\beta, q}\frac{\partial v^\mu_x}{\partial t^{\beta, q-1}}\right)
\frac{\partial}{\partial v^\mu_x}\left(\frac{\partial v^\gamma}{\partial t^{\alpha, p}}\right)\\
=&\frac{\partial}{\partial v^1}\frac{\partial v^\gamma}{\partial t^{\alpha, p}}+\sum_{\beta, q}t^{\beta, q}\frac{\partial^2 v^\gamma}{\partial t^{\beta, q-1}\partial t^{\alpha, p}},
\end{align*}
so we only need to show that $\frac{\partial v^\gamma}{\partial t^{\alpha, p-1}}=\frac{\partial}{\partial v^1}\frac{\partial v^\gamma}{\partial t^{\alpha, p}}$, which can be easily obtained
from the fact that $X_{\alpha, p-1}=[Z, X_{\alpha, p}]$. The proposition is proved.
\end{prfn}
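The tau-symmetry identity just proved can be seen concretely in the toy $n=1$ example (dispersionless KdV, an assumption used only for illustration), where $h_{1,p}=v^{p+2}/(p+2)!$ and the flows are $\partial v/\partial t^{1,q}=(v^q/q!)\,v_x$.

```python
import sympy as sp

# Tau symmetry in the toy n = 1 setting (an illustrative assumption):
# h_{1,p} = v^(p+2)/(p+2)!  and  dv/dt^{1,q} = (v^q/q!) v_x, so
# dh_{p-1}/dt^q = (v^p/p!)(v^q/q!) v_x, manifestly symmetric in (p, q).
v, vx = sp.symbols('v v_x')
h = lambda p: v**(p + 2) / sp.factorial(p + 2)
flow = lambda q: v**q / sp.factorial(q) * vx       # dv/dt^{1,q}

for p in range(4):
    for q in range(4):
        lhs = sp.diff(h(p - 1), v) * flow(q)       # dh_{1,p-1}/dt^{1,q}
        rhs = sp.diff(h(q - 1), v) * flow(p)       # dh_{1,q-1}/dt^{1,p}
        assert sp.simplify(lhs - rhs) == 0
```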
Since we have $[X_{\beta,q}, H_{\alpha,p-1}]=0$, $\frac{\partial h_{\alpha, p-1}}{\partial t^{\beta, q}}$ must be a total
$x$-derivative, so there exists a function $\Omega_{\alpha,p;\beta,q}\in \mathcal{A}_0$
such that
\begin{equation}
\frac{\partial h_{\alpha, p-1}}{\partial t^{\beta, q}}=\frac{\partial h_{\beta, q-1}}{\partial t^{\alpha, p}}=\partial \Omega_{\alpha,p;\beta,q}.
\end{equation}
The functions $\Omega_{\alpha,p;\beta,q}$ are determined up to the addition of constants, so one can adjust the constants such that these functions
satisfy some other properties which we describe below.
\begin{dfn}\label{zh-12-2}
A collection of functions
\[\{\Omega_{\alpha,p;\beta,q}\in\mathcal{A}_0\mid \alpha, \beta=1, \dots, n;\ p,q=0, 1, 2, \dots\}\]
is called a tau structure of the flat exact bihamiltonian structure $(P_1, P_2; Z)$
with a fixed calibration $\{h_{\alpha,p}\}$ if the following conditions are satisfied:
\begin{itemize}
\item[i)] $\partial\Omega_{\alpha, p; \beta, q}=\frac{\partial h_{\alpha, p-1}}{\partial t^{\beta, q}}=\frac{\partial h_{\beta, q-1}}{\partial t^{\alpha, p}}$.
\item[ii)] $\Omega_{\alpha,p;\beta,q}=\Omega_{\beta,q;\alpha,p}$.
\item[iii)] $\Omega_{\alpha,p;1,0}=h_{\alpha,p-1}$.
\end{itemize}
\end{dfn}
\begin{lem}\label{omega-2}
A tau structure $\{\Omega_{\alpha,p;\beta,q}\}$ satisfies the following equations:
\begin{equation}\label{zh-12}
\frac{\partial \Omega_{\alpha,p;\beta,q}}{\partial t^{\gamma,r}}=\frac{\partial \Omega_{\alpha,p;\gamma,r}}{\partial t^{\beta,q}}.
\end{equation}
\end{lem}
\begin{prf}By using Definition \ref{zh-12-2} of tau structures we have
\[\partial\left(\frac{\partial \Omega_{\alpha,p;\beta,q}}{\partial t^{\gamma,r}}-\frac{\partial \Omega_{\alpha,p;\gamma,r}}{\partial t^{\beta,q}}\right)
=\frac{\partial}{\partial t^{\gamma,r}}\frac{\partial h_{\alpha,p-1}}{\partial t^{\beta,q}}-\frac{\partial}{\partial t^{\beta,q}}\frac{\partial h_{\alpha,p-1}}{\partial t^{\gamma,r}}=0,\]
so the difference between the left hand side and the right hand side of \eqref{zh-12} is a constant. However, both sides can be represented as differential
polynomials of degree $1$, so the constant must be zero. The lemma is proved.
\end{prf}
\begin{dfn}[cf. \cite{DZ-NF}]\label{zh-01-22f}
Let $\{\Omega_{\alpha,p;\beta,q}\}$ be a tau structure of $(P_1, P_2; Z)$ with the calibration $\{h_{\alpha,p}\}$. The family of partial differential equations
\begin{align}
\frac{\partial f}{\partial t^{\alpha,p}} & =f_{\alpha,p}, \\
\frac{\partial f_{\beta,q}}{\partial t^{\alpha,p}} & =\Omega_{\alpha,p;\beta,q}(v), \\
\frac{\partial v^\gamma}{\partial t^{\alpha,p}} & =\eta^{\gamma\xi}\partial \Omega_{\alpha,p;\xi,0}(v)
\end{align}
with unknown functions $(f, \{f_{\beta,q}\}, \{v^\gamma\})$
is called the tau cover of the principal hierarchy \eqref{hamilt01} with respect to the tau structure $\{\Omega_{\alpha,p;\beta,q}\}$, and the function
$\tau=e^f$ is called the tau function of the principal hierarchy. Here $\alpha, \beta, \gamma=1,\dots, n$ and $p, q\ge 0$.
\end{dfn}
By using Lemma \ref{omega-2}, one can easily show that the flows of the tau cover commute with each other.
It is obvious that the covering map
\[(f, \{f_{\beta,q}\}, \{v^\gamma\})\mapsto (\{v^\gamma\})\]
pushes forward the
tau cover to the principal hierarchy. This is the reason why it is named ``tau cover''.
In the remaining part of this section, we assume that the calibration $\{h_{\alpha,p}\}$ is constructed from $\{\theta_{\alpha,p}\}$ as above, see
Proposition \ref{whatever}. We can construct, following \cite{Du-1}, the functions $\Omega_{\alpha,p;\beta,q}(v)$ by
\begin{equation}\frac{\partial h_{\alpha}(v; z_1)}{\partial v^\xi} \eta^{\xi\zeta} \frac{\partial h_{\beta}(v; z_2)}{\partial v^\zeta}-\eta_{\alpha\beta}
=(z_1+z_2)\sum_{p, q\ge 0} \Omega_{\alpha, p;\beta, q}(v) z_1^p z_2^q. \label{omega-dfn}
\end{equation}
We can easily prove the following proposition.
\begin{prp}\label{prop-3-12}
The collection of functions
\[\{\Omega_{\alpha, p; \beta, q}(v)\,|\, \alpha, \beta=1,\dots, n;\ p, q=0,1,2,\dots\}\]
is a tau structure of the flat exact bihamiltonian structure $(P_1, P_2; Z)$ with the given calibration $\{h_{\alpha,p}\}$.
\end{prp}
\begin{lem}\label{omega-3}
The functions $\{\Omega_{\alpha,p;\beta,q}\}$ constructed in \eqref{omega-dfn} satisfy
the identities
\begin{equation}
\frac{\partial \Omega_{\alpha,p;\beta,q}}{\partial v^1}=\Omega_{\alpha,p-1;\beta,q}+\Omega_{\alpha,p;\beta,q-1}+\eta_{\alpha\beta}\delta_{p0}\delta_{q0}.
\label{omega-iden-1}
\end{equation}
\end{lem}
\begin{prf}
For a fixed pair of indices $\{\alpha, \beta\}$, the above identities are equivalent to the identity
\begin{equation}
\frac{\partial \Omega_{\alpha;\beta}(v; z_1, z_2)}{\partial v^1}=(z_1+z_2)\Omega_{\alpha;\beta}(v; z_1, z_2)+\eta_{\alpha\beta}\label{omega-iden-2}
\end{equation}
for the generating function
\[\Omega_{\alpha;\beta}(v; z_1, z_2)=\sum_{p, q\ge 0} \Omega_{\alpha,p;\beta,q}(v) z_1^p z_2^q.\]
Note that the generating function $h_{\alpha}(v;\,z)$ satisfies
\[\frac{\partial h_{\alpha}(v; z)}{\partial v^1}=z\,h_{\alpha}(v; z)+\eta_{\alpha 1},\]
then the identity \eqref{omega-iden-2} can be easily proved by using the definition \eqref{omega-dfn}.
The lemma is proved.\end{prf}
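For $n=1$ with $F=v^3/6$ (again only an illustrative assumption) the generating function is $h(v;z)=(e^{vz}-1)/z$, so $\Omega_{p;q}=\binom{p+q}{p}\,v^{p+q+1}/(p+q+1)!$, and both \eqref{omega-dfn} and \eqref{omega-iden-1} can be verified coefficientwise.

```python
import sympy as sp

# Coefficientwise check of (omega-dfn) and (omega-iden-1) in the toy
# n = 1 case (F = v^3/6, an illustrative assumption), where
# h(v; z) = (e^{vz} - 1)/z and Omega_{p;q} = C(p+q, p) v^(p+q+1)/(p+q+1)!.
v, z1, z2 = sp.symbols('v z1 z2')
N = 5

def hser(z):
    # truncated series of (e^{vz} - 1)/z = sum_{p >= -1} h_p z^{p+1}
    return sum(v**(k + 1)/sp.factorial(k + 1) * z**k for k in range(2*N + 2))

Omega = lambda p, q: sp.binomial(p + q, p) * v**(p + q + 1)/sp.factorial(p + q + 1)
Om = lambda p, q: 0 if p < 0 or q < 0 else Omega(p, q)

# (omega-dfn): h'(z1) h'(z2) - 1 = (z1 + z2) sum Omega_{p,q} z1^p z2^q
lhs = sp.expand(sp.diff(hser(z1), v) * sp.diff(hser(z2), v) - 1)
rhs = sp.expand((z1 + z2)*sum(Omega(p, q)*z1**p*z2**q
                              for p in range(N) for q in range(N)))
residual = sp.expand(lhs - rhs)
for p in range(N):
    for q in range(N):
        assert residual.coeff(z1, p).coeff(z2, q) == 0

# (omega-iden-1): dOmega_{p,q}/dv = Omega_{p-1,q} + Omega_{p,q-1} + delta-term
for p in range(N):
    for q in range(N):
        extra = 1 if p == q == 0 else 0   # eta_{11} delta_{p0} delta_{q0}
        assert sp.simplify(sp.diff(Om(p, q), v)
                           - Om(p - 1, q) - Om(p, q - 1) - extra) == 0
```

Note that $\Omega_{p;0}=v^{p+1}/(p+1)!=h_{p-1}$ here, in agreement with condition iii) of Definition \ref{zh-12-2}.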
\begin{thm}\label{zh-01-22-g}
The tau cover admits the following Galilean symmetry:
\begin{align}
\frac{\partial f}{\partial s} & =\frac{1}{2}\eta_{\alpha\beta}t^{\alpha,0}t^{\beta,0}+\sum_{\alpha,p}t^{\alpha,p+1}f_{\alpha,p}, \label{SE-1}\\
\frac{\partial f_{\beta,q}}{\partial s} & =\eta_{\alpha\beta}t^{\alpha,0}\delta_{q0}+f_{\beta,q-1}+\sum_{\alpha,p}t^{\alpha,p+1}\Omega_{\alpha,p;\beta,q}, \label{SE-2}\\
\frac{\partial v^\gamma}{\partial s} & =\delta^{\gamma}_1+\sum_{\alpha,p}t^{\alpha,p+1}\frac{\partial v^{\gamma}}{\partial t^{\alpha,p}}. \label{SE-3}
\end{align}
\end{thm}
\begin{prf}
To prove that $\frac{\partial}{\partial s}$ is a symmetry of the tau cover, we only need to show:
\begin{align}
\left[\frac{\partial}{\partial s}, \frac{\partial}{\partial t^{\alpha, p}}\right] K=0,\label{identity-to-prove}
\end{align}
where $K=f$, $f_{\beta,q}$, or $v^\gamma$.
Denote the right hand side of \eqref{SE-1} by $W$, then \eqref{SE-2}, \eqref{SE-3} can be written as
\[\frac{\partial f_{\beta,q}}{\partial s}=\frac{\partial W}{\partial t^{\beta,q}}, \quad \frac{\partial v^\gamma}{\partial s}
=\eta^{\gamma\beta}\frac{\partial^2 W}{\partial t^{1,0}\partial t^{\beta,0}},\]
so the identity \eqref{identity-to-prove} is equivalent to the following one:
\[\frac{\partial}{\partial s}\Omega_{\alpha,p;\beta,q}=\frac{\partial^2}{\partial t^{\alpha,p}\partial t^{\beta,q}}W.\]
By using the chain rule, we have
\begin{align*}
& \frac{\partial}{\partial s}\Omega_{\alpha,p;\beta,q}=\frac{\partial \Omega_{\alpha,p;\beta,q}}{\partial v^{\gamma}}\frac{\partial v^{\gamma}}{\partial s}
=\frac{\partial \Omega_{\alpha,p;\beta,q}}{\partial v^{\gamma}}
\left(\delta^{\gamma}_1+\sum_{\xi,r}t^{\xi,r+1}\frac{\partial v^{\gamma}}{\partial t^{\xi,r}}\right)\\
=& \frac{\partial \Omega_{\alpha,p;\beta,q}}{\partial v^1}+\sum_{\xi,r}t^{\xi,r+1}\frac{\partial \Omega_{\alpha,p;\beta,q}}{\partial t^{\xi,r}}.
\end{align*}
On the other hand,
\begin{align*}
& \frac{\partial^2}{\partial t^{\alpha,p}\partial t^{\beta,q}}W=\frac{\partial}{\partial t^{\alpha,p}}\left(
\eta_{\xi\beta}t^{\xi,0}\delta_{q0}+f_{\beta,q-1}+\sum_{\xi,r}t^{\xi,r+1}\Omega_{\xi,r;\beta,q}
\right)\\
=& \eta_{\alpha\beta}\delta_{p0}\delta_{q0}+\Omega_{\alpha,p-1;\beta,q}+\Omega_{\alpha,p;\beta,q-1}
+\sum_{\xi,r}t^{\xi,r+1}\frac{\partial \Omega_{\xi,r;\beta,q}}{\partial t^{\alpha,p}}.
\end{align*}
The theorem then follows from Lemmas \ref{omega-2} and \ref{omega-3}.
\end{prf}
\section{Tau-symmetric integrable Hamiltonian deformations of the principal hierarchy}\label{sec-4}
Let $(P_1, P_2; Z)$ be a flat exact semisimple bihamiltonian structure of hydrodynamic type.
In this and the next section we consider properties of deformations of
the principal hierarchy \eqref{hamilt01} and its tau structure.
To this end, we fix a calibration $\{h_{\alpha, p}\}$ and a tau structure $\{\Omega_{\alpha, p; \beta, q}\}$
as in the previous section, and we assume that $(P_1, P_2; Z)$ is also irreducible.
Note that the principal hierarchy is determined by the first Hamiltonian structure $P_1$ and the calibration $\{h_{\alpha,p}\}$,
so we first consider their deformations.
\begin{dfn}\label{dfn-tau-sym}
The pair $(\tilde{P}_1, \{\tilde{h}_{\alpha, p}\})$ is called a tau-symmetric integrable deformation, or a deformation for short,
of $(P_1, \{h_{\alpha, p}\})$ if it satisfies the following conditions:
\begin{itemize}
\item[i)] $\tilde{P}_1\in \hat{\F}^2$ has the form
\[\tilde{P}_1=P_1+P^{[2]}_1+P^{[3]}_1+\dots,\]
where $P^{[k]}_1\in \hat{\F}^2_{k+1}$, and $\tilde{P}_1$ is a Hamiltonian structure.
\item[ii)] $\tilde{h}_{\alpha, p}$ has the form
\[\tilde{h}_{\alpha, p}=h_{\alpha, p}+h_{\alpha, p}^{[2]}+h_{\alpha, p}^{[3]}+\cdots,\]
where $h_{\alpha, p}^{[k]}\in\mathcal{A}_{k}$. Define $\tilde{H}_{\alpha, p}=\int (\tilde{h}_{\alpha, p})$, then
for any pair of indices $(\alpha, p)$, $(\beta, q)$ we must have
\begin{equation}
\{\tilde{H}_{\alpha, p}, \tilde{H}_{\beta, q}\}_{\tilde{P}_1}=0.
\end{equation}
Here $\{F, G\}_{\tilde{P}_1}=[[\tilde{P}_1, F], G]$ for $F, G \in \mathcal{F}$.
\item[iii)] Define $\tilde{X}_{\alpha, p}=-[\tilde{P}_1, \tilde{H}_{\alpha, p}]$, and denote $\tilde{\partial}_{\alpha, p}=D_{\tilde{X}_{\alpha, p}}$,
then $\{\tilde{h}_{\alpha, p}\}$ satisfy the tau-symmetry condition
\begin{equation}
\tilde{\partial}_{\alpha, p}\left(\tilde{h}_{\beta, q-1}\right)=\tilde{\partial}_{\beta, q}\left(\tilde{h}_{\alpha, p-1}\right).\label{zh-n-3}
\end{equation}
\end{itemize}
\end{dfn}
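A classical example to keep in mind is the KdV hierarchy as a deformation of the $n=1$ principal hierarchy; the normalization $v_t=v v_x+\frac{\varepsilon^2}{12}v_{xxx}$ below is our assumption, chosen to match the dispersionless limit of the earlier toy example. Condition iii) with $(\beta,q)=(1,0)$ then just says that the $t^{1,1}$-flow of $\tilde h_{1,-1}=v$ equals the total $x$-derivative of $\tilde h_{1,0}$.

```python
import sympy as sp

# Condition iii) for the KdV hierarchy, viewed as a deformation of the
# n = 1 principal hierarchy.  The normalization
# v_t = v v_x + (eps^2/12) v_xxx is our assumption, chosen so that the
# dispersionless limit matches the earlier toy example.
x, eps = sp.symbols('x epsilon')
v = sp.Function('v')(x)

# deformed density h~_{1,0} = v^2/2 + (eps^2/12) v_xx
h0 = v**2/2 + eps**2/12 * sp.diff(v, x, 2)

# tau symmetry with (beta, q) = (1, 0) demands that the t^{1,1}-flow of
# h~_{1,-1} = v equal d_x h~_{1,0}; for KdV it does:
kdv_rhs = v*sp.diff(v, x) + eps**2/12 * sp.diff(v, x, 3)
assert sp.simplify(kdv_rhs - sp.diff(h0, x)) == 0
```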
\begin{rmk}
Note that we assume that the deformation starts from the second degree, i.e. there are no $P_1^{[1]}$ and $h_{\alpha,p}^{[1]}$ terms.
Without this condition we can still prove the next lemma and then define the tau cover; we impose it to avoid some subtle problems in Theorem
\ref{thm-unq-2} (see Remark \ref{tau-rmk} for more details). Note that for integrable hierarchies that arise in the study of semisimple
cohomological field theories, there are no deformations with odd degrees.
\end{rmk}
A deformation of $(P_1, \{h_{\alpha, p}\})$ yields a {\em tau-symmetric integrable Hamiltonian
deformation of the principal hierarchy} \eqref{hamilt01} which consists of the flows
\begin{equation}\label{zh-n-17}
\frac{\partial v^\alpha}{\partial t^{\beta,q}}=D_{\tilde{X}_{\beta,q}}(v^\alpha),\quad 1\le \alpha, \beta\le n,\, q\ge 0.
\end{equation}
Here the evolutionary vector fields are given by
\[\tilde{X}_{\beta,q}=-[\tilde{P}_1, \tilde{H}_{\beta,q}].\]
From the property ii) of Definition \ref{dfn-tau-sym} we know that these deformed
evolutionary vector fields mutually commute, and so the associated flows, which we
denote by $\tilde{\partial}_{\beta,q}$, also mutually commute. This is the reason why we call the
above deformed hierarchy \eqref{zh-n-17} an integrable Hamiltonian deformation of the principal hierarchy.
We will show below that the deformed hierarchy also possesses a tau structure. We note that the notion of
\emph{tau-symmetric integrable Hamiltonian deformation} of the principal hierarchy associated to a Frobenius manifold was introduced in \cite{DLYZ}. In the definition given there the following additional conditions are required:
\begin{enumerate}
\item $\tilde{\partial}_{1,0}=\partial$.
\item $\tilde{H}_{\alpha,-1}$ are Casimirs of $\tilde{P}_1$.
\end{enumerate}
These two conditions are consequences of the Definition \ref{dfn-tau-sym}. In fact, since the evolutionary vector field $X$
corresponding to the flow $\tilde{\partial}_{1,0}-\partial$ is a symmetry of the deformed integrable hierarchy and it belongs to
$\hat{\F}^1_{\ge 2}$, by using the existence of a nondegenerate bihamiltonian vector field proved in Lemma \ref{new-lemma} and the property ii) of Corollary \ref{app-cor} we know that $X$ must vanish. Thus we have
\begin{equation}\label{zh-n-12}
\tilde{\partial}_{1, 0}=\partial.
\end{equation}
Similarly, from the fact that $[P_1, H_{\alpha,-1}]=0$ we know that the vector field
$X=-[\tilde{P}_1, \tilde{H}_{\alpha,-1}]\in\hat{\F}^1_{\ge 2}$. Since it is a symmetry of the deformed integrable hierarchy
\eqref{zh-n-17} we know that it also vanishes. Thus the second condition also holds true.
\begin{lem}
For any deformation $(\tilde{P}_1, \{\tilde{h}_{\alpha, p}\})$ of $(P_1, \{h_{\alpha, p}\})$, there exists a unique collection of differential polynomials
$\{\tilde{\Omega}_{\alpha, p; \beta, q}\}$
satisfying the following conditions:
\begin{itemize}
\item[i)]
$\tilde{\Omega}_{\alpha, p; \beta, q}=\Omega_{\alpha, p; \beta, q}+\Omega_{\alpha, p; \beta, q}^{[2]}+\Omega_{\alpha, p; \beta, q}^{[3]}+\cdots$,
where $\Omega_{\alpha, p; \beta, q}^{[k]}\in\mathcal{A}_{k}$.
\item[ii)] $\partial \tilde{\Omega}_{\alpha, p; \beta, q}=\tilde{\partial}_{\alpha, p}\left(\tilde{h}_{\beta, q-1}\right)$.
\item[iii)] $\tilde{\Omega}_{\alpha, p; \beta, q}=\tilde{\Omega}_{\beta, q; \alpha, p}$, and\,
$\tilde{\Omega}_{\alpha, p; 1, 0}=\tilde{h}_{\alpha, p-1}$.
\item[iv)]
$\tilde{\partial}_{\gamma, r}\tilde{\Omega}_{\alpha, p; \beta, q}=\tilde{\partial}_{\beta, q}\tilde{\Omega}_{\alpha, p; \gamma, r}$.
\end{itemize}
Here $\alpha, \beta, \gamma=1, \dots, n$ and $p, q, r\ge0$.
This collection of differential polynomials $\{\tilde{\Omega}_{\alpha, p; \beta, q}\}$ is called a \emph{tau structure} of $(\tilde{P_1}, \{\tilde{h}_{\alpha, p}\})$.
\end{lem}
\begin{prf}
According to the definition of $\{\tilde{h}_{\alpha, p}\}$,
\[\int\left(\tilde{\partial}_{\alpha, p}\left(\tilde{h}_{\beta, q}\right)\right)
=[\tilde{X}_{\alpha, p}, \tilde{H}_{\beta, q}]=-\{\tilde{H}_{\alpha, p}, \tilde{H}_{\beta, q}\}_{\tilde{P}_1}=0,\]
so there exists $\tilde{\Omega}_{\alpha, p; \beta, q}$ satisfying the conditions i), ii). These conditions determine $\tilde{\Omega}_{\alpha, p; \beta, q}$
up to a constant, which has degree zero. Note that the condition i) fixes the degree zero part of $\tilde{\Omega}_{\alpha, p; \beta, q}$, so it
is unique. The conditions iii) and iv) can be verified by considering the action of $\partial$ on both sides of the equalities, as we did
in the proof of Lemma \ref{omega-2}.
\end{prf}
\begin{dfn}[\cite{DZ-NF}]
The differential polynomials
\begin{equation}\label{zh-15}
w^{\alpha}=\eta^{\alpha\beta}\tilde{h}_{\beta,-1}=v^{\alpha}+F^{\alpha}_2+F^{\alpha}_3+\cdots,\quad F^\alpha_k\in\mathcal{A}_{k}
\end{equation}
are called the \emph{normal coordinates} of $(\tilde{P_1}, \{\tilde{h}_{\alpha, p}\})$ and of the deformed principal hierarchy \eqref{zh-n-17}.
\end{dfn}
The properties of the differential polynomials $\tilde{\Omega}_{\alpha,p;\beta,q}$ enable us to define the tau cover for $(\tilde{P_1}, \{\tilde{h}_{\alpha, p}\})$
and the deformed principal hierarchy \eqref{zh-n-17}, just as we did for the principal hierarchy given in Definition \ref{zh-01-22f}. From \eqref{zh-15} we know
that we can also represent $v^\alpha$ in the form
\begin{equation}\label{zh-16}
v^\alpha=w^\alpha+\tilde{F}^{\alpha}_2+\tilde{F}^{\alpha}_3+\cdots,
\end{equation}
where $\tilde{F}^\alpha_k$ are differential polynomials of $w^1,\dots, w^n$ of degree
$k$. So the functions $\tilde{\Omega}_{\alpha,p;\beta, q}(v, v_x,\dots)$ can also be represented as
differential polynomials in $w^1,\dots, w^n$ by the change of coordinates formulae given in \eqref{zh-16}.
\begin{dfn}[cf. \cite{DZ-NF}]
The family of partial differential equations
\begin{align}
\frac{\partial \tilde{f}}{\partial t^{\alpha,p}} & =\tilde{f}_{\alpha,p}, \label{zh-18a}\\
\frac{\partial \tilde{f}_{\beta,q}}{\partial t^{\alpha,p}} & =\tilde{\Omega}_{\alpha,p;\beta,q}, \\
\frac{\partial w^\gamma}{\partial t^{\alpha,p}} & =\eta^{\gamma\xi}\partial \tilde{\Omega}_{\alpha,p;\xi,0}\label{zh-18b}
\end{align}
with the unknown functions $(\{w^\alpha\}, \{\tilde{f}_{\alpha,p}\}, \tilde{f})$
is called the tau cover of the deformed principal hierarchy \eqref{zh-n-17} with respect to
the tau structure $\{\tilde\Omega_{\alpha,p;\beta, q}\}$, and the function $\tilde{\tau}=e^{\tilde{f}}$ is called the tau function of the deformed
principal hierarchy.
\end{dfn}
\begin{dfn}\label{equivalent}
Suppose $(\tilde{P}_1, \{\tilde{h}_{\alpha, p}\})$ and $(\hat{P}_1, \{\hat{h}_{\alpha, p}\})$ are two deformations of $(P_1, \{h_{\alpha, p}\})$.
Define $\tilde{H}_{\alpha, p}=\int\left(\tilde{h}_{\alpha, p}\right)$ and $\hat{H}_{\alpha, p}=\int\left(\hat{h}_{\alpha, p}\right)$.
If there exists a Miura transformation $e^{\mathrm{ad}_Y}\ (Y\in \hat{\F}^1_{\ge1})$ such that
\[\hat{P}_1=e^{\mathrm{ad}_Y}\left(\tilde{P}_1\right), \quad \hat{H}_{\alpha, p}=e^{\mathrm{ad}_Y}\left(\tilde{H}_{\alpha, p}\right),\]
then we say that $(\tilde{P}_1, \{\tilde{h}_{\alpha, p}\})$ and $(\hat{P}_1, \{\hat{h}_{\alpha, p}\})$ are equivalent.
\end{dfn}
If $(\tilde{P}_1, \{\tilde{h}_{\alpha, p}\})$ and $(\hat{P}_1, \{\hat{h}_{\alpha, p}\})$ are equivalent, then
\[\hat{X}_{\alpha, p}=-[\hat{P}_1, \hat{H}_{\alpha, p}]=-e^{\mathrm{ad}_Y}\left([\tilde{P}_1, \tilde{H}_{\alpha, p}]\right)
=e^{\mathrm{ad}_Y}\left(\tilde{X}_{\alpha, p}\right),\]
which is equivalent to $\hat{\partial}_{\alpha, p}=e^{D_Y}\tilde{\partial}_{\alpha, p}e^{-D_Y}$.
The associated deformed principal hierarchy has the form (cf. \eqref{zh-n-17})
\begin{equation}\label{zh-17}
\frac{\partial v^\alpha}{\partial t^{\beta,q}}=D_{\hat{X}_{\beta,q}}(v^\alpha),\quad 1\le \alpha, \beta\le n,\, q\ge 0.
\end{equation}
It is obtained from \eqref{zh-n-17} by rewriting the equations of the hierarchy in terms of the new unknown functions
$\tilde{v}^\alpha=e^{-D_Y}\left(v^\alpha\right)$ and then re-denoting $\tilde{v}^\alpha,
\tilde{v}^\alpha_x,\dots$ by $v^\alpha, v^\alpha_x,\dots$.
\begin{thm} \label{thm-unq-2}
Suppose $(\tilde{P}_1, \{\tilde{h}_{\alpha, p}\})$ and $(\hat{P}_1, \{\hat{h}_{\alpha, p}\})$ are two equivalent deformations related
by a Miura transformation $e^{\mathrm{ad}_Y}$, and they have tau structures $\{\tilde{\Omega}_{\alpha,p;\beta,q}\}$ and $\{\hat{\Omega}_{\alpha,p;\beta,q}\}$
respectively. Then there exists a differential polynomial $G$ such that
\begin{align*}
& \hat{h}_{\alpha, p}=e^{D_Y}\left(\tilde{h}_{\alpha, p}\right)+\partial\hat{\partial}_{\alpha, p} G,\\
& \hat{\Omega}_{\alpha, p; \beta, q}=e^{D_Y}\left(\tilde{\Omega}_{\alpha, p; \beta, q}\right)+\hat{\partial}_{\alpha, p}\hat{\partial}_{\beta, q}G.
\end{align*}
Moreover, suppose $\{\tilde{f}(t), \{\tilde{f}_{\alpha,p}(t)\}, \{\tilde{w}^\alpha(t)\}\}$ is a solution to the tau cover corresponding to the tau structure
$\{\tilde{\Omega}_{\alpha, p; \beta, q}\}$, then
\[\hat{f}(t)=\tilde{f}(t)+G(t),\quad \hat{f}_{\alpha, p}(t)=\tilde{f}_{\alpha, p}(t)+\frac{\partial G(t)}{\partial t^{\alpha,p}},\quad
\hat{w}^\alpha(t)=\tilde{w}^\alpha(t)+\eta^{\alpha\beta}\frac{\partial^2 G(t)}{\partial x\partial t^{\beta, 0}}\]
give a solution $\{\hat{f}(t), \{\hat{f}_{\alpha,p}(t)\}, \{\hat{w}^\alpha(t)\}\}$ to the tau cover corresponding to the tau structure
$\{\hat{\Omega}_{\alpha, p; \beta, q}\}$ and the associated deformed principal hierarchy.
Here $G(t)$ is defined from the differential polynomial $G=G(v, v_x,\dots)$ by
\[ G(t)=\left.\left(e^{-D_Y}G(v, v_x,\dots)\right)\right|_{v^\alpha=v^\alpha(\tilde{w}(t), \tilde{w}_x(t),\dots)},\]
and $v^\alpha=v^\alpha(\tilde{w}, \tilde{w}_x,\dots)$ are defined by inverting the relation $\tilde{w}^\alpha=\eta^{\alpha\gamma} \tilde{h}_{\gamma, -1}(v, v_x, \dots)$
just as we did in \eqref{zh-16}.
\end{thm}
\begin{prf}
The condition $\hat{H}_{\alpha, p}=e^{\mathrm{ad}_Y}\left(\tilde{H}_{\alpha, p}\right)$ implies that there exists $g_{\alpha, p}\in\mathcal{A}_{\ge1}$ such that
\[\hat{h}_{\alpha, p}=e^{D_Y}\left(\tilde{h}_{\alpha, p}\right)+\partial g_{\alpha, p}.\]
The tau-symmetry condition $\hat{\partial}_{\alpha, p}\hat{h}_{\beta, q-1}=\hat{\partial}_{\beta, q}\hat{h}_{\alpha, p-1}$ for
$\{\hat{h}_{\alpha,p}\}$ and the one for $\{\tilde{h}_{\alpha, p}\}$
implies that
\[\partial\left(\hat{\partial}_{\alpha, p}g_{\beta, q-1}-\hat{\partial}_{\beta, q}g_{\alpha, p-1}\right)=0,\]
so we have $\hat{\partial}_{\alpha, p}g_{\beta, q-1}=\hat{\partial}_{\beta, q}g_{\alpha, p-1}$. In particular, by taking $(\beta, q)=(1, 0)$, we have
\[\hat{\partial}_{\alpha, p}g_{1,-1}=\partial g_{\alpha, p-1},\]
so $\int\left(g_{1,-1}\right)$ gives a conserved quantity for $\hat{\partial}_{\alpha, p}$ with a positive degree. According to Theorem \ref{app-thm},
there exists $G\in\mathcal{A}$ such that
\begin{equation}\label{zh-n-15}
g_{1,-1}=\partial G,
\end{equation}
then we have
\[\partial\left(\hat{\partial}_{\alpha, p} G-g_{\alpha, p-1}\right)=0,\]
so $g_{\alpha,p-1}=\hat{\partial}_{\alpha, p}G$ for $\alpha=1,\dots, n, p\ge 0$. Thus we have
\begin{align*}
&\partial \hat{\Omega}_{\alpha, p; \beta, q}=\hat{\partial}_{\alpha, p}\hat{h}_{\beta, q-1}=\hat{\partial}_{\alpha, p}
\left(e^{D_Y}\left(\tilde{h}_{\beta,q-1}\right)+\partial \hat{\partial}_{\beta, q} G\right)\\
=& e^{D_Y}\tilde{\partial}_{\alpha, p}\left(\tilde{h}_{\beta,q-1}\right)+\partial \hat{\partial}_{\alpha, p}\hat{\partial}_{\beta, q} G
=\partial \left(e^{D_Y}\left(\tilde{\Omega}_{\alpha,p;\beta,q}\right)+\hat{\partial}_{\alpha, p}\hat{\partial}_{\beta, q}G\right),
\end{align*}
so the difference between $\hat{\Omega}_{\alpha, p; \beta, q}$ and $e^{D_Y}\left(\tilde{\Omega}_{\alpha,p;\beta,q}\right)
+\hat{\partial}_{\alpha, p}\hat{\partial}_{\beta, q}G$ is a constant. However, they have the same leading terms, so the constant must be zero.
The remaining assertions of the theorem follow from our definition of the tau covers of the deformed principal hierarchies.
The theorem is proved.
\end{prf}
\begin{rmk}\label{tau-rmk}
If in Definition \ref{dfn-tau-sym} we permit the appearance of first degree deformations, i.e. $P_1^{[1]}$ and $h_{\alpha, p}^{[1]}$, the first identity
of the above theorem should be replaced by
\[\hat{h}_{\alpha, p}=e^{D_Y}\left(\tilde{h}_{\alpha, p}\right)+\hat{\partial}_{\alpha, p}\sigma,\]
where $\sigma$ is a conserved density of $\hat{\partial}_{\alpha, p}$, and the solutions $\hat{f}=\log\hat{\tau}$ and $\tilde{f}=\log\tilde{\tau}$
of the tau covers of $\hat{\Omega}_{\alpha,p;\beta,q}$ and $\tilde{\Omega}_{\alpha,p;\beta,q}$ satisfy the relation
\[\partial\left(\log\hat{\tau}-\log\tilde{\tau}\right)=\sigma.\]
The different tau functions defined in \cite{EF, Mira, Wu} for the Drinfeld--Sokolov hierarchies have such a relationship.
\end{rmk}
Next let us consider the Galilean symmetry of the deformed principal hierarchy.
\begin{dfn}
The triple $(\tilde{P}_1, \{\tilde{h}_{\alpha,p}\}, \tilde{Z})$ is a deformation of $(P_1, \{h_{\alpha, p}\}, Z)$ if
\begin{itemize}
\item[i)] The pair $(\tilde{P}_1, \{\tilde{h}_{\alpha,p}\})$ is a deformation of $(P_1, \{h_{\alpha, p}\})$.
\item[ii)] The vector field $\tilde{Z}$ has the form
\[\tilde{Z}=Z+Z^{[2]}+Z^{[3]}+\dots, \quad Z^{[k]}\in \hat{\F}^{1}_k,\]
and satisfies conditions $[\tilde{Z}, \tilde{P}_1]=0$ and
\[D_{\tilde{Z}} \tilde{h}_{\alpha,-1}=\eta_{\alpha,1},\quad D_{\tilde{Z}} \tilde{h}_{\alpha,p}=\tilde{h}_{\alpha,p-1},\quad \alpha=1,\dots, n,\,p\ge 0.\]
\end{itemize}
\end{dfn}
\begin{lem}\label{omega-w}
Let $\{\tilde{\Omega}_{\alpha,p;\beta,q}\}$ be a tau structure of $(\tilde{P}_1, \{\tilde{h}_{\alpha,p}\}, \tilde{Z})$, and let $w^1,\dots, w^n$
be the normal coordinates. Assume that the identity \eqref{omega-iden-1} holds true; then we have:
\[
\frac{\partial \tilde{\Omega}_{\alpha,p;\beta,q}}{\partial w^1}=
\tilde{\Omega}_{\alpha,p-1;\beta,q}+\tilde{\Omega}_{\alpha,p;\beta,q-1}+\eta_{\alpha\beta}\delta_{p0}\delta_{q0}.
\]
\end{lem}
\begin{prf}
According to Lemma \ref{omega-3}, we only need to show that
\[\partial\frac{\partial \tilde{\Omega}_{\alpha,p;\beta,q}}{\partial w^1}=
\partial\tilde{\Omega}_{\alpha,p-1;\beta,q}+\partial\tilde{\Omega}_{\alpha,p;\beta,q-1},\]
that is,
\begin{equation}
\frac{\partial}{\partial w^1}\left(\tilde{\partial}_{\beta, q}\left(\tilde{h}_{\alpha, p-1}\right)\right)
=\tilde{\partial}_{\beta, q}\left(\tilde{h}_{\alpha, p-2}\right)+\tilde{\partial}_{\beta, q-1}\left(\tilde{h}_{\alpha, p-1}\right). \label{idid}
\end{equation}
We first note that one can replace $\frac{\partial}{\partial w^1}$ by $D_{\tilde{Z}}$. This is because
\[D_{\tilde{Z}}=\partial^s\left(D_{\tilde{Z}}\left(w^\gamma\right)\right)\frac{\partial}{\partial w^{\gamma, s}}
=\partial^s\left(\delta^\gamma_1\right)\frac{\partial}{\partial w^{\gamma, s}}=\frac{\partial}{\partial w^1}.\]
Then the identity \eqref{idid} is equivalent to $[D_{\tilde{Z}}, \tilde{\partial}_{\beta, q}]=\tilde{\partial}_{\beta, q-1}$, which follows from the identities
$\tilde{\partial}_{\beta, q}=-D_{[\tilde{P}_1, \tilde{H}_{\beta, q}]}$, and
\[[D_{\tilde{Z}}, D_{[\tilde{P}_1, \tilde{H}_{\beta, q}]}]=D_{[\tilde{Z}, [\tilde{P}_1, \tilde{H}_{\beta, q}]]}
=D_{[\tilde{P}_1, \tilde{H}_{\beta, q-1}]}.\]
The lemma is proved.
\end{prf}
Similar to Theorem \ref{zh-01-22-g}, we have the following theorem on the Galilean
symmetry of the deformed hierarchy $\{\tilde{\partial}_{\alpha, p}\}$.
\begin{thm}\label{stringEquation}
Under the assumption of Lemma \ref{omega-w}, the above defined tau cover \eqref{zh-18a}--\eqref{zh-18b} admits the following Galilean symmetry:
\begin{align}
\frac{\partial \tilde{f}}{\partial s} & =\frac{1}{2}\eta_{\alpha\beta}t^{\alpha,0}t^{\beta,0}+\sum_{\alpha,p}t^{\alpha,p+1}\tilde{f}_{\alpha,p},\\
\frac{\partial \tilde{f}_{\beta,q}}{\partial s} & =\eta_{\alpha\beta}t^{\alpha,0}\delta_{q0}+\tilde{f}_{\beta,q-1}+\sum_{\alpha,p}t^{\alpha,p+1}\tilde{\Omega}_{\alpha,p;\beta,q}, \\
\frac{\partial w^\gamma}{\partial s} & =\delta^{\gamma}_1+\sum_{\alpha,p}t^{\alpha,p+1}\frac{\partial w^{\gamma}}{\partial t^{\alpha,p}}.
\end{align}
\end{thm}
\begin{prf}
We can prove the theorem by using the same argument as the one given in the proof of Theorem
\ref{zh-01-22-g}, and by using Lemma \ref{omega-w}.
\end{prf}
\begin{emp}\label{zh-12-31b}
Let $c=\{c_{g,n}: V^{\otimes n}\to H^*(\overline{\mathcal{M}}_{g, n}, \mathbb{Q})\}$ be a semisimple cohomological
field theory. Its genus zero part defines a semisimple Frobenius manifold, which corresponds to a flat exact semisimple bihamiltonian structure of hydrodynamic type. Its principal hierarchy
has a useful deformation, called topological deformation, such that the partition function of $c$ is a tau function of this deformed hierarchy
\cite{DZ-NF, BPS-1, BPS-2}. On the other hand, Buryak constructed another deformation, called double ramification deformation, from the same data,
and conjectured that they are actually equivalent \cite{Bu}. This conjecture is refined in \cite{BD} as follows:
Suppose $\mathcal{F}$ is the free energy of the topological deformation. Buryak \emph{et al.} show that there exists a unique differential polynomial $P$
such that $\mathcal{F}^{\mathrm{red}}=\mathcal{F}+P$ satisfies the following condition:
\[\left.\mathrm{Coef}_{\epsilon^{2g}}\frac{\partial ^{n}\mathcal{F}^{\mathrm{red}}}{\partial t^{\alpha_1, p_1}\cdots\partial t^{\alpha_n, p_n}}\right|_{t^{*, *}=0}=0,\quad
p_1+\cdots+p_n\le 2g-2. \]
It is conjectured that $\mathcal{F}^{\mathrm{red}}$ is just the free energy of the double ramification deformation.
The refined conjecture of Buryak \emph{et al.} is compatible with our Theorem \ref{thm-unq-2}. They also show that the double ramification deformation
satisfies the string equation, which can also be derived from our Theorem \ref{stringEquation}.
\end{emp}
\section{Tau-symmetric bihamiltonian deformations of the principal hierarchy}\label{sec-5}
In this section, we construct a class of tau-symmetric integrable Hamiltonian deformations
of the principal hierarchy associated with a semisimple flat exact bihamiltonian structure $(P_1, P_2; Z)$ of hydrodynamic type.
These deformations of the principal hierarchy are in fact bihamiltonian integrable hierarchies.
From \cite{CPS-2, BCIH-I} we know that the bihamiltonian structure $(P_1, P_2)$
possesses deformations of the form
\[ \tilde{P}_1=P_1+\sum_{k\ge 1} Q_{1,k},\quad
\tilde{P}_2=P_2+\sum_{k\ge 1} Q_{2,k},\quad Q_{1,k}, Q_{2, k}\in \hat{\F}^2_{k+1}\]
such that $(\tilde{P}_1, \tilde{P}_2)$ is still a bihamiltonian structure, i.e.
\[[\tilde{P}_a, \tilde{P}_b]=0, \quad a, b=1, 2.\]
The space of deformations of the bihamiltonian structure $(P_1, P_2)$ is characterized
by the central invariants $c_1(u), \dots, c_n(u)$ of $(\tilde{P}_1, \tilde{P}_2)$.
The following theorem of Falqui and Lorenzoni gives a condition under which the deformed bihamiltonian structure inherits the exactness property.
This means that there exists a vector field
$\tilde{Z}\in \hat{\F}^1$ such that
\[[\tilde{Z}, \tilde{P}_1]=0,\quad [\tilde{Z}, \tilde{P}_2]=\tilde{P}_1.\]
\begin{thm}[\cite{FL}]\label{thm-FL}
The deformed bihamiltonian structure $(\tilde{P}_1, \tilde{P}_2)$ is exact if and only if its central invariants $c_1, \dots, c_n$ are constant functions.
Moreover, there exists a Miura type transformation $g$ such that
\begin{equation}\label{zh-16-1}
g(\tilde{P}_1)=P_1,\quad
g(\tilde{P}_2)=P_2+\sum_{k\ge 1} Q_{k},\quad Q_{k}\in \hat{\F}^2_{2k+1}
\end{equation}
and $g(\tilde{Z})=Z$, where $Z=Z_0$ is given by \eqref{eq-Z0}.
\end{thm}
In what follows, we assume that $(\tilde{P}_1, \tilde{P}_2; \tilde{Z})$ is a deformation of the
flat exact bihamiltonian structure $(P_1, P_2; Z)$ with constant central invariants $c_1, \dots, c_n$,
that $\tilde{P}_1, \tilde{P}_2$ have the form given in \eqref{zh-16-1}, and that $\tilde{Z}=Z$.
We denote by $u^1,\dots, u^n$ and $v^1,\dots, v^n$ the canonical coordinates of $(P_1, P_2)$ and the flat coordinates of $P_1$
respectively. We also fix a calibration
\[\{h_{\alpha, p}(v)\in\mathcal{A}_0 \mid \alpha=1,\dots, n;\ p=0, 1,2, \dots\}\]
and a tau structure
\[\{\Omega_{\alpha,p;\beta,q}(v)\in\mathcal{A}_0\mid \alpha, \beta=1, \dots, n;\ p,q=0, 1, 2, \dots\}\]
of the flat exact bihamiltonian structure $(P_1, P_2; Z)$ (see their construction given in Propositions \ref{whatever}, \ref{prop-3-12} above).
We define the space of Casimirs of $\tilde{P}_1$, the space of bihamiltonian conserved quantities, and the space of bihamiltonian vector fields,
just as we did for $(P_1, P_2)$, as follows:
\begin{align*}
\hat{\V}:=& \mathrm{Ker}([\tilde{P}_1, \cdot])\cap \mathcal{F}, \\
\hat{\H}:=& \mathrm{Ker}([\tilde{P}_2, [\tilde{P}_1, \cdot]])\cap \mathcal{F}, \\
\hat{\X}:=& \mathrm{Ker}([\tilde{P}_1, \cdot])\cap \mathrm{Ker}([\tilde{P}_2, \cdot])\cap \hat{\F}^1.
\end{align*}
\begin{thm}\label{thm-31}
We have the following isomorphisms:
\begin{equation}
\mathcal{V}\cong \hat{\V},\quad \H\cong \hat{\H},\quad \mathcal{X}\cong \hat{\X}.
\end{equation}
In particular, $\hat{\X}\cong\hat{\H}/\hat{\V}$.
\end{thm}
\begin{prf}
Since $\tilde{P}_1=P_1$, we only need to prove that $\H\cong \hat{\H}$, $\mathcal{X}\cong \hat{\X}$.
Suppose $H\in\hat{\H}$ is a bihamiltonian conserved quantity of $(\tilde{P}_1, \tilde{P}_2)$. Expand $H$ as the sum of homogeneous components
\[H=H_0+H_1+H_2+\cdots,\quad H_k\in\mathcal{F}_{2 k},\]
then $H_0$ is a bihamiltonian conserved quantity of $(P_1, P_2)$, so we have a map $\pi:\hat{\H}\to\H$, $H\mapsto H_0$.
The fact that $\H$ is concentrated in degree zero (see Lemma \ref{lem-23}) implies that $\pi$ is injective. To prove the isomorphism $\H\cong\hat{\H}$,
we only need to show that $\pi$ is surjective, that is, for any bihamiltonian conserved quantity $H_0$ of $(P_1, P_2)$ there exists a
bihamiltonian conserved quantity $H$ of $(\tilde{P}_1, \tilde{P}_2)$ with $H_0$ as its leading term.
Recall that $(\tilde{P}_1, \tilde{P}_2; Z)$ takes the form \eqref{zh-16-1}. If we
denote $d_a=[P_a, \cdot]\ (a=1,2)$, then the $Q_k$ satisfy the following equations:
\[d_1Q_k=0,\quad d_2 Q_k+\frac12\sum_{i=1}^{k-1}[Q_i, Q_{k-i}]=0.\]
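These equations are nothing but the degree-homogeneous components of the bihamiltonian conditions. As a quick consistency check (using only the symmetry $[P, Q]=[Q, P]$ of the bracket for $P, Q\in\hat{\F}^2$), substituting the expansion \eqref{zh-16-1} into $[\tilde{P}_1, \tilde{P}_2]=0$ and $[\tilde{P}_2, \tilde{P}_2]=0$ gives
\[[P_1, P_2]+\sum_{k\ge 1}[P_1, Q_k]=0,\qquad
[P_2, P_2]+2\sum_{k\ge 1}[P_2, Q_k]+\sum_{i, j\ge 1}[Q_i, Q_j]=0,\]
and collecting the homogeneous components of each degree (recall $[P_1, P_2]=[P_2, P_2]=0$) yields precisely $d_1 Q_k=0$ and $d_2 Q_k+\frac12\sum_{i=1}^{k-1}[Q_i, Q_{k-i}]=0$.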
We assert that, for any bihamiltonian conserved quantity $H_0\in\H$ of $(P_1, P_2)$, there exists $H_k\in\mathcal{F}_{2k}$ such that
\[H=H_0+H_1+H_2+\cdots\]
is a bihamiltonian conserved quantity of $(\tilde{P}_1, \tilde{P}_2)$. This assertion is equivalent to the solvability of the following equations,
which determine the $H_k$ recursively:
\[d_1d_2H_k=\sum_{i=1}^k[Q_i, d_1 H_{k-i}], \quad k=1, 2, \dots.\]
Assume that we have already solved the above equations for $H_1, \dots, H_{k-1}$ starting from $H_0$. Denote by $W_k$ the right hand side of the above equation.
Then it is easy to see that $d_1W_k=0$, and
\begin{align*}
& d_2W_k = \left[P_2, \sum_{i=1}^k[Q_i, d_1 H_{k-i}]\right]\\
=& -\sum_{i=1}^k\left([[d_1H_{k-i}, P_2], Q_i]+[[P_2, Q_i], d_1 H_{k-i}]\right)\\
=& \sum_{i=1}^k\left([d_1d_2H_{k-i}, Q_i]+[-d_2Q_i, d_1 H_{k-i}]\right)\\
=& \sum_{i=1}^k\sum_{j=1}^{k-i}[[Q_j, d_1 H_{k-i-j}], Q_i]+\frac12\sum_{m=1}^k\sum_{i=1}^{m-1}[[Q_i,Q_{m-i}],d_1H_{k-m}]\\
=& \frac12\sum_{i,j\ge1, l\ge0, i+j+l=k}\left([[Q_j, d_1 H_l], Q_i]+[[Q_i, d_1 H_l], Q_j]+[[Q_i,Q_j], d_1 H_l]\right)\\
=& 0,
\end{align*}
so $W_k\in\mathrm{Ker}(d_1)\cap\mathrm{Ker}(d_2)\cap\hat{\F}^2_{\ge4}$. Since $BH^2_{\ge4}(\hat{\F})\cong0$, there exists $H_k\in\mathcal{F}$ such that
$W_k=d_1d_2H_k$. Thus the isomorphism $\H\cong\hat{\H}$ is proved.
It is easy to see that the map
\[ d_1:\, \hat{\H}/\hat{\V}\to\hat{\X},\quad H\mapsto X=-[\tilde{P}_1, H]\]
gives the isomorphism $\hat{\H}/\hat{\V}\cong \hat{\X}$, which also induces the isomorphism $\mathcal{X}\cong\hat{\X}$.
The theorem is proved.
\end{prf}
It follows from the above theorem that there exist unique deformations
\[\tilde{H}_{\alpha, p}=H_{\alpha,p}+H_{\alpha,p}^{[1]}+H_{\alpha,p}^{[2]}+\dots,\quad
H_{\alpha,p}^{[k]}\in \mathcal{F}_{2k}\]
of the bihamiltonian conserved quantities $H_{\alpha, p}=\int (h_{\alpha,p})\in \H$ such that, together
with the constant local functional $\int (1)$, they form a basis of the subspace
\[ \hat{\H}^{\infty}=\bigcup_{p\ge0}\hat{\H}^{(p)}\]
of $\hat{\H}$, where $\hat{\H}^{(p)}$ is the image of $\H^{(p)}$ in $\hat{\H}$ under the isomorphism given in the above theorem. For any pair of indices $(\alpha, p)$, $(\beta, q)$,
it is easy to see that the local functional
$H=\{\tilde{H}_{\alpha,p}, \tilde{H}_{\beta,q}\}_{\tilde{P}_1}:=[[\tilde{P}_1, \tilde{H}_{\alpha,p}], \tilde{H}_{\beta,q}]$ is a bihamiltonian conserved quantity
w.r.t. $(\tilde{P}_1, \tilde{P}_2)$. Since $H\in\mathcal{F}_{\ge 1}$ we obtain
\begin{equation}\label{zh-n-19}
\{\tilde{H}_{\alpha,p}, \tilde{H}_{\beta,q}\}_{\tilde{P}_1}=0
\end{equation}
by using Lemma \ref{new-lemma} and the property i) of Corollary \ref{app-cor}.
Define an operator
\begin{equation}\label{zh-21}
\delta_Z:\hat{\mathcal{F}}\to\hat{\mathcal{A}}, \quad Q\mapsto \sum_{i=1}^n \frac{\delta Q}{\delta u^i}=\frac{\delta Q}{\delta v^1}.
\end{equation}
Here we used the fact that
\[ D_Z=\frac{\partial}{\partial v^1}=\sum_{i=1}^n \frac{\partial}{\partial u^i}.\]
Then for a local functional $H\in\mathcal{F}$ we have $[Z, H]=\int\left(\delta_Z(H)\right)$.
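As a short consistency check of this identity: for the vector field $Z$ in the flat coordinates one has $\frac{\delta Z}{\delta \bar{\theta}_\alpha}=\delta^\alpha_1$ and $\frac{\delta Z}{\delta v^\alpha}=0$ (this is what $D_Z=\frac{\partial}{\partial v^1}$ amounts to), so the bracket formula of Lemma \ref{last-lem} below gives, for any $H\in\mathcal{F}$,
\[[Z, H]=\int\left(\frac{\delta Z}{\delta \bar{\theta}_\alpha}\frac{\delta H}{\delta v^\alpha}
-\frac{\delta Z}{\delta v^\alpha}\frac{\delta H}{\delta \bar{\theta}_\alpha}\right)
=\int\left(\frac{\delta H}{\delta v^1}\right)=\int\left(\delta_Z(H)\right).\]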
Now let us define
\begin{equation} \label{var-h}
\tilde{h}_{\alpha,p}=\delta_Z \tilde{H}_{\alpha,p+1},\quad \alpha=1,\dots, n, \ p=-1, 0,1,\dots.
\end{equation}
\begin{thm}\label{zh-01-22-a}
The triple $(\tilde{P}_1, \{\tilde{h}_{\alpha,p}\}, \tilde{Z})$ gives a deformation of $(P_1, \{h_{\alpha, p}\}, Z)$.
\end{thm}
\begin{prf}
Define $\tilde{H}'_{\alpha, p}=\int\left(\tilde{h}_{\alpha, p}\right)$.
From the definition of $\tilde{h}_{\alpha,p}$ we see that $\tilde{H}'_{\alpha, p}=[Z, \tilde{H}_{\alpha,p+1}]$, so it belongs to
$\hat{\H}$.
From the property $D_Z h_{\alpha, p+1}=h_{\alpha,p}$ we know that $\tilde{H}'_{\alpha, p}$ and
$\tilde{H}_{\alpha,p}$ have the same leading term $\int(h_{\alpha,p})$. Since the bihamiltonian
conserved quantities of $(\tilde{P}_1, \tilde{P}_2)$ are uniquely determined by their leading terms,
we obtain
\[\tilde{H}'_{\alpha, p}=\tilde{H}_{\alpha,p}.\]
In particular, we know from \eqref{zh-n-19} that $\{\tilde{H}'_{\alpha, p}, \tilde{H}'_{\beta, q}\}_{\tilde{P}_1}=0$, and
\[\tilde{X}'_{\alpha,p}=-[\tilde{P}_1, \tilde{H}'_{\alpha, p}]=-[\tilde{P}_1, \tilde{H}_{\alpha, p}]=\tilde{X}_{\alpha,p}.\]
Denote by $\bar{\theta}_\alpha$ the super variables corresponding to the flat coordinates $v^1, \dots, v^n$. Recall that
\[\tilde{P}_1=P_1=\frac{1}{2}\int\left(\eta^{\alpha\beta}\bar{\theta}_\alpha\bar{\theta}_\beta^1\right),
\quad \mbox{where } \eta^{\alpha\beta}=\langle dv^\alpha, dv^\beta\rangle_{g_1},\]
so we have
\[\tilde{X}_{\alpha,p}=\int\left(\eta^{\beta\gamma}\partial\left(\frac{\delta \tilde{H}_{\alpha,p}}{\delta v^\gamma}\right)\bar{\theta}_\beta\right).\]
Set $V=\eta_{1\gamma} v^\gamma$; then
\[\frac{\partial V}{\partial t^{\alpha,p}}=\frac{\delta \tilde{X}_{\alpha,p}}{\delta \bar{\theta}_\gamma}\frac{\partial V}{\partial v^\gamma}
=\eta_{1\gamma}\eta^{\gamma\beta}\partial \left(\frac{\delta \tilde{H}_{\alpha,p}}{\delta v^\beta}\right)=
\partial\left(\frac{\delta \tilde{H}_{\alpha,p}}{\delta v^1}\right),\]
which implies that
\[\partial\left(\tilde{\partial}_{\alpha, p}\left(\tilde{h}_{\beta, q-1}\right)-\tilde{\partial}_{\beta, q}\left(\tilde{h}_{\alpha, p-1}\right)\right)
=\frac{\partial}{\partial t^{\alpha,p}}\frac{\partial V}{\partial t^{\beta,q}}-\frac{\partial}{\partial t^{\beta,q}}\frac{\partial V}{\partial t^{\alpha,p}}=0.\]
Since the difference $\tilde{\partial}_{\alpha, p}\left(\tilde{h}_{\beta, q-1}\right)-\tilde{\partial}_{\beta, q}\left(\tilde{h}_{\alpha, p-1}\right)$
is a differential polynomial whose terms all have degree greater than or equal to one, it must vanish. The above computation shows that
$(\tilde{P}_1, \{\tilde{h}_{\alpha, p}\})$ is a deformation of $(P_1, \{h_{\alpha, p}\})$, see Definition \ref{dfn-tau-sym}.
Next let us consider the action of $D_Z$ on $\tilde{h}_{\alpha,p}$. We have
\[D_Z(\tilde{h}_{\alpha,p+1})
=\frac{\partial}{\partial v^1}\frac{\delta}{\delta v^1}\tilde{H}_{\alpha, p+2}
=\frac{\delta}{\delta v^1}\frac{\delta}{\delta v^1}\tilde{H}_{\alpha, p+2}
=\frac{\delta}{\delta v^1}\tilde{H}_{\alpha, p+1}=\tilde{h}_{\alpha, p}.\]
Here we used the following identity for variational derivatives:
\[\frac{\partial}{\partial v^1}\frac{\delta}{\delta v^1}=\frac{\delta}{\delta v^1}\frac{\delta}{\delta v^1},\]
which is a particular case of the identity (i) of Lemma 2.1.5 in \cite{Jacobi}.
We still need to check the identities $D_Z(\tilde{h}_{\alpha,-1})=\eta_{\alpha, 1}$, which are equivalent to $\delta_Z \tilde{H}_{\alpha,-1}=\eta_{\alpha, 1}$.
Note that the leading term $\tilde{H}_{\alpha,-1}^{[0]}=\int \left(\eta_{\alpha\beta}v^\beta\right)$ of $\tilde{H}_{\alpha,-1}$ is a Casimir of $P_1=\tilde{P}_1$,
so it also belongs to $\hat{\H}$. On the other hand, elements of $\hat{\H}$ are determined by their leading terms, so we have
$\tilde{H}_{\alpha,-1}=\tilde{H}_{\alpha,-1}^{[0]}$, which implies the desired identity. The theorem is proved.
\end{prf}
\begin{rmk}
Our construction \eqref{var-h} of the Hamiltonian densities that satisfy the tau
symmetry property follows the approach given in \cite{DZ-NF} for the construction of the tau structure of the KdV hierarchy.
Note that this approach was also employed in \cite{BD} to construct tau structures for the double ramification hierarchies
associated to cohomological field theories.
\end{rmk}
The deformation $(\tilde{P}_1, \{\tilde{h}_{\alpha, p}\}, \tilde{Z})$ constructed in the above theorem depends on the choice of $\tilde{P}_2$.
It is natural to ask: if we start from another deformation $(\hat{P}_1, \hat{P}_2; \hat{Z})$ which has the same central invariants as
$(\tilde{P}_1, \tilde{P}_2; \tilde{Z})$ does, how does the result on the deformation $(\tilde{P}_1, \{\tilde{h}_{\alpha, p}\}, \tilde{Z})$ change?
Without loss of generality, we can assume that both $(\hat{P}_1, \hat{P}_2; \hat{Z})$ and $(\tilde{P}_1, \tilde{P}_2; \tilde{Z})$ have been transformed
to the form \eqref{zh-16-1}. If $(\hat{P}_1, \hat{P}_2)$ has the same central invariants as $(\tilde{P}_1, \tilde{P}_2)$,
then there exists a Miura type transformation of the second type
\[\mathrm{v}\mapsto \bar{\mathrm{v}}=e^{-D_Y}\left(\mathrm{v}\right)\]
with $Y\in\hat{\F}^{1}_{\ge 2}$ such that
\[\hat{P}_a=e^{\mathrm{ad}_{Y}}\left(\tilde{P}_a\right), \quad a=1, 2.\]
Note that $\hat{P}_1=\tilde{P}_1=P_1$, so $[P_1, Y]=0$, which implies that there exists $K\in\mathcal{F}_{\ge1}$ such that
$Y=[P_1, K]$.
\begin{lem}
The vector field $Y$ and the functional $K$ satisfy $[Y, Z]=0$ and $[K, Z]=0$.
\end{lem}
\begin{prf}
Denote $Z'=e^{\mathrm{ad}_{Y}}\left(Z\right)$, then we have
\begin{align*}
&[\hat{P}_1, Z']=e^{\mathrm{ad}_{Y}}\left([\tilde{P}_1, Z]\right)=0,\\
&[\hat{P}_2, Z']=e^{\mathrm{ad}_{Y}}\left([\tilde{P}_2, Z]\right)=\hat{P}_1,
\end{align*}
so $W=Z'-Z$ is a bihamiltonian vector field of $(\hat{P}_1, \hat{P}_2)$. On the other hand, $W\in\hat{\F}^1_{\ge2}$, so we have $W=0$ and, consequently, we have
$[Y, Z]=0$.
It follows from the identity $[Y,Z]=0$ that $[P_1, [K, Z]]=0$, so $C=[K, Z]$ is a Casimir of $P_1$. Since $C\in\mathcal{F}_{\ge1}$, we obtain $C=0$. The lemma is proved.
\end{prf}
From the above lemma we have
\[\int\left(\delta_Z K\right)=[Z,K]=0,\]
so there exists $g\in\mathcal{A}$ such that
\begin{equation}\label{zh-20}
\delta_Z K=\partial g.
\end{equation}
Let $\{\hat{H}_{\alpha,p}\}$, $\{\tilde{H}_{\alpha,p}\}$ be the bihamiltonian conserved quantities of $(\hat{P}_1, \hat{P}_2)$ and
$(\tilde{P}_1, \tilde{P}_2)$ respectively with the same leading terms $\{h_{\alpha,p}\}$, and $\{\hat{X}_{\alpha,p}\}$, $\{\tilde{X}_{\alpha,p}\}$ be
the corresponding bihamiltonian vector fields:
\[\hat{X}_{\alpha, p}=-[P_1, \hat{H}_{\alpha,p}], \quad \tilde{X}_{\alpha, p}=-[P_1, \tilde{H}_{\alpha,p}].\]
They are related by
\[\hat{H}_{\alpha,p}=e^{\mathrm{ad}_{Y}}\left(\tilde{H}_{\alpha,p}\right), \quad
\hat{X}_{\alpha,p}=e^{\mathrm{ad}_{Y}}\left(\tilde{X}_{\alpha,p}\right).\]
The flows corresponding to $\{\hat{X}_{\alpha,p}\}$ and $\{\tilde{X}_{\alpha,p}\}$ are denoted respectively by $\{\hat{\partial}_{\alpha, p}\}$ and
$\{\tilde{\partial}_{\alpha, p}\}$. We also have the associated triples $(\tilde{P}_1, \{\tilde{h}_{\alpha,p}\}, \tilde{Z})$ and
$(\hat{P}_1, \{\hat{h}_{\alpha,p}\}, \hat{Z})$ which are constructed in Theorem \ref{zh-01-22-a}. Let $\{\tilde{\Omega}_{\alpha,p;\beta,q}\}$ and
$\{\hat{\Omega}_{\alpha,p;\beta,q}\}$ be the corresponding tau structures. Then the relation between these tau structures and the solutions of the associated
tau covers of the deformed principal hierarchies is given by Theorem \ref{thm-unq-2}, and the following theorem gives the explicit expression of the
differential polynomial $G$.
\begin{thm}\label{thm-unq-1}
The differential polynomial $G$ of Theorem \ref{thm-unq-2} is given by the formula
\begin{equation}\label{zh-19}
G=\sum_{i=1}^\infty \frac{1}{i!}D_Y^{i-1}\left(g\right),
\end{equation}
where the function $g$ is defined in \eqref{zh-20}.
\end{thm}
\begin{prf}
From our construction of the densities of the Hamiltonians we have
\[\hat{h}_{\alpha,p}=\delta_Z \hat{H}_{\alpha,p+1},\quad \tilde{h}_{\alpha,p}=\delta_Z \tilde{H}_{\alpha,p+1},\]
so
\[\hat{h}_{\alpha,p}=\delta_Z\left(e^{\mathrm{ad}_{Y}}\left(\tilde{H}_{\alpha,p+1}\right)\right)
=\sum_{k=0}^\infty \frac{1}{k!}\delta_Z\left(\mathrm{ad}_{Y}^k\left(\tilde{H}_{\alpha,p+1}\right)\right).\]
By using the definition \eqref{zh-21} of $\delta_Z$ and the identities
given in Lemma \ref{last-lem}, we can show that
\[\delta_Z\left(\mathrm{ad}_{Y}^k\left(\tilde{H}_{\alpha,p+1}\right)\right)=D_Y^k\left(\tilde{h}_{\alpha,p}\right)
+\sum_{i=1}^k\binom{k}{i}D_{\mathrm{ad}_Y^{k-i}\left(\tilde{H}_{\alpha,p+1}\right)} D_Y^{i-1}\left(\delta_Z Y\right),\]
so we have
\begin{align*}
\hat{h}_{\alpha,p}
=&\sum_{k=0}^\infty \frac{1}{k!}\left(D_Y^k\left(\tilde{h}_{\alpha,p}\right)
+\sum_{i=1}^k\binom{k}{i}D_{\mathrm{ad}_Y^{k-i}\left(\tilde{H}_{\alpha,p+1}\right)} D_Y^{i-1}\left(\delta_Z Y\right)\right)\\
=&e^{D_Y}\left(\tilde{h}_{\alpha,p}\right)+D_{\hat{H}_{\alpha, p+1}}\left(\sum_{i=1}^\infty \frac{1}{i!}D_Y^{i-1}\left(\delta_Z Y\right)\right).
\end{align*}
By using the fact that
\[\delta_Z Y=\delta_Z[P_1, K]=D_{P_1}(\delta_Z K)=\partial D_{P_1}(g),\]
and $[D_Y, D_{P_1}]=0$, $[\partial, D_{Q}]=0$ for $Q\in\hat{\mathcal{F}}$ (see Lemma \ref{last-lem}), we obtain
\[\hat{h}_{\alpha,p}=e^{D_Y}\left(\tilde{h}_{\alpha,p}\right)+\partial D_{\hat{H}_{\alpha, p+1}}D_{P_1}G,\]
where
\[G=\sum_{i=1}^\infty \frac{1}{i!}D_Y^{i-1}\left(g\right).\]
Then by using the identity (see Lemma \ref{last-lem})
\[D_{H}D_{P_1}=D_{-[P_1, H]}-D_{P_1}D_{H},\]
and the fact that $D_{\hat{H}_{\alpha, p+1}}\left(G\right)=0$, we have
\[\hat{h}_{\alpha,p}=e^{D_Y}\left(\tilde{h}_{\alpha,p}\right)+\partial D_{\hat{X}_{\alpha, p+1}}G
=e^{D_Y}\left(\tilde{h}_{\alpha,p}\right)+\partial \hat{\partial}_{\alpha, p+1} G.\]
The theorem is proved.
\end{prf}
In the proof of the above theorem the following lemma is used.
\begin{lem}\label{last-lem}
The operator
\[D_P=\sum_{s\ge0}\left(\partial^s\left(\frac{\delta P}{\delta \theta_\alpha}\right)\frac{\partial}{\partial u^{\alpha,s}}
+(-1)^p\partial^s\left(\frac{\delta P}{\delta u^\alpha}\right)\frac{\partial}{\partial \theta_\alpha^s}\right),\quad P\in\hat{\F}^p\]
and the bracket
\[[P, Q]=\int\left(\frac{\delta P}{\delta \theta_\alpha}\frac{\delta Q}{\delta u^\alpha}
+(-1)^p\frac{\delta P}{\delta u^\alpha}\frac{\delta Q}{\delta \theta_\alpha}\right), \quad P\in \hat{\F}^p,\ Q\in\hat{\F}^q\]
satisfy the following identities:
\begin{align}
& [\partial, D_P]=0;\nonumber\\
& \frac{\delta}{\delta u^\alpha}[P,Q]=D_P\left(\frac{\delta Q}{\delta u^\alpha}\right)
+(-1)^{pq}D_Q\left(\frac{\delta P}{\delta u^\alpha}\right);\nonumber\\
& (-1)^{p-1}D_{[P,Q]}=D_P\circ D_Q-(-1)^{(p-1)(q-1)}D_Q\circ D_P.\nonumber
\end{align}
\end{lem}
\begin{prf}
The first identity can be obtained from the definition of $D_P$.
The second one is a corollary of the identity (iii) of Lemma 2.1.3 and the identity (i) of Lemma 2.1.5 given in \cite{Jacobi}.
The third identity is a corollary of the second one. The lemma is proved.
\end{prf}
Theorem \ref{zh-01-22-a} gives the existence part of Theorem \ref{main-thm}, and Theorem \ref{thm-unq-1} (combined with
Theorem \ref{thm-unq-2}) gives the uniqueness part.
There are two important examples of such deformations when the flat exact semisimple bihamiltonian structure is provided
by a semisimple cohomological field theory. In \cite{DZ-NF} the first- and the third-named authors construct, for any semisimple Frobenius manifold,
the so-called topological deformation of the associated principal hierarchy and its tau structure. As we mentioned in Example \ref{zh-12-31b}, in \cite{Bu} Buryak
constructed a Hamiltonian integrable hierarchy associated to any cohomological field theory, and in \cite{BD} he and his collaborators
showed that this integrable hierarchy also possesses a tau structure. Buryak conjectured in \cite{Bu} that the above two integrable hierarchies
are equivalent via a Miura type transformation. He and his collaborators further refined this conjecture in \cite{BD} as an equivalence between tau-symmetric
Hamiltonian deformations via a normal Miura type transformation. The notion of normal Miura type transformation was introduced in \cite{DLYZ}; our Definition \ref{equivalent} (see also Theorem \ref{thm-unq-2}) is a generalization of it. We hope that our results
will be useful for solving the conjecture of Buryak \emph{et al.}
\section{Conclusion}\label{sec-7}
We consider in this paper the integrable hierarchies associated to a class of flat exact semisimple bihamiltonian structures of hydrodynamic type.
This property of flat exactness enables us to associate to any semisimple bihamiltonian structure of hydrodynamic type a Frobenius manifold
structure (without the Euler vector field), and a bihamiltonian integrable hierarchy which is called the principal hierarchy. We show that this
principal hierarchy possesses a tau structure and also the Galilean symmetry. For any deformation of the flat exact semisimple bihamiltonian
structures of hydrodynamic type which has constant central invariants, we construct the deformation of the principal hierarchy and show the
existence of tau structure and Galilean symmetry for this deformed integrable hierarchy. We also describe the ambiguity of the choice of tau
structure for the deformed integrable hierarchy. Our next step is to study properties of the Virasoro symmetries that are inherited from the
Galilean symmetry of the deformed integrable hierarchy in order to fix an appropriate representative of the tau structures which, in the case
associated to a cohomological field theory, corresponds to the partition function. We will do it in a subsequent publication.
\paragraph{Acknowledgements} This work is partially supported by NSFC No.\,11371214 and No.\,11471182.
B.D. gratefully acknowledges the hospitality and generous support during his visit
to the Department of Mathematics of Tsinghua University, where part of this work was completed.
\section{Introduction}
Quantum transport of relativistic electrons in topological semimetals has been an issue of great interest in topological materials' science \cite{armitage18a}.
In such materials, the quantum state of the Dirac or Weyl electrons is strongly coupled to the crystal symmetry,
and hence the engineering of the electronic symmetry is a promising way to search for exotic quantum transport of such quasiparticles. In recent years, more and more materials have been theoretically predicted and experimentally found to be topological semimetals. Prototypical materials include $A_3$Bi with $A = $~Na, K, Rb \cite{zwang12b,zkliu14b,neupane14a}, BiO$_2$ and SbO$_2$ \cite{young12a}, and Cd$_{3}$As$_{2}$\ \cite{zwang13a,ali14a,tliang14a,zkliu14a,sjeon14a,uchida17a,nakazawa18a,uchida19a}.
Among them, Cd$_{3}$As$_{2}$\ possesses a simple band structure with an electron charge carrier concentration of $\sim 10^{18}$~cm$^{-3}$. It has long been known for its large mobility of $\sim 10^{4}$~cm$^{2}$/Vs at room temperature \cite{turner61a}.
Recently, an even higher value of almost $\sim 10^{7}$~cm$^{2}$/Vs was reported at low temperatures due to a linear band dispersion and strongly suppressed backscattering events of the charge carriers \cite{tliang14a}.
The nontrivial topology of this system, namely, an inversion of conduction and valence bands which are of different character, manifests in two Dirac nodes in the proximity of the Fermi energy \ensuremath{E_{\rm F}}\ \cite{zwang13a}, which are protected by both time-reversal symmetry and rotational symmetry of the crystal lattice \cite{zwang13a}. For example, it has been demonstrated that the breakdown of time-reversal symmetry via the application of a magnetic field creates a Weyl semimetalic state with negative magnetoresistance due to the chiral anomaly \cite{czli15a,jcao15a,hli16a,zjia16a}, a hallmark of the underlying nontrivial physics.
Another way to control Dirac nodes in such systems is to manipulate the band inversion directly.
It has been proposed that
the chemical substitution of Cd with Zn changes the sign of band gap from negative (band inversion) to positive,
resulting in the topological transition from a Dirac semimetal to a trivial insulator \cite{zdanowicz64a,zdanowicz64b,zdanowicz75a,hlu17a}.
Indeed, in contrast to Cd$_{3}$As$_{2}$, Zn$_{3}$As$_{2}$\ is a topologically trivial semiconductor with a hole carrier concentration of $\sim 10^{17}$~cm$^{-3}$
and a much lower room-temperature mobility of only $\sim 10$~cm$^2$/Vs \cite{turner61a}.
Hence, a depletion of the charge carriers and a topological phase transition from the Dirac semimetal Cd$_{3}$As$_{2}$\ to trivial Zn$_{3}$As$_{2}$\ are expected when alloying these two systems. Indeed, Lu {\it et al.} found experimental indications of this transition in Cd$_{3-x}$Zn$_{x}$As$_{2}$\ in magnetotransport measurements \cite{hlu17a} around $x\sim 1.1$ on the basis of an enhanced resistivity upon cooling as well as a thorough analysis of Shubnikov-de Haas (SdH) oscillations. Recent studies on thin films of Cd$_{3-x}$Zn$_{x}$As$_{2}$\ also support this scenario qualitatively \cite{nishihaya18a,nishihaya18b,nishihaya19a}, although the topological phase transition takes place already around $x\sim 0.6$ \cite{nishihaya19a}. We note that a similar transition is proposed to occur in related Cd$_3$As$_{2-x}$P$_x$ on the basis of angle-resolved photoemission spectroscopy data \cite{thirupathaiah18a}.
Given the remarkably high mobility of the electron charge carriers, Cd$_{3}$As$_{2}$\ is expected to bear potential for a good thermoelectric performance
with possibly large power factors $\ensuremath{S_{xx}}^2/\ensuremath{\rho_{xx}}$; \ensuremath{S_{xx}}\ and \ensuremath{\rho_{xx}} ~being the longitudinal thermopower and resistivity, respectively \cite{pei11a}.
Indeed, a recent study reported $\ensuremath{S_{xx}}^2/\ensuremath{\rho_{xx}} \sim 1.6 \times 10^{-3}$~W/K$^2$/m along with a fairly small thermal conductivity $\ensuremath{\kappa_{xx}} \sim 5$~W/K/m,
yielding $ZT \sim 0.1$ at room temperature \cite{czhang16a}; ZT represents the figure of merit $ZT= \ensuremath{S_{xx}}^2 T/(\ensuremath{\rho_{xx}}\ensuremath{\kappa_{xx}})$ as a measure of the thermoelectric efficiency.
This value further increases in the presence of a magnetic field $B$, exceeding unity at $B = 7$~T and $T=375$~K \cite{hwang18a}, mainly due to the field-induced suppression of \ensuremath{\kappa_{xx}}.
Since these parameters also depend on the actual charge carrier concentration \cite{tzhou16a}, it is promising to study the thermoelectric performance upon doping.
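As a consistency check on the numbers quoted above, the room-temperature $ZT$ of pristine Cd$_{3}$As$_{2}$\ follows directly from the reported power factor and thermal conductivity; a minimal sketch (values taken from the text, not new data):

```python
# Figure of merit ZT = (Sxx^2/rho_xx) * T / kappa_xx, evaluated with the
# literature values quoted above for pristine Cd3As2 at room temperature.
T = 300.0                # temperature (K)
power_factor = 1.6e-3    # Sxx^2/rho_xx (W K^-2 m^-1)
kappa = 5.0              # thermal conductivity (W K^-1 m^-1)

ZT = power_factor * T / kappa
print(f"ZT = {ZT:.2f}")  # -> ZT = 0.10, consistent with the quoted value
```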
In this study, we have measured the transport and thermoelectric properties as well as the charge dynamics upon Zn doping in Cd$_{3-x}$Zn$_x$As$_2$ with $0\le x \le 1.2$.
With increasing $x$, the carrier density monotonically decreases and the Seebeck coefficient is strongly enhanced, exceeding 300~$\mu$V/K at 300~K for $x=1.2$.
At low temperatures, we could confirm the reported metal-insulator transition with Zn doping \cite{hlu17a}. At the same time, Zn doping suppresses the thermal conductivity while the resistivity above the metal-insulator transition temperature is enhanced only modestly due to the doping-induced disorder. Hence, the thermoelectric figure of merit is greatly enhanced, exceeding 0.3 at room temperature, i.e., more than three times the value reported for pure Cd$_{3}$As$_{2}$. Complementary analyses of quantum oscillation and optical conductivity data suggest an $x$-dependent change in the band-structure dispersion in the higher doping region which promotes the enhancement of the figure of merit.
This paper is organized as follows: First, we will present electric and thermal transport data with enhanced $ZT$ values. Then we will analyze magnetotransport and optical spectroscopy data which point toward the scenario of an $x$-dependent change of the band structure at \ensuremath{E_{\rm F}}\ giving rise to the observed large room-temperature $ZT$ values. We will finish with a discussion of our findings and conclude with a summary of the paper.
\section{Experimental Methods}
Single-crystalline samples of Cd$_{3}$As$_2$ were grown by the Bridgman technique, while polycrystalline samples of Cd$_{3-x}$Zn$_x$As$_2$ were synthesized by conventional melt-growth. In both cases, stoichiometric ratios of the constituent elements were mixed inside a glove box, transferred into quartz tubes, and eventually sealed while evacuated.
In the Bridgman-method growth,
the temperature of the upper (lower) heater was set to 900$^{\circ}$C (600$^{\circ}$C).
The evacuated quartz tubes were kept for 12~h at 900$^{\circ}$C and then lowered with a speed of 2~mm/h.
After the quartz tubes had reached the lower heater, they were slowly cooled down to room temperature.
Melt-grown batches were kept for 48~h at 800$^{\circ}$C -- 950$^{\circ}$C depending on the composition and slowly cooled down to room temperature afterwards.
Resistivity and Hall effect were measured by a conventional five-probe method in a commercial system (PPMS, physical property measurement system, Quantum Design). The thermopower and thermal conductivity were measured in a home-built setup inserted into a PPMS while applying a temperature gradient by using a chip-heater attached on one side of the sample. The temperature gradient is monitored by employing commercial thermocouples. The reflectivity spectra at nearly normal incidence were measured between room temperature and 10~K in the energy region of 0.008 -- 5~eV. In the case of single-crystalline Cd$_{3}$As$_{2}$, a sample surface with $[1 1 \bar{2}]$-orientation was polished. Then the spectra were measured with $[1\bar{1}0]$ light polarization. As for Cd$_{3-x}$Zn$_{x}$As$_{2}$, reflectivity spectra were measured with unpolarized light. A Fourier transform spectrometer and a grating-type monochromator equipped with a microscope were employed in the photon energy range 0.008 -- 0.7~eV and 0.5 -- 5~eV, respectively. Measurements in the energy range of 3 -- 40~eV were carried out at room temperature by using synchrotron radiation at UV-SOR, Institute for Molecular Science (Okazaki). For Kramers-Kronig transformations, we adopted suitable extrapolation procedures for energy ranges which were not accessible by the used experimental setups: below 0.008~eV the Hagen-Rubens-type (metal) or constant-reflectivity (insulator) extrapolation was used, respectively. Above 40~eV an $\omega^{-4}$-type extrapolation was utilized.
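For reference, the Hagen-Rubens extrapolation used for the metallic samples can be sketched as follows; the SI form $R(\omega) = 1 - \sqrt{8\varepsilon_0\omega/\sigma_{\rm dc}}$ is assumed, and the dc conductivity in the example is an illustrative value, not a measured one:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
HBAR = 1.054571817e-34   # reduced Planck constant (J s)
EV = 1.602176634e-19     # 1 eV in J

def hagen_rubens_reflectivity(E_eV, sigma_dc):
    """Hagen-Rubens reflectivity R = 1 - sqrt(8*eps0*omega/sigma_dc),
    valid in the low-frequency (omega*tau << 1) metallic limit.
    sigma_dc: assumed dc conductivity in S/m."""
    omega = E_eV * EV / HBAR   # photon angular frequency (rad/s)
    return 1.0 - math.sqrt(8.0 * EPS0 * omega / sigma_dc)

# Illustrative value at the 0.008 eV low-energy cutoff of the setup,
# for an assumed sigma_dc = 5e5 S/m:
R = hagen_rubens_reflectivity(0.008, 5e5)
print(f"R(0.008 eV) = {R:.4f}")
```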
\section{Results}
Figure~\ref{fig1}(a) shows the temperature dependence of the longitudinal resistivity \ensuremath{\rho_{xx}}\ for Cd$_{3-x}$Zn$_{x}$As$_{2}$. In the low-doped region ($0 \le x \le 0.6$), the resistivity decreases upon lowering temperature, i.e., the system behaves like a metal. The residual resistivity at 5~K is enhanced with increasing $x$ as compared to our pure Cd$_{3}$As$_{2}$\ sample except for $x=0.2$. An upturn is clearly observed around $\sim 120$~K and $\sim 170$~K for $x=1.0$ and $x=1.2$, respectively, highlighting the metal-to-insulator transition in these higher-doped samples. The overall qualitative temperature dependence of the resistivities of $x = 1.0$ and 1.2 is similar. However, at very low temperatures there is a downturn in \ensuremath{\rho_{xx}}\ of the sample with $x=1.0$, while the resistivity of the sample with $x=1.2$ increases again after exhibiting a broad plateau between $\sim 30$~K and $\sim 80$~K. These features are clearly distinct from what is expected for a conventional insulator, the resistivity of which monotonically increases upon decreasing temperature.
Figure~\ref{fig1}(b) summarizes the temperature dependence of the absolute value of the Hall coefficient \ensuremath{R_{\rm H}}. For all $x$, \ensuremath{R_{\rm H}}\ is nearly temperature independent and its sign is negative, indicating that the conduction in all examined samples is of electron type. Estimated carrier densities \ensuremath{n_{\rm H}}\ at room temperature assuming a single carrier model are plotted against respective Zn concentrations in Fig.~\ref{fig1}(c), together with \ensuremath{n_{\rm Q}}\ estimated from quantum-oscillation data (see Fig.~\ref{fig3}).
As expected the absolute value of the carrier density monotonically decreases as a function of $x$ from the order of a few times $10^{18}$cm$^{-3}$ for $x=0$ down to $1.2 \times 10^{17}$cm$^{-3}$ for $x=1.2$, reflecting the depletion of the electron-type carriers when going from $n$-type Cd$_{3}$As$_{2}$\ to $p$-type Zn$_{3}$As$_{2}$. However, the charge neutrality point, i.e., the Cd:Zn ratio where the sign change of \ensuremath{R_{\rm H}}\ takes place, is not reached up to $x=1.2$.
We observe this crossover in slightly higher-doped samples around $x\sim 1.5$ (not shown).
The metallic samples with $x \leq 0.8$ investigated here exhibit mobilities of $\sim 10^5$~cm$^2$/Vs at 2~K and $\sim 10^4$~cm$^2$/Vs at 300~K, respectively.
We note that several properties of this material, such as the residual resistivity and the charge carrier concentration, are rather sample dependent, as shown in Fig.~\ref{fig1}(c);
for $x=0$ and 0.4, two charge carrier concentrations measured on two different samples are shown as examples.
Such and even larger variations have been also reported for the parent material Cd$_{3}$As$_{2}$, see, e.g., Ref.~\onlinecite{tliang14a}.
This is possibly related to differences in the (Cd,Zn):As ratio. In Cd$_{3}$As$_{2}$, ideally one fourth of the Cd lattice sites are unoccupied and these vacancies seem to order in a chiral way along the $c$ axis which may differ from sample to sample even if these samples were cut from the same initial batch \cite{prvComm}, cf.\ also the discussions in Refs.~\onlinecite{ali14a} and \onlinecite{tliang14a}.
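The quoted mobilities are consistent with the single-band Drude relation $\mu = 1/(n e \ensuremath{\rho_{xx}})$; a minimal sketch with illustrative numbers of the order quoted in the text (the resistivity value is an assumption, not a measured one):

```python
E_CHARGE = 1.602176634e-19  # elementary charge (C)

def hall_mobility(n_cm3, rho_ohm_cm):
    """Single-carrier Drude mobility mu = 1/(n e rho).
    n in cm^-3, rho in Ohm cm; returns mu in cm^2/(V s)."""
    return 1.0 / (n_cm3 * E_CHARGE * rho_ohm_cm)

# n ~ 2e18 cm^-3 (cf. Fig. 1(c)) and an assumed residual
# resistivity of 3e-5 Ohm cm give a mobility of order 1e5 cm^2/Vs:
mu = hall_mobility(2e18, 3e-5)
print(f"mu = {mu:.1e} cm^2/Vs")
```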
Thermoelectric and thermal-transport data are summarized in Fig.~\ref{fig2}.
The temperature dependence of the Seebeck coefficient \ensuremath{S_{xx}}\ is shown in Figs.~\ref{fig2}(a) and (b) for $x \leq 0.6$ and $x\geq 0.8$, respectively.
In the lightly-doped region $x\leq 0.8$, \ensuremath{S_{xx}}\ is negative and nearly proportional to temperature, which is often observed in conventional metals and semiconductors.
By contrast, \ensuremath{S_{xx}}\ exhibits a nonmonotonic temperature dependence for larger $x$: below approximately 100~K and 170~K,
\ensuremath{S_{xx}}\ deviates significantly from a temperature-linear behavior for $x=1.0$ and 1.2, respectively.
In particular, \ensuremath{S_{xx}}\ exhibits a sign change and becomes positive upon further cooling.
Moreover, these temperatures nearly coincide with the upturn observed in resistivity data [see Fig.~\ref{fig1}(a)].
The longitudinal thermal conductivity \ensuremath{\kappa_{xx}}\ is shown for selected $x$ in Fig.~\ref{fig2}(c).
For all samples, \ensuremath{\kappa_{xx}}\ is almost temperature independent down to $\sim 100$~K but steeply increases towards lower temperatures possibly due to an enhancement of the phonon mean-free path. Interestingly, in the thermal conductivity there are no characteristic anomalies visible between 50~K and 200~K in clear contrast to resistivity (steep upturn) and thermopower data (clear slope change) even for $x = 1.2$, where these are most pronounced.
Absolute values of the Seebeck coefficient \ensuremath{|S_{xx}|}\ at 300 K are replotted as a function of charge carrier concentration \ensuremath{n_{\rm H}}\ in Fig.~\ref{fig2}(d).
The respective Zn concentrations $x$ are given for each data point. Apparently, \ensuremath{|S_{xx}|}\ increases monotonically with decreasing \ensuremath{n_{\rm H}}: For our pure Cd$_{3}$As$_{2}$\ sample, we find $\ensuremath{|S_{xx}|} = 44~\mu$V/K. For $x=1.2$, \ensuremath{|S_{xx}|}\ is enhanced by more than a factor of six, exceeding 300~$\mu$V/K.
This behavior is qualitatively consistent with the case of typical semiconductors or metals, where, according to Mott's formula, \ensuremath{|S_{xx}|}\ is inversely proportional to \ensuremath{E_{\rm F}}, which decreases here as indicated by the depletion of the electron carrier concentration with $x$, cf.\ Fig.~\ref{fig1}(c). The dashed line in Fig.~\ref{fig2}(d) indicates the expected charge-carrier-concentration dependence of \ensuremath{|S_{xx}|}\ ($\propto n^{-1/3}$) in the semiclassical framework of Mott's formula with the assumption of a $k$-linear band dispersion.
Apparently, this line fits well to the experimental data for $x \leq 0.6$ but clearly falls short for larger $x$.
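The $n^{-1/3}$ line follows from a simple semiclassical estimate, assuming a single spherical Fermi pocket with a $k$-linear dispersion and an energy-independent scattering time: with $E = \hbar v_{\rm F} k$ and $\ensuremath{k_{\rm F}} \propto n^{1/3}$, one has $\ensuremath{E_{\rm F}} = \hbar v_{\rm F} \ensuremath{k_{\rm F}} \propto n^{1/3}$, and Mott's formula gives
\begin{equation}
|S_{xx}| \simeq \frac{\pi^2}{3} \frac{\ensuremath{k_{\rm B}}^2 T}{e} \left.\frac{\partial \ln \sigma}{\partial E}\right|_{E=\ensuremath{E_{\rm F}}} \propto \frac{\ensuremath{k_{\rm B}}^2 T}{e\, \ensuremath{E_{\rm F}}} \propto T\, n^{-1/3}.
\end{equation}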
The presented quantities allow us to calculate the thermoelectric figure of merit $ZT = \ensuremath{S_{xx}}^2 T/(\ensuremath{\rho_{xx}}\ensuremath{\kappa_{xx}})$,
the room-temperature values of which are plotted against \ensuremath{n_{\rm H}}\ in Fig.~\ref{fig2}(e).
As compared to pristine Cd$_{3}$As$_{2}$\ ($ZT = 0.07$), $ZT$ increases with $x$ and exhibits a maximum $ZT= 0.33$ for $x=1.0$,
a fairly large room-temperature value of the figure of merit.
Here, we anticipate error bars of 30\% because the values of \ensuremath{\rho_{xx}}, \ensuremath{S_{xx}}, and \ensuremath{\kappa_{xx}}\ are not precisely reproducible and depend on the sample used for the measurement, as already discussed above.
To obtain further insight into what mechanism might be responsible for the observed enhancement of the thermoelectric efficiency as represented by $ZT$, we investigated the impact of Zn doping on the electronic structure in Cd$_{3-x}$Zn$_{x}$As$_{2}$\ by analyzing magnetoresistivity.
Experimental data along with analyses of SdH oscillations are summarized in Fig.~\ref{fig3}.
The magnetoresistivity for $x=0$, 0.6, and 0.8 are shown in Figs.~\ref{fig3}(a), (b), and (c), respectively.
For $x=0$ and $0.6$, the resistivity is nearly proportional to the magnetic field
and exhibits quantum oscillations, i.e., Shubnikov-de Haas (SdH) oscillations.
Such a $B$-linear magnetoresistivity is often observed in Dirac semimetals and is one characteristic feature of the highly mobile Dirac electrons \cite{armitage18a}.
Similar SdH oscillations are also observed for $x=0.8$ while the magnetoresistivity is rather quadratic in $B$ in the low-field region.
Figure~\ref{fig3}(d) contains the corresponding Landau level (LL) fan diagrams with the oscillation frequency $1/B$ plotted against the Landau index \ensuremath{n_{\rm L}}.
These were extracted according to the Lifshitz-Onsager quantization rule $B_F/B = \ensuremath{n_{\rm L}} - \phi$ from the data shown in panels (a) -- (c) after subtracting the background magnetoresistivity $\rho_{\rm BG}$ by approximating it with a polynomial. The resulting oscillatory part \ensuremath{\rho_{\rm osc}/\rho_{\rm BG}}\ is exemplarily shown for $x=0$ in the inset to Fig.~\ref{fig3}(d). Then we assigned integer and half-integer indices to the peak and valley positions in the magnetoresistivity data, respectively, as described in more detail, e.g., in Ref.~\onlinecite{maryenko15a}.
The linearity of the fan plot up to the quantum limit may be a consequence of small Zeeman splitting in this system.
From the slope of the LL fan diagrams, the oscillation frequency $B_F$ is estimated to be 58~T, 25~T, and 18~T for $x=0$, 0.6, and 0.8, respectively.
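The conversion from these oscillation frequencies to \ensuremath{n_{\rm Q}}\ proceeds via the Onsager relation; a minimal sketch, assuming a spherical Fermi pocket and a total degeneracy factor $g$ (the actual value depends on the number of Dirac nodes and is an assumption here):

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant (J s)
E_CHARGE = 1.602176634e-19  # elementary charge (C)

def sdh_to_density(B_F, g=2):
    """Onsager relation A_F = 2*pi*e*B_F/hbar for the extremal
    Fermi-surface cross-section, then k_F and the carrier density
    for an assumed spherical pocket with total degeneracy g."""
    A_F = 2.0 * math.pi * E_CHARGE * B_F / HBAR  # extremal area (m^-2)
    k_F = math.sqrt(A_F / math.pi)               # Fermi wave number (m^-1)
    n = g * k_F**3 / (6.0 * math.pi**2)          # carrier density (m^-3)
    return k_F, n

for x, B_F in [(0.0, 58.0), (0.6, 25.0), (0.8, 18.0)]:
    k_F, n = sdh_to_density(B_F)
    print(f"x = {x}: k_F = {k_F:.2e} m^-1, n_Q = {n * 1e-6:.1e} cm^-3")
```

With these assumptions, $B_F = 58$~T yields a density of the order of $10^{18}$~cm$^{-3}$, consistent with the Hall data in Fig.~\ref{fig1}(c).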
Figure~\ref{fig3}(e) shows the temperature dependence of the background-corrected quantum oscillations $\rho_{\rm osc}/\rho_{\rm BG}$ at selected magnetic fields. From the thermal damping of the oscillation amplitudes upon warming, the cyclotron mass is estimated to be $0.051 m_0$, $0.033 m_0$, and $0.029 m_0$ in units of the bare electron mass $m_0$ for $x=0$, 0.6, and 0.8, respectively, by employing the Lifshitz-Kosevich formula \cite{maryenko15a}. We note that the Fermi velocity is nearly independent of the carrier density, suggesting that the band dispersion is close to $k$ linear in this range of $x$. Table~\ref{tab1} summarizes these and additional parameters extracted from the SdH oscillations.
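The thermal damping used for the mass estimates is the standard Lifshitz-Kosevich factor; a minimal sketch (the field value in the example is illustrative, not one of the fields used in the fits):

```python
import math

KB = 1.380649e-23           # Boltzmann constant (J/K)
HBAR = 1.054571817e-34      # reduced Planck constant (J s)
E_CHARGE = 1.602176634e-19  # elementary charge (C)
M0 = 9.1093837015e-31       # bare electron mass (kg)

def lk_thermal_damping(T, B, m_ratio):
    """Lifshitz-Kosevich temperature damping R_T = X/sinh(X) with
    X = 2*pi^2*kB*T*m_c/(hbar*e*B) and m_c = m_ratio * m0.  Fitting
    R_T(T) at fixed B to the oscillation amplitudes yields m_c."""
    X = 2.0 * math.pi**2 * KB * T * m_ratio * M0 / (HBAR * E_CHARGE * B)
    return X / math.sinh(X)

# Damping of the x=0 oscillations (m_c = 0.051 m0) at an assumed B = 5 T:
for T in (2.0, 10.0, 30.0):
    print(f"T = {T} K: R_T = {lk_thermal_damping(T, 5.0, 0.051):.3f}")
```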
To obtain further insight into the electronic state, Fig.~\ref{fig4}(a) shows the optical conductivity spectra at 10 K for $x=0$, 0.8, and 1.2.
Spiky structures below 0.1~eV are ascribed to phonon excitations.
As a common feature in all the three samples, the interband electron excitation from the valence to the conduction band manifests itself
as a very slow increase of the optical conductivity as a function of the photon energy,
which is often observed in gapless or small gap semimetals/semiconductors \cite{akrap16a,neubauer16a,crassee18a, Jenkins2016, Fujioka2021, Chen2015}.
Moreover, for each sample a small peak or kink is observed at about 0.2, 0.3, and 0.4~eV for $x=0$, 0.8, and 1.2, respectively,
as indicated with black triangles in Fig.~\ref{fig4}(a).
We note that similar features are identified in the data taken at different temperatures, assuring that these kinks are an intrinsic feature.
The kink is most remarkable in the case of $x=0.8$.
Such an absorption peak/kink has often been observed and interpreted as the threshold of the interband transition \cite{akrap16a,neubauer16a,crassee18a}.
Apparently, this threshold energy is enhanced as $x$ is increased.
Taking into account that the carrier density is monotonically reduced upon increasing $x$, it is likely that the topological transition has occurred and a gap has opened in the case of the larger Zn concentration $x=1.2$ as schematically illustrated in Fig.~\ref{fig4}(c), in comparison with $x = 0$ shown in Fig.~\ref{fig4}(b).
\section{Discussion}
Finally, we discuss the relevance of the observed electronic structure to the enhancement of the figure of merit exceeding 0.3 at room temperature.
In the present case of a Dirac dispersion, the Fermi energy scales with the Fermi wave number \ensuremath{k_{\rm F}}, which is proportional to $n^{1/3}$. According to Mott's formula, \ensuremath{|S_{xx}|}\ is inversely proportional to \ensuremath{E_{\rm F}}\ and is thus expected to scale as $n^{-1/3}$.
As shown in Fig.~\ref{fig2}(d) (dashed curve), the charge-carrier-concentration dependence of \ensuremath{|S_{xx}|}\ is consistent with this semiclassical scaling for higher carrier densities, i.e., above $6\times 10^{17}$~cm$^{-3}$ which corresponds to $x \lesssim 0.6$.
However, when the Zn concentration is increased further, the agreement becomes worse, and the data eventually deviate significantly when the electron carriers become very dilute.
In general, quantum oscillations are a highly sensitive probe of the electronic states in the vicinity of \ensuremath{E_{\rm F}}\ while the Seebeck coefficient is strongly influenced or determined by the electronic states in an energy range of $\pm 4\ensuremath{k_{\rm B}} T$ around \ensuremath{E_{\rm F}} \cite{Usui2017}.
Hence, in the present case, the Seebeck coefficient may probe the energy dispersion in the energy range of $\ensuremath{E_{\rm F}}/\ensuremath{k_{\rm B}}\pm 1200$~K. Thus, the significant discrepancy of the experimental Seebeck coefficients and the expectation in the semiclassical model is likely to indicate that the band dispersion away from \ensuremath{E_{\rm F}}\ is not linear in $k$ any more in the heavily Zn-doped samples with $x>0.6$ as sketched in Fig.~\ref{fig4}(c). This strongly supports our initial working hypothesis that Zn doping is an efficient tool to tailor and finely tune the band structure in the Dirac semimetal Cd$_{3}$As$_{2}$\ and should eventually trigger the topological phase transition.
The remaining question to be addressed is the origin of the thermally induced metal-insulator transition
as indicated by the pronounced enhancement of \ensuremath{\rho_{xx}}\ below $\sim 200$~K for $x \geq 0.8$,
which is also reflected in the nonmonotonic temperature dependence of the thermopower.
Older literature reported a doping-induced structural transition in Cd$_{3-x}$Zn$_{x}$As$_{2}$\ \cite{zdanowicz64b}.
In order to look for a possible link between these two features, we performed temperature-dependent powder x-ray diffraction experiments on a sample with $x = 1.2$,
but could not find any hint for a structural change upon cooling \cite{Supple}.
Hence, the origin of this remarkable temperature-dependent change in resistivity and thermopower remains unclear and is an interesting phenomenon to be elucidated in future studies.
\section{Summary}
In summary, we demonstrate a topological transition in the Dirac semimetal Cd$_{3}$As$_{2}$\ by engineering the band structure, replacing Cd with its lighter counterpart Zn, which has a weaker spin-orbit interaction.
Associated with this transition, the bands at the Fermi level are flattened and a strong enhancement of the thermopower is successfully induced.
Moreover, the thermal conductivity is suppressed while the resistivity remains reasonably small,
yielding a fairly large figure of merit $ZT = 0.33$ at $T=300$~K.
Our findings demonstrate that doping is an easy but highly efficient tool to control the topologically nontrivial band structure in Dirac semimetals and that such systems can be very promising starting points to look for an enhanced thermoelectric performance.
\section*{Acknowledgements}
\noindent
We thank D. Maryenko, T. Koretsune, R. Arita, T. Ideue and T. Liang for useful discussions and technical support.
This work was partly supported by Grants-in-Aid for Scientific Research (Nos.~24224009, 15K05140, 16H00981, 18H01171, 18H04214, and 16H06345) from MEXT,
and by PRESTO (No.~JPMJPR15R5) and CREST (No.~JPMJCR16F1), JST (No.~JP16H00924), Japan.
JF and MK contributed equally to this work.
{volume} {31}},\ \bibinfo {pages} {150} (\bibinfo {year} {2010})}\BibitemShut
{NoStop}%
\bibitem [{\citenamefont {Wilber}\ \emph {et~al.}(2007)\citenamefont {Wilber},
\citenamefont {Doye}, \citenamefont {Louis}, \citenamefont {Noya},
\citenamefont {Miller},\ and\ \citenamefont {Wong}}]{Wilber2007}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont
{Wilber}}, \bibinfo {author} {\bibfnamefont {J.~P.~K.}\ \bibnamefont {Doye}},
\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Louis}}, \bibinfo
{author} {\bibfnamefont {E.~G.}\ \bibnamefont {Noya}}, \bibinfo {author}
{\bibfnamefont {M.~A.}\ \bibnamefont {Miller}}, \ and\ \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Wong}},\ }\href {\doibase
10.1063/1.2759922} {\bibfield {journal} {\bibinfo {journal} {J.~Chem.\
Phys.}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {085106}
(\bibinfo {year} {2007})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Noya}\ \emph {et~al.}(2007)\citenamefont {Noya},
\citenamefont {Vega}, \citenamefont {Doye},\ and\ \citenamefont
{Louis}}]{Noya2007}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont
{Noya}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Vega}}, \bibinfo
{author} {\bibfnamefont {J.~P.~K.}\ \bibnamefont {Doye}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.~A.}\ \bibnamefont {Louis}},\ }\href {\doibase
10.1063/1.2752155} {\bibfield {journal} {\bibinfo {journal} {J.~Chem.\
Phys.}\ }\textbf {\bibinfo {volume} {127}},\ \bibinfo {pages} {054501}
(\bibinfo {year} {2007})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Noya}\ \emph {et~al.}(2010)\citenamefont {Noya},
\citenamefont {Vega}, \citenamefont {Doye},\ and\ \citenamefont
{Louis}}]{Noya2010}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont
{Noya}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Vega}}, \bibinfo
{author} {\bibfnamefont {J.~P.~K.}\ \bibnamefont {Doye}}, \ and\ \bibinfo
{author} {\bibfnamefont {A.~A.}\ \bibnamefont {Louis}},\ }\href {\doibase
10.1063/1.3454907} {\bibfield {journal} {\bibinfo {journal} {J.~Chem.\
Phys.}\ }\textbf {\bibinfo {volume} {132}},\ \bibinfo {pages} {234511}
(\bibinfo {year} {2010})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Williamson}\ \emph {et~al.}(2011)\citenamefont
{Williamson}, \citenamefont {Wilber}, \citenamefont {Doye},\ and\
\citenamefont {Louis}}]{Williamson2011b}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont
{Williamson}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont
{Wilber}}, \bibinfo {author} {\bibfnamefont {J.~P.~K.}\ \bibnamefont {Doye}},
\ and\ \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Louis}},\
}\href {\doibase 10.1039/C0SM01377C} {\bibfield {journal} {\bibinfo
{journal} {Soft Matter}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages}
{3423} (\bibinfo {year} {2011})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Doppelbauer}\ \emph {et~al.}(2012)\citenamefont
{Doppelbauer}, \citenamefont {Noya}, \citenamefont {Bianchi},\ and\
\citenamefont {Kahl}}]{Doppelbauer2012b}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Doppelbauer}}, \bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont
{Noya}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Bianchi}}, \
and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kahl}},\ }\href
{\doibase 10.1088/0953-8984/24/28/284124} {\bibfield {journal} {\bibinfo
{journal} {J.\ Phys.:\ Cond.\ Matt.}\ }\textbf {\bibinfo {volume} {24}},\
\bibinfo {pages} {284124} (\bibinfo {year} {2012})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Doye}\ \emph {et~al.}(2007)\citenamefont {Doye},
\citenamefont {Louis}, \citenamefont {Lin}, \citenamefont {Allen},
\citenamefont {Noya}, \citenamefont {Wilber}, \citenamefont {Kok},\ and\
\citenamefont {Lyus}}]{Doye2007}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~P.~K.}\
\bibnamefont {Doye}}, \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont
{Louis}}, \bibinfo {author} {\bibfnamefont {I.~C.}\ \bibnamefont {Lin}},
\bibinfo {author} {\bibfnamefont {L.~R.}\ \bibnamefont {Allen}}, \bibinfo
{author} {\bibfnamefont {E.~G.}\ \bibnamefont {Noya}}, \bibinfo {author}
{\bibfnamefont {A.~W.}\ \bibnamefont {Wilber}}, \bibinfo {author}
{\bibfnamefont {H.~C.}\ \bibnamefont {Kok}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Lyus}},\ }\href {\doibase 10.1039/b614955c}
{\bibfield {journal} {\bibinfo {journal} {Phys.\ Chem.\ Chem.\ Phys.}\
}\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {2197} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Doppelbauer}, \citenamefont {Bianchi},\ and\
\citenamefont {Kahl}(2010)}]{Doppelbauer2010}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Doppelbauer}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Bianchi}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Kahl}},\ }\href {\doibase 10.1088/0953-8984/22/10/104105} {\bibfield
{journal} {\bibinfo {journal} {J.\ Phys.:\ Cond.\ Matt.}\ }\textbf {\bibinfo
{volume} {22}},\ \bibinfo {pages} {104105} (\bibinfo {year}
{2010})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {DeVries}\ \emph {et~al.}(2007)\citenamefont
{DeVries}, \citenamefont {Brunnbauer}, \citenamefont {Hu}, \citenamefont
{Jackson}, \citenamefont {Long}, \citenamefont {Neltner}, \citenamefont
{Uzun}, \citenamefont {Wunsch},\ and\ \citenamefont
{Stellacci}}]{DeVries2007}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont
{DeVries}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Brunnbauer}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Hu}}, \bibinfo {author}
{\bibfnamefont {A.~M.}\ \bibnamefont {Jackson}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Long}}, \bibinfo {author} {\bibfnamefont
{B.~T.}\ \bibnamefont {Neltner}}, \bibinfo {author} {\bibfnamefont
{O.}~\bibnamefont {Uzun}}, \bibinfo {author} {\bibfnamefont {B.~H.}\
\bibnamefont {Wunsch}}, \ and\ \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Stellacci}},\ }\href {\doibase 10.1126/science.1133162}
{\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo
{volume} {315}},\ \bibinfo {pages} {358} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Cho}\ \emph {et~al.}(2007)\citenamefont {Cho},
\citenamefont {Yi}, \citenamefont {Kim}, \citenamefont {Jeon}, \citenamefont
{Elsesser}, \citenamefont {Yu}, \citenamefont {Yang},\ and\ \citenamefont
{Pine}}]{Cho2007}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-S.}\ \bibnamefont
{Cho}}, \bibinfo {author} {\bibfnamefont {G.-R.}\ \bibnamefont {Yi}},
\bibinfo {author} {\bibfnamefont {S.-H.}\ \bibnamefont {Kim}}, \bibinfo
{author} {\bibfnamefont {S.-J.}\ \bibnamefont {Jeon}}, \bibinfo {author}
{\bibfnamefont {M.~T.}\ \bibnamefont {Elsesser}}, \bibinfo {author}
{\bibfnamefont {H.~K.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont
{S.-M.}\ \bibnamefont {Yang}}, \ and\ \bibinfo {author} {\bibfnamefont
{D.~J.}\ \bibnamefont {Pine}},\ }\href {\doibase 10.1021/cm070051w}
{\bibfield {journal} {\bibinfo {journal} {Chem.\ Mater.}\ }\textbf
{\bibinfo {volume} {19}},\ \bibinfo {pages} {3183} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Yang}\ \emph {et~al.}(2008)\citenamefont {Yang},
\citenamefont {Kim}, \citenamefont {Lim},\ and\ \citenamefont
{Yi}}]{Yang2008}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.-M.}\ \bibnamefont
{Yang}}, \bibinfo {author} {\bibfnamefont {S.-H.}\ \bibnamefont {Kim}},
\bibinfo {author} {\bibfnamefont {J.-M.}\ \bibnamefont {Lim}}, \ and\
\bibinfo {author} {\bibfnamefont {G.-R.}\ \bibnamefont {Yi}},\ }\href
{\doibase 10.1039/b716393b} {\bibfield {journal} {\bibinfo {journal} {J.
Mater. Chem.}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {2177}
(\bibinfo {year} {2008})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Kraft}\ \emph {et~al.}(2009)\citenamefont {Kraft},
\citenamefont {Vlug}, \citenamefont {van Kats}, \citenamefont {van
Blaaderen}, \citenamefont {Imhof},\ and\ \citenamefont {Kegel}}]{Kraft2009}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont
{Kraft}}, \bibinfo {author} {\bibfnamefont {W.~S.}\ \bibnamefont {Vlug}},
\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {van Kats}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {van Blaaderen}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Imhof}}, \ and\ \bibinfo {author}
{\bibfnamefont {W.~K.}\ \bibnamefont {Kegel}},\ }\href {\doibase
10.1021/ja8079803} {\bibfield {journal} {\bibinfo {journal} {J.~Am.\ Chem.\
Soc.}\ }\textbf {\bibinfo {volume} {131}},\ \bibinfo {pages} {1182} (\bibinfo
{year} {2009})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2008)\citenamefont {Wang},
\citenamefont {Xia}, \citenamefont {Li}, \citenamefont {Ravaine},\ and\
\citenamefont {Zhao}}]{Wang2008}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Xia}}, \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Ravaine}}, \ and\ \bibinfo {author}
{\bibfnamefont {X.~S.}\ \bibnamefont {Zhao}},\ }\href {\doibase
10.1002/anie.200801061} {\bibfield {journal} {\bibinfo {journal} {Angew.\
Chem., Int.\ Ed.}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages}
{4725} (\bibinfo {year} {2008})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Mao}, \citenamefont {Xu},\ and\ \citenamefont
{Wang}(2010)}]{Mao2010}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Mao}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Xu}}, \ and\
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wang}},\ }\href {\doibase
10.1002/adfm.200902076} {\bibfield {journal} {\bibinfo {journal} {Adv.\
Funct.\ Mater.}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo {pages} {1053}
(\bibinfo {year} {2010})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Duguet}\ \emph {et~al.}(2011)\citenamefont {Duguet},
\citenamefont {Desert}, \citenamefont {Perro},\ and\ \citenamefont
{Ravaine}}]{Duguet2011}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Duguet}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Desert}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Perro}}, \ and\ \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Ravaine}},\ }\href {\doibase
10.1039/C0CS00048E} {\bibfield {journal} {\bibinfo {journal} {Chem.\ Soc.\
Rev.}\ }\textbf {\bibinfo {volume} {40}},\ \bibinfo {pages} {941} (\bibinfo
{year} {2011})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2012)\citenamefont {Wang},
\citenamefont {Wang}, \citenamefont {Breed}, \citenamefont {Manoharan},
\citenamefont {Feng}, \citenamefont {Hollingsworth}, \citenamefont {Weck},\
and\ \citenamefont {Pine}}]{Wang2012}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wang}}, \bibinfo
{author} {\bibfnamefont {D.~R.}\ \bibnamefont {Breed}}, \bibinfo {author}
{\bibfnamefont {V.~N.}\ \bibnamefont {Manoharan}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Feng}}, \bibinfo {author} {\bibfnamefont
{A.~D.}\ \bibnamefont {Hollingsworth}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Weck}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\
\bibnamefont {Pine}},\ }\href {\doibase 10.1038/nature11564} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {491}},\
\bibinfo {pages} {51} (\bibinfo {year} {2012})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Chen}, \citenamefont {Bae},\ and\ \citenamefont
{Granick}(2011)}]{Chen2011}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont {Bae}}, \
and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Granick}},\ }\href
{\doibase 10.1038/nature09713} {\bibfield {journal} {\bibinfo {journal}
{Nature}\ }\textbf {\bibinfo {volume} {469}},\ \bibinfo {pages} {381}
(\bibinfo {year} {2011})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Kern}\ and\ \citenamefont
{Frenkel}(2003)}]{Kern2003}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Kern}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Frenkel}},\
}\href {\doibase 10.1063/1.1569473} {\bibfield {journal} {\bibinfo
{journal} {J.~Chem.\ Phys.}\ }\textbf {\bibinfo {volume} {118}},\ \bibinfo
{pages} {9882} (\bibinfo {year} {2003})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Zhang}\ and\ \citenamefont
{Glotzer}(2004)}]{Zhang2004}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Zhang}}\ and\ \bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont
{Glotzer}},\ }\href {\doibase 10.1021/nl0493500} {\bibfield {journal}
{\bibinfo {journal} {Nano Lett.}\ }\textbf {\bibinfo {volume} {4}},\
\bibinfo {pages} {1407} (\bibinfo {year} {2004})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Bianchi}\ \emph {et~al.}(2006)\citenamefont
{Bianchi}, \citenamefont {Largo}, \citenamefont {Tartaglia}, \citenamefont
{Zaccarelli},\ and\ \citenamefont {Sciortino}}]{Bianchi2006}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Bianchi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Largo}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Tartaglia}}, \bibinfo
{author} {\bibfnamefont {E.}~\bibnamefont {Zaccarelli}}, \ and\ \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Sciortino}},\ }\href {\doibase
10.1103/PhysRevLett.97.168301} {\bibfield {journal} {\bibinfo {journal}
{Phys.\ Rev.\ Lett.}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo {pages}
{168301} (\bibinfo {year} {2006})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Sciortino}, \citenamefont {Giacometti},\ and\
\citenamefont {Pastore}(2009)}]{Sciortino2009}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Sciortino}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Giacometti}}, \ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Pastore}},\ }\href {\doibase 10.1103/PhysRevLett.103.237801} {\bibfield
{journal} {\bibinfo {journal} {Phys.\ Rev.\ Lett.}\ }\textbf {\bibinfo
{volume} {103}},\ \bibinfo {pages} {237801} (\bibinfo {year}
{2009})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Romano}\ and\ \citenamefont
{Sciortino}(2011)}]{Romano2011b}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Romano}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Sciortino}},\ }\href {\doibase 10.1039/C0SM01494J} {\bibfield {journal}
{\bibinfo {journal} {Soft Matter}\ }\textbf {\bibinfo {volume} {7}},\
\bibinfo {pages} {5799} (\bibinfo {year} {2011})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Bianchi}, \citenamefont {Blaak},\ and\ \citenamefont
{Likos}(2011)}]{Bianchi2011}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Bianchi}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blaak}}, \
and\ \bibinfo {author} {\bibfnamefont {C.~N.}\ \bibnamefont {Likos}},\ }\href
{\doibase 10.1039/C0CP02296A} {\bibfield {journal} {\bibinfo {journal}
{Phys.\ Chem.\ Chem.\ Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo
{pages} {6397} (\bibinfo {year} {2011})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Frank}\ and\ \citenamefont
{Kasper}(1958)}]{Frank1958}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~C.}\ \bibnamefont
{Frank}}\ and\ \bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Kasper}},\ }\href {\doibase 10.1107/S0365110X58000487} {\bibfield {journal}
{\bibinfo {journal} {Acta Crystallogr.}\ }\textbf {\bibinfo {volume} {11}},\
\bibinfo {pages} {184} (\bibinfo {year} {1958})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Frank}\ and\ \citenamefont
{Kasper}(1959)}]{Frank1959}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~C.}\ \bibnamefont
{Frank}}\ and\ \bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Kasper}},\ }\href {\doibase 10.1107/S0365110X59001499} {\bibfield {journal}
{\bibinfo {journal} {Acta Crystallogr.}\ }\textbf {\bibinfo {volume} {12}},\
\bibinfo {pages} {483} (\bibinfo {year} {1959})}\BibitemShut {NoStop}%
\bibitem [{Note1()}]{Note1}%
\BibitemOpen
\bibinfo {note} {See supplementary material in the appendix for implementation
details and a summary of simulation methods used.}\BibitemShut {Stop}%
\bibitem [{Note2()}]{Note2}%
\BibitemOpen
\bibinfo {note} {This value of $\upDelta s$ is significantly larger than the
tiling entropy of a random Stampfli tiling, but somewhat smaller than that of
a maximally random square-triangle tiling, where both have been evaluated at
zero phason strain \cite {Oxborrow1993}.}\BibitemShut {Stop}%
\bibitem [{\citenamefont {Oxborrow}\ and\ \citenamefont
{Henley}(1993)}]{Oxborrow1993}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Oxborrow}}\ and\ \bibinfo {author} {\bibfnamefont {C.~L.}\ \bibnamefont
{Henley}},\ }\href {\doibase 10.1103/PhysRevB.48.6966} {\bibfield {journal}
{\bibinfo {journal} {Phys.\ Rev.\ B}\ }\textbf {\bibinfo {volume} {48}},\
\bibinfo {pages} {6966} (\bibinfo {year} {1993})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Kofke}(1993{\natexlab{a}})}]{Kofke1993}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Kofke}},\ }\href {\doibase 10.1080/00268979300100881} {\bibfield {journal}
{\bibinfo {journal} {Mol.\ Phys.}\ }\textbf {\bibinfo {volume} {78}},\
\bibinfo {pages} {1331} (\bibinfo {year} {1993}{\natexlab{a}})}\BibitemShut
{NoStop}%
\bibitem [{\citenamefont {Kofke}(1993{\natexlab{b}})}]{Kofke1993b}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Kofke}},\ }\href {\doibase 10.1063/1.465023} {\bibfield {journal} {\bibinfo
{journal} {J.~Chem.\ Phys.}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo
{pages} {4149} (\bibinfo {year} {1993}{\natexlab{b}})}\BibitemShut {NoStop}%
\bibitem [{Note3()}]{Note3}%
\BibitemOpen
\bibinfo {note} {We have also considered the additional non-plastic hexagonal
crystal structures suggested by Doppelbauer \protect \textit {et al.}\ for
pentavalent patchy particles \cite {Doppelbauer2010}, but these were found
quickly to lose their rotational specificity at the temperatures used in our
simulations.}\BibitemShut {Stop}%
\bibitem [{\citenamefont {Hagen}\ and\ \citenamefont
{Frenkel}(1994)}]{Hagen1994}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.~J.}\
\bibnamefont {Hagen}}\ and\ \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Frenkel}},\ }\href {\doibase 10.1063/1.467526} {\bibfield
{journal} {\bibinfo {journal} {J.~Chem.\ Phys.}\ }\textbf {\bibinfo {volume}
{101}},\ \bibinfo {pages} {4093} (\bibinfo {year} {1994})}\BibitemShut
{NoStop}%
\bibitem [{\citenamefont {Vliegenthart}, \citenamefont {Lodge},\ and\
\citenamefont {Lekkerkerker}(1999)}]{Vliegenthart1999}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont
{Vliegenthart}}, \bibinfo {author} {\bibfnamefont {J.~F.~M.}\ \bibnamefont
{Lodge}}, \ and\ \bibinfo {author} {\bibfnamefont {H.~N.~W.}\ \bibnamefont
{Lekkerkerker}},\ }\href {\doibase 10.1016/S0378-4371(98)00515-9} {\bibfield
{journal} {\bibinfo {journal} {Physica A}\ }\textbf {\bibinfo {volume}
{263}},\ \bibinfo {pages} {378} (\bibinfo {year} {1999})}\BibitemShut
{NoStop}%
\bibitem [{\citenamefont {Doye}\ and\ \citenamefont {Wales}(1996)}]{Doye1996}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~P.~K.}\
\bibnamefont {Doye}}\ and\ \bibinfo {author} {\bibfnamefont {D.~J.}\
\bibnamefont {Wales}},\ }\href {\doibase 10.1126/science.271.5248.484}
{\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo
{volume} {271}},\ \bibinfo {pages} {484} (\bibinfo {year}
{1996})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Yan}\ \emph {et~al.}(2003)\citenamefont {Yan},
\citenamefont {Park}, \citenamefont {Finkelstein}, \citenamefont {Reif},\
and\ \citenamefont {LaBean}}]{Yan2003}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Yan}}, \bibinfo {author} {\bibfnamefont {S.~H.}\ \bibnamefont {Park}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Finkelstein}}, \bibinfo
{author} {\bibfnamefont {J.~H.}\ \bibnamefont {Reif}}, \ and\ \bibinfo
{author} {\bibfnamefont {T.~H.}\ \bibnamefont {LaBean}},\ }\href {\doibase
10.1126/science.1089389} {\bibfield {journal} {\bibinfo {journal}
{Science}\ }\textbf {\bibinfo {volume} {301}},\ \bibinfo {pages} {1882}
(\bibinfo {year} {2003})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {He}\ \emph {et~al.}(2005)\citenamefont {He},
\citenamefont {Chen}, \citenamefont {Liu}, \citenamefont {Ribbe},\ and\
\citenamefont {Mao}}]{He2005b}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{He}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Chen}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Liu}}, \bibinfo {author}
{\bibfnamefont {A.~E.}\ \bibnamefont {Ribbe}}, \ and\ \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Mao}},\ }\href {\doibase 10.1021/ja0541938}
{\bibfield {journal} {\bibinfo {journal} {J.~Am.\ Chem.\ Soc.}\ }\textbf
{\bibinfo {volume} {127}},\ \bibinfo {pages} {12202} (\bibinfo {year}
{2005})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {He}\ \emph {et~al.}(2006)\citenamefont {He},
\citenamefont {Tian}, \citenamefont {Ribbe},\ and\ \citenamefont
{Mao}}]{He2006}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{He}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Tian}}, \bibinfo
{author} {\bibfnamefont {A.~E.}\ \bibnamefont {Ribbe}}, \ and\ \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Mao}},\ }\href {\doibase
10.1021/ja0665141} {\bibfield {journal} {\bibinfo {journal} {J.~Am.\ Chem.\
Soc.}\ }\textbf {\bibinfo {volume} {128}},\ \bibinfo {pages} {15978}
(\bibinfo {year} {2006})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Zhang}\ \emph {et~al.}(2008)\citenamefont {Zhang},
\citenamefont {Su}, \citenamefont {He}, \citenamefont {Zhao}, \citenamefont
{Fang}, \citenamefont {Ribbe}, \citenamefont {Jiang},\ and\ \citenamefont
{Mao}}]{Zhang2008}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Zhang}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Su}}, \bibinfo
{author} {\bibfnamefont {Y.}~\bibnamefont {He}}, \bibinfo {author}
{\bibfnamefont {X.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont
{P.-a.}\ \bibnamefont {Fang}}, \bibinfo {author} {\bibfnamefont {A.~E.}\
\bibnamefont {Ribbe}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Jiang}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Mao}},\
}\href {\doibase 10.1073/pnas.0803841105} {\bibfield {journal} {\bibinfo
{journal} {Proc.\ Natl.\ Acad.\ Sci.\ U.S.A.}\ }\textbf {\bibinfo {volume}
{105}},\ \bibinfo {pages} {10665} (\bibinfo {year} {2008})}\BibitemShut
{NoStop}%
\bibitem [{\citenamefont {Metropolis}\ \emph {et~al.}(1953)\citenamefont
{Metropolis}, \citenamefont {Rosenbluth}, \citenamefont {Rosenbluth},
\citenamefont {Teller},\ and\ \citenamefont {Teller}}]{Metropolis1953}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Metropolis}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont
{Rosenbluth}}, \bibinfo {author} {\bibfnamefont {M.~N.}\ \bibnamefont
{Rosenbluth}}, \bibinfo {author} {\bibfnamefont {A.~H.}\ \bibnamefont
{Teller}}, \ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Teller}},\ }\href {\doibase 10.1063/1.1699114} {\bibfield {journal}
{\bibinfo {journal} {J.~Chem.\ Phys.}\ }\textbf {\bibinfo {volume} {21}},\
\bibinfo {pages} {1087} (\bibinfo {year} {1953})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Frenkel}\ and\ \citenamefont
{Smit}(2002)}]{Frenkel2002}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Frenkel}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Smit}},\
}\href@noop {} {\emph {\bibinfo {title} {Understanding molecular simulation:
{F}rom algorithms to applications}}},\ \bibinfo {edition} {2nd}\ ed.\
(\bibinfo {publisher} {Elsevier Academic Press},\ \bibinfo {address} {San
Diego, London},\ \bibinfo {year} {2002})\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Eppenga}\ and\ \citenamefont
{Frenkel}(1984)}]{Eppenga1984}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Eppenga}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Frenkel}},\ }\href {\doibase 10.1080/00268978400101951} {\bibfield
{journal} {\bibinfo {journal} {Mol.\ Phys.}\ }\textbf {\bibinfo {volume}
{52}},\ \bibinfo {pages} {1303} (\bibinfo {year} {1984})}\BibitemShut
{NoStop}%
\bibitem [{\citenamefont {Ladd}\ and\ \citenamefont
{Woodcock}(1977)}]{Ladd1977}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.~C.}\
\bibnamefont {Ladd}}\ and\ \bibinfo {author} {\bibfnamefont {L.~V.}\
\bibnamefont {Woodcock}},\ }\href {\doibase 10.1016/0009-2614(77)85375-X}
{\bibfield {journal} {\bibinfo {journal} {Chem.\ Phys.\ Lett.}\ }\textbf
{\bibinfo {volume} {51}},\ \bibinfo {pages} {155} (\bibinfo {year}
{1977})}\BibitemShut {NoStop}%
\bibitem [{\citenamefont {Gao}, \citenamefont {Zeng},\ and\ \citenamefont
{Tanaka}(2000)}]{Gao2000}%
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~T.}\ \bibnamefont
{Gao}}, \bibinfo {author} {\bibfnamefont {X.~C.}\ \bibnamefont {Zeng}}, \
and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Tanaka}},\ }\href
{\doibase 10.1063/1.481457} {\bibfield {journal} {\bibinfo {journal}
{J.~Chem.\ Phys.}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages}
{8534} (\bibinfo {year} {2000})}\BibitemShut {NoStop}%
\end{thebibliography}%
\vspace{2cm}
\section{Introduction}\label{intro}
Magnetism is an inherently quantum mechanical effect that ultimately
arises from the exchange processes in an interacting many-particle
quantum system. Thus, magnetic systems can reveal much about the
richness of quantum mechanics itself, especially when cooperative
effects are operational and emergent effective low-energy theories
describe the relevant physics. A few notable theoretical examples are the spin
liquid phases obtained in frustrated
magnets\cite{Fazekas:pm74,Hermele:prb04,Alicea:prl05,Moessner:prb01,Sachdev:prb92}.
Quantum effects may play a central role in other types of order as well.

Recent experiments on a number of insulating chromate compounds,
namely CdCr$_2$O$_4$\ and HgCr$_2$O$_4$, have shown peculiar features in the
low-temperature magnetization as a function of applied magnetic field.
At low temperatures the magnetization grows linearly with magnetic
field up to some critical value at which point there is a sharp jump
in magnetization onto a rather wide plateau with half the full
saturation magnetization.\cite{Ueda:prb06,Ueda:prl05,Matsuda:06} With
sufficiently large fields it is possible to observe a smooth
transition off the half-magnetization plateau and a gradual increase
in magnetization up to what may be a fully polarized plateau
state.\cite{Ueda:prb06} As described in
Ref.~\onlinecite{Bergman:prl05}, it is expected that the magnetism in
these compounds is well described by the Heisenberg antiferromagnet
(AFM) of spin $s = \frac{3}{2}$ on the pyrochlore lattice.

Motivated by these experimental examples of a half polarization
plateau in the pyrochlore Heisenberg AFM, we conduct a theoretical
study of the quantum pyrochlore Heisenberg AFM for any spin value $s$,
in a strong magnetic field, focusing on a half polarization plateau.
While the physics of HgCr$_2$O$_4$\ is probably determined to a large degree by
the classical physics of spin-lattice interactions,\cite{Bergman:prb06,Penc:prl04,Penc:05,Shannon:05,Fennie:06} it
may be that in other similar compounds where coupling to phonons is
weak, quantum effects could play a significant role.
In any case, the general problem of determining the spin state on
plateaus of non-zero magnetization in frustrated magnets occurs in a
large number of materials\cite{Chandra:prb04,Narumi:el04,Matsumoto:prb03,Tanaka:02,Miyahara:03,Misguich:prl01,Uchida:jpsj01,Momoi:prb00}.
At fields large enough to induce
substantial magnetization, the ground state is expected to be very
different from the zero field state and one would ideally pursue a
theoretical approach that takes advantage of the large external field.
The methods developed here indeed use this explicitly, and the particular
application to the half-polarized pyrochlore magnetization plateau
provides a rather non-trivial test bed. We make use of the large
field to justify an easy-axis approximation to a nearest-neighbor XXZ
antiferromagnet in an external field. Physically, at large fields
the spin is oriented on average more along the field axis than
transverse to it. Furthermore, specifically on a magnetization
plateau, general arguments imply that the static transverse moment
vanishes {\sl on every site}, $\langle S_i^\pm\rangle=0$. Thus we
expect that Degenerate Perturbation Theory (DPT) about the easy-axis
(Ising) limit should be justified on the plateau, and one may thereby
derive an effective Hamiltonian. This effective Hamiltonian acts in
the {\sl constrained} ``3:1'' space of states with $3$ majority spins with
$S_i^z=+s$ and $1$ minority spin with $S_i^z=-s$ on each tetrahedron.
This space is macroscopically degenerate, and all its members have
half the saturation magnetization.

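As a concrete check of this counting (a minimal single-tetrahedron sketch, not part of the original derivation; the parameters $s=3/2$, $J=1$ and the field $h=2Js$ are illustrative choices), the following Python snippet enumerates all Ising configurations $S_i^z=\pm s$ on one tetrahedron and verifies that, for a field inside the plateau window $Js < h < 3Js$, the ground states are exactly the four 3:1 states, each carrying half of the saturation magnetization $4s$:

```python
from itertools import product

def tetrahedron_ground_states(s=1.5, J=1.0, h=None):
    """Ground states of the Ising-limit energy E = J*sum_{i<j} Si*Sj - h*sum_i Si
    on a single tetrahedron (6 bonds), with each Si restricted to +s or -s."""
    if h is None:
        h = 2.0 * J * s  # a representative field inside the window J*s < h < 3*J*s
    configs = list(product((+s, -s), repeat=4))

    def energy(c):
        pair = sum(c[i] * c[j] for i in range(4) for j in range(i + 1, 4))
        return J * pair - h * sum(c)

    e0 = min(map(energy, configs))
    return [c for c in configs if abs(energy(c) - e0) < 1e-12]

if __name__ == "__main__":
    s = 1.5
    gs = tetrahedron_ground_states(s=s)
    # Exactly the four "3:1" states: three spins at +s, one at -s.
    assert len(gs) == 4
    assert all(sorted(c) == [-s, s, s, s] for c in gs)
    # Each carries magnetization 2s, i.e. half of the saturation value 4s.
    assert all(sum(c) == 2 * s for c in gs)
    print("3:1 ground manifold verified:", gs)
```

The same enumeration reproduces the unmagnetized 2:2 manifold below the window and the fully polarized state above it, which is consistent with the sharp jump onto the plateau and the eventual saturation described above.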
The reader may well wonder whether there is any need for an approach
of this sort, given the successes of the large $s$ semiclassical
spin-wave method in many other contexts. Indeed, for
unfrustrated antiferromagnets, it is known that the $1/s$ expansion
gives reasonably convergent results even down to $s=1/2$. However,
this convergence is strongly dependent upon the lattice -- large
corrections to the spin-wave dispersion have recently been obtained
even for the rather weakly-frustrated triangular
lattice.\cite{Starykh:06,Zheng:06} In highly frustrated magnets such as the
pyrochlore, another approach is warranted. Particularly worrisome in
the $1/s$ expansion is the difficulty of treating {\sl tunneling},
which is non-perturbative in this method\cite{Henley:prb93,Hizi:prb06,Henley:prl06}. By contrast, in the
easy-axis expansion, tunneling and virtual exchange are treated on the
same footing. Of course, for large $s$ both approaches must agree,
and we will indeed check that this is the case in our specific
application.
The effective Hamiltonian describing splittings within the degenerate
manifold of Ising ground states will generally take the form of a
constrained quantum Ising model. As explained in a previous
publication,\cite{Bergman:prl05} for the case of the pyrochlore
half-magnetization plateau in the easy-axis limit, it can be cast in
the form of a ``Quantum Dimer Model'' (QDM) on the bipartite diamond
lattice. Such QDMs are known to display both ordered and disordered
(spin-liquid) ground states in different regions of their phase
diagrams.\cite{RK:prl88,Moessner:prb01,Hermele:prb04,Moessner:prl01,Huse:prl03,Syljuasen:05}
We derive the parameters of this QDM for general $s$, and
discuss several limits and the expectations for the plateau ground
state. For simplicity of presentation we perform this calculation
here for the simplest XXZ spin model with no additional anisotropy or
other spin interaction terms. However, the method is
straightforwardly generalized to include other on-site (e.g. uniaxial
anisotropy\cite{Damle:06}) or nearest-neighbor (e.g. biquadratic) interactions
without substantial increase in computational complexity.
More generally, the flexibility to include such effects allows one to
consider the quantum effects upon the ground state selection within a
magnetization plateau even when the dominant mechanism of plateau {\sl
stabilization} is a classical one.
A remarkable feature of the DPT is that all diagonal terms describing
splitting of the low energy manifold {\sl vanish below sixth order}!
For $s\geq 1$, off-diagonal tunneling terms also vanish up to this
order, so that the entire effective Hamiltonian is determined by terms
of sixth order and higher. This behavior is similar to a result of
Henley\cite{Hizi:prb06} that in the large-$s$ limit, the effective
Hamiltonian is expressed entirely in terms of a ``spin flux'' variable
involving a product of $6$ spins around an elementary loop of the
lattice. We show here that our result is rather general, and
originates from two basic features: the absence of non-trivial loops
of length less than $6$ links on the pyrochlore lattice, and the fact
that all low-energy spin states on a single tetrahedron are
permutations of one another. From our proof of this result, it can
readily be seen that similar behavior holds for any lattice of corner
sharing simplexes with only on-site and nearest-neighbor interactions
and permutation-related ground states on a single simplex. We will
apply the methods of this paper to other such problems of interest in
future work.
For the pyrochlore magnetization plateau and QDM studied here, the
conclusions are as follows. For $s>3/2$, we find that {\sl diagonal}
terms in the QDM are much larger than off-diagonal ones. In this
case, the latter are negligible, and because the diagonal QDM is
effectively classical, it is soluble and the ground state is {\sl
ordered}. We discuss the preferred spin ordered states as a
function of $s$. For $s\leq 3/2$, the off-diagonal terms are
non-negligible, and a simple solution is no longer available. For
$s=3/2$, various arguments lead us to still expect an ordered state.
After correction of an error in Ref.\onlinecite{Bergman:prl05}, we
find two candidate states for this case. One of these is the {\bf R}
state discussed previously in Ref.\onlinecite{Bergman:prl05}, which is
the state containing the maximal number of hexagonal loops with
alternating spins (flippable plaquettes in the QDM language). Another
candidate is a $\sqrt{3}\times\sqrt{3}$ state with a planar structure.
In fact, the diagonal terms in the effective Hamiltonian do not
entirely fix the relation between adjacent planes in the latter state,
so there is additional degeneracy whose breaking we cannot resolve at
the present time. Because the off-diagonal and diagonal terms are
comparable in this case, however, some other states may also be
possible, and a definite conclusion must await more serious
computational (e.g. quantum Monte Carlo) analysis. For $s\leq 1$, the
off-diagonal term in the QDM is dominant. In this case, either the
${\bf R}$ state or a $U(1)$ spin liquid is the most likely candidate
ground state. Indeed, as argued in Ref.\onlinecite{Bergman:prb05}, it
is quite possible that the simplest QDM displays a direct quantum
phase transition between these two states.
The ground state of the QDM just discussed is determined only by the
{\sl dimensionless} ratios of coupling constants. However, the DPT
calculation also gives the overall scale of the effective interaction
in terms of the microscopic exchange $J$. For $s=3/2$, the largest
interaction energy (extrapolated from the easy axis perturbation
theory to the Heisenberg limit) generated by quantum fluctuations is
only $\approx 0.02J$. Were this the true scale for ground state
selection in the degenerate 3:1 manifold in HgCr$_2$O$_4$, the magnetic
ordering would occur at a temperature of this order, i.e. $\approx
0.2K$. Experimentally, however, magnetic ordering is observed at a
substantial fraction of the temperature of onset of the plateau
formation, which is around $6K$. The closeness of the ordering and
plateau scales in experiment suggests that both are determined by
the same physical mechanism, and argues against the importance of
quantum fluctuations in the ground state selection in HgCr$_2$O$_4$. Indeed,
we have recently shown\cite{Bergman:prb06} that the same spin-lattice
coupling which leads to plateau formation can also account for the
state selection. Curiously, the {\bf R} state is also stabilized by
the lattice mechanism. This is symptomatic of the very strong
constraints defining the 3:1 QDM states, which lead rather different
microscopic interactions to favor the same ground state. For $s=1$
and $s=1/2$, the DPT gives much larger characteristic scales for the
QDM, the off-diagonal term being of order $0.16J$ and $1.5J$ in the
two cases. Thus such $s\leq 1$ antiferromagnets, if realized
experimentally, would be promising systems to observe quantum
fluctuation effects.
This manuscript is organized as follows. In Section~\ref{easy_axis},
we describe our theoretical model, the nearest-neighbor quantum
Heisenberg antiferromagnet on a pyrochlore lattice in an external
field. An easy-axis limit is taken under the assumption of the
suppression of transverse spin fluctuations in large magnetic fields.
After applying degenerate perturbation theory (DPT) in the transverse
spin fluctuations, an effective dimer model emerges in Section~\ref{sec:DPT_Leon} that can be used
to obtain an approximate ground state of the original model. In
Section~\ref{large_s} we carry out a large $s$ analysis of the XXZ
model, deriving a different effective Hamiltonian splitting the 3:1
manifold of degenerate states. This new effective Hamiltonian turns out
to coincide with the $s \rightarrow \infty$ limit of the effective
Hamiltonian from the DPT analysis. In Section~\ref{diagonal_gs} we
explore the ground state of the diagonal part of the
effective Hamiltonian from DPT. In Section~\ref{Effective_QDM} we
explore in more generality the appropriate Quantum Dimer Model (QDM)
of which all our effective Hamiltonians are special cases.
We conclude the main text of this paper with a discussion of our results in
Section~\ref{sec:discussion}.
In Appendix~\ref{app:PlateauWidth} we analyze how the half polarization plateau
is modified by quantum fluctuations. An alternative method of performing DPT
is presented in Appendix~\ref{app:Other_DPT}, and shows perfect agreement with
the result of Section~\ref{sec:DPT_Leon}. Finally, in Appendix~\ref{app:root3_degeneracy}
we explore the states degenerate with the $\sqrt{3}\times\sqrt{3}$ states, found for $s = \frac{3}{2}$.
\section{Models}
\label{easy_axis}
\subsection{Hamiltonians and Limits}
We begin with the simple spin-$s$ Heisenberg antiferromagnet (AFM) residing
on the sites of the pyrochlore lattice in the presence of a magnetic field ${\bf H}$,
\begin{equation}
{\mathcal H} = J \sum_{\langle i j \rangle} {\bf S}_i \cdot {\bf S}_j - {\bf H} \cdot \sum_j {\bf S}_j
\; .
\end{equation}
On the pyrochlore lattice one may recast the nearest-neighbor exchange
in terms of the total spin on tetrahedra using the identity
\begin{equation}
2 \sum_{\langle i j \rangle} {\bf S}_i \cdot {\bf S}_j =
\sum_t ({\bf S}_t)^2 - \sum_t \sum_{j \in t} ({\bf S}_j)^2
\; ,
\end{equation}
where ${\bf S}_t = \sum_{j \in t} {\bf S}_j$ is the sum of spins on a
tetrahedron labeled by $t$, and $({\bf S}_j)^2 = s(s+1)$ is a
constant. This gives the more convenient form
\begin{equation}\label{Heisenberg}
{\mathcal H} = \frac{J}{2} \sum_t \left[ ({\bf S}_t - {\bf h})^2 - {\bf h}^2 \right]
\; ,
\end{equation}
where we have introduced the dimensionless magnetic field ${\bf h} =
{\bf H}/2J = h {\hat z}$,
and ignored a trivial constant term in the Hamiltonian.
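As an illustrative numerical check (a Python sketch, not part of the text), one can verify the single-tetrahedron version of this identity for arbitrary classical spin vectors; the lattice identity then follows because each bond lies in exactly one tetrahedron while each site lies in two, which double-counts the on-site terms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four random classical spin vectors on one tetrahedron (every pair is a bond).
S = rng.normal(size=(4, 3))

# Single-tetrahedron version of the identity:
#   2 * sum_{i<j} S_i . S_j = (S_t)^2 - sum_j (S_j)^2,  with S_t = sum_j S_j.
lhs = 2 * sum(S[i] @ S[j] for i in range(4) for j in range(i + 1, 4))
St = S.sum(axis=0)
rhs = St @ St - sum(S[j] @ S[j] for j in range(4))

assert np.isclose(lhs, rhs)

# On the pyrochlore lattice each bond belongs to exactly one tetrahedron and
# each site to two; summing this relation over all tetrahedra therefore gives
#   2 sum_<ij> S_i.S_j = sum_t (S_t)^2 - sum_t sum_{j in t} (S_j)^2 .
```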
\subsubsection{Classical limit}
The form in Eq.\eqref{Heisenberg} makes the behavior in the large $s$
limit apparent. In this limit the spins behave classically, and one
may replace ${\bf S}_i\rightarrow s\hat{n}_i$, where $\hat{n}_i$ is a
unit vector. The ground states then consist simply of all states for
which ${\bf S}_t = s\sum_{i\in t} \hat{n}_i = {\bf h}$ on every
tetrahedron. This set has a large continuous degeneracy.
Furthermore, since the magnetization is simply half the sum of the ${\bf S}_t$
(because each spin is contained in two tetrahedra), this implies a
continuous linear behavior of the magnetization with field. Thus,
in this model magnetization plateaus can emerge only from quantum corrections to the
classical limit.
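The continuous degeneracy and the linear magnetization curve can be made concrete with an explicitly constructed canted state (an illustrative Python sketch; the parametrization below is just one choice among the continuously many classical ground states):

```python
import numpy as np

def canted_state(h, s=1.0):
    """Four classical unit vectors on a tetrahedron with s * sum(n_i) = h zhat.

    Two pairs are canted in perpendicular transverse planes so that the
    transverse components cancel.  Requires 0 <= h <= 4 s.
    """
    c = h / (4 * s)            # common z-component of each unit vector
    t = np.sqrt(1 - c**2)      # transverse component
    return s * np.array([[ t, 0, c],
                         [-t, 0, c],
                         [ 0,  t, c],
                         [ 0, -t, c]])

for h in np.linspace(0, 4, 9):
    S = canted_state(h)
    # The tetrahedron magnetization tracks the field continuously, S_t = h zhat,
    # so E_t = (J/2)[(S_t - h)^2 - h^2] attains its minimum -J h^2 / 2:
    assert np.allclose(S.sum(axis=0), [0, 0, h])
```

Since $S_t^z$ follows $h$ continuously, the classical magnetization is strictly linear in the field, with no plateaus.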
\subsubsection{Easy axis limit}
An alternative approach exploits the fact that, with the application of
the magnetic field, the global $SU(2)$ symmetry of the bare Heisenberg
model is broken down to a $U(1)$ symmetry (rotations about the magnetic
field direction). Moreover, when the magnetization per spin is
substantial, on average the transverse components $S_i^\pm$ are smaller
in magnitude than the longitudinal ones. It is therefore natural to
treat transverse and longitudinal exchange couplings on a different
footing, with the latter taking the dominant role. Formally, this is
accomplished by replacing the isotropic Heisenberg Hamiltonian by an XXZ
model:
\begin{equation}\label{XXZ}
{\mathcal H} = {\mathcal H}_0 + {\mathcal H}_1\;,
\end{equation}
where
\begin{equation}\label{H_0}
{\mathcal H}_0 = \frac{J_z}{2} \sum_t \left[ (S_t^z - h)^2 - h^2 \right] - J_z \sum_i \left( S_i^z \right)^2
\; ,
\end{equation}
and
\begin{equation}\label{H_1}
{\mathcal H}_1 = \frac{J_{\perp}}{2} \sum_{\langle i j \rangle} \left( S_i^+ S_j^- + h.c. \right)
\; .
\end{equation}
We use the notation
$
S_t^z = \sum_{i \in t} S_i^z
$ ,
and we have made use of the identity
\begin{equation}
\sum_{\langle i j \rangle} S_i^z S_j^z =
\frac{1}{2} \sum_t (S_t^z)^2 - \sum_i \left( S_i^z \right)^2
\; .
\end{equation}
In the equations above, and elsewhere in this manuscript, ${\langle i
j \rangle}$ denotes a sum over nearest neighbor sites on the
pyrochlore lattice, and $S_i^{\pm}$ are the spin ladder operators.
Note that in the Heisenberg model $J_\perp=J_z=J$, but the more
general XXZ model has all the same symmetries as the former even when
this condition is not obeyed. From the above reasoning, we expect
that the transverse terms involving $J_\perp$ may be treated as
``small'' perturbations in the strong-field regime of interest. We
note that this is expected to be a particularly good approximation
when the system exhibits a magnetization plateau. This is because, as
described in the introduction, $\langle S_i^\pm\rangle$ {\sl must}
vanish in such a state. Formally, this ``easy axis'' limit consists
of taking $J_{\perp} \ll J_z$ and doing degenerate perturbation theory in
$\alpha = \frac{J_{\perp}}{J_z}$. We will assess the validity of this
approximation later by considering the {\sl magnitude} of the
perturbative corrections extrapolated to $\alpha=1$. Finally, we note
that several other effects can stabilize a collinear state. One is the
addition of easy axis anisotropy,
\begin{equation}
\label{eq:6}
{\mathcal H}_0^\prime = - K \sum_i \left( S_i^z\right)^2,
\end{equation}
with $K\gg J_\perp>0$. A second mechanism is biquadratic exchange,
\begin{equation}
\label{eq:biquad}
{\mathcal H}_0^{\prime\prime} = - b J \sum_{\langle ij\rangle}
\left( {\bf S}_i\cdot {\bf S}_j\right)^2,
\end{equation}
with $b>0$. A term of this form can be generated dynamically from
spin-lattice interactions, known to be strong in HgCr$_2$O$_4$. The DPT
treatment discussed below can readily be generalized to include either
or both of the terms in Eqs.(\ref{eq:6},\ref{eq:biquad}). For
simplicity of presentation we do not do so here. While easy-axis terms
similar to Eq.\eqref{eq:6} are allowed for $s>1/2$, this particular
simple form, with the same spatial direction for the local easy axis
of all spins, is not physically appropriate for the cubic pyrochlore
spinels, and the proper anisotropy terms allowed by symmetry in these
materials are likely to be very small in any case.
\subsection{Magnetization process in the Ising model}
\label{plateau_structure}
The evolution of the ground state with field in the extreme easy-axis
limit $\alpha=0$ is less trivial than in the classical limit. The
system is then described by the Ising Hamiltonian, Eq.\eqref{H_0}. We
shall focus on the $h \geq 0$ case, as the case $h \leq 0$ is
equivalent. The expression Eq.\eqref{H_0} can be written as a sum over
tetrahedra ${\mathcal H}_0 = \sum_t {\mathcal H}_t$ with
\begin{equation}\label{H_t}
{\mathcal H}_t = \frac{J_z}{2} \left[ (S_t^z - h)^2 - h^2 - \sum_{j \in t} \left( S_j^z \right)^2 \right]
\; ,
\end{equation}
and therefore if one can minimize this energy on each individual tetrahedron, one attains the minimum energy of the many-body system.
The magnetization $S_t^z$ of any individual tetrahedron is \emph{quantized} to the values
$ S_t^z = 0, \pm 1, \pm 2, \ldots \pm 4 s$.
The first term in \eqref{H_t} favors a tetrahedron magnetization as close as possible to the dimensionless magnetic field. The second term in \eqref{H_t} favors larger $z$-components of the spin values, $S_j^z = \pm s$.
This second term is trivial only in the spin-$\frac{1}{2}$ case, where the Pauli matrices square to the identity.
A state with the magnetization $S_t^z = m$ equal to the integer nearest to $h$ clearly minimizes the energy of the first
term in \eqref{H_t}.
However, given the magnetization, there is some freedom for the values of the spins on each tetrahedron. The second term in \eqref{H_t} reduces this freedom by adding an energy cost for small $S^z$ components.
Consider 4 spins on a tetrahedron with total magnetization $m$, and individual values of $S_1^z,S_2^z,S_3^z,S_4^z$.
Now compare the energy of this state with that of $S_1^z -1,S_2^z +1,S_3^z,S_4^z$. The only energy difference comes from the second term in \eqref{H_t}
\begin{equation}
\begin{split}
\Delta E = & - \left( S_1^z - 1 \right)^2 - \left( S_2^z + 1 \right)^2 + \left( S_1^z \right)^2 + \left( S_2^z \right)^2
\\
= & -2 \left( 1 - S_1^z + S_2^z \right)\,.
\end{split}
\end{equation}
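This energy difference is easily checked by brute force over all allowed spin values (illustrative Python; as in the text, the overall $J_z/2$ prefactor of the second term is dropped):

```python
def delta_E(S1, S2):
    """Change of -sum_j (S_j^z)^2 (in units of J_z/2) under the move
    (S1, S2) -> (S1 - 1, S2 + 1) at fixed total magnetization."""
    return -((S1 - 1)**2 + (S2 + 1)**2) + (S1**2 + S2**2)

s = 3 / 2                                    # any spin; S^z in {-s, ..., s}
vals = [(-s) + k for k in range(int(2 * s) + 1)]
for S1 in vals:
    for S2 in vals:
        # Agrees with Delta E = -2 (1 - S1 + S2); negative whenever S2 >= S1.
        assert abs(delta_E(S1, S2) - (-2 * (1 - S1 + S2))) < 1e-12
```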
From this expression one deduces that if one begins with $S_2^z \geq S_1^z$, then it is energetically favorable to increase $S_2^z$
even more at the expense of $S_1^z$. This increase in $S_2^z$ can only be halted by one of $S_{1,2}^z$ reaching an \emph{extreme}
spin value of $\pm s$. From this reasoning we conclude that the lowest energy state on a single tetrahedron with
a fixed total magnetization $m$ has a spin configuration with the largest possible number of extreme valued spins.
This also makes intuitive sense, since ideally \emph{all} the spins should take on extreme values if possible.
For three particular choices of $m$, all the spins take on extreme values:
zero magnetization $m=0$ with $s,s,-s,-s$; half polarization $m=2s$ with $s,s,s,-s$; and full polarization
$m= 4s$ with $s,s,s,s$. For $m < 2s$ the lowest energy configuration of the spins is $s,s,-s,m - s$,
and for $m > 2s$ it is $s,s,s, m - 3s$.
Finally, we can find the minimal energy for given magnetization $m$ by using the spin configurations described above for every value of $m$, and plugging them into \eqref{H_t}. For $m < 2s$ we find
\begin{equation}
E_m = \frac{J_z}{2} \left[ 2 m (s - h) - 4 s^2 \right],
\end{equation}
and for $m > 2s$ we find
\begin{equation}
E_m = \frac{J_z}{2} \left[ 2 (m - 2s) (3s - h) - 4 h s \right].
\end{equation}
From the above expressions it is easy to see that at $h=s$ all $m \leq 2s$ yield the \emph{same} energy -- all these states are \emph{degenerate} at this field value. Similarly, at $h = 3s$ all $m \geq 2s$ yield the \emph{same} energy. For other
values of the magnetic field, we find that for $h<s$ the lowest energy state is the $m=0$ zero magnetization state,
for $s < h < 3s$ the lowest energy state is the $m=2s$ half polarized state, and for $h>3s$ the lowest energy state
is the $m=4s$ fully polarized state.
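The resulting plateau structure can be confirmed by exhaustively minimizing ${\mathcal H}_t$ over all single-tetrahedron configurations (illustrative Python sketch; `ising_plateaus` is a helper defined here for the check, not standard notation):

```python
import numpy as np
from itertools import product

def ising_plateaus(s, h_values, Jz=1.0):
    """Minimize the single-tetrahedron Ising energy H_t over all spin
    configurations; return the optimal magnetization m for each field h."""
    spins = np.arange(-s, s + 1)                     # allowed S_j^z values
    configs = np.array(list(product(spins, repeat=4)))
    m = configs.sum(axis=1)
    sq = (configs**2).sum(axis=1)
    best = []
    for h in h_values:
        E = 0.5 * Jz * ((m - h)**2 - h**2 - sq)      # Eq. (H_t)
        best.append(m[np.argmin(E)])
    return best

s = 3 / 2
# Away from the degeneracy points h = s and h = 3s, three plateaus appear:
assert ising_plateaus(s, [0.5 * s]) == [0]           # h < s      : m = 0
assert ising_plateaus(s, [2.0 * s]) == [2 * s]       # s < h < 3s : m = 2s
assert ising_plateaus(s, [4.0 * s]) == [4 * s]       # h > 3s     : m = 4s
```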
The three lowest energy states in the various magnetic field ranges have all spins at extreme values $\pm s$,
and can be realized on every tetrahedron in the pyrochlore lattice. The $m=0$ state induces a degenerate
manifold of states with every tetrahedron having $s,s,-s,-s$ on it. This 2:2 proportionality is well known
as the ``ice rules'' encountered in a particular phase of water ice,\cite{Pauling:35}
as well as spin ice compounds.\cite{Bramwell:01} The half
polarization states are also massively degenerate, with every tetrahedron in
a 3:1 proportionality of $S_j^z = +s$ to $S_j^z = -s$ spins (3 up, 1 down). This particular degenerate manifold will be the focus of the remainder of our discussion. To summarize, the magnetization curve for \eqref{H_0} exhibits three plateaus, at zero, half and full polarization, for all values of $s$.
\section{Easy axis degenerate perturbation theory}
\label{sec:DPT_Leon}
\subsection{Structure of perturbation theory}
\subsubsection{Basic formulation}
\label{sec:dpt_formulation}
In the previous section, we observed that the extreme easy axis limit of
the Heisenberg model exhibits a broad magnetization plateau at half
polarization. However, the ground states on this plateau are
macroscopically degenerate, consisting of all states with a 3:1 ratio of
majority and minority spins on each tetrahedron. In this section, we
study the splitting of this degeneracy by perturbation theory in
$J_\perp$. We employ the following formulation of Degenerate
Perturbation Theory (DPT). Define the projection operator, ${\mathcal
P}$, onto any degenerate manifold of states $M$. Consider any exact
eigenstate $|\Psi\rangle$. Its projection $|\Psi_0\rangle={\mathcal
P}|\Psi\rangle$ satisfies the ``effective Schr\"odinger equation''
\begin{equation}
\label{eq:effshrod}
\left[ E_0 +
{\mathcal P} {\mathcal H}_1 \sum_{n=0}^{\infty} {\mathcal G}^n
{\mathcal P} \right] |\Psi_0\rangle = E |\Psi_0\rangle = {\mathcal H}_{\textrm{eff}} |\Psi_0\rangle,
\end{equation}
where the operator ${\mathcal G} = \frac{1}{E - {\mathcal H}_0} \left( 1 -
{\mathcal P} \right) {\mathcal H}_1 $. Because the
resolvent contains the exact energy $E$, Eq.~(\ref{eq:effshrod}) is
actually a non-linear eigenvalue problem. However, to any given order
of DPT, $E$ may be expanded in a series in $J_{\perp}$ to obtain an equation
with a true Hamiltonian form within the degenerate manifold. Each
factor of ${\mathcal G}$ is at least of $O(J_\perp)$ due to the
explicit factor in ${\mathcal H}_1$, with higher order corrections
coming from the expansion of $E$. Once
$|\Psi_0\rangle$ and $E$ are known, the full wavefunction can be reconstructed
as $\ket{\Psi} = (1-{\mathcal G})^{-1} \ket{\Psi_0} =
\sum_{n=0}^{\infty} {\mathcal G}^n \ket{\Psi_0}$.
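The self-consistent structure of this formulation can be illustrated on a small toy matrix (a Python sketch with an arbitrary $4\times 4$ example, unrelated to the pyrochlore problem): resumming the geometric series in ${\mathcal G}$ and iterating the non-linear eigenvalue problem to a fixed point reproduces an exact eigenvalue of the full Hamiltonian.

```python
import numpy as np

# Toy model: two degenerate levels at E0 = 0 (the "manifold") coupled by a
# perturbation H1 to excited levels at 2 and 3.
H0 = np.diag([0.0, 0.0, 2.0, 3.0])
P = np.diag([1.0, 1.0, 0.0, 0.0])
H1 = 0.2 * np.array([[0.0, 1.0, 1.0, 0.5],
                     [1.0, 0.0, 0.5, 1.0],
                     [1.0, 0.5, 0.0, 1.0],
                     [0.5, 1.0, 1.0, 0.0]])

def Heff_block(E):
    """Manifold block of E0 + P H1 sum_n G^n P, with the geometric series in
    G = (E - H0)^{-1} (1 - P) H1 resummed as an inverse."""
    G = np.diag([0.0, 0.0, 1.0 / (E - 2.0), 1.0 / (E - 3.0)]) @ H1
    M = (P @ H1 @ np.linalg.inv(np.eye(4) - G) @ P)[:2, :2]
    return (M + M.T) / 2        # symmetric up to round-off

# The resolvent contains the exact E, so solve self-consistently by iteration,
# starting from the unperturbed manifold energy E0 = 0.
E = 0.0
for _ in range(200):
    E = np.linalg.eigvalsh(Heff_block(E)).min()

E_exact = np.linalg.eigvalsh(H0 + H1).min()
assert abs(E - E_exact) < 1e-9
```

Expanding $E$ in the resolvent order by order in the coupling, rather than iterating, is what yields a true Hamiltonian at each fixed order of DPT.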
Considering the lowest order term in DPT that breaks the degeneracy, the
precise energy $E = E_0 + O(\alpha)$ in the resolvent can be replaced by
$E_0$, where $O(\alpha)$ represents possible energy shifts from lower
order terms that do \emph{not} break the degeneracy, and $E_0$ is the
0-th order energy of the degenerate manifold of states.
\subsubsection{Order of off-diagonal terms}
\label{sec:order-diagonal-terms}
Every order in DPT can in principle have {\sl diagonal} (in the $S_i^z$
basis) as well as {\sl off-diagonal} terms in which the degeneracy is
removed. Any {\sl off-diagonal} term in the effective Hamiltonian must
flip spins in such a way as to preserve the 3:1 constraint on each
tetrahedron. This can only be accomplished by flipping spins around a
non-trivial closed loop on the pyrochlore lattice (see, e.g.
Ref.\onlinecite{Hermele:prb04}). The smallest such loop involves
flipping spins on 3 different bonds, and flipping a spin from $S^z = \pm
s$ to $S^z = \mp s$ requires ${\mathcal H}_1$ to act $2s$ times, so
off-diagonal processes occur first at order $O(J_{\perp}^{6s})$. Therefore,
below this order of DPT, one need consider only diagonal terms. In
subsection~\ref{symbolic-red}
we will demonstrate that the lowest order diagonal energy
splitting term {\sl for any $s$} can occur only at 6th order. For spin
$s=\frac{1}{2}$ an off-diagonal term appears at 3rd order in DPT, and no
diagonal energy splitting occurs at this order, resulting in a purely
off-diagonal effective Hamiltonian. For spin $s=1$ the lowest order
diagonal and off-diagonal terms simultaneously appear at 6th order. For
any higher value of $s$, the diagonal energy splitting appears at a
lower order than any off-diagonal term can occur, and therefore the
leading order effective Hamiltonian is purely diagonal in the 3:1
states. We will nevertheless compute the first non-vanishing
off-diagonal term for various values of $s$ in Section~\ref{Effective_QDM}, to use its magnitude for an assessment of
the validity of the truncation of the DPT expansion.
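The order counting above can be summarized in a few lines (illustrative Python; the inputs, a smallest loop of 3 bonds, $2s$ actions of ${\mathcal H}_1$ per flipped spin, and diagonal splitting at 6th order, are taken from the text):

```python
def leading_orders(s):
    """First orders in J_perp at which off-diagonal (6s, from flipping a
    hexagonal loop: 3 bonds x 2s actions) and diagonal (6, for any s)
    terms appear in the effective Hamiltonian."""
    off_diag, diag = 6 * s, 6
    if off_diag < diag:
        kind = "off-diagonal"
    elif off_diag == diag:
        kind = "both"
    else:
        kind = "diagonal"
    return off_diag, diag, kind

assert leading_orders(0.5) == (3.0, 6, "off-diagonal")  # s=1/2: purely off-diagonal at 3rd order
assert leading_orders(1.0) == (6.0, 6, "both")          # s=1: both appear at 6th order
assert leading_orders(1.5) == (9.0, 6, "diagonal")      # s>=3/2: diagonal terms lead
```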
\subsubsection{Unitarily transformed formalism for diagonal terms}
We next develop a scheme to compute the diagonal terms, by unitarily
transforming the expression in Eq.\eqref{eq:effshrod} to obtain a
formula for the diagonal effective Hamiltonian with all dependence upon
the spin state explicit. The 3:1 manifold can be described using Ising
variables to indicate which spins are minority sites. That is, in the
3:1 states, we denote $S_j^z = \sigma_j s$ with $\sigma_j = \pm 1$ the
Ising variable. At $n^{\rm th}$ order, presuming that all lower order
terms are constants, the diagonal terms in the effective Hamiltonian
constitute the function of the set $\{\sigma_i\}$ given by
\begin{equation}
\label{eq:7}
{\mathcal H}_n[\{\sigma_i\}] = (-1)^{n+1} \langle
\psi[\{\sigma_i\}]|\left({\mathcal H}_1 {\mathcal R}{\mathcal Q}\right)^{n-1} {\mathcal
H}_1 |\psi[\{\sigma_i\}]\rangle,
\end{equation}
where the resolvent ${\mathcal R} =({\mathcal H}_0 - E_0)^{-1}$,
${\mathcal Q}= 1 - {\mathcal P}$, and
\begin{equation}
\label{eq:8}
|\psi[\{\sigma_i\}]\rangle=\otimes_i | S_i^z=\sigma_i
s\rangle.
\end{equation}
The assumption that all lower order terms are
constant allows us to replace $E$ by $E_0$ in the denominators in
Eq.\eqref{eq:effshrod}, since the constant corrections to $E$ lead to
higher order terms in the effective Hamiltonian.
The dependence upon the $\sigma_i$ in Eq.\eqref{eq:7} is not explicit,
but, following Hizi {\rm et al.}~\cite{Hizi:prb06}, it can be made so by a
unitary transformation. The operator
\begin{equation}\label{unitary}
U = e^{+ i \pi \sum_j \frac{\left( 1-\sigma_j \right)}{2} S_j^x}
\;
\end{equation}
effects a rotation about the $x$-axis in spin space only for the minority
spins. This interchanges raising and lowering operators, and reverses
the orientation of $S_i^z$ for these sites. We may therefore write
\begin{equation}
\label{eq:9}
|\psi[\{\sigma_i\}]\rangle= U |\psi_0\rangle,
\end{equation}
where
\begin{equation}
\label{eq:12}
|\psi_0\rangle = \otimes_i |S_i^z=s\rangle
\end{equation}
is the fully polarized state, which is now independent of $\sigma_i$.
Then we have
\begin{equation}
\label{eq:7a}
{\mathcal H}_n[\{\sigma_i\}] = (-1)^{n+1} \langle
\psi_0|\left(\tilde{\mathcal H}_1 \tilde{\mathcal R}\tilde{\mathcal
Q}\right)^{n-1} \tilde{\mathcal
H}_1 |\psi_0\rangle,
\end{equation}
where
\begin{equation}
\label{eq:13}
\tilde{\mathcal O} = U^\dagger {\mathcal O} U,
\end{equation}
for any operator ${\mathcal O}$. In what follows, all the operators
appearing in Eq.\eqref{eq:7a} will be simplified so that their dependence upon
the $\sigma_i$ becomes explicit.
First consider $\tilde{\mathcal H}_1$. It consists, from
Eq.\eqref{H_1}, of a sum of operators
transferring spin 1 between two nearest neighbor sites, i.e. a bond of the
pyrochlore lattice. We define the nearest-neighbor connectivity matrix
of the lattice $\Gamma_{i j} = \Gamma_{j i}=1$ when $i$ and $j$ are
nearest neighbors, and $\Gamma_{i j}=0$ otherwise. With this terminology
we write \eqref{H_1} as
\begin{equation}
{\mathcal H}_1 = J_z \frac{\alpha}{4} \sum_{i j} \Gamma_{i j} \left(
S_i^{+} S_j^{-} +h.c. \right)
\; .
\end{equation}
After the unitary transformation, one obtains
\begin{equation}
\label{rotated_H_1}
\begin{split}
{\tilde {\mathcal H}}_1 = U^{\dagger} {\mathcal H}_1 U & = J_z \frac{\alpha}{4} \sum_{i j} \Gamma_{i j} \left( S_i^{+\sigma_i} S_j^{-\sigma_j} +h.c. \right)
\\
& =
J_z \frac{\alpha}{4} \sum_{i j} \Gamma_{i j}
\Bigg[
\frac{\left( 1 + \sigma_i \sigma_j \right)}{2}
\left( S_i^+ S_j^- + h.c. \right)
\\ &
+
\frac{\left( 1 - \sigma_i \sigma_j \right)}{2}
\left( S_i^+ S_j^+ + h.c. \right)
\Bigg].
\end{split}
\end{equation}
Here the expressions $\frac{\left( 1 \pm \sigma_i \sigma_j \right)}{2}$
denote ``Ising delta functions'' that select the cases in which the two
$\sigma_{i,j}$ have the same or opposite sign.
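The defining properties of this rotation can be checked directly on the spin matrices for several values of $s$ (illustrative Python; the helper `spin_matrices` is constructed here for the check):

```python
import numpy as np

def spin_matrices(s):
    """Standard spin-s matrices S^z and S^+ in the |s, m> basis, m = s..-s."""
    m = np.arange(s, -s - 1, -1)
    Sz = np.diag(m)
    Sp = np.zeros((len(m), len(m)))
    for k in range(1, len(m)):
        Sp[k - 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
    return Sz, Sp

for s in [0.5, 1.0, 1.5]:
    Sz, Sp = spin_matrices(s)
    Sx = (Sp + Sp.T) / 2
    # U = exp(i pi S^x): a rotation by pi about the x-axis in spin space,
    # applied on the minority sites (sigma_j = -1) by Eq. (unitary).
    evals, V = np.linalg.eigh(Sx)
    U = V @ np.diag(np.exp(1j * np.pi * evals)) @ V.conj().T
    # The rotation reverses S^z and exchanges raising and lowering operators.
    assert np.allclose(U.conj().T @ Sz @ U, -Sz, atol=1e-12)
    assert np.allclose(U.conj().T @ Sp @ U, Sp.T, atol=1e-12)
```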
Assuming the lowest order term in DPT that splits the 3:1 configurations
is a diagonal term of order $n_0$, the only 3:1 configuration which can
be reached as an intermediate state in
Eq.\eqref{eq:7} for any $n \leq n_0$ is the
starting state $|\psi[\{\sigma_i\}]\rangle$.
Under the unitary transformation, this state maps to $|\psi_0\rangle$,
and therefore the projection operator $\tilde{\mathcal Q}$
may be replaced by
\begin{equation}
\label{eq:16}
\tilde{\mathcal Q} \rightarrow 1- |\psi_0\rangle \langle \psi_0|
\end{equation}
in Eq.\eqref{eq:7a}.
Finally, we consider the resolvent. Using $U^\dagger S_i^z
U = \sigma_i S_i^z$, one finds
\begin{equation}
\label{eq:19}
\tilde{\mathcal R}^{-1} = \frac{J_z}{2}\sum_{ij} \Gamma_{ij} \sigma_i
\sigma_j S_i^z S_j^z - 2 J_z h \sum_j \sigma_j S_j^z - E_0.
\end{equation}
First we note that because both ${\mathcal H}_0$ and ${\mathcal H}_1$
conserve the total magnetization of the lattice (this is just the conserved quantity
arising from the global $U(1)$ symmetry), the term $\sum_j \sigma_j S_j^z$
remains unchanged at every stage in a DPT process, and
we can therefore absorb this term into the constant energy $E_0$.
Clearly, the inverse resolvent should vanish when acting upon the fully
polarized state $|\psi_0\rangle$. Hence we may absorb the constant
energy $E_0$ into the sum as
\begin{equation}
\label{eq:20}
\tilde{\mathcal R}^{-1} = \frac{J_z}{2}\sum_{ij} \Gamma_{ij} \sigma_i
\sigma_j \left(S_i^z S_j^z - s^2\right).
\end{equation}
We can simplify the resolvent in the
restricted space of virtual states which will be accessed in evaluating
Eq.\eqref{eq:7a}. In particular, the $\sigma_i$ configurations are
restricted to the 3:1 manifold.
Furthermore we note that all intermediate states
will have only some small finite set of spins whose $S_i^z$ quantum
numbers are different from $s$, due to the action of $\tilde{\mathcal
H}_1$. Let us consider then the action of the resolvent on a state
for which this set of sites is denoted by ${\sf F}$. In this case, only
terms in Eq.\eqref{eq:20} for which at least one of $i$ or $j$ is in
${\sf F}$ can contribute. Thus
\begin{eqnarray}
\label{eq:22}
\tilde{\mathcal R}^{-1} & = & \frac{J_z}{2}\sum_{ij\in {\sf F}}
\Gamma_{ij} \sigma_i \sigma_j \left(S_i^z S_j^z - s^2\right)
\nonumber \\
&& + J_z s \sum_{i\in {\sf F}}\sum_{j \not\in {\sf F}} \Gamma_{ij}
\sigma_i \sigma_j \left(S_i^z - s\right).
\end{eqnarray}
One may replace the sum over $j$ by $\sum_{j \not\in {\sf F}} =\sum_j -
\sum_{j \in {\sf F}}$ to obtain
\begin{eqnarray}
\label{eq:23}
\tilde{\mathcal R}^{-1} & = & \frac{J_z}{2}\sum_{ij\in {\sf F}}
\Gamma_{ij} \sigma_i \sigma_j \left(S_i^z -s\right)\left(S_j^z - s\right)
\nonumber \\
&& + J_z s \sum_{i\in {\sf F}}\sigma_i \left(\sum_{j}
\Gamma_{ij}\sigma_j \right) \left(S_i^z - s\right).
\end{eqnarray}
The crucial observation is that the 3:1 constraint implies
\begin{equation}
\label{eq:24}
\sum_j \Gamma_{ij}\sigma_j = 4-2\sigma_i.
\end{equation}
This is because once $\sigma_i$ is specified, the {\sl set} of its
neighbors is also specified (see also Fig.~\ref{fig:configs}). Eq.\eqref{eq:24} allows one to eliminate
the latter sum and obtain
\begin{eqnarray}
\tilde{\mathcal R}^{-1} & = & \frac{J_z}{2}\sum_{ij\in {\sf F}}
\Gamma_{ij} \sigma_i \sigma_j \left(S_i^z -s\right)\left(S_j^z - s\right)
\nonumber \\
&& +2 J_z s \sum_{i\in {\sf F}}(2 \sigma_i - 1)\left(S_i^z - s\right).
\end{eqnarray}
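The constraint counting behind Eq.\eqref{eq:24} can be verified by enumerating the four 3:1 states of a single tetrahedron (illustrative Python sketch):

```python
from itertools import permutations

# The four 3:1 states of one tetrahedron: three sigma = +1, one sigma = -1.
states = set(permutations([1, 1, 1, -1]))
assert len(states) == 4

# Per-tetrahedron form of the constraint: the three neighbors of site i
# within one tetrahedron satisfy  sum_{j in t, j != i} sigma_j = 2 - sigma_i.
for sigmas in states:
    for i in range(4):
        assert sum(sigmas) - sigmas[i] == 2 - sigmas[i]

# On the pyrochlore lattice every site belongs to exactly two tetrahedra,
# so summing over both gives  sum_j Gamma_ij sigma_j = 2 (2 - sigma_i)
#                                                    = 4 - 2 sigma_i .
```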
Noting again that $\sum_j \sigma_j S_j^z$ remains unchanged throughout the stages of any
DPT process, so that it equals its initial value $\sum_j \sigma_j s$,
we finally obtain
\begin{eqnarray}
\label{eq:25}
\tilde{\mathcal R}^{-1} & = & \frac{J_z}{2}\sum_{ij\in {\sf F}}
\Gamma_{ij} \sigma_i \sigma_j \left(S_i^z -s\right)\left(S_j^z - s\right)
\nonumber \\
&& - 2 J_z s \sum_{i\in {\sf F}} \left(S_i^z - s\right).
\end{eqnarray}
By successive action of $\tilde{\mathcal H}_1$, $\tilde{\mathcal Q}$,
and $\tilde{\mathcal R}$ using
Eqs.~(\ref{rotated_H_1},\ref{eq:16},\ref{eq:25}), one can obtain
explicit expressions for any intermediate state in the DPT expression of
Eq.\eqref{eq:7a}, with $n \leq n_0$. For example, one action of each of these operators gives
\begin{eqnarray}
\label{eq:oneact}
&& \tilde{\mathcal R}\tilde{\mathcal Q}\tilde{\mathcal H}_1
|\psi_0\rangle = \\
&& \frac{\alpha s}{4(4s-1)}\sum_{a_1 a_2} \Gamma_{a_1 a_2}
(1-\sigma_{a_1}\sigma_{a_2}) \left| 1_{a_1}
1_{a_2}\right\rangle, \nonumber
\end{eqnarray}
where we have introduced the compact notation
\begin{eqnarray}
\label{eq:notation}
\!\!\!\left| (m_1)_{a_1}\cdots (m_n)_{a_n}\right\rangle & = &
|S_{a_1}^z=s-m_1\rangle \cdots |S_{a_n}^z=s-m_n\rangle\nonumber \\
&& \otimes_{i\neq
a_1\cdots a_n}|S_i^z=s\rangle.
\end{eqnarray}
Acting twice with the same sequence of operators gives
\begin{widetext}
\begin{eqnarray}
\label{eq:twoacts}
\left(\tilde{\mathcal R}\tilde{\mathcal Q}\tilde{\mathcal
H}_1\right)^2 &&
|\psi_0\rangle = \frac{\alpha^2 s}{16(4s-1)} \sum_{a_1 a_2} \Gamma_{a_1 a_2}
(1-\sigma_{a_1}\sigma_{a_2}) \left| 2_{a_1}
2_{a_2}\right\rangle \\
&& + \frac{\alpha^2 s^2}{4(4s-1)} \sum_{a_1 a_2 a_3}\frac{\Gamma_{a_1
a_3}\Gamma_{a_2 a_3} \eta_{a_1 a_2}}{4s-\Gamma_{a_1 a_2}}[(\sigma_{a_1}+\sigma_{a_3})\sigma_{a_2}
-\sigma_{a_1}\sigma_{a_3}-1]\left| 1_{a_1} 1_{a_2}\right\rangle
\nonumber \\
&& + \frac{\alpha^2 s^{3/2} \sqrt{2s-1}}{4(4s-1)} \sum_{a_1 a_2 a_3}
\frac{\Gamma_{a_1 a_2}\Gamma_{a_1 a_3} \eta_{a_2
a_3}}{8s-4+\Gamma_{a_2 a_3}}
\left[1+\sigma_{a_2}\sigma_{a_3}-\sigma_{a_1}(\sigma_{a_2}+\sigma_{a_3})\right]
\left|2_{a_1} 1_{a_2} 1_{a_3}\right\rangle \nonumber \\
&& + \frac{\alpha^2 s^2}{16(4s-1)} \sum_{a_1\cdots a_4}
\frac{\Gamma_{a_1 a_2} \Gamma_{a_3 a_4} \eta_{a_1 a_4} \eta_{a_2 a_3}
\eta_{a_2 a_4}}{8s-2+\sigma_{a_1}\sigma_{a_3}\left(\Gamma_{a_1
a_3}-\Gamma_{a_1 a_4}-\Gamma_{a_2 a_3}+\Gamma_{a_2 a_4}\right)}
(1-\sigma_{a_1} \sigma_{a_2})(1-\sigma_{a_3}\sigma_{a_4})
\left|1_{a_1}1_{a_2}1_{a_3}1_{a_4}\right\rangle, \nonumber
\end{eqnarray}
\end{widetext}
where we have introduced the ``non-coincident'' symbol
\begin{equation}
\label{eq:nco}
\eta_{ab} = 1-\delta_{ab}.
\end{equation}
The corresponding expressions for more successive actions of these
operators upon $|\psi_0\rangle$ can also be obtained, but are too
unwieldy to present here.
Using such expressions, one may readily evaluate the terms ${\mathcal
H}_n[\{\sigma_i\}]$ in the diagonal effective Hamiltonian,
Eq.\eqref{eq:7a}. For $n_0$ an even number, a convenient way to calculate
the $n_0$-th order term is to consider the state
\begin{equation}
\ket{\Psi} =
{\tilde {\mathcal R}}^{1/2} \tilde{\mathcal Q} {\tilde {\mathcal H}}_1
\left({\tilde {\mathcal R}} \tilde{\mathcal Q}{\tilde {\mathcal
H}}_1\right)^{\frac{n_0}{2}-1}
\ket{\psi_0}\label{eq:26}
\end{equation}
and then evaluate the squared norm of this wavefunction:
\begin{equation}
\label{eq:mag6}
{\mathcal
H}_{n_0}[\{\sigma_i\}] = -{\dirac {\Psi}{\Psi}}.
\end{equation}
Note that the square-root of $\tilde{\mathcal R}$ in Eq.\eqref{eq:26}
is easily evaluated by just taking the square-root of
Eq.\eqref{eq:25}, since the latter is diagonal in the $S^z$ product basis. Other terms can be obtained
similarly.
\subsection{Restricting the Hilbert space to the 3:1 manifold}
\label{symbolic-red}
Calculating each such magnitude as defined in the previous subsection
leads to an explicit expression for the corresponding term in DPT.
These expressions appear to be extremely complex and formidable
functions of the Ising spin variables $\{ \sigma_i \}$. In this
subsection, we show that the projection of these functions to the 3:1
manifold of allowed $\{ \sigma_i\}$ configurations affords a
tremendous simplification. In fact, we will demonstrate that all
terms in DPT below 6th order can give only constant functions --
i.e. no splitting -- within the 3:1 states. At 6th order, the full
functional dependence can be characterized by only 3 independent
numbers which may be defined on plaquettes of the pyrochlore lattice.
We show how these numbers can be extracted from the expressions
obtained by the analysis of the previous subsection.
\subsubsection{Functional form of diagonal DPT terms}
From the analysis of the previous subsection, the $n^{\rm th}$ order
effective diagonal Hamiltonian in DPT must take the form of a multiple
sum of $n$ site indices $a_1\cdots a_n$, where each site index
is summed over all lattice sites.
The summand is a function only of
$s$ and of the set of $\sigma_i, \Gamma_{ij}, \eta_{ij},\delta_{ij}$
where $i,j$ {\sl must belong to the set of the site indices}.
The general form can be somewhat simplified by noting first that the
dependence upon the $\eta_{ij}$ can be eliminated by rewriting them
in terms of $\delta_{ij}$ using Eq.\eqref{eq:nco}, and then
eliminating all $\delta_{ij}$ by collapsing the sums
containing these factors. Finally, we note that any factors of
$\Gamma_{ij}$ in the denominators in these expressions can be moved to
the numerator using the identity
\begin{equation}
g(\Gamma_{i j}) = g(0) + \Gamma_{i j} \left[ g(1) - g(0) \right]
\; ,
\end{equation}
for any function $g$ (which may also depend upon any other set of
variables), since $\Gamma_{i j} = 0,1$.
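This linearization is elementary but worth making explicit. The following Python snippet is an illustrative check (not part of the derivation), with a hypothetical function $g$ standing in for a resolvent-denominator factor:

```python
# Any function of a variable restricted to {0, 1} is affine in that variable.
# g is a hypothetical stand-in for a factor such as 1/(8s - 4 + Gamma).
def g(gamma, s=2):
    return 1.0 / (8 * s - 4 + gamma)

# check g(Gamma) = g(0) + Gamma * [g(1) - g(0)] for Gamma in {0, 1}
for gamma in (0, 1):
    assert g(gamma) == g(0) + gamma * (g(1) - g(0))
```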
By these manipulations, one may write the effective Hamiltonian
${\mathcal H}_{\rm eff}[\{\sigma_i\}] = \sum_n {\mathcal H}_n[\{\sigma_i\}]$ as
\begin{eqnarray}
\label{eq:1}
&& {\mathcal H}_{\rm eff}[\{\sigma_i\}] = \\
&& \sum_n \sum_{G_n} \sum_{a_1 \ldots
a_n}\!\!\! \left(\prod_{(ij)\in G_n} \Gamma_{a_i a_j}\right)
f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_n}) .\nonumber
\end{eqnarray}
Here we have divided the effective Hamiltonian into terms involving
$n$ independent site variables, $a_1\ldots a_n$, that are summed over
the lattice sites.
A given order $N$ in DPT contributes terms with $n\leq N$. For a
given $n$, all possible products of $\Gamma_{a_i a_j}$ can appear.
The different such products are specified by $G_n$, which may be
considered as a ``diagram'' in the following fashion. Each $G_n$ can be
represented by drawing $n$ points, corresponding to $i=1\ldots n$, and
connecting some arbitrary set of pairs of these points by lines. For
each (unordered) pair of points $(ij)$ which is connected in $G_n$, we
include one factor of $\Gamma_{a_i a_j}$. Since there are $n(n-1)/2$
pairs of points, and each pair may or may not be connected, there are
$2^{n(n-1)/2}$ distinct diagrams $G_n$.
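As a check on this counting, the diagrams can be enumerated by brute force. The Python sketch below (illustrative only) counts, for small $n$, all diagrams and those in which every point is connected to at least two others; the counts quoted in the assertions are our own enumeration:

```python
from itertools import combinations

def count_diagrams(n):
    """Enumerate all graphs on n labeled points; return (total, min-degree >= 2)."""
    pairs = list(combinations(range(n), 2))
    total, min_deg_2 = 0, 0
    for mask in range(2 ** len(pairs)):
        deg = [0] * n
        for bit, (i, j) in enumerate(pairs):
            if (mask >> bit) & 1:
                deg[i] += 1
                deg[j] += 1
        total += 1
        if min(deg) >= 2:
            min_deg_2 += 1
    return total, min_deg_2

assert count_diagrams(3) == (8, 1)    # 2^3 diagrams; only the triangle survives
assert count_diagrams(4) == (64, 10)  # 2^6 diagrams; the square-based family
```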
For example,
in our conventions, $\Gamma_{a_1a_2}\Gamma_{a_2a_3}\Gamma_{a_3a_4}$ and
$\Gamma_{a_1a_2}\Gamma_{a_2a_3}\Gamma_{a_3a_5}$ are represented
by different diagrams (see Fig.~\ref{fig:disc_g}), which means that $f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_5})$ is
not necessarily symmetric with respect to swapping
$\sigma_{a_4}$ and $\sigma_{a_5}$.
We will refer to the number $n$ as the
{\sl order} of the given term, even though it can come from a term of
that order or higher in DPT.
\begin{figure}
\centering
\subfigure{
\label{fig:disc1}
\includegraphics[width=1.0in]{fig1.eps}}
\subfigure{
\label{fig:disc2}
\includegraphics[width=1.0in]{fig2.eps}}
\caption{Examples of contractible diagrams.}
\label{fig:disc_g}
\end{figure}
\subsubsection{Contractible diagrams}
\label{sec:contraction-rules}
First we would like to show that any such term represented by a diagram
containing a point $i$ with less than two connections to other points
can be reduced to a term of one lower order. These diagrams are
``contractible'' (see Fig.~\ref{fig:contractible_examples} for
examples). We prove this by showing that the sum over $a_i$ can be
carried out explicitly to obtain an expression of the same form of
Eq.\eqref{eq:1} in terms of the $n-1$ remaining sum variables. There
are two cases. Suppose in $G_n$ the point $i$ in question has no lines
connected to it. Taking $i=n$ without loss of generality, we note that
the sum on $a_n$ is unconstrained, i.e. it runs over all lattice sites.
Thus we may write
\begin{eqnarray}
\label{eq:3}
& & \hspace{-0.1in}2 \sum_{a_n}
f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_n}) = \sum_t \sum_{a\in t}
f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_{n-1}},\sigma_{a}) \\
& = & N_t \left[3 f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_{n-1}},+)+
f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_{n-1}},-) \right].\nonumber
\end{eqnarray}
The first equality holds because each pyrochlore site belongs to
exactly two tetrahedra, and the second line applies because on every
tetrahedron there is the same set of four single-spin states (three
majority and one minority). By inserting Eq.\eqref{eq:3} into
Eq.\eqref{eq:1}, one reduces the order of this term, as asserted above.
\begin{figure}
\centering
\includegraphics[width=3.0in]{fig3.eps}
\caption{Examples of contractible ((a) and (b)) and non-contractible (c) diagrams.}
\label{fig:contractible_examples}
\end{figure}
\begin{figure}[hbt]
\centering
\subfigure[Site adjoining the tetrahedra is the only minority site.]{
\label{fig:conf1}
\includegraphics[width=1.5in]{fig4.eps}}
\subfigure[Two minority sites connected by two parallel links.]{
\label{fig:conf2}
\includegraphics[width=1.5in]{fig5.eps}}
\hspace{2.0in}
\subfigure[Two minority sites connected by two links meeting at an angle.]{
\label{fig:conf3}
\includegraphics[width=1.5in]{fig6.eps}}
\caption{(Color online) The three possible configurations
of minority sites (red arrow) on two adjacent
tetrahedra, in the 3:1 manifold of states.}
\label{fig:configs}
\end{figure}
Consider the second case, in which there is one connection to the
point $i=n$. We may suppose this connection is to the point $j<n$.
The sum over $a_n$ is then constrained {\sl only} by the requirement that
$a_n$ be a nearest-neighbor of $a_j$. For fixed $a_j$, this includes
just 6 sites on the pyrochlore lattice. Moreover, the {\sl set} of
spins on these six sites is entirely determined by the spin at site
$a_j$. In particular, if $\sigma_{a_j}=+1$, the sum contains $4$
terms with $\sigma_{a_n}=+1$ and $2$ terms with $\sigma_{a_n}=-1$; if
$\sigma_{a_j}=-1$, the sum contains $6$ terms with $\sigma_{a_n}=+1$.
This can easily be understood from Fig.~\ref{fig:configs}.
Therefore the sum can again be carried out explicitly:
\begin{eqnarray}
\label{eq:5}
&& \sum_{a_n}
\Gamma_{a_n a_j} f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_n}) = \nonumber \\
&&
\frac{1+\sigma_{a_j}}{2}
\left[4f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_{n-1}},+)+2f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_{n-1}},-)\right]
\nonumber \\
&& + \frac{1-\sigma_{a_j}}{2} 6 f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_{n-1}},+) .
\end{eqnarray}
Once again, Eq.\eqref{eq:5} can be inserted into Eq.\eqref{eq:1}
to reduce the order by one.
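The second contraction rule can be checked mechanically. In the Python sketch below (illustrative only; $f$ is a randomly tabulated stand-in for $f_{G_n}$, and the neighbor spin multisets are read off from Fig.~\ref{fig:configs}), the closed form of Eq.\eqref{eq:5} is compared with a direct sum over the six neighbors:

```python
import itertools
import random

random.seed(0)
n = 3  # a small illustrative order; f is a hypothetical tabulated summand
f = {sig: random.random() for sig in itertools.product((+1, -1), repeat=n)}

# In a 3:1 state, a majority site (+1) has four +1 and two -1 neighbors,
# while a minority site (-1) has six +1 neighbors.
neighbor_spins = {+1: [+1] * 4 + [-1] * 2, -1: [+1] * 6}

def contracted(sig_rest, sig_j):
    """Closed form of the sum over a_n for a point with a single link to a_j."""
    return ((1 + sig_j) // 2) * (4 * f[sig_rest + (+1,)] + 2 * f[sig_rest + (-1,)]) \
         + ((1 - sig_j) // 2) * 6 * f[sig_rest + (+1,)]

for sig_rest in itertools.product((+1, -1), repeat=n - 1):
    for sig_j in (+1, -1):
        direct = sum(f[sig_rest + (s,)] for s in neighbor_spins[sig_j])
        assert abs(direct - contracted(sig_rest, sig_j)) < 1e-12
```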
\subsubsection{Non-contractible diagrams}
Since all contractible diagrams can be reduced using the above rules
until they become either non-contractible or constant, we therefore
need to consider only non-contractible diagrams. In these diagrams,
each point in $G_n$ is connected to at least two other points. Let us
first make a few general observations about these diagrams.
One can readily see that for these diagrams at order $n\leq 5$, all points
must be connected, i.e. it is possible to pass from one point to any
other by a sequence of links. It is useful to consider the notion of
a {\sl loop}, or sequence of points, each connected to the next by a
link, which visits no point twice and returns to the first point of
the sequence. For $n\leq 4$, there is always at least one loop which
includes all $n$ points. For $n=5$, all but three non-contractible
diagrams contain a loop of length $5$. The three remaining diagrams
at $n=5$ contain smaller loops
(see part (c) of Fig.~\ref{fig:diagrams}).
All the non-contractible single loop diagrams for $n \leq 5$ are shown
in Fig.~\ref{fig:diagrams}.
For $n=6$, there is one possible
\emph{disconnected} diagram, which
contains two disjoint loops of length $3$. Apart from this last diagram,
all others are fully connected.
\begin{figure}
\centering
\includegraphics[width=2.5in]{fig7.eps}
\caption{All $n\leq 5$ non-contractible diagrams.
(a) The triangle diagram is the only possible such
diagram for $n=3$. (b) The square framed diagrams are
all the possibilities for $n=4$. (c) The
pentagon-framed diagrams together with the three
right-most diagrams comprise all the possibilities for
$n=5$.}
\label{fig:diagrams}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=2.5in]{fig8.eps}
\caption{(Color online) 3 example topologies of closed paths of
five steps on the pyrochlore lattice. These are all the
possible topologies of clusters corresponding to diagrams at
order $n=5$ containing a loop of length 5.}
\label{fig:path5}
\end{figure}
Let us consider the physical pyrochlore sites which are summed over in
a given term. They comprise a set ${\sf S}(G_n) = \{
(a_1^{(1)},\ldots,a_n^{(1)}),(a_1^{(2)},\ldots,a_n^{(2)}),\ldots\}$ of
solutions, $(a_1^{(i)},\ldots,a_n^{(i)})$, to the conditions
\begin{eqnarray}
\label{eq:conds}
\Gamma_{a_i a_j}=1 \qquad {\rm for}\,\, (ij)\in G_n .
\end{eqnarray}
We will call these solutions ``clusters''. In an infinite system,
${\sf S}$ is of course infinite because of translational symmetry, but
this is immaterial. A given term may then be written simply as
\begin{eqnarray}
\label{eq:termset}
\sum_{(a_1,\ldots,a_n)\in {\sf S}(G_n)} f(\sigma_{a_1},\ldots,\sigma_{a_n}).
\end{eqnarray}
We note that all the clusters for $n\leq 5$ are confined to one or two
adjacent tetrahedra. This can be seen by considering the constraints
imposed on clusters by the non-contractibility of the diagram. For
instance, all but three diagrams at order $n=5$ contain a loop of length
5, and this allows only three topologies of clusters, which are
illustrated in Fig.~\ref{fig:path5}. The remaining three diagrams only allow
clusters that are confined to two or fewer adjacent tetrahedra.
We will show more generally that
any term containing only clusters confined to three or fewer adjacent
tetrahedra is a constant.
The set ${\sf S}$ can therefore be broken up into three components,
comprising clusters which contain $1$, $2$, or $3$
multiply-occupied tetrahedra,
\begin{eqnarray}
\label{eq:Ssplit}
{\sf S}(G_n) = {\sf S}_1(G_n) + {\sf S}_2(G_n) + {\sf S}_3(G_n).
\end{eqnarray}
The sum in Eq.\eqref{eq:termset} can be carried out separately over
these three sets. Let us consider first the sum over ${\sf S}_1$.
The clusters in ${\sf S}_1$ can be divided into subsets of those
residing on a specific tetrahedron ${\sf S}_1^t$.
An arbitrary permutation $P$ of the 4 sites on tetrahedron $t$
leaves the set ${\sf S}_1^t(G_n)$ invariant. This is because
each solution obeys Eq.\eqref{eq:conds}, and
$\Gamma_{a_i a_j} = \Gamma_{P(a_i) P(a_j)}$ for $a_i,a_j \in t$
(this is a set of permutations that leaves
nearest neighbor pairs invariant).
The contribution of all clusters on $t$ to the term in question
can only be a function of the 4 Ising variables of the 4 sites $q = 1,2,3,4 \in t$
\begin{eqnarray}
\label{eq:cl1}
\sum_{(a_1,\ldots,a_n)\in {\sf S}_1^t}
f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_n}) = F(\sigma_1,\sigma_2,\sigma_3,\sigma_4)
\; .
\end{eqnarray}
Now we can use the fact that the spin configurations on one
tetrahedron are always constrained to be of the 3:1 form, i.e. they
are a permutation $P$ of the specific configuration $+++-$:
\begin{eqnarray}
\label{eq:permsig}
\sigma_q = \sigma^0_{P(q)},
\end{eqnarray}
with $(\sigma^0_1,\sigma^0_2,\sigma^0_3,\sigma^0_4)=(+,+,+,-)$. Here
$q\rightarrow P(q)$ is a permutation of the $4$ sites.
The specific (cyclic) permutation $P$ now encodes the spin state on
this tetrahedron
\begin{equation}\label{eq:cl3}
\begin{split}
& F(\sigma_{P(1)}^0,\sigma_{P(2)}^0,\sigma_{P(3)}^0,\sigma_{P(4)}^0)
\\ = &
\sum_{(a_1,\ldots,a_n)\in {\sf S}_1^t}
f_{G_n}(\sigma_{P(a_1)}^0,\ldots,\sigma_{P(a_n)}^0)
\\ = &
\sum_{(a_1,\ldots,a_n)\in P^{-1}({\sf S}_1^t)}
f_{G_n}(\sigma_{a_1}^0,\ldots,\sigma_{a_n}^0)
\; .
\end{split}
\end{equation}
Since the set ${\sf S}_1^t(G_n)$ is invariant under
these permutations, we find from the last expression, that
$F(\sigma_1,\sigma_2,\sigma_3,\sigma_4)$ is also invariant under the permutations.
Hence this contribution is identical for {\sl all} spin
configurations, and is a constant within the 3:1 manifold.
Let us next consider the clusters in ${\sf S}_2$. For each cluster,
there are two neighboring tetrahedra $t,t'$ which each contain two or
more sites $a_i$. These tetrahedra share one specific site, which we
call $A$; conversely, the pair of tetrahedra $t$ and $t'$ is
determined by the choice of $A$. For one such cluster, the sites
$a_i$ with $i=1\ldots n$ may be partitioned into three groups: the
site $A$, and those which lie on $t$ or on $t'$ but are not $A$:
\begin{eqnarray}
\label{eq:14}
{\sf t} & = & \{ a_i | a_i \in t, a_i \neq A\}, \\
{\sf t'} & = & \{ a_i | a_i \in t', a_i \neq A\}.
\end{eqnarray}
Similarly to ${\sf S}_1$, we can divide ${\sf S}_2$ into subsets ${\sf S}_2^A$
residing on tetrahedron pairs defined by the site $A$.
We can then rewrite the sum by summing $A$ over all lattice sites,
and summing the set of sites $a_1 \ldots a_n$ over ${\sf S}_2^A$
\begin{equation}
\label{eq:15}
\begin{split} &
\sum_{(a_1,\ldots,a_n)\in {\sf S}_2}
f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_n}) \\ & =
\sum_A \sum_{(a_1,\ldots,a_n)\in {\sf S}_2^A}
f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_n}).
\end{split}
\end{equation}
We now observe that the set of solutions ${\sf S}_2^A$ is invariant
under any permutation $P_t$ ($P_{t'}$) of the 3 sites in ${\sf t}$
(${\sf t'}$). Exactly as for ${\sf S}_1^t$ this is because each
solution in ${\sf S}_2^A$ obeys Eq.\eqref{eq:conds}, and $\Gamma_{a_i
a_j} = \Gamma_{P_t(a_i) P_t(a_j)}$ for $a_i,a_j \in A\cup {\sf t}\cup
{\sf t'}$ (and the same holds if $P_t$ is replaced by $P_{t'}$).
The sum
\begin{equation}\label{SevenIsing}
\sum_{(a_1,\ldots,a_n)\in {\sf S}_2^A}
f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_n})
\end{equation}
can only be a function of the 7 Ising variables of the sites in $A\cup
{\sf t}\cup{\sf t'}$. Due to the 3:1 constraint, if
$\sigma_A=+$, then the Ising variables $\sigma_q$ for $q \in {\sf t}$ must be a
permutation $P_t$ of $\sigma_q^{(1)}=(++-)$. If $\sigma_A=-$, then all
the $\sigma_q=+$.
Hence we may write
\begin{eqnarray}
\label{eq:17}
\sigma_{q} = \left\{ \begin{array}{ll} \frac{(1+\sigma_A)}{2}
\sigma^{(1)}_{P_t(q)} + \frac{(1-\sigma_A)}{2} (+1) & {\rm for}\;
q\in {\sf t} \\
\frac{(1+\sigma_A)}{2}
\sigma^{(1)}_{P_{t'}(q)} + \frac{(1-\sigma_A)}{2} (+1) & {\rm for}\;
q\in {\sf t'}\end{array}\right. .
\end{eqnarray}
Using these expressions, and the fact that ${\sf S}_2^A$ is invariant
under these two permutations, the sum in Eq.~\eqref{SevenIsing} is found
to depend only on $\sigma_A$.
This leaves finally
\begin{eqnarray}
\label{eq:18}
\sum_{(a_1,\ldots,a_n)\in {\sf S}_2} f_{G_n}(\sigma_{a_1},\ldots,\sigma_{a_n}) = \sum_{A} \tilde{f}(\sigma_A),
\end{eqnarray}
where $\tilde{f}(\sigma_A)$ is a complicated function obtained from the
above manipulations -- which however does not depend upon $A$ itself.
The sum is then clearly constant, as the numbers of $+$ and $-$ spins
are fixed for the lattice. Thus all terms in ${\sf S}_2$ are also
constants.
Finally, consider ${\sf S}_3$. In these clusters there are three
adjoining tetrahedra, and one may identify a ``central'' tetrahedron $t$
which shares a site with each of the other two tetrahedra $t',t''$.
Here one may divide the sum variables into five groups: two
corresponding to the site shared by $t,t'$ and the site shared by
$t,t''$, and three others corresponding to the sites on $t,t',t''$ but
not shared. One can again sum over the unshared sites on $t'$ and
$t''$, and obtain an expression for the cluster sum which involves sites
only on $t$. By manipulations of the type used to analyze ${\sf S}_1$,
one finds that this remaining single-tetrahedron sum must also be
constant.
We conclude that any term for which the corresponding clusters are
confined to three or fewer adjacent tetrahedra must be constant. Therefore
all terms up to and including 5th order are constant. At
sixth order, amongst the non-contractible diagrams there are a few
exceptions. First, there is one disconnected diagram containing
two loops of length three. In
this term, the sum over variables in the first and second groups is
independent, and therefore each can be carried out separately as for a
third order term. This gives immediately a constant contribution.
The remaining diagrams are connected. All but one of these diagrams
contains a loop of length $5$ or less (possibly in addition to other
larger loops). Such terms are confined to three or fewer tetrahedra,
and are constant by the above arguments. What remains is the single
diagram consisting of {\sl only} a single
loop of length six, shown in Fig.\ref{fig:plaquette_loop}.
This ``large loop'' diagram is thus the sole non-trivial contribution. It
can be written in the form
\begin{eqnarray}
\label{eq:form6}
&& {\mathcal H}_6^{L}[\{\sigma_i\}] = \sum_{a_1\ldots a_6}
\left(\prod_{i=1}^6 \Gamma_{a_i a_{i+1}}\right) f_L(\sigma_{a_1},\ldots,\sigma_{a_6}),
\end{eqnarray}
where we identify $a_7=a_1$. To analyze each term (given a particular set $a_1 \ldots a_6$),
we employ a trick:
multiplying it by a carefully chosen representation of the
identity
\begin{eqnarray}
\label{eq:10}
1=\prod_{\langle\langle ij\rangle\rangle} \left( \delta_{a_i a_j} +
\eta_{a_i a_j}\right),
\end{eqnarray}
with $\eta_{ab}=1-\delta_{ab}$. Here the product is over distinct pairs
$i,j$ which are {\sl not} connected in the loop diagram. We multiply
the loop term by this expression and expand the product fully. All but
one term involves at least one Kronecker $\delta$-function. In each of
these summand terms, at least one sum can be collapsed, leading to a lower-order
term, which we have already shown is necessarily a constant. The
remaining non-vanishing part is the original summand term multiplied by the
product,
\begin{eqnarray}
\label{eq:11}
\prod_{\langle\langle ij\rangle\rangle} \eta_{a_i a_j}.
\end{eqnarray}
This factor is non-zero if and only if all $n=6$ sites $a_i$ are
\emph{distinct}. Thus the sites $a_i$ must comprise a closed walk on
the lattice in which each site on the walk is visited only once. On
the pyrochlore lattice, this is exactly the set of hexagonal
plaquettes. A specific plaquette on the lattice containing sites
$i_1\ldots i_6$ in sequence around the plaquette appears 12 times in
the sum in Eq.\eqref{eq:form6}, with $a_1\ldots a_6$ taking the six cyclic
permutations of $i_1\ldots i_6$ {\sl and} the six cyclic permutations
of these sites in reverse order. Hence the non-constant contribution
to the diagonal energy at 6th order in DPT can be written:
\begin{eqnarray}
\label{eq:Hnotconst}
{\mathcal H}_6 & = & \sum_{\mathcal P} {\mathcal E}_{\mathcal
P}(\sigma_{i_1},\ldots,\sigma_{i_6}),
\end{eqnarray}
where $i_1,\ldots,i_6$ are the six sites moving clockwise around
plaquette ${\mathcal P}$, and
\begin{eqnarray}
\label{eq:Eform}
&& {\mathcal E}_{\mathcal
P}(\sigma_{i_1},\ldots,\sigma_{i_6})= \nonumber \\
&& \sum_{k=1}^6 \left[ f_L(\sigma_{i_{k}},\ldots,\sigma_{i_{k+5}})
+f_L(\sigma_{i_{k+5}},\ldots,\sigma_{i_{k}}) \right],
\end{eqnarray}
where $i_{k+6}\equiv i_k$.
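Because Eq.\eqref{eq:Eform} sums over all six starting points and both orientations, ${\mathcal E}_{\mathcal P}$ automatically carries the full symmetry of the hexagon. A small Python sketch (with a randomly tabulated $f_L$ as a hypothetical stand-in) verifies this invariance:

```python
import itertools
import random

random.seed(1)
# hypothetical tabulated loop summand f_L(sigma_1, ..., sigma_6)
f_L = {sig: random.random() for sig in itertools.product((+1, -1), repeat=6)}

def plaquette_energy(sig):
    """Sum over the 6 cyclic starting points and both orientations (12 terms)."""
    total = 0.0
    for k in range(6):
        window = tuple(sig[(k + m) % 6] for m in range(6))
        total += f_L[window] + f_L[window[::-1]]
    return total

# invariance under rotations and reflection of the plaquette spins
for sig in itertools.product((+1, -1), repeat=6):
    rotated = sig[1:] + sig[:1]
    assert abs(plaquette_energy(sig) - plaquette_energy(rotated)) < 1e-9
    assert abs(plaquette_energy(sig) - plaquette_energy(sig[::-1])) < 1e-9
```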
\begin{figure}
\centering
\includegraphics[width=1.0in]{fig9.eps}
\caption{The only diagram at order $n=6$ giving a non-constant
diagonal contribution in degenerate perturbation theory.}
\label{fig:plaquette_loop}
\end{figure}
\subsection{Results}
\label{DPT_result}
We have carried out the calculations detailed in the previous
subsections. Specifically, by explicitly constructing $|\Psi\rangle$
in Eq.\eqref{eq:26}, we obtained ${\mathcal H}_6[\{\sigma_i\}]$ in
Eq.\eqref{eq:mag6}. From this, we extracted the function $f_L$ in
Eq.\eqref{eq:form6} and thereby determined the plaquette energies
${\mathcal E}_{\mathcal P}$ using Eq.\eqref{eq:Eform}. Using the 3:1
constraint, there are 5 configurations possible on any plaquette,
which we denote ``type 0'' to ``type 4''. These are enumerated in
Table~\ref{table1}. The DPT calculation gives a specific energy
(proportional to $J_z \alpha^6$) for each type.
\begin{table}[h]
\begin{tabular}{|c|c|c|}
\hline
Type & Configuration & Fraction of \\
& & minority spins \\
\hline
$0$ & $\uparrow\uparrow\uparrow\uparrow\uparrow\uparrow$ & $0$ \\
$1$ & $\downarrow\uparrow\downarrow\uparrow\downarrow\uparrow$ & $\frac{1}{2}$\\
$2$ & $\downarrow\uparrow\uparrow\uparrow\uparrow\uparrow$ & $\frac{1}{6}$\\
$3$ & $\downarrow\uparrow\downarrow\uparrow\uparrow\uparrow$ & $\frac{1}{3}$\\
$4$ & $\downarrow\uparrow\uparrow\downarrow\uparrow\uparrow$ & $\frac{1}{3}$\\
\hline
\end{tabular}
\caption{\label{table1} The different plaquette types, with the fraction of minority sites in each one.}
\end{table}
There is some freedom in the choice of these 5 energies. That is,
certain changes of the plaquette energies leave the {\sl differences}
of total energy amongst distinct 3:1 states unchanged. One obvious
such ``gauge'' change is a global shift of all 5 energies by the same
amount. Another less obvious constraint comes directly from the 3:1
rule. If one denotes the fraction of plaquettes in the lattice in
configuration $a$ by $x_a$, the total fraction of minority sites must
always be $1/4$. Each plaquette configuration has a fixed fraction of
minority sites $M_a$, given in Table~\ref{table1}. Thus
\begin{equation}\label{magnetization}
\frac{1}{4} = \sum_{a=0}^4 M_a x_a .
\end{equation}
The energy per plaquette is then
\begin{equation}\label{diagonal_energy}
{\mathcal H}_6 = \sum_{a=0}^4 {\mathcal E}_a x_a \; .
\end{equation}
Using \eqref{magnetization}, one sees that a shift $\Delta{\mathcal
E}_a = c M_a$, with arbitrary constant $c$, shifts the energy by a
constant. The obvious global energy shift remarked on above derives
similarly from the normalization condition
$\sum_a x_a = 1$. Using these two constraints, we see there are only
3 independent plaquette fractions.
We (arbitrarily) choose to keep $x_{1,2,4}$ as our independent variables.
Substituting the solutions for the other fractions ($x_{0,3}$) into
\eqref{diagonal_energy}, we find
\begin{equation}
{\mathcal H}_6 =x_1 V_1 + x_2 V_2 + x_4 V_4,
\end{equation}
with the 3 ``gauge invariant'' physical energy parameters
\begin{eqnarray}
\label{eq:ginv}
V_1 & = & \frac{1}{2} \left({\mathcal E}_0+2 {\mathcal E}_1-3
{\mathcal E}_3\right) , \nonumber\\
V_2 & = & \frac{1}{2} \left(-{\mathcal E}_0+2 {\mathcal
E}_2-{\mathcal E}_3\right), \nonumber\\
V_4 & = & \left( {\mathcal E}_4-{\mathcal E}_3 \right).
\end{eqnarray}
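These combinations can be checked directly: eliminating $x_{0,3}$ with the two constraints reproduces Eq.\eqref{eq:ginv}, and the $V$'s are unchanged by the gauge shifts $\Delta{\mathcal E}_a = c + c' M_a$. A short exact-arithmetic Python sketch (the numerical values of ${\mathcal E}_a$ and the $x_a$ are arbitrary placeholders):

```python
from fractions import Fraction as F

M = [F(0), F(1, 2), F(1, 6), F(1, 3), F(1, 3)]  # minority fractions, Table I
E = [F(3), F(-2), F(7), F(1), F(-5)]            # arbitrary placeholder energies

def V_params(E):
    """Gauge-invariant combinations of the five plaquette energies."""
    V1 = F(1, 2) * (E[0] + 2 * E[1] - 3 * E[3])
    V2 = F(1, 2) * (-E[0] + 2 * E[2] - E[3])
    V4 = E[4] - E[3]
    return V1, V2, V4

def fractions_from(x1, x2, x4):
    """Solve sum_a x_a = 1 and sum_a M_a x_a = 1/4 for x_3 and x_0."""
    x3 = F(3, 4) - F(3, 2) * x1 - F(1, 2) * x2 - x4
    x0 = 1 - x1 - x2 - x3 - x4
    return [x0, x1, x2, x3, x4]

xa = fractions_from(F(1, 10), F(1, 5), F(1, 10))
xb = fractions_from(F(1, 8), F(1, 3), F(1, 6))
for x in (xa, xb):
    assert sum(x) == 1 and sum(m * xi for m, xi in zip(M, x)) == F(1, 4)

# energy differences between allowed states depend only on V_1, V_2, V_4 ...
V1, V2, V4 = V_params(E)
diff = sum(e * (a - b) for e, a, b in zip(E, xa, xb))
assert diff == V1 * (xa[1] - xb[1]) + V2 * (xa[2] - xb[2]) + V4 * (xa[4] - xb[4])

# ... and are unchanged by the gauge shifts E_a -> E_a + c + c' M_a
E_shifted = [e + 7 + 5 * m for e, m in zip(E, M)]
assert V_params(E_shifted) == (V1, V2, V4)
```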
\begin{widetext}
Our DPT results are:
\begin{equation}
\begin{split}
V_1 = & - J_z \alpha^6
\frac{3 s^4(98304 s^5-139648 s^4+79136 s^3-22040 s^2+3006s-165)}{32 (2 s-1) (4 s-1)^5 (8 s-3)^2 (12 s-5)} ,
\\
V_2 = & J_z \alpha^6
\frac{s^3 \left(256 s^3-51 s+9\right)}{32 (4 s-1)^3 (8 s-3)^2},
\\
V_4 = & J_z \alpha^6
\frac{s^4 \left(272 s^2-136 s+15\right)}{16 (4 s-1)^5 (8 s-3)^2}
\; .
\end{split}
\end{equation}
\end{widetext}
We have made several checks on the above calculation. First, we have
carried out a more direct scheme which
sums the terms in DPT in a completely different manner from the
methods described in this section. We leave the lengthy details of this
calculation to Appendix~\ref{app:Other_DPT}. The results of this
alternative method agree perfectly with those quoted above. Second,
in the following section we will compare the $s \rightarrow \infty$
limit of the above result with the result of a large $s$ calculation
for the XXZ model. The large $s$ limit of the energies we find in DPT
becomes
\begin{equation}\label{Infty_S_lim_DPT}
\begin{split}
\mathop {\lim }\limits_{s \to \infty } \frac{V_1}{s} = & 0,
\\
\mathop {\lim }\limits_{s \to \infty } \frac{V_2}{s} = & \frac{J_z \alpha^6}{512} ,
\\
\mathop {\lim }\limits_{s \to \infty } \frac{V_4}{s} = & 0
\; .
\end{split}
\end{equation}
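These limits follow from the leading powers of $s$ in the closed forms above; they can also be confirmed numerically in exact rational arithmetic, as in the Python sketch below (evaluating $V_a/s$ at a very large spin, in units of $J_z \alpha^6$):

```python
from fractions import Fraction as F

def V1(s):
    # closed form quoted above, in units of J_z * alpha^6
    return -F(3 * s**4 * (98304*s**5 - 139648*s**4 + 79136*s**3
                          - 22040*s**2 + 3006*s - 165),
              32 * (2*s - 1) * (4*s - 1)**5 * (8*s - 3)**2 * (12*s - 5))

def V2(s):
    return F(s**3 * (256*s**3 - 51*s + 9), 32 * (4*s - 1)**3 * (8*s - 3)**2)

def V4(s):
    return F(s**4 * (272*s**2 - 136*s + 15), 16 * (4*s - 1)**5 * (8*s - 3)**2)

s = 10**9  # proxy for s -> infinity
assert abs(V1(s) / s) < F(1, 10**6)              # V1/s -> 0
assert abs(V2(s) / s - F(1, 512)) < F(1, 10**6)  # V2/s -> 1/512
assert abs(V4(s) / s) < F(1, 10**6)              # V4/s -> 0
```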
We shall see that this result indeed agrees exactly with the
corresponding limit of the large $s$ expansion.
\subsection{Off diagonal term}
In this section we describe how the lowest order off diagonal term in the DPT effective Hamiltonian
is calculated. As explained in Section~\ref{sec:dpt_formulation}, this term appears at order $O(\alpha^{6 s})$.
The lowest order off diagonal term acts only on a hexagonal plaquette in the flippable configuration
(type 1 plaquette, as in Table~\ref{table1}). It changes the plaquette configuration from one
flippable configuration to the other flippable configuration. Therefore the off diagonal term has
the following general form
\begin{equation}
\begin{split} &
{\mathcal H}_{\textrm{off diagonal}}
= (-1)^{6s+1} \alpha^{6 s} J_z K
\sum_P
\left({\centering \includegraphics[width=0.4in]{fig34.eps}} + {\rm h.c.} \right)
\\
= & (-1)^{6s+1} \alpha^{6 s} J_z K
\sum_P \left( \ket{\d\u\d\u\d\u} \bra{\u\d\u\d\u\d} + {\rm h.c.} \right)
\; ,
\end{split}
\end{equation}
where we denote the two flippable configurations of the plaquette by $\includegraphics[width=0.2in]{fig35.eps}$,
$\includegraphics[width=0.2in]{fig36.eps}$
for the sake of brevity. Note that we can change the $(-1)^{6s+1}$ factor into a $(-1)$ by
a unitary transformation similar to that employed in Ref.~\onlinecite{Hermele:prb04}.
We shall now
describe how the coefficient $K$ is calculated.
Each one of the DPT processes contributing to the off-diagonal term
consists of $2s$ spin transfer operations along each one of 3 links of a
hexagonal plaquette of the pyrochlore lattice (see
Fig.~\ref{fig:off_diagonal}), acting in some particular order.
We can calculate $K$ by adding the contributions from all the DPT
processes occurring on a single plaquette, starting in the state
$\ket{\d\u\d\u\d\u}$ and ending in the state $\ket{\u\d\u\d\u\d}$.
In every such process 3 spins go from an initial state of $+s$ to $-s$, and 3 start with $-s$ and end up as $+s$.
The spins change via ladder operators $S_j^{\pm}$, and therefore we get ``angular momentum factors'' from the
action of these operators. The same set of operators $S_j^{\pm}$ act in every process, and so these
factors are always the same. For the $S^+$ operators taking a single site from $-s$ to $+s$ we find
\begin{equation}
\prod_{m = -s}^{s-1} \sqrt{s(s+1)-m(m+1)} = (2s)!
\; ,
\end{equation}
and for the $S^-$ operators taking a single site from $+s$ to $-s$ we find
\begin{equation}
\prod_{m = -s+1}^{s} \sqrt{s(s+1)-m(m-1)} = (2s)!
\; .
\end{equation}
In total from all the ladder operators, we find a common factor
$ ((2s)!)^6 $. From the $6s$ spin transfer operations we have another common factor of $1/2^{6s}$.
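The factorial value of the ladder-operator products can be confirmed for general $s$. The Python sketch below checks the squared product (working with $2s$ as an integer so that half-integer spins are handled exactly):

```python
from fractions import Fraction
from math import factorial

def ladder_product_squared(two_s):
    """Square of prod_{m=-s}^{s-1} sqrt(s(s+1) - m(m+1)) for spin s = two_s/2."""
    s = Fraction(two_s, 2)
    prod = Fraction(1)
    m = -s
    while m <= s - 1:
        prod *= s * (s + 1) - m * (m + 1)
        m += 1
    return prod

# equals ((2s)!)^2 for s = 1/2, 1, ..., 4
for two_s in range(1, 9):
    assert ladder_product_squared(two_s) == factorial(two_s) ** 2
```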
All that remains to be calculated for a single DPT process is the product of resolvents
of each stage in the spin transfer process.
First let us classify the different processes on a single plaquette. We can choose one
of two sets of three links on which spin transfer will occur (one such choice is shown in Fig.~\ref{fig:off_diagonal}).
The contribution from each one of these
two cases is identical, so we shall calculate the contributions for one set of three links, and
multiply the final result by 2. The processes we will sum over only differ by the order in which the
spin transfer operators act on the 3 predetermined links. We call the three links $A$,$B$, and $C$, and
then each process is described by a string of $6s$ letters $q_1 \ldots q_{6s}$ which contain
$2s$ instances of each one of the three letters $A$,$B$,$C$. For example, a possible string for $s = 1$
is $AABBCC$.
From this classification, it is evident that in total there are $\frac{(6s)!}{ (2s)! (2s)! (2s)!}$ different processes.
At this point we can write a formal expression for the coefficient $K$
\begin{equation}
K = 2 \frac{((2s)!)^6}{2^{6s} } \sum_{ \{ q_n\} } \prod_{\ell = 1}^{6s-1} \tilde{\mathcal R}_{\ell}(\{ q_n\})
\; ,
\end{equation}
where $\tilde{\mathcal R}_{\ell}(\{ q_n\})$ denotes the resolvent at step $\ell$ of the DPT process described by the
string $\{ q_n\}$.
Now we turn to formulating the resolvent in a convenient manner that will facilitate the summation
over all processes.
Starting from Eq.\eqref{eq:25}, in this case the set ${\sf F}$ consists only of the 6 sites surrounding
the hexagonal plaquette. We shall denote these 6 sites $1$ through $6$, as in Fig.~\ref{fig:off_diagonal},
so that $A$ denotes the link $(1,2)$, $B$ denotes the link $(3,4)$, and $C$ denotes the link $(5,6)$.
Since the 6 sites have alternating initial states $\pm s$,
any pair of nearest neighbor sites has $\sigma_i \sigma_j = -1$.
We can therefore rewrite the inverse resolvent operator as
\begin{equation}
\tilde{\mathcal R}^{-1} = -J_z \sum_{j = 1}^6
\left(S_j^z -s\right)\left(S_{j+1}^z - s\right)
- 2 J_z s \sum_{j = 1}^6 \left(S_j^z - s\right)
\; ,
\end{equation}
where the indices are defined modulo 6, so that $S_{6+1}^z = S_1^z$. From this point on, all
index arithmetic is defined modulo 6 as well, for ease of presentation.
\begin{figure}
\centering
\includegraphics[width=2.0in]{fig10.eps}
\caption{(Color online) Off diagonal process on a single plaquette. The (red) circles denote minority sites.}
\label{fig:off_diagonal}
\end{figure}
To further simplify the resolvent, we introduce $n_A(\ell,\{ q_n\})$ as the number of times the
link $A$ has had spin transfer occur on it up to stage $\ell$ in the process
described by the string $\{ q_n\}$.
The same numbers can be also introduced for $B$ and $C$.
Then, by definition, the total number of spin transfer operations is
$n_A(\ell,\{ q_n\}) + n_B(\ell,\{ q_n\}) + n_C(\ell,\{ q_n\}) = \ell $.
In what follows we will show that the resolvent can be described only by these 3 numbers.
To see this, notice first that, regardless
of the order of spin transfer operations, a spin transfer operator on the link
$(j,j+1)$ changes $\left(S_j^z - s\right) \rightarrow \left(S_j^z - s - 1\right)$
and $\left(S_{j+1}^z - s\right) \rightarrow \left(S_{j+1}^z - s - 1\right)$.
Note also that, in
the initial state, all $\left(S_{j+1}^z - s\right) = 0$.
Thus, at every stage of any process,
$\left(S_1^z - s\right) = \left(S_2^z - s\right) = - n_A(\ell,\{ q_n\})$.
Similarly $\left(S_3^z - s\right) = \left(S_4^z - s\right)= - n_B(\ell,\{ q_n\})$,
and $\left(S_5^z - s\right) = \left(S_6^z - s\right) = - n_C(\ell,\{ q_n\})$.
Using these variables, one can then rewrite the resolvent as
\begin{equation}
\begin{split} &
\tilde{\mathcal R}_{\ell}(\{ q_n\}) = 4 J_z s \ell \\ &
- \frac{J_z}{2} \left[ \left( n_A + n_B \right)^2 + \left( n_B + n_C \right)^2 + \left( n_C + n_A \right)^2 \right]
\; ,
\end{split}
\end{equation}
where we have suppressed the explicit dependence of the $n_{A,B,C}$ numbers on $\ell,\{ q_n\}$
for clarity. It is more convenient to derive a recursion relation for the resolvent at stage $\ell$
\begin{equation}\label{recursion}
\tilde{\mathcal R}_{\ell + 1}(\{ q_n\}) =
\tilde{\mathcal R}_{\ell}(\{ q_n\}) +
J_z \left( 4s - 1 - \ell - n_{q_{\ell +1}}(\ell,\{ q_n\}) \right)
\; .
\end{equation}
The initial condition for this recursive series is $\tilde{\mathcal R}_0 = 0$.
Using Eq.\eqref{recursion}, we can calculate the product $\prod_{\ell = 1}^{6s-1} \tilde{\mathcal R}_{\ell}(\{ q_n\})$
for a given process. For every process,
we need to keep track of only the 3 numbers $n_{A,B,C}$
in the various steps of the process.
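As a consistency check, the ring form of the inverse resolvent and its closed form in $n_{A,B,C}$ can be compared mechanically. The sketch below (function names are ours; units $J_z = 1$ unless passed explicitly) evaluates both polynomials on a grid of small integer arguments, which suffices to establish equality of low-degree polynomials:

```python
def inverse_resolvent_ring(nA, nB, nC, s, Jz=1):
    """Ring form: -Jz sum_j (S_j^z - s)(S_{j+1}^z - s) - 2 Jz s sum_j (S_j^z - s),
    with (S_j^z - s) = (-nA, -nA, -nB, -nB, -nC, -nC) around the hexagon."""
    vals = [-nA, -nA, -nB, -nB, -nC, -nC]
    return (-Jz * sum(vals[j] * vals[(j + 1) % 6] for j in range(6))
            - 2 * Jz * s * sum(vals))

def inverse_resolvent_closed(nA, nB, nC, s, Jz=1):
    """Closed form: 4 Jz s ell - (Jz/2)[(nA+nB)^2 + (nB+nC)^2 + (nC+nA)^2]."""
    ell = nA + nB + nC
    return 4 * Jz * s * ell - Jz * ((nA + nB)**2 + (nB + nC)**2 + (nC + nA)**2) / 2

# the two expressions agree identically
assert all(inverse_resolvent_ring(a, b, c, s) == inverse_resolvent_closed(a, b, c, s)
           for a in range(4) for b in range(4) for c in range(4) for s in range(1, 4))
```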
We have calculated the coefficient $K$ explicitly for a number
of interesting values of $s$. The results are summarized in Table~\ref{table3}.
\begin{table}[h]
\begin{tabular}{|c|c|}
\hline
$s$ & $K$ \\
\hline
$\frac{1}{2}$ & $\frac{3}{2}$ \\
$1$ & $0.166$ \\
$\frac{3}{2}$ & $0.00839536$ \\
$2$ & $0.000304464 $ \\
$\frac{5}{2}$ & $9.1747 \times 10^{-6}$ \\
\hline
\end{tabular}
\caption{\label{table3} Values of the coefficient $K$ for the lowest-order off-diagonal term, for various values of $s$.}
\end{table}
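For small $s$ the sum over processes can be carried out by brute force. The following sketch (ours) works in units $J_z = 1$, reads the resolvent at stage $\ell$ as the reciprocal of the accumulated energy denominator $E_\ell$ built from the recursion above, and takes the product over the $6s-1$ intermediate stages only (the final stage has $E_{6s} = 0$, since the final state is degenerate with the initial one):

```python
from fractions import Fraction
from math import factorial

def coefficient_K(two_s):
    """K for spin s = two_s/2 by exhaustive enumeration of DPT strings.

    A process is a string over the links A, B, C in which each link
    receives exactly 2s spin transfers (6s steps in total).  The energy
    denominator obeys E_{l+1} = E_l + (4s - 1 - l - n_link), with E_0 = 0."""
    length = 3 * two_s                       # = 6s steps
    total = Fraction(0)

    def walk(n, ell, energy, prod):
        nonlocal total
        if ell == length:
            total += prod
            return
        for link in range(3):
            if n[link] == two_s:             # this link is already fully flipped
                continue
            e_next = energy + (2 * two_s - 1 - ell - n[link])
            n[link] += 1
            walk(n, ell + 1, e_next,
                 prod if ell == length - 1 else prod / e_next)
            n[link] -= 1

    walk([0, 0, 0], 0, Fraction(0), Fraction(1))
    return 2 * Fraction(factorial(two_s))**6 / Fraction(2)**length * total

assert coefficient_K(1) == Fraction(3, 2)        # s = 1/2
assert coefficient_K(2) == Fraction(83, 500)     # s = 1: 0.166 exactly
```

For $s=\frac{3}{2}$ and $s=2$ the same enumeration (1680 and 34650 strings, respectively) should likewise match the remaining entries of Table~\ref{table3} to the quoted precision.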
\section{Large $s$ expansion}
\label{large_s}
A large $s$ analysis has recently been employed in
Ref.~\onlinecite{Hizi:prb06} to explore the magnetic order for the
general spin $s$ Heisenberg AFM on the pyrochlore lattice. Restricting
the Hilbert space to collinear spin configurations, the authors of
Ref.~\onlinecite{Hizi:prb06} derived an effective Hamiltonian out of the
harmonic spin wave energy contribution, to order ${\cal O}(s)$. The
effective Hamiltonian prefers spin products around hexagonal plaquettes
to be $-s^6$ in the zero magnetic field, and $+s^6$ in the
half-polarized plateau region. Following a terminology inspired by Ising
gauge theory, these are denoted ``$\pi$ flux'' configurations and ``zero
flux'' configurations, respectively. In order to compare this approach,
which is justified in the large $s$ limit, with the DPT analysis of
Section~\ref{sec:DPT_Leon}, we have repeated the same type of effective
Hamiltonian calculation for the XXZ model. Our derivation follows
closely that of Ref.~\onlinecite{Hizi:prb06}.
The large $s$ expansion consists of expressing the spin degrees of
freedom in terms of Holstein-Primakoff bosons and expanding in
decreasing powers of $s$. The lowest order term in the large $s$
expansion is of order $s^2$, and corresponds to the classical spin
version of the quantum XXZ Hamiltonian
\begin{equation}\label{H_cl_aniso}
\begin{split}
{\mathcal H}_{\textrm{cl}} = &
J_z \sum_{\langle i j \rangle}
\left[ \alpha \left( {\bf S}_i \cdot {\bf S}_j \right) +
\left( 1 - \alpha \right) \left( {\bf S}_i \cdot {\hat z}\right)
\left( {\bf S}_j \cdot {\hat z}\right)
\right]
\\ &
- 2 J_z h \sum_j S^z_j
\; ,
\end{split}
\end{equation} where as before $\alpha = J_{\perp}/J_z$.
In order to analyze the ground state of this anisotropic classical model \eqref{H_cl_aniso}, we first
calculate the minimum energy
configuration for a \emph{single} tetrahedron. For the single tetrahedron we obtain the magnetization
curve shown in Fig.~\ref{fig:Classical_Anisotropic_Magnetization2}. We find that for
$\alpha < 1$ a half polarization plateau opens up, and the plateau
becomes wider as $\alpha$ decreases from $1$.
In this plateau, the classical spins on the single tetrahedron being analyzed are in a collinear configuration,
with three ${\bf S}_j = s{\hat z}$ and one ${\bf S}_j = -s{\hat z}$ spins.
This is just the classical analog of the 3:1 configuration on a single tetrahedron
found in Section~\ref{easy_axis}.
A 3:1 spin configuration can be realized on each and every tetrahedron of the lattice simultaneously.
We therefore conclude that in the range of magnetic fields where
the single tetrahedron is in a half polarized state,
the ground state of the many body system~\eqref{H_cl_aniso}
is the manifold of 3:1 configurations. This means that the
plateaus in the classical XXZ model on the complete pyrochlore lattice
are at least as wide as in
Fig.\ref{fig:Classical_Anisotropic_Magnetization2}.
\begin{figure}
\centering
\includegraphics[width=3.6in]{fig11.eps}
\caption{(Color online) Magnetization (in units of $s$) of a single tetrahedron of classical spins with an anisotropic XXZ interaction, parametrized by $\alpha$. For any $\alpha <1$ a half polarization plateau exists.}
\label{fig:Classical_Anisotropic_Magnetization2}
\end{figure}
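The single-tetrahedron minimization is easy to reproduce numerically. The sketch below (ours; units $J_z = 1$) relaxes four classical spins by repeatedly aligning each spin with its local field, which monotonically lowers the energy of Eq.~\eqref{H_cl_aniso} restricted to one tetrahedron; inside the plateau it converges to the collinear 3:1 state with $\sum_j S_j^z = 2s$:

```python
import numpy as np

def relax_tetrahedron(alpha, h, s=1.0, n_starts=40, n_iter=500, seed=0):
    """Minimize E = sum_{i<j}[alpha S_i.S_j + (1-alpha) S_i^z S_j^z] - 2 h sum_j S_j^z
    over four classical spins of length s by iterated local-field alignment.
    Each single-spin update is an exact minimization given the other spins,
    so the energy decreases monotonically.  Returns (E, total S^z) of the
    best local minimum found over the random restarts."""
    rng = np.random.default_rng(seed)
    best = (np.inf, None)
    for _ in range(n_starts):
        S = rng.normal(size=(4, 3))
        S *= s / np.linalg.norm(S, axis=1, keepdims=True)
        for _ in range(n_iter):
            for i in range(4):
                rest = np.delete(S, i, axis=0).sum(axis=0)   # sum of the other spins
                field = -alpha * rest                        # transverse exchange
                field[2] += -(1 - alpha) * rest[2] + 2 * h   # Ising exchange + field
                S[i] = s * field / np.linalg.norm(field)
        E = sum(alpha * (S[i] @ S[j]) + (1 - alpha) * S[i, 2] * S[j, 2]
                for i in range(4) for j in range(i + 1, 4)) - 2 * h * S[:, 2].sum()
        if E < best[0]:
            best = (E, S[:, 2].sum())
    return best
```

At $\alpha = 0.5$, $h = s = 1$ this finds $E = -4$ and $\sum_j S_j^z = 2$: rewriting the exchange sums through $|{\bf M}|^2$ and $M_z^2$ gives the bound $E \ge (M_z^2 - 4s^2)/2 - 2 h M_z \ge -4$ at these parameters, saturated only by the collinear 3:1 state, so the relaxation indeed reaches the global minimum.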
In the following we will discuss only this half magnetization plateau. We then
assume the collinear 3:1 states,
which allows us to
describe the magnetic configuration in terms of
the same Ising variables $\sigma_j = \pm 1$ as in Section~\ref{easy_axis}.
As in Ref.~\onlinecite{Hizi:prb06} we use the unitary transformation~\eqref{unitary} so that we can define the
Holstein-Primakoff bosons, which amounts to replacing the rotated spin operators as follows
\begin{equation}
\begin{split}
S_j^z = & s - {\hat m}_j
\\
S_j^+ = & \sqrt{2 s - {\hat m}_j} \, {\hat b}_j \approx \sqrt{2 s} \, {\hat b}_j
\; ,
\end{split}
\end{equation}
where ${\hat b}_j$ are canonical bosonic operators, and
${\hat m}_j = {\hat b}_j^{\dagger} {\hat b}_j$ is the boson number operator.
We plug these into the Hamiltonian \eqref{XXZ}, and keep only the quadratic terms in the bosonic operators.
Since the spin configurations are now restricted to the 3:1 manifold,
the magnetic field term is the \emph{same} for every 3:1 configuration as the magnetization
is constant on the plateau.
In terms of the \emph{unrotated} spin variables $S_j^z$, this amounts to
$\sum_j S_j^z = \frac{s}{2} N$ where $N$ is the
number of sites in the pyrochlore lattice. Varying the magnetic field in the plateau region
causes an overall shift in the spin wave energies of all the 3:1 states,
and thus will not alter the energy differences between different 3:1 states.
Similarly the Ising variables have a sum of $\sum_j \sigma_j = \frac{1}{2} N$,
and we can use these two identities to derive $\sum_j \sigma_j {\hat m}_j = 0$, which is useful in simplifying other terms.
Therefore, we can ignore the magnetic field term, since we are searching for an effective Hamiltonian splitting the energies of
different 3:1 states. The effect of the magnetic field is to determine the energy gap for spin wave excitations.
The vanishing of the spin wave gap signifies an instability of the 3:1 manifold, corresponding to the
edges of the half polarization plateau.
From Eqs.~\eqref{virtualE} and \eqref{rotated_H_1}, the resulting
harmonic spin wave term reads
\begin{eqnarray}
\label{eq:harmxxx}
&& {\mathcal H}^{3:1}_{\textrm{harm}} =
J_z \frac{\alpha}{2} s \sum_{i,j}
\Gamma_{i j} \Big[
\left( \frac{1 + \sigma_i \sigma_j}{2} \right)
\left( {\hat b}_j^{\dagger} {\hat b}_i + h.c. \right)
\nonumber \\ && +
\left( \frac{1 - \sigma_i \sigma_j}{2} \right)
\left( {\hat b}_j {\hat b}_i + h.c. \right)
\Big]+ J_z 2 s \sum_j {\hat m}_j
\; .
\end{eqnarray}
Following the derivation of Ref.~\onlinecite{Hizi:prb06},
the zero point energy of this
harmonic term for a given 3:1 spin configuration (described by $\{ \sigma_{j}\}_{j=1}^N$) is
\begin{equation}
E_{\textrm{harm}} = J_z s \sum_{k=1}^{N} \frac{|\lambda_k|}{2},
\end{equation}
where $\lambda_k$ are the solutions of the eigenvalue equation
\begin{equation}
\left( \frac{\lambda}{2} \right)^2 {\bf v} =
\left[
\mathbf{1} +
\frac{\alpha}{2} \left( {\hat \sigma} {\hat \Gamma} {\hat \sigma} + {\hat \Gamma} \right)
+ \left( \frac{\alpha}{2} {\hat \sigma} {\hat \Gamma} \right)^2
\right] \cdot {\bf v}
\; .
\end{equation}
In the right hand side $\hat \Gamma$ denotes the same $N \times N$ connectivity matrix
introduced in Section~\ref{easy_axis}, and $\hat \sigma$ is
a diagonal $N \times N$ matrix with $\sigma_j$ as its diagonal elements.
Without specifying the 3:1 configuration, we can
write an expression for the harmonic energy in terms of $\sigma_j$
\begin{equation}\label{Eharm}
E_{\textrm{harm}} = J_z s {\textrm{Tr}}\left[
\sqrt{\mathbf{1} + \frac{\alpha}{2} \left({\hat \sigma}{\hat \Gamma}{\hat \sigma} + {\hat \Gamma}\right)
+ \frac{\alpha^2}{4} \left({\hat \sigma}{\hat \Gamma}\right)^2
}
\right]
\;.
\end{equation}
One can calculate the spin wave energies by assuming a particular spin
configuration and computing the trace exactly. However, as in
Ref.~\onlinecite{Hizi:prb06}, if one does not know which candidate spin
configurations to consider, one can instead derive an effective Hamiltonian
that determines which spin configuration yields the lowest harmonic energy.
The square root in \eqref{Eharm} can be expanded in powers of matrix
operators. We first observe that $\alpha$ only appears as a multiplier
of the matrix $\Gamma$. Therefore, an expansion in powers of matrix
operators is \emph{equivalent} to an expansion in the parameter $\alpha$.
In the present context, this expansion is justified due to the easy axis
anisotropy $\alpha <1 $.
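The mechanics of this expansion can be illustrated on a generic symmetric matrix, used here as a stand-in for the actual combination of ${\hat \sigma}$ and ${\hat \Gamma}$ (which we do not construct): the binomial series for $\sqrt{\mathbf{1}+X}$, traced term by term, converges to the exact sum of square-root eigenvalues when the spectrum of $X$ is small, just as the $\alpha$ expansion is controlled by the easy-axis anisotropy:

```python
import numpy as np
from fractions import Fraction

def sqrt_series_coeffs(order):
    """Exact binomial coefficients c_k of (1 + x)^{1/2} = sum_k c_k x^k."""
    c = [Fraction(1)]
    for k in range(1, order + 1):
        c.append(c[-1] * (Fraction(1, 2) - (k - 1)) / k)
    return c

rng = np.random.default_rng(1)
n = 8
X = rng.normal(size=(n, n))
X = (X + X.T) / 2                                  # symmetric stand-in matrix
X *= 0.25 / np.abs(np.linalg.eigvalsh(X)).max()    # spectrum inside [-1/4, 1/4]

# exact zero-point sum, as in E_harm: Tr sqrt(1 + X) from the eigenvalues
exact = np.sum(np.sqrt(1.0 + np.linalg.eigvalsh(X)))

# truncated matrix series: Tr sum_k c_k X^k
order = 16
coeffs = sqrt_series_coeffs(order)
P, series = np.eye(n), 0.0
for k in range(order + 1):
    series += float(coeffs[k]) * np.trace(P)
    P = P @ X

assert abs(series - exact) < 1e-8
```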
The terms in the expansion can be organized as a sum of traces over
products of $\Gamma$ matrices and Ising variables $\sigma_j$. The order
of $\alpha$ for each term also specifies the number of connectivity
matrices $\Gamma$ appearing in that term.
Due to the trace operation, the product of $\Gamma$ matrices represents
closed loops on the lattice. The Ising variables appearing in each such
term can only involve the sites on the loops defined by the product of
$\Gamma$ matrices. Using the results of
Section~\ref{sec:contraction-rules}, which discuss functions of Ising
variables and $\Gamma$ matrices precisely of the form appearing in this
expansion, it is evident that all terms involving less than six $\Gamma$
matrices will result in constants, which will not split the energies of the
3:1 states. As in Section~\ref{easy_axis}, the lowest order term in the
expansion in $\alpha$ causing energy splitting in the 3:1 manifold
involves loops around hexagonal plaquettes of the pyrochlore lattice.
For simplicity, we consider only these terms, and ignore any higher
order term in the expansion in $\alpha$. After extensive simplification,
the $6$-th order term reads \begin{equation}\label{eff_large_s}
\begin{split}
{\mathcal H}_{\textrm{harm}} = &
J_z s \left( \frac{\alpha}{2} \right)^6 \frac{1}{512} \Big[
14 \text{Tr}\left(\sigma .\Gamma.\sigma .\Gamma ^5\right)
+ 14 \text{Tr}\left(\sigma.\Gamma ^2.\sigma .\Gamma ^4\right)
\\ &
+ 7 \text{Tr}\left(\sigma .\Gamma ^3.\sigma .\Gamma^3\right)
-6 \text{Tr}\left(\sigma .\Gamma .\sigma.\Gamma .\sigma .\Gamma .\sigma .\Gamma ^3\right)
\\ &
- 3 \text{Tr}\left(\sigma .\Gamma ^2.\sigma .\Gamma .\sigma.\Gamma ^2.\sigma .\Gamma \right)
\\ &
- 6 \text{Tr}\left(\sigma .\Gamma ^2.\sigma .\Gamma^2.\sigma .\Gamma .\sigma .\Gamma \right)
\\ &
+ \text{Tr}(\sigma .\Gamma .\sigma .\Gamma .\sigma.\Gamma .\sigma .\Gamma .\sigma .\Gamma .\sigma .\Gamma)\Big]
+ O(\alpha^8)
\; .
\end{split}
\end{equation}
From this expression one extracts only those terms corresponding to loops around hexagonal plaquettes.
Eq.\eqref{eff_large_s} takes the form of the function in
Eq.\eqref{eq:1}, with $n = 6$ and the ``loop'' diagram
$G_n=\{(12),(23),(34),(45),(56),(61)\}$.
The corresponding function $f(\sigma_{a_1},\ldots,\sigma_{a_6})$ reads
\begin{eqnarray}
&& f(\sigma_{a_1},\ldots,\sigma_{a_6}) = 14\ \sigma_{a_1}\sigma_{a_2}+ 14\ \sigma_{a_1}\sigma_{a_3}
+ 7\ \sigma_{a_1}\sigma_{a_4} \nonumber \\
&&\ -\ 6\ \sigma_{a_1}\sigma_{a_2}\sigma_{a_3}\sigma_{a_4}
- 3\ \sigma_{a_1}\sigma_{a_3}\sigma_{a_4}\sigma_{a_6}
- 6\ \sigma_{a_1}\sigma_{a_3}\sigma_{a_5}\sigma_{a_6} \nonumber \\
&& \ +\ \sigma_{a_1}\sigma_{a_2}\sigma_{a_3}\sigma_{a_4}\sigma_{a_5}\sigma_{a_6}. \label{eq:f}
\end{eqnarray}
The effective Hamiltonian therefore describes all possible
spin interactions on the hexagonal plaquette of the pyrochlore lattice
-- 2,4 and 6 spin interactions. It is far more convenient to express this complicated Hamiltonian in terms
of energies of plaquette configurations, in the same way we formulated the results of the DPT
in Section~\ref{easy_axis} as ${\mathcal H}_{\textrm{harm}} = \sum_P
{\mathcal E}_P$ (using the same 5 plaquettes in Table~\ref{table1}).
As in Section~\ref{DPT_result}, there are only $3$ independent plaquette
configuration energies, $V_{1,2,4}$ which to $O(\alpha^6)$ are
\begin{equation}\label{large_S_ergs}
\begin{split}
V_1 = &0,
\\
V_2 = &
\frac{J_z \alpha^6 s}{512},
\\
V_4 = & 0
\; .
\end{split}
\end{equation}
Comparing \eqref{large_S_ergs} with \eqref{Infty_S_lim_DPT}, we find
complete agreement between the DPT of Section~\ref{easy_axis} and the
large $s$ expansion of this section, in the limit of both $\alpha
\rightarrow 0$ and $s \rightarrow \infty$, where both approaches are
justified (see Fig.~\ref{fig:overlap}). This serves as an excellent check
on the correctness as well as validity of our calculations, in the
parameter regime where the approximations overlap.
\begin{figure}
\centering
\includegraphics[width=2.5in]{fig12.eps}
\caption{This figure shows the regions of parameter space where the DPT
and large S expansions are justified, and their region of overlapping
validity.}
\label{fig:overlap}
\end{figure}
\section{Low energy states of the effective Hamiltonian}
\label{diagonal_gs}
\subsection{Strict easy axis limit for $s\geq 3/2$}
\label{sec:strict-easy-axis}
In this subsection, we consider the $\alpha=J_\perp/J_z \ll 1$ limit,
for which the lowest-order non-vanishing terms in the effective
Hamiltonian are dominant. For any $s\geq 3/2$, this is just the sixth
order diagonal contribution.
\subsubsection{large-$s$ case}
We first consider the large $s$
limit.
As is clear from Eq.~\eqref{large_S_ergs}, at order $s$ only the type 2
plaquette suffers from a \emph{positive} energy correction, while
$V_{1}$ and $V_4$ are only nonzero at order $s^{0}$ or lower. Hence
the type 2 plaquette is strongly disfavored for large $s$.
This in combination with the 3:1 constraint allows us to restrict ourselves
to the ``$0$-flux manifold'' in the large $s$ region (see Section~\ref{large_s} for
the definition of a $0$-flux manifold).
To see this, let us first introduce a ``cell'' composed of 4
link-sharing plaquettes. Choose 4 hexagonal plaquettes such that any
pair of plaquettes among these 4 always shares a link.
Then these four plaquettes form a single polyhedron (a truncated
tetrahedron), with 4 hexagonal faces, and 4 triangular faces (see
Fig.~\ref{fig:cell}). We will refer to this polyhedron as a cell. In
the pyrochlore lattice, one may distinguish two kinds of cells -- when
one completes the tetrahedra enclosing a cell, we can identify {\it
up-headed} and {\it down-headed} cells, according to the direction at
which the tetrahedra are pointing (see Fig.~\ref{fig:cell} for examples
of both kinds ). Each up/down-headed cell shares its faces (hexagonal
plaquettes) with 4 nearest neighboring down/up-headed cells. Thus,
centers of cells constitute a diamond lattice, where those of up-headed
cells take part of one FCC lattice and those of down-headed cells form
the other FCC lattice. It suffices to determine the spin configuration
on only the up-headed (down-headed) cells in order to specify the spin
configuration on all sites of the lattice.
Observing the local constraint, one can readily enumerate the various
minority spin configurations of a cell. In Table~\ref{cell_table}, all
possible cell configurations allowed in the 3:1 manifold are listed.
Each cell type is described by the configurations of its 4 hexagonal
plaquettes.
To see that the ground state manifold in the large-$s$ region is
composed only of type 0, 3 and 4 plaquettes (i.e. $0$-flux states),
notice from Table~\ref{cell_table} that any cell type which contains a
type 1 plaquette always contains at least one type 2 hexagonal
plaquette. This implies \begin{equation} 0 \le x_{1} \le x_2 \end{equation} where $x_a$ are the
plaquette type fractions, as introduced in Section~\ref{DPT_result}.
Disallowing the type 2 plaquette inevitably leads to excluding the type
1 plaquette, and therefore, the positive $V_2$ in leading order of
$s^{-1}$ expansion allows us to conclude that the classical ground state
spin configurations in the large $s$ limit consist of \emph{only} the
$0$-flux states.
This ground state, however, is massively degenerate. Higher order
quantum corrections in $s^{-1}$ can select a particular classical state
out of this $0$-flux manifold. To see this, let us expand the plaquette
energies in ${s^{-1}}$ \begin{equation}\label{growing_s}
\begin{split}
\frac{V_1}{J_z \alpha^6} = & -\frac{3}{512}
+ O \left( s^{-1} \right),
\\
\frac{V_2}{J_z \alpha^6} = & \frac{s}{512}+\frac{3}{1024}
+O \left( s^{-1} \right)
\,,
\\
\frac{V_4}{J_z \alpha^6} = & \frac{17}{65536 s}
+O \left( s^{-2} \right)
\; .
\end{split}
\end{equation} Notice first that the ${\cal O}(1)$ negative energy correction to
$V_1$ plays no role in lifting the degeneracy of the $0$-flux manifold,
since this manifold does not contain any type 1 hexagonal plaquettes.
Thus, provided that $V_2$ dominates the other two, the most relevant
correction in the large $s$ limit is $V_4$, which
always disfavors the type 4 hexagonal plaquette, since it is
positive.
\par
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|}
\hline
{\it cell} $\setminus$ {\sf plaquette} & {\sf type 1} & {\sf type 2}
& {\sf type 3} & {\sf type 4} & {\sf type 0} \\
\hline \hline
{\it type 1} & 1 & 1 & 1 & 1 & 0 \\
{\it type 2} & 1 & 3 & 0 & 0 & 0 \\
{\it type 3} & 0 & 4 & 0 & 0 & 0 \\
{\it type 4} & 0 & 2 & 2 & 0 & 0 \\
{\it type 5} & 0 & 2 & 1 & 1 & 0 \\
{\it type 6} & 0 & 2 & 1 & 0 & 1 \\
{\it type 7} & 0 & 2 & 0 & 1 & 1 \\
{\it type 8} & 0 & 2 & 0 & 0 & 2 \\ \hline
{\it type 9} & 0 & 0 & 4 & 0 & 0 \\
{\it type 10} & 0 & 0 & 2 & 1 & 1 \\
{\it type 11} & 0 & 0 & 0 & 3 & 1 \\
{\it type 12} & 0 & 0 & 0 & 0 & 4 \\
\hline
\end{tabular}
\caption{\label{cell_table} The various cell configurations
are described by the number of each plaquette type included
in the plaquettes comprising a cell.
Cell types are indicated by {\it italics} and plaquette type
by {\sf sans serif}. In the zero-flux manifold, only the type 9, 10, 11 and 12 cells are
allowed, since they do not contain type 1 and type 2 hexagonal plaquettes.
Furthermore, a cell must contain a type 2 plaquette
whenever it contains a type 1 plaquette. This is quantified by $0\le x_1 \le x_2 $.
}
\end{center}
\end{table}
\par
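The statements above can be checked directly against the table data; a stdlib-only sketch (the dictionary below transcribes the rows of Table~\ref{cell_table}):

```python
# (type 1, type 2, type 3, type 4, type 0) plaquette counts for cell types 1..12
cells = {
    1: (1, 1, 1, 1, 0),  2: (1, 3, 0, 0, 0),  3: (0, 4, 0, 0, 0),
    4: (0, 2, 2, 0, 0),  5: (0, 2, 1, 1, 0),  6: (0, 2, 1, 0, 1),
    7: (0, 2, 0, 1, 1),  8: (0, 2, 0, 0, 2),  9: (0, 0, 4, 0, 0),
    10: (0, 0, 2, 1, 1), 11: (0, 0, 0, 3, 1), 12: (0, 0, 0, 0, 4),
}
# each cell has 4 hexagonal faces
assert all(sum(row) == 4 for row in cells.values())
# every cell with a type 1 plaquette has at least as many type 2 plaquettes,
# which implies 0 <= x_1 <= x_2 for the lattice as a whole
assert all(n1 <= n2 for n1, n2, *rest in cells.values())
# the cells free of type 1 and type 2 plaquettes (the zero-flux cells)
# are exactly types 9, 10, 11 and 12
assert [t for t, (n1, n2, *rest) in cells.items() if n1 == n2 == 0] == [9, 10, 11, 12]
```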
\begin{figure}
\centering
\includegraphics[width=2.4in]{fig13.eps}
\caption{(Color online) A cell is composed of four link-sharing hexagonal plaquettes (The polyhedron bounded by thick (blue) lines). The cell on the left is up-headed, and the one on the right is a down-headed cell.}
\label{fig:cell}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=2.8in]{fig14.eps}
\caption{(Color online) The four cell types allowed in the zero-flux manifold. Minority sites are specified by
the (red) circles.}
\label{fig:celltype}
\end{figure}
Since the type 4 plaquette is disfavored, and the $0$-flux condition holds
on all plaquettes, we have only to minimize $x_4$
to obtain the ground state in the large-$s$ region.
However, in the 3:1 manifold, the $0$-flux condition becomes so strong
that $x_4$ is in fact bounded by $\frac{3}{28}$ from below ($x_4 \geq
\frac{3}{28} $). To see this, notice first that only the type 9, 10, 11
and type 12 cells drawn in Fig.\ref{fig:celltype} are allowed in the
$0$-flux manifold. Next, we denote by $y_{9,10,11,12}$ the fraction of
cell types $9 \ldots 12$ in the entire pyrochlore lattice (we use these
instead of plaquette type fractions $(x_3,x_4,x_0)$ for later
convenience). In the $0$-flux manifold, only these cell types may
occur, and therefore $\sum_{j=9}^{12}y_{j}=1$. Together with the
``global'' 3:1 constraint, i.e. Eq.~\eqref{magnetization} one finds
\begin{eqnarray}
3y_{12} = y_{9}, \label{globalcondition}
\end{eqnarray}
or alternatively,
\begin{eqnarray}
y_{12}=\frac{1}{4}(1-y_{10}-y_{11})
\; . \label{y-magnetization}
\end{eqnarray}
An important step to identify the lower bound on $x_4$ is to note that
packing these four cell types into a pyrochlore lattice is highly
constrained by the \emph{local} 3:1 rule imposed on each tetrahedron.
For example, a type 12 cell can only have cell types 10,11 and 12 as
neighboring cells. Each type 10 and 11 cell can neighbor at most one
type 12 cell, as they both have only one type 0 plaquette, and the type
12 cell consists only of type 0 plaquettes. One can also show that a
type 12 cell can have at most one neighboring type 12 cell. The
remaining neighboring cells must be of type 10 or 11. These observations
are already sufficient to conclude that
\begin{equation}
3y_{12} \leq y_{10}+y_{11}.
\end{equation}
Now using Eq.~\eqref{y-magnetization}, we obtain the lower bound on
the fraction of type 10 and type 11 cells: $y_{10} + y_{11}\ge
\frac{3}{7}$. Since
these two types of cell are the only cells allowed in the $0$-flux
manifold which have type 4 hexagonal plaquettes, this lower bound
immediately gives us that for the fraction of type 4 hexagonal
plaquette:
\begin{eqnarray}
x_{4}=\frac{1}{4}y_{10}+\frac{3}{4}y_{11}\geq \frac{1}{4}y_{10}+\frac{1}{4}y_{11} \geq \frac{3}{28}. \label{lowerbound}
\end{eqnarray}
From the derivation above, one can easily see that the equal sign is
realized {\it if and only if} $y_{11} = 0$. Excluding type 11 cell
configurations, one can show that a type 12 cell \emph{always} neighbors
three type 10 cells, and one type 12 cell. As a consequence, the
condition $3y_{12} = y_{10}$ is satisfied only when any type 10 cell has
a type 12 cell as its nearest neighboring cell, through its single type
0 plaquette. Since the fraction of type 9 cells is uniquely determined
by the fraction of type 12 cells via Eq.~\eqref{globalcondition}, and type 11
cells have already been excluded from any state saturating the bound on
$x_4$, such a state, if it exists, must be a configuration with a maximum
number of type 12 cells on the lattice. Without any type 12 cells, we
cannot have any type 9 cells either, and are limited to type 10 and 11
cells; a state composed only of type 10 and type 11 cells has a fraction
of type 4 plaquettes that always exceeds $\frac{1}{4}$.
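Because the constraints involved are linear, the bound can also be confirmed by exact rational minimization over a grid that contains the optimal vertex (a stdlib-only sketch; variable names ours):

```python
from fractions import Fraction

# minimize x4 = y10/4 + 3*y11/4 subject to
#   y12 = (1 - y10 - y11)/4 >= 0,  y9 = 3*y12  (global 3:1 constraint),
#   3*y12 <= y10 + y11             (packing constraint)
# The problem is linear, so the optimum sits at a vertex; a rational grid
# with denominator 28 contains that vertex.
D = 28
best = None
for a in range(D + 1):               # y10 = a/D
    for b in range(D + 1 - a):       # y11 = b/D, with y10 + y11 <= 1
        y10, y11 = Fraction(a, D), Fraction(b, D)
        y12 = (1 - y10 - y11) / 4
        if 3 * y12 > y10 + y11:      # violates the packing constraint
            continue
        x4 = y10 / 4 + 3 * y11 / 4
        best = x4 if best is None else min(best, x4)

assert best == Fraction(3, 28)       # the lower bound on x_4
```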
In what follows, we will show that this lower bound for $x_4$ is {\it uniquely}
(up to a finite degeneracy) realized by the {\it periodic} minority spin
configuration depicted in Fig.~\ref{fig:trig7}. This collinear magnetic ordered state,
which we shall refer to as the ``trigonal$_{7}$'' state,
contains $7$ pyrochlore unit cells. It has a magnetic unit cell with primitive vectors
${\bf E}_1 = 2{\bf a}_1 - {\bf a}_3,
{\bf E}_2 = 2{\bf a}_2 - {\bf a}_1$, and
${\bf E}_3 = 2{\bf a}_3 - {\bf a}_2$, where
${\bf a}_{1,2,3}$ are the primitive unit vectors of
the pyrochlore lattice (FCC lattice vectors ${\bf a}_1 = \frac{a}{2} (0,1,1)$ and cyclic permutations).
From the unit cell vectors, we can find the volume of the magnetic unit cell
\begin{equation}
\left( {\bf E}_1 \times {\bf E}_2 \right) \cdot {\bf E}_3 =
7 \left( {\bf a}_1 \times {\bf a}_2 \right) \cdot {\bf a}_3
\; .
\end{equation}
These 3 primitive vectors are of equal length, and are not mutually
perpendicular. Therefore, the magnetic Bravais lattice is in the {\sl
trigonal} crystal system -- whence the name trigonal$_7$ state.
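These statements reduce to elementary vector algebra; a short numerical check (components in units of $a/2$):

```python
import numpy as np

# FCC primitive vectors of the pyrochlore lattice, in units of a/2
a1, a2, a3 = np.array([0, 1, 1]), np.array([1, 0, 1]), np.array([1, 1, 0])
E1, E2, E3 = 2 * a1 - a3, 2 * a2 - a1, 2 * a3 - a2

vol_ratio = np.cross(E1, E2) @ E3 / (np.cross(a1, a2) @ a3)
assert vol_ratio == 7                         # 7 pyrochlore cells per magnetic cell
assert E1 @ E1 == E2 @ E2 == E3 @ E3          # equal lengths
assert (E1 @ E2) != 0 and (E2 @ E3) != 0      # not mutually perpendicular: trigonal
```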
\begin{figure}
\centering
\includegraphics[width=3.6in]{fig15.eps}
\caption{(Color online) The trigonal$_{7}$ state. The spin configuration of a planar layer
of tetrahedra is shown. Triangles with lines connected at their centers represent up pointing
tetrahedra, while the other triangles represent down pointing tetrahedra.
Minority sites are denoted by (red) circles.
Two dashed triangles denote up pointing tetrahedra in the planar layer of tetrahedra immediately above the
one depicted in this figure. These two tetrahedra are used to show the primitive vectors
for the pyrochlore lattice ${\bf a}_{1,2,3}$ and for the magnetic unit cell of the trigonal$_7$ state
${\bf E}_{1,2,3}$. The seven up pointing tetrahedra included in one valid choice of a magnetic unit cell for the
trigonal$_7$ state are marked by (blue) letters indicating one of 4 3:1 configurations for a single tetrahedron.
The type 0 plaquettes residing between pairs of adjacent type 12 cells are marked by large (blue) circles. }
\label{fig:trig7}
\end{figure}
From the planar view in Fig.~\ref{fig:trig7} it is clear that this magnetic state has a three-fold
rotation symmetry about the ${\bf a}_1 + {\bf a}_2 + {\bf a}_3 = a (1,1,1)$ axis perpendicular to the page.
Along the pyrochlore lattice directions there is a periodicity of 7, giving rise to a seven-fold
degeneracy due to FCC lattice translations alone.
The trigonal$_7$ state breaks a reflection symmetry about a plane perpendicular to the Kagome plane, parallel to ${\bf a}_2$,
and passing through the point where the three vectors ${\bf E}_{1,2,3}$ originate in the figure
(see Fig.~\ref{mirror} for another view of this symmetry operation).
Together with the 4-fold choice of the set of Kagome planes,
it is evident that the degeneracy of this magnetic state
is $4 \times 7 \times 2 = 56$.
As is clear from Fig.~\ref{fig:trig7}, the spin configuration satisfies both
the local zero flux condition and the local 3:1 constraint.
To see that this trigonal state saturates the lower bound for $x_4$,
one has only to count the fraction of type 9, 10, 11 and 12 cells:
$y_{9}:y_{10}:y_{11}:y_{12}=3:3:0:1$. From Table~\ref{cell_table}, we find the trigonal$_7$ state
realizes the lower bound $x_{4}=\frac{3}{28}$. We conclude that the trigonal$_{7}$ state is {\it at least}
one of the ground states in the large-$s$ region.
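The counting behind the saturation claim is simple rational arithmetic; using the type 4 plaquette counts $(0, 1, 3, 0)$ of cell types $9$--$12$ from Table~\ref{cell_table}:

```python
from fractions import Fraction

# cell fractions of the trigonal_7 state: y9 : y10 : y11 : y12 = 3 : 3 : 0 : 1
y9, y10, y11, y12 = (Fraction(k, 7) for k in (3, 3, 0, 1))
assert y9 + y10 + y11 + y12 == 1
assert 3 * y12 == y9                 # the global 3:1 constraint
assert 3 * y12 == y10 + y11          # the packing inequality is saturated

# type 4 plaquette counts of cell types 9, 10, 11, 12 are 0, 1, 3, 0
# (each cell has 4 plaquettes)
x4 = (0 * y9 + 1 * y10 + 3 * y11 + 0 * y12) / 4
assert x4 == Fraction(3, 28)         # the lower bound on x_4 is saturated
```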
As argued above, any state saturating the bound must have every type 10 cell connected to a type 12 cell
through its single type 0 plaquette. Starting with a type 12 cell, this rule together with the 3:1 constraint
and the zero flux condition suffices to uniquely construct the trigonal$_7$ state, up to the finite degeneracy
described above. Starting from the initial type 12 cell, the plaquette connecting this cell to another type 12
cell defines the Kagome plane in Fig.~\ref{fig:trig7}. Next, pick one of the 2 mirror image choices in Fig.~\ref{mirror}
of the type 10 cell configurations neighboring the first type 12 cell. From this point on, the three rules mentioned
above uniquely determine the rest of the magnetic configuration in the entire lattice.
\begin{figure}
\centering
\includegraphics[width=3.0in]{fig16.eps}
\caption{(Color online) The broken reflection symmetry.
Starting from a type 12 (up-headed) cell, and drawing the 10 tetrahedra surrounding it,
we first choose the place of the nearest neighboring type 12 cell (down-headed) -- the
two type 12 cells share the hexagonal plaquette marked by thick dashed (blue) lines. With this choice,
the minority sites on 7 tetrahedra are automatically determined (marked by open
(red) circles). However, minority sites for the other 3 tetrahedra (solid (green) circles) are not fully determined
and we still have a ``mirror'' degree of freedom. For convenience, we also draw the
primitive vectors of pyrochlore lattice, in accordance with those defined in Fig.~\ref{fig:trig7}.}
\label{mirror}
\end{figure}
Finally, the energy per plaquette of the trigonal$_7$ state is
\begin{equation}
\frac{1}{N} E_{\textrm{trigonal}_{7}} = \frac{3}{28} V_4
\; .
\end{equation}
\subsubsection{Spin $s \geq 2$}
We expect that the trigonal$_7$ state described above is the
ground state for sufficiently large $s$. In the following, we shall
argue that this is indeed the case for $s \geq 2$. For $s=5/2,2,3/2,1$, the
energy parameters in the effective Hamiltonian are given in Table~\ref{vals}.
For all the cases in Table~\ref{vals}, $V_1$ is the largest and most negative energy.
This would suggest that the lowest energy 3:1 state is one with a maximum number of
plaquettes of type 1. However, the geometry of the lattice as well
as the 3:1 constraint pose stringent restrictions.
By enumerating all possible types of cells in Table~\ref{cell_table},
one finds that every type 1 plaquette must be accompanied by at least one type 2 plaquette
on the same cell. The configuration of the entire lattice can be determined by considering only up-headed cells,
and therefore, the existence of $M$ type 1 plaquettes demands that at
least $M$ type 2 plaquettes are present as well.
We deduce the inequality $x_1 \leq x_2$, valid in any 3:1 configuration.
From this we see that the energy of a type 1 plaquette is offset by the energy cost of a type 2
plaquette which is the \emph{highest} energy cost for all the $s$ values in Table~\ref{vals}.
Therefore, the number of type 1 plaquettes is not necessarily maximized
in the ground state even with small $s$.
One observes that the magnitude of the energy $V_1$ is already comparable
to $V_2$ at $s=5/2$, and this trend continues at higher $s$, where $V_2$ becomes increasingly dominant.
Given the restriction $x_1 \leq x_2$, and the large energy cost of type 2 plaquettes,
the analytic arguments in the above subsection suggest the trigonal$_7$ state may be
the lowest energy state for $s\geq 5/2$. The case $s=2$ is close to
the boundary for a change in behavior.
In order to search for other candidate ground states, we have enumerated
all 3:1 states on a variety of periodic finite clusters, and determined
the exact lowest energy state for each one, for
$s=1,3/2,2,5/2,\cdots,6$. For $s \geq 2$ we find no states with lower
energy than that of the trigonal$_7$ state. This strongly suggests that
the trigonal$_7$ state is the ground state for all $s \geq 2$, though of
course this limited numerical investigation does not constitute a proof
that this is the case. Moreover, states with large numbers of type
1 plaquettes are among the \emph{highest} energy states we have found,
which lends credence to our assessment that when $V_2$ and $V_1$
are comparable energy scales (with opposite signs), the
condition $x_1 \leq x_2$ ensures that $V_2$ still dominates. One can
conclude that {\sl if} there is a state with lower energy for $s \geq
2$, it must have a large unit cell which is incompatible with all the
clusters considered in Table~\ref{table2}.
\begin{table}
\begin{tabular}{|c|r|c|c|c|}
\hline \hline
energy& $s=\frac{5}{2}$& $s=2$ & $s = \frac{3}{2}$ & $s = 1$ \\
\hline
$\frac{V_1}{J_z \alpha^6}$ & $-0.0113$ & $-0.0135$ &$-0.0188$ & $-0.0410$ \\
$\frac{V_2}{J_z \alpha^6}$ & $ 0.0090$ & $ 0.0084$ &$ 0.0083$ & $ 0.0099$ \\
$\frac{V_4}{J_z \alpha^6}$ & $ 0.0002$ & $ 0.0003$ &$ 0.0005$ & $ 0.0015$ \\
\hline
\end{tabular}
\caption{Energies $V_{1,2,4}$ of the plaquette configurations of type $1,2,4$, for $s=\frac{5}{2},2,\frac{3}{2},1$}
\label{vals}
\end{table}
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline \hline
Number of & Number of & \multicolumn{2}{c|}{$s=\frac{3}{2}$}\\
\cline{3-4} unit cells & 3:1 states & E & gs\\
\hline
$2 \times 2 \times 1 = 4$ & $36$ & $1.3\cdot 10^{-4}$ & $4$ \\
$2 \times 2 \times 2 = 8$ & $272$ & $1.3\cdot 10^{-4}$ & $12$ \\
$4 \times 2 \times 1 = 8$ & $708$ & $1.3\cdot 10^{-4}$ & $4$ \\
$3 \times 3 \times 1 = 9$ & $1,120$ & $-2.9\cdot 10^{-4}$ & $24$ \\
$5 \times 2 \times 1 = 10$ & $3,370$ & $4.0\cdot 10^{-4}$ & $4$ \\
$3 \times 2 \times 2 = 12$ & $2,436$ & $1.3\cdot 10^{-4}$ & $4$ \\
$4 \times 2 \times 2 = 16$ & $23,696$ & $1.3\cdot 10^{-4}$ & $12$ \\
$6 \times 3 \times 1 = 18$ & $649,480$ & $-2.9\cdot 10^{-4}$ & $192$ \\
$3 \times 3 \times 2 = 18$ & $61,192$ & $-2.9\cdot 10^{-4}$ & $30$ \\
$5 \times 2 \times 2 = 20$ & $237,156$ & $1.3\cdot 10^{-4}$ & $4$ \\
$4 \times 3 \times 2 = 24$ & $1,685,508$ & $1.3\cdot 10^{-4}$ & $4$ \\
$3 \times 3 \times 3 = 27$ & $7,515,136$ & $-2.9\cdot 10^{-4}$ & $216$ \\
\hline
\end{tabular}
\caption{Exact lowest-energy 3:1 configurations on periodic clusters at $s=3/2$. E is the energy per plaquette in units of $J_z \alpha^6$, and gs is the degeneracy of the lowest-energy state on the cluster.}
\label{table2}
\end{table}
\subsubsection{Spin $s=3/2$}
\label{sec:spin-s=32}
Spin $s=3/2$ is the smallest spin value for which, in the extreme easy-axis limit $\alpha \ll 1$,
the off-diagonal term in the effective Hamiltonian may be ignored. The
corresponding plaquette energies are given in column 4 of
Table~\ref{vals}. The energy for type $1$ plaquettes is approximately
50\% larger (more negative) than for $s=2$. In the extreme limit of
very large and negative $V_1$, the ground state has been determined
previously in Ref.\onlinecite{Bergman:prl05}.
The state, referred to as the {\bf R} state in Ref.\onlinecite{Bergman:prl05}
as well as in the remainder of this manuscript,
maximizes the fraction of type 1 plaquettes, and is unique (up to lattice symmetries).
The numerical investigation mentioned in the previous subsection shows that the
{\bf R} state is {\sl not} the lowest energy state for the diagonal effective Hamiltonian at
$s=3/2$. Instead, we find a {\sl massively degenerate set of classical
ground states}. One example of these states has all the minority sites contained in a set of
parallel Kagome layers of the pyrochlore lattice. Every Kagome plane has the same spin configuration
shown in Fig.~\ref{fig:root3_state3}. This example, and many other states in this degenerate manifold
all have a $\sqrt{3} \times \sqrt{3}$ structure in the Kagome planes, and therefore we shall refer to a large subset
of this manifold of states as the $\sqrt{3} \times \sqrt{3}$ states.
\begin{figure}
\centering
\includegraphics[width=3.0in]{fig17.eps}
\caption{(Color online) 3:1 spin configuration of a single layer of tetrahedra in the ${\sqrt 3} \times {\sqrt 3}$
state. Only minority spin sites are
marked by (red) solid circles.
Flippable plaquettes (type 1) are denoted by a (blue) circle drawn at their center. The same
conventions as in Fig.~\ref{fig:trig7} are used here.}
\label{fig:root3_state3}
\end{figure}
The analysis of this degenerate manifold of states is somewhat involved. We therefore leave the
details to Appendix~\ref{app:root3_degeneracy}, and only mention a number of facts here.
All the states we have found numerically have
plaquette type fractions of
$x_0 = \frac{1}{6}, x_1 = \frac{1}{6}, x_2 = \frac{1}{3}, x_3 = \frac{1}{6}$
and $x_4 = \frac{1}{6}$. As a consequence, the energy
per plaquette of these states is
\begin{equation}
\frac{1}{N} E_{\sqrt{3} \times \sqrt{3}} =
\frac{1}{6} \left( V_1 + 2 V_2 + V_4 \right).
\label{energy33}
\end{equation}
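As a numerical check, Eq.~\eqref{energy33} can be evaluated with the tabulated plaquette energies. The short sketch below (Python, with the $V_i$ copied from Table~\ref{vals}) reproduces the $\sqrt{3}\times\sqrt{3}$ entries of Table~\ref{tab:gse} to within the rounding of the tabulated $V_i$:

```python
# Plaquette energies (V1, V2, V4) in units of J_z * alpha^6, from Table "vals".
V = {
    "5/2": (-0.0113, 0.0090, 0.0002),
    "2":   (-0.0135, 0.0084, 0.0003),
    "3/2": (-0.0188, 0.0083, 0.0005),
    "1":   (-0.0410, 0.0099, 0.0015),
}

def e_root3(V1, V2, V4):
    # Energy per plaquette of the sqrt(3) x sqrt(3) states:
    # (1/6) * (V1 + 2*V2 + V4), i.e. fractions x1 = 1/6, x2 = 1/3, x4 = 1/6.
    return (V1 + 2.0 * V2 + V4) / 6.0

for s, (V1, V2, V4) in V.items():
    print(s, e_root3(V1, V2, V4))
```

At $s=3/2$ this gives about $-2.8\times 10^{-4}$ (quoted as $-2.9\times 10^{-4}$ in Tables~\ref{table2} and \ref{tab:gse}), while $s=2$ gives $6.0\times 10^{-4}$ and $s=1$ gives $-3.3\times 10^{-3}$, as in Table~\ref{tab:gse}.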
In Appendix~\ref{app:root3_degeneracy}, we show by explicit construction
that the degeneracy is at least
\begin{equation}
18 \times 2^{\frac{N}{12L}} + 4 \times 3^L - 36 \; ,
\end{equation}
which grows exponentially with system size. We have not shown that the
above states exhaust the possibilities with energy given by
Eq.~\eqref{energy33}, so the above formula is only a lower bound for the
degeneracy.
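The exponential growth of this bound is easy to see numerically. The sketch below evaluates it for a family of cluster sizes; note that the specific parametrization of the size variables ($L$ Kagome layers, $N$ chosen so that the exponent is an integer) is our reading of the appendix and should be treated as an assumption:

```python
# Lower bound on the sqrt(3) x sqrt(3) degeneracy:
#   18 * 2**(N/(12 L)) + 4 * 3**L - 36,
# with L the number of Kagome layers; N/(12 L) is taken to be an integer.
def degeneracy_bound(N, L):
    assert N % (12 * L) == 0
    return 18 * 2 ** (N // (12 * L)) + 4 * 3 ** L - 36

for L in (2, 3, 4, 5):
    N = 12 * L * L                     # clusters whose linear size grows with L
    print(L, degeneracy_bound(N, L))   # grows roughly like 3**L
```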
\subsection{Effect of off-diagonal term}
\label{Effective_QDM}
In this subsection we add to the effective Hamiltonian the off-diagonal
term where it is likely to be important (low values of $s$). For
$s=1/2$, the off-diagonal term is parametrically larger than the
diagonal terms in the $\alpha \ll 1$ limit.
For $s=1$, it is of the same order
as the diagonal terms. However, our
explicit calculations demonstrate
that even in this case, the off-diagonal term
is numerically more than four times larger than the largest
diagonal plaquette energy.
For $s=3/2$, the off-diagonal term
is negligible in the $\alpha \ll 1$ limit, but extrapolating the DPT results to
the isotropic case $\alpha=1$ indicates that while it is not larger than the diagonal terms, it is likely
not negligible either.
To gauge the importance of the off-diagonal plaquette term,
it is instructive to compare the diagonal energy of the various
candidate ground states studied above.
Their energies per plaquette are shown for small values of $s$
in Table~\ref{tab:gse}. We see that the energy {\sl differences} amongst these
competing states are rather small on the scale of the
off-diagonal amplitude $K$. For instance, for $s=3/2$, the energy
difference between the ``worst'' of these three states (the {\bf R}
state) and the ``best'' (the $\sqrt{3}\times\sqrt{3}$ state) is only
$0.0018$ per plaquette, approximately {\sl four times smaller} than the
off-diagonal coupling $K=0.008$. Thus we can expect that adding the $K$
term can introduce sufficient quantum fluctuations to alter the balance
between these states, either destabilizing one in favor of the other, or
perhaps stabilizing a superposition of these orders in some form.
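The scale comparison for $s=3/2$ is simple arithmetic, sketched below with the numbers from Table~\ref{tab:gse} (energies in units of $J_z\alpha^6$):

```python
# Diagonal energies per plaquette at s = 3/2 (Table "tab:gse") versus the
# off-diagonal amplitude K.
E_R = 1.5e-3        # R state, the "worst" of the three candidates
E_root3 = -2.9e-4   # sqrt(3) x sqrt(3) state, the "best"
K = 0.008

delta_E = E_R - E_root3
print(delta_E)      # about 1.8e-3 per plaquette
print(K / delta_E)  # K is roughly 4 times the diagonal splitting
```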
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Spin & R state & Trigonal$_7$ state & $\sqrt{3} \times \sqrt{3}$ state \\
\hline
$s=1$ & $-2.8 \times 10^{-3}$ & $1.7 \times 10^{-4}$ & $-3.3 \times 10^{-3}$ \\
$s=\frac{3}{2}$ & $ 1.5 \times 10^{-3}$ & $5.7 \times 10^{-5}$ & $-2.9 \times 10^{-4}$ \\
$s=2$ & $ 2.9 \times 10^{-3}$ & $3.1 \times 10^{-5}$ & $6.0 \times 10^{-4}$ \\
$s = \infty$ & $ 1.5 \times 10^{-3} s$ & $0.0$ & $6.5 \times 10^{-4} s$ \\
\hline
\end{tabular}
\caption{Diagonal energy per plaquette in various classical ground states. Energies are given in units of $J_z \alpha^6$}
\label{tab:gse}
\end{table}
We cannot hope to establish the result of such subtle energetics here,
particularly given the non-trivial nature of the effective QDM
Hamiltonian including the off-diagonal term. However, we will discuss
several {\it natural candidate ground states} from the perspective of
order-by-disorder and the general theoretical framework of QDM-type
models.
\subsubsection{Purely off-diagonal QDM {\bf --- $s=1/2$ case ---}}
\label{sec:purely-diagonal-qdm}
Let us consider first the simple case of $s=1/2$, for which the
Hamiltonian is well-approximated by including the off-diagonal {\sl
only}. Clearly low-energy ground states of this Hamiltonian must
have significant amplitude for type 1 plaquettes, as other plaquettes
are annihilated by the off-diagonal term. We note that the trigonal$_7$
state has {\sl no} type 1 plaquettes.
This implies that it is an exact zero energy eigenstate of the
purely kinetic Hamiltonian. Since
it is straightforward to construct states with significantly negative
energy per plaquette, the classical trigonal$_7$ state is clearly an
excited state in this case. It seems difficult to imagine any
way that the ground state could be adiabatically connected to the
trigonal$_7$ state (or any other zero energy state with no type 1
plaquettes).
Let us instead consider what sorts of states might naturally minimize
the energy of the kinetic term. This sort of pure QDM problem has
been considered in numerous places in the literature. Specifically
for the QDM on the diamond lattice, the question has been discussed in
Ref.~\onlinecite{Bergman:prb05} (see references therein for a guide to QDMs).
Roughly speaking, the energy is minimized by delocalizing the
wavefunction as much as possible amongst different dimer
configurations. However, the non-trivial connectivity in the
constrained space of dimer coverings makes the nature of this
delocalization subtle.
One possibility in such a 3d QDM is that
the ground state is a $U(1)$ {\sl spin liquid}, in which the
delocalization is sufficiently complete as to prevent any symmetry
breaking (the meaning of the $U(1)$ is discussed in-depth in e.g.
Ref.~\onlinecite{Hermele:prb04}). Roughly
speaking, the wavefunction has support for all possible dimer
coverings, with
equal amplitude for all topologically equivalent configurations. The
existence and stability of such a state can be established in a QDM
with a particular form of diagonal interaction, in the neighborhood of
the so-called ``Rokhsar-Kivelson'' (RK) point. While this point
(corresponding to $V_1=K>0$, $V_2=V_4=0$) is not physically relevant
to the pyrochlore antiferromagnets, it is possible that such a $U(1)$
spin liquid state remains the ground state for the purely off-diagonal
QDM.
A second possibility is that the delocalization is incomplete, due to
``order-by-disorder'' physics. In particular, it may be favorable to
delocalize only over a limited set of classical states, amongst which
the connections are greater than those amongst generic classical
configurations. In this case there is generally some
symmetry-breaking induced by the selection of the states involved.
Two sorts of such ordering have been proposed and observed in other
similar QDM models. The first type of order-by-disorder state is
one in which the set of classical states for which the ground state
wavefunction has the largest amplitude are
``centered'' about a single
classical state having
the maximal number of type 1 plaquettes.
Such a wavefunction may be ``selected'' by the kinetic energy,
since under the action of the kinetic term of the QDM, this is the
classical state connected to the \emph{largest number} of other classical
states. In our problem, this classical state is just
the {\bf R} state mentioned above and discussed at length in
Refs.\onlinecite{Bergman:prl05,Bergman:prb06}.
A simple form for such a wavefunction is
\begin{eqnarray}
\label{eq:varwfR}
|{\bf R},\{\gamma_P\}\rangle &=& \exp\left[ \sum_{P}
\gamma_P \left({\centering \includegraphics[width=0.4in]{fig34.eps}} + {\rm h.c.} \right) \right] |{\bf R}\rangle,
\end{eqnarray}
where $|{\bf R}\rangle$ is the classical {\bf R} state (with definite
$S_i^z=S\sigma_i$), and $\gamma_P$ are variational parameters which can
be used to optimize the quantum state $ |{\bf R},\{\gamma_P\}\rangle$.
The second type of order-by-disorder state is one in which there are a
maximal number of {\sl independently resonating plaquettes}.
This is based on the observation that the exact ground state for the kinetic
term on a single plaquette is simply an equal amplitude superposition
of the two type 1 states.
However, neighboring plaquettes share sites, and therefore it is not possible to
form a direct product of such resonances on {\it all} plaquettes.
Instead, the best one can na\"ively do along these lines is
to find the classical state with the largest number
of type 1 plaquettes which can be {\it independently flipped}, and
on these type 1 plaquettes form an equal amplitude superposition of these
two states.
A state with the maximal number of independently flippable plaquettes can be
described, starting from the $\sqrt{3} \times \sqrt{3}$ states introduced in
Sec.\ref{sec:spin-s=32}. The largest set of independently flippable plaquettes is
a subset of all the type 1 plaquettes. An
appropriate choice in a single plane is demonstrated in
Fig.~\ref{fig:RPS}, which includes half of the flippable plaquettes in
the plane. It is interesting to point out that in each plane there are
2 possible choices of the plaquettes to be resonated (one half or the
other), so out of each $\sqrt{3} \times \sqrt{3}$ state we can
construct $2^L$ different choices of the plaquettes that will be
resonating. The degeneracy of these states is therefore $2^L \times
3^L \times 4 = 6^L \times 4$. Other states realizing this maximum
number of independently resonating plaquettes may be possible, but
we have not pursued this further. We refer to these states as
``Resonating Plaquette States'' (RPS). A precise wavefunction
describing the RPSs we have derived from the
$\sqrt{3} \times \sqrt{3}$ states is
\begin{equation}
\label{RPS} \ket{RPS} = \prod_{P \in
G} \frac{1}{\sqrt 2} \left( 1 + \left( \ket{\hexagon_A}
\bra{\hexagon_B} + {\rm h.c.} \right) \right) \ket{\Psi} \; ,
\end{equation}
where $G$ denotes the set of non-overlapping resonating plaquettes,
and $\ket{\Psi}$ denotes one of the $\sqrt{3} \times \sqrt{3}$ states.
There are $4 \times 3^L$ choices for $\ket{\Psi}$, and $2^L$ choices
for $G$ given $\ket{\Psi}$. We note that the {\sl symmetry} of the
RPS is distinct and lower than that of the
$\sqrt{3}\times\sqrt{3}$ state -- even
in a single layer.
Thus there is a precise distinction between these
two states independent of the detailed form of their wavefunctions,
for which the above explicit forms are of course only crude
approximations.
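The RPS degeneracy quoted above is a direct product of independent choices; as a trivial sanity check (not tied to any particular wavefunction):

```python
# 4 * 3**L parent sqrt(3) x sqrt(3) states, times 2**L choices of the
# resonating set G per parent state, giving 4 * 6**L RPS wavefunctions.
def rps_degeneracy(L):
    n_parents = 4 * 3 ** L
    n_sets_per_parent = 2 ** L
    return n_parents * n_sets_per_parent

assert all(rps_degeneracy(L) == 4 * 6 ** L for L in range(1, 12))
print(rps_degeneracy(3))   # 864 for L = 3 layers
```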
While there might be some other state we have not
anticipated, we think it most likely that one of the three states above
obtains in the purely kinetic QDM valid for $s=1/2$. We will, however,
refrain from making any definite statement as to which of these is the
true ground state. One may imagine comparing the energies of the
wavefunctions in Eqs.(\ref{eq:varwfR},\ref{RPS}) to gauge the relative
favorability of the ${\bf R}$ and RPS states. Unfortunately, even
evaluating the variational energy of the $|{\bf R}\rangle$ state in
Eq.\eqref{eq:varwfR} is rather challenging. Another difficulty is the
considerable freedom in choosing the RPS wavefunctions. Furthermore, a
good variational wavefunction for the spin liquid is also needed for a
more complete comparison. As always, there is much arbitrariness in
defining each variational wavefunction, making the predictive power of
such an approach unclear. We believe this issue is more likely to be
reliably resolved in the future through numerically exact methods such
as quantum Monte Carlo or exact diagonalization.
\subsubsection{$s>1/2$ QDMs}
\label{sec:s12-qdm}
For $s=1$ and $s=3/2$, significant diagonal terms enter the QDM
Hamiltonian. These act to alter the balance between the three
candidate states discussed above, and also potentially to introduce
the possibility of other states disfavored in the purely kinetic
Hamiltonian. For both $s=1$ and $s=3/2$, the ground state of the
classical diagonal term alone is actually a massively degenerate set of states
discussed briefly in Section~\ref{sec:spin-s=32},
and in more detail in Appendix~\ref{app:root3_degeneracy}.
From all the $\sqrt{3}\times\sqrt{3}$ states,
we can construct RPSs. For $s=3/2$, as we have
shown, however, the ${\bf R}$ state is also quite low in diagonal
energy and indeed only slightly worse
than the $\sqrt{3}\times\sqrt{3}$ states, as far as the diagonal term is concerned.
Thus we expect that introducing the
diagonal terms tends to favor both the RPS and the ``renormalized'' {\bf R} state
over the $U(1)$ spin liquid.
If their effects are strong enough, they could
also stabilize the ``non-resonating'' $\sqrt{3}\times\sqrt{3}$ states (or any one
of the other states with the same energy).
We speculate that a spin liquid is unlikely to be realized in these
cases, but that the RPS, ${\bf R}$, and $\sqrt{3}\times\sqrt{3}$
states (or more precisely all the states degenerate with the $\sqrt{3}\times\sqrt{3}$
states) remain very reasonable candidate ground states for these values
of $s$ in the isotropic limit.
\begin{figure}
\centering
\includegraphics[width=3.0in]{fig18.eps}
\caption{(Color online) Choice of non-overlapping flippable plaquettes to resonate in a plane of the
$\sqrt{3} \times \sqrt{3}$ state. The chosen plaquettes are marked with (red) crosses in the middle.}
\label{fig:RPS}
\end{figure}
\section{Discussion}
\label{sec:discussion}
Since the development of the DPT and its analysis in this paper is
rather involved, we begin in the first subsection by recapitulating the
central points. In the second subsection we will then turn to a brief
discussion of the implications on experiments and future directions of
this work.
\subsection{Summary}
\label{sec:summary}
As a prototypical model of a magnetization plateau in a strongly
frustrated quantum antiferromagnet, we considered in this paper a
nearest-neighbor spin-$s$ model on the pyrochlore lattice at half the
saturation magnetization. Such plateaus have been observed in the
spinel materials HgCr$_2$O$_4$ and CdCr$_2$O$_4$. We argued that a useful starting
model is the easy-axis XXZ Heisenberg model in an external field,
Eq.~\eqref{XXZ}. This model possesses all the same symmetries as the
isotropic Heisenberg model in an external field, and indeed we were able
to extrapolate our results to this limit. This model
has the advantage that the transverse spin fluctuations can be
treated systematically as a perturbation
to the underlying Ising model. The resulting Ising model can be
rewritten as a sum over the elementary tetrahedra of the pyrochlore
lattice. In this Ising limit on the plateau, the spins on each
tetrahedron satisfy a 3:1 constraint, comprising a set of 3 majority
spins fully polarized parallel to the field, and 1 minority spin
antiparallel to the field. The half-polarized state has a macroscopic
degeneracy corresponding to the number of possible positions for all the
down pointing spins in the lattice. It is expected that the transverse
spin fluctuations will play
a role in selecting a ground state or set
of ground states from the massively degenerate 3:1 manifold.
In this way, we are led to a theoretical model involving a ``constrained''
degenerate perturbation theory in the 3:1 manifold. Our paper is
devoted to a detailed analysis of such a theory and many parts are
couched in sufficiently general
language to be applicable to a broad class of
systems.
We began our discussion of the constrained easy-axis degenerate
perturbation theory by deriving the general structure of the effective
Hamiltonian that occurs at each order of dimensionless coupling
$\alpha=J_\perp/J_z$, Eq.~\eqref{eq:7}. We found that the effective
Hamiltonian could be cast into a convenient form by performing a unitary
transformation that rotates all down pointing (minority) spins to up
pointing spins and also by introducing a connectivity matrix (whose elements are one
for nearest neighbor spins and zero otherwise).
The latter makes it possible to convert the sums over
nearest-neighbor lattice sites to sums over the entire lattice
\eqref{rotated_H_1}. These transformations
cast the terms of the
effective Hamiltonian coming from each order of perturbation theory into
a form rather convenient for analysis. The
resulting terms are expressed
explicitly in terms of the Ising variables on the lattice sites, the
spin $s$, and the connectivity matrix. These terms were studied
order-by-order in perturbation theory. We found that diagrams
representing these terms naturally fell into two categories:
contractible and non-contractible. Contractible diagrams are those
whose dependence on some of the Ising variables
is eliminated by summing with respect to their site index
over all lattice points.
Thus, a function of $N$ Ising variables can be reduced to a
function of fewer than $N$
Ising variables after this
``contraction'' process. The allowed contractions depend on
the lattice geometry, the 3:1 constraint, and the Ising nature of the spin variables.
Diagrams for which it is not possible to perform a contraction (equivalently, a reduction in the
number of relevant Ising variables) we termed non-contractible.
The central result of the analysis of contractible and non-contractible
diagrams is that all terms in the constrained degenerate perturbation
theory up to and including 5th order are constant {\it within the 3:1 manifold}.
Individual terms are shown to be constant by first contracting the
diagrams as much as possible and then noting that the value of the
diagram is unchanged under permutation among site indices associated with
the Ising variables. The latter statement implies that the value of the diagram
is independent of spin configurations allowed in the 3:1 manifold
and hence a constant.
In a similar manner, most terms at 6th order are also shown to be
constant. However, we also observe that, at 6th order, there appears a ``single large loop'' diagram
which cannot be contracted, and also defies the permutation arguments mentioned above.
In fact, this loop diagram brings about
non-constant contributions to the effective Hamiltonian in the 3:1 manifold.
Therefore, this is the lowest order term which
lifts the degeneracy of the 3:1 manifold (at least for $s > 1$). The
non-constant 6th order term includes effective interactions among spins on each
hexagonal plaquette of the pyrochlore lattice.
Depending on the arrangement of minority sites, there
are five distinct kinds of plaquettes that may appear
and we label them $0,1,\ldots,4$ (see Table~\ref{table1}). Using the results of our degenerate
perturbation theory, we evaluate the energy of each of these plaquettes
as a function of $\alpha$ and $s$, and correct a mistake in
Ref.~\onlinecite{Bergman:prl05}. The 3:1 condition constrains the
allowed ratios of the various plaquettes in the lattice and allows us to
express the total energy of the system (up to an overall constant),
Eq.~\eqref{diagonal_energy}, in terms of only 3 energies \eqref{eq:ginv}.
As a check on the results immediately above and as a further test of the
robustness of those results, we also performed a large-$s$ expansion in
the easy-axis limit. As with the fully quantum theory, we expanded
the harmonic spin wave energy
in powers of $\alpha$ up to the 6th order, applied the diagrammatic
analysis above involving contractible and non-contractible diagrams, and
studied the resulting energy of the non-constant 6th order terms.
The result \eqref{large_S_ergs} agrees exactly with the
${\cal O}(s)$ term obtained from the quantum degenerate perturbation theory,
\eqref{Infty_S_lim_DPT}. This satisfying consistency tells us that
the large-$s$ limit and small-$\alpha$ limit commute, and thus
our analysis is likely well controlled.
In the final section of the paper we used the results of the degenerate
perturbation theory to determine the low energy states on the plateau as
a function of $s$. Our result that the first non-constant diagonal term
in perturbation theory comes at 6th order is independent of the spin
value $s$. However, terms that allow plaquettes (such as type 1) to
resonate occur at order $6s$, which can be either larger or smaller than
6 depending on $s$. In the strict easy-axis limit, for $s\geq 3/2$,
the low energy states are therefore determined only by a diagonal
effective Hamiltonian, which can be analyzed classically. In the large
but finite $s$ limit, we are able
to resolve the degeneracy
of the ``zero flux'' manifold
found in the large-$s$ analysis (extended from that of Hizi and Henley\cite{Hizi:prb06} to the XXZ model).
We predict a ``trigonal$_{7}$'' state (see Fig.~\ref{fig:trig7})
to be the exact ground state in this easy-axis limit and for large $s$,
and numerical analysis suggests this obtains for $s\geq 2$.
For $s=3/2$, the lowest energy configuration we have found in the Ising
limit is a massively degenerate set of states (for example the $\sqrt{3}\times\sqrt{3}$ states, see Fig.\ref{fig:root3_state3}). For $s\leq 1$, and for $s=3/2$ extrapolated to the isotropic limit, we find
that the off-diagonal term in the effective Hamiltonian becomes
significant, and we suggest several likely candidates for the ground states in these
cases. This includes a possible $U(1)$ spin liquid state, which would be quite
remarkable if realized.
\subsection{Implications and future directions}
\label{sec:impl-future-direct}
First let us comment briefly upon the relevance to the spinel chromites.
For HgCr$_2$O$_4$, it is known that the temperature at which the plateau forms
($\approx 7$~K) is comparable to the highest temperature at which
magnetic order is observed. The theoretical estimate of the magnitude
of the couplings in the effective Hamiltonian due to quantum
fluctuations for $s=3/2$ is however small, e.g. $V_1 \approx 0.02J$ from
Eq.\eqref{eq:ginv}. Thus the temperature at which quantum fluctuations
are expected to induce magnetic ordering would be very low. A crude
estimate based on the measured Curie-Weiss temperature in
HgCr$_2$O$_4$\cite{Ueda:prb06} would
predict an ordering temperature $\lesssim 0.2$~K. This strong quantitative
disagreement with experiment indicates that a stronger classical
mechanism -- i.e. physics outside the Heisenberg model -- must be behind
the plateau formation. Indeed, a recent study of a simple model of
spin-lattice coupling gives a reasonable explanation of the plateau and
its order, predicting stabilization of the ${\bf R}$
state.\cite{Bergman:prb06} It would be
quite interesting to see whether quantum fluctuations might however play
a role in the other chromite spinels, e.g. CdCr$_2$O$_4$.
We now move away from the experiments on HgCr$_2$O$_4$, where the pure
nearest-neighbor Heisenberg antiferromagnet neglecting spin-lattice
interactions is clearly inadequate. Instead, we would like to address a
basic question that may be in the mind of the reader. In the pure
spin-$s$ isotropic Heisenberg model (i.e. $J_\perp=J_z=J$), is there a
plateau at half-magnetization? At $s=\infty$, i.e. the strict classical
limit, the answer is {\sl no}, and indeed the magnetization is a simple
linear function of field in this case. In principle this question can
be addressed by the $1/s$ expansion. However, to the order studied, the
situation remains unclear: the leading-order spin-wave spectrum remains
gapless even in a field. Higher-order calculations in $1/s$ are
required to resolve this question via that approach. Within the XXZ
model, for any amount of anisotropy ($\alpha<1$), a plateau is expected
even in the classical limit, so by continuity it is likely to persist at
smaller $s$. However, the extrapolation to $\alpha=1$ is not clear. In
Appendix~\ref{app:PlateauWidth}, we present some simple calculations
aimed at addressing the plateau width. In particular, we show that the
plateau narrows both from above and below upon perturbing away from the
Ising limit, where it is maximal. The plateau edges are determined by
the points at which the gap to excitations with non-zero $S^z$ vanishes.
Unfortunately, unlike the calculation of the splitting {\sl within} the
plateau states (the main focus of this paper), the energy difference
{\sl between} the plateau ground state and excited states with
higher/lower $S^z$ is non-vanishing already at quadratic/linear order in
$\alpha$. Hence, a high-order calculation of this gap becomes much more
involved than those in the bulk of this paper, and an extrapolation to
the isotropic limit is probably not reliable. The existence of a
plateau in the isotropic limit is a subject worthy of study by other
methods.
Next we turn to future applications of the formalism developed here to
other problems. From our exposition, it should be evident that our
methods generalize rather straightforwardly to other models of quantum
antiferromagnets with Ising anisotropy, provided a few
conditions hold.
First, the lattices should be composed of site-sharing simplexes. A
simplex is a collection of sites in which every pair of sites is
connected by a bond; examples include the triangle, crossed square,
and tetrahedron.
Second, the ground states of the Ising part of the Hamiltonian on a single
simplex should all be permutations of one another. This allows Ising
exchange, single-site anisotropies, biquadratic and other interactions.
Third, the interactions should be the same on each bond, but could
include quite arbitrary combinations of exchange, biquadratic couplings
etc. There are quite a number of
interesting models of frustrated magnetism which share these features.
For instance, the XXZ models on the Kagome and checkerboard lattices can
be studied this way at several values of the magnetization. The XXZ
model on the pyrochlore lattice at zero field is also such a system.
It will be interesting to explore the behavior of these models at
various values of $s$ using the methods of this paper.
More generally, the methods of this paper are possible because of a key
simplification: in a strong magnetic field, the symmetry of the spin
Hamiltonian is
$U(1)$ rather than $SU(2)$. Many
more theoretical methods are available to treat systems with {\sl
abelian}
conserved charges than for $SU(2)$-invariant spin
models. Furthermore, in the interesting search for spin-liquid states
of quantum antiferromagnets, much theoretical success has been achieved
in recent years in realizing such states in $U(1)$-symmetric models,
while examples of $SU(2)$-invariant spin liquids, even in models, are
much more limited. Therefore it seems likely that quantized
magnetization plateaux may be an excellent hunting ground for such
exotic states of matter, and moreover there is hope for theory and
experiment to meet on this plain.
\acknowledgments
This work was supported by NSF Grant DMR04-57440, PHY99-07949, and the
Packard Foundation. R.S. is supported by JSPS as a Postdoctoral Fellow.
\section{Introduction}
Let $N$ be a compact subinterval of either $\mathbb{R}$ or the circle $S^1$, and let $f:N\to N$ be piecewise
continuous. We say that a subinterval $J\subset N$ is a {\it wandering interval} of the map $f$ if the forward iterates $f^n(J)$, $n=0,1,2,\ldots$ are pairwise disjoint intervals, each not reduced to a point, and
the $\omega$-limit set of $J$ is an infinite set.
A great deal of information about the topological dynamics of a map $f:N\to N$ is revealed when one knows
whether $f$ has wandering intervals. This turns out to be a subtle question whose answer depends on
both the topological and regularity properties of the map $f$.
The question of the existence of wandering intervals first arose when $f$ is a diffeomorphism of the circle
$S^1$. The Denjoy counterexample shows that even a
$C^1$ diffeomorphism $f:S^1\to S^1$ may have wandering intervals. This behaviour
is ruled out when $f$ is smoother. More specifically, if $f$ is a $C^1$ diffeomorphism of the circle such that the
logarithm of its derivative has bounded variation then $f$ has no wandering intervals \cite{Den}. In this case the topological dynamics of $f$ is simple: if $f$ has no periodic points, then $f$ is topologically conjugate to a rotation.
The first results ensuring the absence of wandering intervals for continuous maps satisfying some smoothness conditions
were provided by Guckenheimer \cite{Guc}, Yoccoz \cite{Yoc}, and Blokh and Lyubich \cite{B-L}. Later on, de Melo et al.~\cite{MMS} generalised these results proving that if $N$ is compact and $f:N\to N$ is a $C^2$-map with non-flat critical points then $f$ has no wandering intervals.
Concerning discontinuous maps, Berry and Mestel \cite{B-M} found a condition which excludes wandering intervals in Lorenz
maps --- interval maps with a single discontinuity. Of course, conservative maps and, in particular, interval exchange transformations, admit no wandering intervals. We consider the following generalisation of interval exchange transformations.
Let $0\le a<b$ and let $\{a,b\}\subset D\subset [a,b]$ be a discrete set containing $n$
points. We say that an injective, continuously differentiable map $T:[a,b]\to [a,b]$ defined
on $\mathcal{D}\,(T)=[a,b]\setminus D$ is an {\it affine interval exchange transformation of $n$-subintervals}, shortly an {\it $n$-AIET}, if $\vert DT\vert$ is a positive,
locally constant function such that $T([a,b])$ is all of $[a,b]$ except for finitely many points. We also assume
that the points in $D\setminus\{a,b\}$ are non-removable discontinuities of $T$. We say that an AIET
is {\it oriented} if $DT>0$, otherwise we say that $T$ has {\it flips}. An {\it isometric IET
of $n$ subintervals}, shortly an $n$-IET, is an $n$-AIET satisfying $\vert DT\vert=1$ everywhere.
Levitt \cite{Lev} found an example of a non-uniquely ergodic oriented AIET with wandering intervals. Therefore there are Denjoy counterexamples of arbitrary smoothness. Gutierrez and Camelier \cite{C-G} constructed an AIET with wandering intervals that is semiconjugate to a self-similar IET. The regularity of conjugacies between AIETs and self-similar IETs is examined by Cobo \cite{Cob} and by Liousse and Marzougui \cite{L-M}. Recently, Bressaud, Hubert and Maass \cite{BHM} provided sufficient conditions for a self-similar IET to have an AIET with a wandering interval semiconjugate to it.
In this paper we present an example of a self-similar IET with flips to which the main result
of \cite{BHM} applies, yielding a $5$-AIET with flips
semiconjugate to this IET and having densely distributed wandering intervals. The AIET so obtained
is uniquely ergodic \cite{Ve1} (see \cite{Mas,Ve2}) and the support of the invariant measure is a Cantor set.
A few remarks are due in order to place this example in context. The existence of minimal non-uniquely ergodic AIETs with flips and wandering intervals would follow by the same argument as Levitt's \cite{Lev},
provided we knew a minimal non-uniquely ergodic IET with flips. However, no example of minimal non-uniquely
ergodic IET with flips is known, although it is possible to insert flips in the example of Keane \cite{Kea} (for oriented IETs) to get a transitive non-uniquely ergodic IET with flips having saddle-connections. Computational evaluations indicate that it is impossible to obtain, via Rauzy induction, examples of self-similar $4$-IETs with flips meeting the hypotheses of \cite{BHM}, despite
this being possible in the case of oriented $4$-IETs (see \cite{C-G,Cob}). Thus the example we present here is the simplest possible, in the sense that wandering intervals do not occur for AIETs with flips semiconjugate to a self-similar IET, obtained via Rauzy induction, defined on a smaller number of intervals.
\section{Self-similar interval exchange transformations}
Let $T:[a,b]\to [a,b]$ be an $n$-AIET defined on $[a,b]\setminus D$,
where $D=\{x_0,\ldots,x_n\}$ and $a=x_0<x_1<\ldots<x_{n-1}<x_n=b$.
Let $\beta_i\neq 0$ be the derivative of $T$ on $(x_{i-1},x_i)$, $i=1,2,\ldots,n$.
We shall refer to
$$ x=(x_0,x_1,\ldots,x_n) $$
as the {\it D-vector} of $T$ (i.e. the domain-of-definition-vector of
$T$). The vectors
\begin{eqnarray*}
\gamma = (\log |\beta_1|, \log |\beta_2|, \ldots, \log
|\beta_n|) \quad \mbox{and} \quad \tau = \left(\frac{\beta_1}{|\beta_1|},
\frac{\beta_2}{|\beta_2|},\ldots,\frac{\beta_n}{|\beta_n|}\right)
\end{eqnarray*}
will be called the {\it log-slope-vector} and the {\it flips-vector
of $T$}, respectively.
Notice that $T$ has flips if and only if some coordinate of $\tau$ is equal to $-1$.
Let
\begin{eqnarray*}
\{z_1,\ldots,z_n\}=\left\{T\left(\frac{x_0+x_1}{2}\right),T\left(\frac{x_1+x_2}{2}\right),\ldots,T\left(\frac{x_{n-1}+x_n}{2}\right)\right\}
\end{eqnarray*}
be such that $a<z_1<z_2<\ldots<z_n<b$; we define the {\it permutation $\pi$
associated to $T$} as the one that takes $i\in\{1,2,\ldots,n\}$
to $\pi(i)=j$ if and only if $z_{j}=T((x_{i-1}+x_i)/2)$.
It should be remarked that an AIET $E:[a,b]\to [a,b]$ with flips-vector $\tau\in\{-1,1\}^n$ and which has the zero vector as the
log-slope-vector is an IET (with
flips-vector $\tau$) and conversely. Let $J=[c,d]$ be a proper subinterval of $[a,b]$.
We say that the IET $E$ is {\it self-similar} (on $J$) if
there exists an orientation preserving affine map $L:\mathbb{R}\to\mathbb{R}$ such
that $L(J)=[a,b]$ and $L\circ\widetilde{E}=E\circ L$, where $\widetilde{E}:J\to J$ denotes the IET induced by $E$ and $L(\mathcal{D}(\widetilde{E}))\subset\mathcal{D}(E)$. A self-similar IET $E:[a,b]\to [a,b]$ on a proper subinterval
$J\subset [a,b]$ will be denoted by $(E,J)$.
Given an AIET $E:[a,b]\to [a,b]$, the {\it orbit} of $p\in [a,b]$ is the set
$$O(p)=\{E^n(p)\mid n\in\mathbb{Z}\ \mbox{and}\ E^n(p)\ \mbox{is defined}\}.$$
The AIET $E$ is called {\it transitive} if there exists an orbit of $E$ that is dense in $[a,b]$. We say that the
orbit of $p\in [a,b]$ is {\it finite} if $\#(O(p))<\infty$. In this way, a point $p\in [a,b]\setminus
(\mathcal{D}(E)\cup\mathcal{D}(E^{-1}))$ is said to have a finite orbit. A transitive AIET is {\it minimal} if it has no finite orbits.
Let $E:[a,b]\to [a,b]$ be an IET with D-vector
$(x_0,x_1,\cdots,x_n) $. Denote by $J=[c,d]$ a proper
subinterval of $[a,b]$. Suppose that $E$ is self-similar (on $J$);
so there exist an orientation-preserving affine map $L:\mathbb{R}\to\mathbb{R}$ and an IET $\widetilde{E}:J\to J$ such that $L(J)=[a,b]$ and $L\circ\widetilde{E}=E\circ L$.
Given $i=0,1,\ldots,n$, let $y_i=L^{-1}(x_i)$. In this way,
the set of discontinuities of $\widetilde{E}$ is $\{y_1,\ldots,y_{n-1}\}$.
We say that a non-negative matrix is {\it quasi-positive} if some power of it is a positive matrix.
A non-negative matrix is quasi-positive if and only if it is both irreducible and aperiodic.
Let $A$ be an $n\times n$ non-negative matrix whose entries
are:
\begin{eqnarray*}
A_{ji} = \#\{ 1 \leq k \leq N_i : E^k((y_{i-1}, y_i))
\subset (x_{j-1}, x_j) \},
\end{eqnarray*}
where $N_i$ is the least
non-negative integer such that for some $y\in (y_{i-1},y_i)$ (and
therefore for all $y\in (y_{i-1},y_i)$), $E^{N_i+1}(y)\in J$. We
shall refer to $A$ as the {\it matrix associated to} $(E,J)$.
Being self-similar, $E$ is also transitive, which implies the quasi-positivity of $A$.
Hence, by the Perron-Frobenius Theorem \cite{Gan}, $A$ possesses exactly one probability
right eigenvector $\alpha\in\Lambda_n$, where
$$\Lambda_{n}=\{\lambda=(\lambda_1,\ldots,\lambda_n)\mid\lambda_i>0,\,\forall i\}.$$
Moreover, the eigenvalue $\mu$ corresponding to $\alpha$
is simple, real and greater than $1$ and, also,
all other eigenvalues of $A$ have absolute value less than
$\mu$. It was proved by Veech \cite{Ve1} (see also \cite{Mas,Ve2}) that every self-similar IET
is minimal and uniquely ergodic. Furthermore,
following Rauzy \cite{Rau}, we conclude that
\begin{eqnarray*}
\alpha =
(x_1 - x_0, x_2 - x_1, \cdots, x_n - x_{n-1}).
\end{eqnarray*}
\section{The theorem of Bressaud, Hubert and Maass}
Let $A\in SL_n(\mathbb{Z})$ and let $\mathbb{Q}[t]$ be the ring of polynomials with rational coefficients in one variable. We say that two
real eigenvalues $\theta_1$ and $\theta_2$ of $A$ are {\it conjugate} if there exists an irreducible polynomial
$f\in\mathbb{Q}[t]$ such that $f(\theta_1)=f(\theta_2)=0$. We say that an AIET $T$ of $[0,1]$ is {\it semiconjugate}
(resp. {\it conjugate}) to an IET $E$ of $[0,1]$ if there exists a non-decreasing (resp. bijective) continuous map $h:[0,1]\to [0,1]$
such that $h(\mathcal{D}(T))\subset\mathcal{D}(E)$ and $E\circ h=h\circ T$.
\begin{theorem}[Bressaud, Hubert and Maass, 2007]\label{BHMthm} Let $J$ be a proper subinterval of $[0,1]$, \mbox{$E:[0,1]\to [0,1]$} be an interval exchange transformation self-similar on $J$ and let $A$ be the matrix associated to $(E,J)$.
Let $\theta_1$ be the Perron-Frobenius eigenvalue of $A$. Assume that
$A$ has a real eigenvalue $\theta_2$ such that
\begin{itemize}
\item [(1)] $1<\theta_2\: (<\theta_1)$;
\item [(2)] $\theta_1$ and $\theta_2$ are conjugate.
\end{itemize}
Then there exists an affine interval exchange transformation $T$ of $[0,1]$ with wandering intervals that is semiconjugate
to $E$.
\end{theorem}
\proof This theorem was proved in \cite{BHM} for oriented IETs. The same proof holds word for word for IETs
with flips. In this case, the AIET $T$ inherits its flips from the IET $E$ through the semiconjugacy
previously constructed therein.
\endproof
\section{The interval exchange transformation $E$}
In this section we present the IET that we shall use to construct the AIET with flips and wandering intervals.
We shall need the Rauzy induction \cite{Rau} to obtain a minimal, self-similar IET whose associated matrix satisfies
all the hypotheses of Theorem \ref{BHMthm}.
Let $ \alpha = (\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5)\in\Lambda_5$
be the probability (i.e. each $ \alpha_i >0$ and $ |\alpha| = \alpha_1 + \alpha_2 + \alpha_3 + \alpha_4 +
\alpha_5 = 1 $ ) Perron-Frobenius right eigenvector of the matrix
\begin{eqnarray*}
A = \left(\begin{array} {ccccc}
2 & 4 & 6 & 5 & 2 \\
0 & 2 & 1 & 1 & 1 \\
0 & 0 & 3 & 2 & 0 \\
1 & 2 & 2 & 2 & 1 \\
1 & 3 & 5 & 4 & 2
\end{array}\right).
\end{eqnarray*}
The eigenvalues $\theta_1,\theta_2,\rho_1,\rho_2,\rho_3$ of $A$ are
real and have approximate values:
\begin{eqnarray*}
\theta_1=7.829,\:\theta_2=1.588,\:\rho_1=1,\:\rho_2=0.358,\:\rho_3=0.225
\end{eqnarray*}
and $ \alpha=(\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5)$, the probability right
eigenvector associated to $\theta_1$, has approximate value
\begin{eqnarray*}
\alpha = (0.380, 0.091, 0.070, 0.170, 0.289).
\end{eqnarray*}
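These approximate values are straightforward to verify numerically. The following sketch (in Python with numpy; not part of the original argument) recomputes the spectrum of $A$ and the probability Perron-Frobenius eigenvector, and also checks quasi-positivity directly: already $A^2$ has strictly positive entries.

```python
import numpy as np

# The matrix A displayed above.
A = np.array([[2, 4, 6, 5, 2],
              [0, 2, 1, 1, 1],
              [0, 0, 3, 2, 0],
              [1, 2, 2, 2, 1],
              [1, 3, 5, 4, 2]], dtype=float)

evals, evecs = np.linalg.eig(A)
order = np.argsort(evals.real)[::-1]       # largest eigenvalue first
evals = evals.real[order]
theta1 = evals[0]                          # Perron-Frobenius eigenvalue

# Probability right eigenvector for theta1 (positive entries, summing to 1).
v = evecs.real[:, order[0]]
alpha = v / v.sum()

# A is quasi-positive: its square is already a positive matrix.
print((np.linalg.matrix_power(A, 2) > 0).all())   # True
print(np.round(theta1, 3))                        # approx 7.829
```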
Notice that $\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5=1$. In what follows we represent
a permutation $\pi$ of the set $\{1,2,\ldots,n\}$ by the $n$-tuple $\pi=(\pi(1),\pi(2),\ldots,
\pi(n))$.
We consider the IET $E:[0, 1]\to [0, 1]$ which is
determined by the following conditions:
\begin{enumerate}
\item [(1)] $E$ has the D-vector $ x = (x_0, x_1, x_2, x_3, x_4, x_5)$,
where
$$
x_0=0;\quad x_i=\sum_{k=1}^i \alpha_k,\quad i=1,\ldots,5;
$$
\item [(2)] $E$ has associated permutation $(5,3,2,1,4)$;
\item [(3)] $E$ has flips-vector $(-1,-1,1,1,-1)$.
\end{enumerate}
\begin{lemma}\label{example}
The map $E$ is self-similar on the interval $J=[0, 1/\theta_1]$, and $A$
is precisely the matrix associated to $(E,J)$.
\end{lemma}
\begin{proof}
We apply the Rauzy algorithm (see \cite{Rau}) to the IET $E$. We represent $E:[0,1]\to [0,1]$ by the pair $E^{(0)}=(\alpha^{(0)},p^{(0)})$, where $\alpha^{(0)}=\alpha$ is its length vector and $p^{(0)}=(-5,-3,2,1,-4)$ is its signed permutation, obtained by elementwise multiplication of its permutation $(5,3,2,1,4)$ and flips-vector $(-1,-1,1,1,-1)$. We shall apply the Rauzy procedure fourteen times, obtaining IETs $E^{(k)}=(\alpha^{(k)},p^{(k)})$, $k=0,\ldots,14$, with D-vector $x^{(k)}$ given by $x^{(k)}_0=0$ and $x^{(k)}_i=\sum_{j=1}^i \alpha^{(k)}_j$
for $i=1,2,\ldots,5$.
\begin{table}[htbp]
\centering
\begin{tabular}{|c| *{4}{r @{\:}}r|c|}
\hline
$k$ && \multicolumn{3}{c}{$p^{(k)}$} && $t^{(k)}$ \\
\hline
0&-5&-3& 2& 1&-4& 1\\
1& 4&-5&-3& 2& 1& 0\\
2& 5&-2&-4& 3& 1& 1\\
3& 5& 1&-2&-4& 3& 1\\
4& 5& 3& 1&-2&-4& 1\\
5& 5&-4& 3& 1&-2& 0\\
6&-2&-5& 4& 1&-3& 1\\
7&-2& 3&-5& 4& 1& 0\\
8&-3& 4&-2& 5& 1& 1\\
9&-3& 4&-2& 5& 1& 1\\
10&-3& 4&-2& 5& 1& 0\\
11&-4& 5&-3& 2& 1& 1\\
12&-4& 5& 1&-3& 2& 1\\
13&-4& 5& 2& 1&-3& 0\\
14&-5&-3& 2& 1&-4& 1\\
\hline
\end{tabular}
\caption{Rauzy cycle with associated matrix $A$.}
\label{tab:Table}
\end{table}
Given an IET $E^{(k)}$, defined on an interval $[0,L^{(k)}]$ and represented by the pair $(\alpha^{(k)},p^{(k)})$, the IET $E^{(k+1)}$ is defined to be the map induced on the interval $[0,L^{(k+1)}]$ by $E^{(k)}$, where $L^{(k+1)}=L^{(k)}-\min\,\{\alpha^{(k)}_5,\alpha^{(k)}_{s}\}$ and $s$ is such that
$\vert p^{(k)}(s)\vert=5$. We say that the type $t^{(k)}$ of $E^{(k)}$ is $0$ if $\alpha^{(k)}_5>\alpha^{(k)}_s$ and $1$ if $\alpha^{(k)}_5<\alpha^{(k)}_{s}$. Notice that $\sum_{i=1}^5 \alpha^{(k)}_i=L^{(k)}$.
The new signed permutations $p^{(k)}$ obtained by this procedure are given in Table~\ref{tab:Table}, along with the type $t^{(k)}$ of $E^{(k)}$. The length vector $\alpha^{(k+1)}$ is obtained from $\alpha^{(k)}$ by the equation $\alpha^{(k)}=M(p^{(k)},t^{(k)}).\alpha^{(k+1)}$, where $M(p^{(k)},t^{(k)})\in SL_n(\mathbb{Z})$ is a certain elementary matrix (see \cite{GLMPZ}). Moreover, we have that
$$
M(p^{(0)},t^{(0)}).\cdots.M(p^{(13)},t^{(13)})=A.
$$
Thus $\alpha^{(14)}=A^{-1}.\alpha^{(0)}=\alpha^{(0)}/\theta_1$, and $J=[0,L^{(14)}]$. Notice that $p^{(14)}=p^{(0)}$, and so we have a Rauzy cycle: $E^{(14)}$ and $E^{(0)}$ have the same flips-vector and permutation. Hence $\widetilde{E}=E^{(14)}$ is a $1/\theta_1$-scaled copy of $E=E^{(0)}$, and so $E$ is self-similar on the interval $J$.
As remarked before, since $E$ is self-similar, the matrix associated to $(E,J)$
is quasi-positive. In fact, $A$ is the matrix associated to $(E,J)$. To see this,
for $i\in\{0,\ldots,5\}$, let $y_i=x_i/\theta_1$ be the points of discontinuity for $\widetilde{E}$. Table~\ref{tab:itin} shows the itinerary $I(i)=\{I(i)_k\}_{k=1}^{N_i}$ of each interval $(y_{i-1},y_i)$, where \mbox{$N_i=\min\,\{n\geq 0:E^{n+1}((y_{i-1},y_i))\subset J\}$} and $I(i)_k=r$ if and only if $E^k((y_{i-1},y_i))\subset (x_{r-1},x_r)$.
\begin{table}[htbp]
\centering
\begin{tabular}{|c|c| *{17}{c@{\:}}|}
\hline
$i$ & $N_i$ &&\multicolumn{14}{c}{$I(i)$} && \\
\hline
1& 4& 1& 5& 1& 4& & & & & & & & & & & & & \\
2&11& 1& 5& 2& 1& 4& 1& 5& 2& 1& 5& 4& & & & & & \\
3&17& 1& 5& 2& 1& 4& 1& 5& 3& 1& 5& 3& 1& 5& 3& 1& 5& 4 \\
4&14& 1& 5& 2& 1& 4& 1& 5& 3& 1& 5& 3& 1& 5& 4& & & \\
5& 6& 1& 5& 2& 1& 5& 4& & & & & & & & & & & \\
\hline
\end{tabular}
\caption{Itineraries $I(i)$, $i\in\{1,\ldots,5\}$.}
\label{tab:itin}
\end{table}
The number of times that $j$ occurs in $I(i)$, for $i,j\in\{1,\ldots,5\}$, is precisely $A_{ji}$ and thus $A$ is the matrix associated to the pair $(E,J)$ as required.
\end{proof}
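The tallying in the last step of the proof is easy to mechanize. The sketch below (Python; itineraries transcribed from Table~\ref{tab:itin}) confirms that the occurrence counts reproduce the columns of $A$, and that each itinerary has length $N_i$.

```python
import numpy as np

# Itineraries I(i) transcribed from Table 2.
itineraries = {
    1: [1, 5, 1, 4],
    2: [1, 5, 2, 1, 4, 1, 5, 2, 1, 5, 4],
    3: [1, 5, 2, 1, 4, 1, 5, 3, 1, 5, 3, 1, 5, 3, 1, 5, 4],
    4: [1, 5, 2, 1, 4, 1, 5, 3, 1, 5, 3, 1, 5, 4],
    5: [1, 5, 2, 1, 5, 4],
}

A = np.array([[2, 4, 6, 5, 2],
              [0, 2, 1, 1, 1],
              [0, 0, 3, 2, 0],
              [1, 2, 2, 2, 1],
              [1, 3, 5, 4, 2]])

# A_{ji} should equal the number of times j occurs in I(i).
counts = np.array([[itineraries[i].count(j) for i in range(1, 6)]
                   for j in range(1, 6)])
print((counts == A).all())   # True
```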
\begin{theoremA} There exists a uniquely ergodic
affine interval exchange transformation of $[0,1]$ with flips
having wandering intervals and such that the support of
the invariant measure is a Cantor set.
\end{theoremA}
\begin{proof} By construction, the matrix $A$ associated to $(E,J)$ satisfies hypothesis $(1)$
of \mbox{Theorem \ref{BHMthm}}. The characteristic polynomial $p(t)$ of $A$ can be written as the
product of two irreducible polynomials in $\mathbb{Q}[t]$:
$$p(t)=(1-t) (1-8t+18t^2-10t^3+t^4).$$
Thus the eigenvalues $\theta_1$ and $\theta_2$ are zeros of the same irreducible polynomial
of degree four and so are conjugate. Hence, $A$ also verifies hypothesis $(2)$ of \mbox{Theorem \ref{BHMthm}},
which finishes the proof.
\end{proof}
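The factorization used in the proof can also be checked symbolically. The following sketch (Python with sympy) verifies that $\det(tI-A)=(t-1)(t^4-10t^3+18t^2-8t+1)$, which is $-p(t)$ in the normalization above (same roots), and that the quartic factor is irreducible over $\mathbb{Q}$.

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, 4, 6, 5, 2],
               [0, 2, 1, 1, 1],
               [0, 0, 3, 2, 0],
               [1, 2, 2, 2, 1],
               [1, 3, 5, 4, 2]])

charpoly = A.charpoly(t).as_expr()             # det(t*I - A)
quartic = t**4 - 10*t**3 + 18*t**2 - 8*t + 1   # 1 - 8t + 18t^2 - 10t^3 + t^4

# det(tI - A) = (t - 1) * quartic = -p(t), with p(t) as in the text.
print(sp.expand(charpoly - (t - 1)*quartic) == 0)   # True

# The quartic is irreducible over Q, hence theta_1 and theta_2,
# being two of its roots, are conjugate algebraic numbers.
print(sp.factor_list(quartic))
```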
Note that for an AIET $T$, the forward and backward iterates of a wandering interval $J$ form a pairwise disjoint collection of intervals. Moreover, when $T$ is semiconjugate to a transitive IET, as is the case in Theorem A, the $\alpha$-limit set and $\omega$-limit set of $J$ coincide.
\bibliographystyle{amsplain}
\section{Introduction}
This paper is the logical continuation of Ref.\,[\onlinecite{PRB1}],
referred to as Part I henceforth. In Part I, we described a method for the exact
diagonalization of clean systems of independent fermions subject to arbitrary
boundary conditions (BCs), and illustrated its application in several prototypical
one-dimensional ($D=1$) tight-binding models \cite{PRB1,JPA,PRL}.
Our broad motivation was, and remains, to develop an analytic approach
for exploring and quantitatively characterizing the interplay between {\em bulk} and
{\em boundary} physics, in a minimal setting where translation symmetry is
broken {\em only} by BCs. On a fundamental level, such an understanding is
a prerequisite toward building a complete physical picture of the
bulk-boundary correspondence for mean-field topological electronic matter.
For systems classified as topologically non-trivial \cite{chiu16}, there exist at least one
bulk invariant and one boundary invariant whose values must coincide \cite{prodanBook}.
Bulk invariants are insensitive to BCs by construction, but what is the impact of BCs
on boundary invariants? Likewise, with an eye toward applications, what are design principles
and ultimate limitations in engineering boundary modes in topological materials?
Our method of exact diagonalization provides an insightful
first step towards answering these questions, because it can
be casted neatly as a generalization of Bloch's theorem to arbitrary BCs. As
we showed, in the generic case the exact energy eigenstates of a
single-particle Hamiltonian are linear combinations of {\em generalized Bloch
states}. The latter are uniquely determined by the analytic continuation
of the Bloch Hamiltonian (or some closely-related matrix function)
off the Brillouin zone to {\it complex} values of the crystal momentum.
In essence, the problem of diagonalizing the single-particle Hamiltonian boils
down to finding all linear combinations of generalized Bloch states which satisfy
the BCs. As long as the bulk is disorder-free and couplings have finite range,
BCs can be encoded in a {\it boundary matrix}, whose shape is generally
independent of the number of lattice sites. Any change in the energy levels and eigenstates
induced by a change in BCs is thus directly and efficiently computable from the boundary matrix in principle.
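As an elementary illustration of this logic (a toy example, not the general formalism of Part I), consider a single-band chain with nearest-neighbor hopping $t$ and open BCs: the generalized Bloch states are $e^{\pm ikj}$, and the boundary equations select the standing waves $\psi_j\propto\sin(kj)$ with $k=\pi m/(N+1)$, $m=1,\dots,N$, and $E=-2t\cos k$.

```python
import numpy as np

t, N = 1.0, 8                                  # hopping and number of sites
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))    # open-BC tridiagonal matrix

# Energies from the allowed (real) values of the crystal momentum k:
ks = np.pi * np.arange(1, N + 1) / (N + 1)
E_analytic = np.sort(-2 * t * np.cos(ks))
print(np.allclose(np.sort(np.linalg.eigvalsh(H)), E_analytic))   # True

# Eigenstates: linear combinations of Bloch states e^{+ikj}, e^{-ikj}
# satisfying the BCs, i.e. standing waves sin(k j).
k = ks[0]
psi = np.sin(k * np.arange(1, N + 1))
print(np.allclose(H @ psi, -2 * t * np.cos(k) * psi))            # True
```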
The generalized Bloch theorem properly accounts for two types of energy
eigenstates that do not exist once translation invariance is imposed via
Born-von-Karman (periodic) BCs: perfectly localized modes and localized modes
whose exponential decay exhibits a power-law prefactor. While such ``power-law
modes'' have been well documented in numerical investigations of long-ranged tight-binding
models \cite{longrange}, it was a surprise to find them in short-range models\cite{PRB1,JPA} -- notably,
the topological zero-modes of the Majorana chain display power-law behavior in a
parameter regime known as the ``circle of oscillations". As shown in
Part I, both types of exotic modes appear precisely when the transfer matrix of
the model fails to be invertible. The generalized Bloch theorem may be thought of
as bestowing {\it exact solvability} in the same sense as the algebraic
Bethe ansatz does: the linear-algebraic task of diagonalizing the single-particle
Hamiltonian is mapped to one of solving a small (independent of
the number of sites) system of {\em polynomial} equations. While
in general, if the polynomial degree is higher than four, the roots
must be found numerically, whenever this polynomial system can be solved
analytically, one has managed to solve the original linear-algebraic
problem analytically as well. In fact, fully analytical solutions
are less rare than one might think, and either emerge in special parameter regimes,
or by suitably adjusting BCs.
In this paper, Part II, we extend the scope of our generalized Bloch
theorem even further, with a twofold goal in mind. First, while in Part I
we presented the basic framework for calculating energy eigenstates
of fermionic $D$-dimensional lattice systems with surfaces, for simplicity
we restricted to a setting where the total system Hamiltonian retains
translation invariance along $D-1$ directions parallel to the surfaces.
In more realistic situations in surface physics, however, this assumption is
invalidated by various factors, including surface reconstruction and
surface disorder. Establishing procedures for exact
diagonalization of \(D\)-dimensional clean systems subject to
arbitrary BCs (surface disorder included) on two parallel hyperplanes
is thus an important necessary step. We accomplish this in
Sec.\,\ref{theoryrecap}, by allowing for BCs to be adjusted in order to
conveniently describe surface relaxation, reconstruction, or disorder in
terms of an appropriate boundary matrix.
As a second main theoretical extension, we proceed to show in Sec.\,\ref{interfaces}
how to diagonalize ``multi-component'' systems that host
hyperplanar interfaces separating clean bulks, that is, ``junctions".
Surface and interface problems are conceptually related: BCs are but effective models of
the interface between the system of interest and its ``complement'' or
environment. While it is well appreciated that exotic many-body phenomena can take
place at interfaces, there are essentially no known principles to guide
interface engineering (see e.g. Ref.\,[\onlinecite{diez15}] for an
instructive case study). It is our hope that our characterization of interfaces
in terms of {\it interface matrices} will shed some light on the problem of finding
such guiding principles, at least within the mean-field approximation.
As a concrete illustration, we include an exact
calculation of the Andreev bound states in a simple model of a clean
superconducting-normal-superconducting (SNS) junction, complementing
the detailed numerical investigations reported in Ref. [\onlinecite{bena12}].
In addition to the SNS junction, we provide in Sec.\,\ref{high_dim} several
explicit applications of our diagonalization procedures to computing surface
band structures in systems ranging from insulating ladders to $p$- and
$s$-wave topological superconductors (TSCs) in $D=2$ lattices.
The ladder model of domain-wall fermions introduced by Creutz
\cite{Creutz,CreutzPRD,CreutzRMP}
serves as a bridge between one and higher dimensions.
For some values of the magnetic flux, the Creutz ladder can be classified as a
topological insulator in class A and we find that it displays {\em topological power-law modes}.
To the best of our knowledge, this is the first example of such power-law modes
in a short-range insulator. In addition, we uncover a Gaussian duality mapping the Creutz
ladder to a dual system consisting of two Majorana chains (see Ref.\,[\onlinecite{equivalence}]
for other examples of dualities bridging distinct classes in the mean-field topological
classification of electronic matter, and Ref. [\onlinecite{dualitygeneral}] for the general approach
to dualities).
Moving to $D=2$ systems, we first consider graphene
ribbons with two types of edges, ``zigzag-bearded'' and ``armchair'' (in the terminology of
Ref.\,[\onlinecite{kohmoto07}]), in order to also provide an
opportunity for direct comparison within our method and other analytical
calculations in the literature. As a more advanced application, we
compute in closed form the surface band structure of the chiral
\(p+ip\) TSC \cite{read00}. This problem is well under
control within the continuum approximation\cite{bernevig},
but not on the lattice. This distinction is important because the phase
diagram of lattice models is richer than one would infer from the
continuum approximation. As a final, technically harder example of a surface
band-structure calculation, we investigate a two-band, gapless $s$-wave TSC that
can host symmetry-protected Majorana flat bands and is distinguished by a
non-unique, anomalous bulk-boundary correspondence
\cite{swavePRL,swavePRB}.
We conclude in Sec.\,\ref{outlook} by reiterating our key points and highlighting some
open questions. To ease the presentation, most technical details of our calculations
are deferred to appendixes, including the analytic diagonalization
of several paradigmatic $D=1$ models with boundaries. For reference, a summary of
all the model systems we explicitly analyzed so far using the generalized Bloch
theorem approach
is presented in Table \ref{MainTable}.
\begin{table*}
\centering
\begin{tabular}{|l|c|c|c|c|} \hline
{\bf $D=1$ and quasi-($D$=1) systems} & {\bf PC} & {\bf Boundary Conditions} & {\bf Some Key Results} & {\bf See} \\ \hline
\hline
{\sf Single-band chain} & yes & open/edge impurities & full diagonalization & Part I, Sec.\,V.A\\ \hline
{\sf Anderson model} & yes & open & full diagonalization & Part I, Sec.\,V.B\\ \hline
{\sf Majorana Kitaev chain} & no & open & full diagonalization & Refs.\,[\onlinecite{PRL,JPA}]; \\
& & & power-law Majorana modes & Part I, Sec.\,V.C \\ \hline
{\sf Two-band $s$-wave TSC} & no & open/twisted & \(4\pi\)-periodic supercurrent & Part I, Sec\,VI.B\\
& & & without parity switch & \\ \hline
{\sf BCS chain} & no & open & full diagonalization & App.\,\ref{appBCS}\\ \hline
{\sf Su-Schrieffer-Heeger model} & yes & reconstructed & full diagonalization & App.\,\ref{basic_examples} \\ \hline
{\sf Rice-Mele model} & yes &reconstructed & full diagonalization & App.\,\ref{basic_examples} \\ \hline
{\sf Aubry-Andr\'e-Harper model} & yes & reconstructed & full diagonalization & App.\,\ref{basic_examples} \\
(period-two) & & & & \\ \hline
{\sf Creutz ladder} & yes & open & power-law topological modes& Sec.\,\ref{creutzladder}, App.\,\ref{creupendix}\\ \hline
{\sf Majorana ladder} & no & open & SC dual of Creutz ladder & Sec.\,\ref{majoranaladder}\\ \hline
{\sf SNS junction} & no & junction & Andreev bound states & Sec.\,\ref{interfaces}\\ \hline \hline
{\bf $D=2$ systems} & & & & \\ \hline \hline
{\sf Graphene} (including & yes & zigzag-bearded (ribbon) & full diagonalization & Sec.\,\ref{zbsec} \\
modulated on-site potential)& & armchair (ribbon)& full diagonalization & Sec.\,\ref{armpitsec}\\ \hline
{\sf Harper-Hofstadter model} & yes & open (ribbon) & closed-form edge bands and states & Ref.\,[\onlinecite{Qiaoru}] \\ \hline
{\sf Chiral p+ip TSC} & no & open (ribbon) & closed-form edge bands and states & Sec.\,\ref{pwavetoponductor} \\
& & & power-law surface modes & \\ \hline
{\sf Two-band $s$-wave TSC} & no & open/twisted & \(k_\parallel\)-resolved DOS
& Sec.\,\ref{2Dswavetoponductor} \\
& & &localization length at zero energy &\\
& & & enhanced $4\pi$-periodic supercurrent & \\
\hline
\end{tabular}
\caption{
Summary of representative models analyzed in this work along with Part I (Ref. [\onlinecite{PRB1}]) and
Ref. [\onlinecite{Qiaoru}], by using the generalized Bloch theorem approach.
Some emerging key results are highlighted in the fourth column.
PC: particle-conserving, DOS: density of states, SC: superconductor (or superconducting,
depending on context). Additional models that are amenable to solution by our approach include Majorana
chains with twisted BCs \cite{Katsura17} or longer-range (e.g., next-nearest-neighbor) couplings
\cite{Liu}, dimerized Kitaev chains \cite{Zhou}, period-three hopping models \cite{Kevin},
as well as time-reversal-invariant TSC
wires with spin-orbit coupling \cite{Aligia}, to name a few.
}
\label{MainTable}
\end{table*}
\section{Tailoring the generalized Bloch theorem to surface physics problems}
\label{theoryrecap}
As mentioned, the main aim of this section is to describe how the generalized
Bloch theorem may be tailored to encompass BCs encountered in realistic
surface-physics scenarios, which need not respect translation invariance along
directions parallel to the interface, as we assumed in Part I. Notwithstanding,
the key point to note is that the bulk-boundary separation introduced
in Part I goes through {\em regardless} of the nature of the BCs. As a result,
the bulk equation describing a clean system can always be decoupled by a partial
Fourier transform into a family of ``virtual" chains parametrized
by the conserved component of crystal momentum \({\mathbf{k}}_\parallel\).
{\em If} the BCs conserve \({\mathbf{k}}_\parallel\), then they also reduce to BCs for
each virtual chain. If they do {\em not}, then the BCs hybridize the generalized
Bloch states associated to the individual virtual chains. In general, the boundary matrix
will then depend on all crystal momenta in the surface Brillouin zone.
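The decoupling for \({\mathbf{k}}_\parallel\)-conserving BCs can be made concrete in a minimal example (hypothetical parameters, not one of the models treated below): a $D=2$ nearest-neighbor hopping model, periodic along $x$ and open along $y$, whose full single-particle spectrum is the union of the spectra of the $N_x$ virtual chains labeled by the conserved momentum.

```python
import numpy as np

tx = ty = 1.0
Nx, Ny = 6, 5

# Full single-particle Hamiltonian: PBC along x, open BCs along y.
H = np.zeros((Nx * Ny, Nx * Ny))
idx = lambda jx, jy: jx * Ny + jy
for jx in range(Nx):
    for jy in range(Ny):
        H[idx(jx, jy), idx((jx + 1) % Nx, jy)] = -tx   # hop in x (periodic)
        H[idx((jx + 1) % Nx, jy), idx(jx, jy)] = -tx
        if jy + 1 < Ny:                                # hop in y (open)
            H[idx(jx, jy), idx(jx, jy + 1)] = -ty
            H[idx(jx, jy + 1), idx(jx, jy)] = -ty

# Virtual chains: for each conserved momentum k, an Ny-site open chain
# with a k-dependent on-site shift -2 tx cos(k).
Ty = -ty * (np.eye(Ny, k=1) + np.eye(Ny, k=-1))
spec = []
for m in range(Nx):
    k = 2 * np.pi * m / Nx
    spec.extend(np.linalg.eigvalsh(-2 * tx * np.cos(k) * np.eye(Ny) + Ty))

print(np.allclose(np.sort(np.linalg.eigvalsh(H)), np.sort(spec)))  # True
```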
\subsection{Open boundary conditions}
We consider a clean system of independent fermions embedded on a $D$-dimensional lattice
with associated Bravais lattice $\Lambda_D$. Let $d_{\rm int}$ denote the number of fermionic
degrees (e.g., the relevant orbital and spin degrees) enclosed by a primitive cell attached to each
point of $\Lambda_D$. Now let us terminate this system along two parallel lattice hyperplanes,
or {\it hypersurfaces} henceforth -- resulting in open (or ``hard-wall'') BCs.
The terminated system is translation-invariant along $D-1$ lattice vectors parallel to the
hypersurfaces, so that we can associate with it
a Bravais lattice $\Lambda_{D-1}$ of spatial dimension $D-1$,
known as the {\it surface mesh} \cite{bechstedt}. If ${\mathbf{m}}_1,\dots,{\mathbf{m}}_{D-1}$
denote the primitive vectors of $\Lambda_{D-1}$,
then any point ${\mathbf{j}}_\parallel \in \Lambda_{D-1}$ can be expressed
as ${\mathbf{j}}_\parallel = \sum_{\mu=1}^{D-1}j_\mu {\mathbf{m}}_\mu$, where $j_\mu$ are integers.
Let us choose a lattice vector ${\mathbf{s}}$ of $\Lambda_D$ that is not in the surface mesh
(and therefore, not parallel to the two hypersurfaces).
We will call ${\mathbf{s}}$ the {\it stacking vector}. Since $\{{\mathbf{m}}_1,\dots,{\mathbf{m}}_{D-1},{\mathbf{s}}\}$ are not
the primitive vectors of $\Lambda_D$ in general, the Bravais lattice $\bar{\Lambda}_D$
generated by them may cover only a subset of points in $\Lambda_D$.
Therefore, in general, each primitive cell of $\bar{\Lambda}_D$
may enclose a number $I>1$ of points of $\Lambda_D$.
As a result, there are a total of $\bar{d}_{\rm int} =Id_{\rm int}$ fermionic degrees of freedom
attached to each point ${\mathbf{j}}_\parallel + j{\mathbf{s}}$ of $\bar{\Lambda}_D$ with $j$ an integer (see Fig. \ref{fig_surface}).
Let us denote the corresponding creation (annihilation)
operators by $c^\dagger_{{\mathbf{j}}_\parallel j 1}, \dots, c^\dagger_{{\mathbf{j}}_\parallel j \bar{d}_{\rm int}}$
($c_{{\mathbf{j}}_\parallel j 1},\dots,c_{{\mathbf{j}}_\parallel j \bar{d}_{\rm int}}$). For each ${\mathbf{j}}_\parallel$ in the surface
mesh, we define the array of the basis of fermionic operators by
\[
\hat{\Phi}^\dagger_{{\mathbf{j}}_\parallel} \equiv
\begin{bmatrix}\hat{\Phi}^\dagger_{{\mathbf{j}}_\parallel,1} & \cdots &
\hat{\Phi}^\dagger_{{\mathbf{j}}_\parallel,N}\end{bmatrix},
\;\;
\hat{\Phi}^\dagger_{{\mathbf{j}}_\parallel,j} \equiv
\begin{bmatrix}c^\dagger_{{\mathbf{j}}_\parallel j1} & \cdots &
c^\dagger_{{\mathbf{j}}_\parallel j\bar{d}_{\rm int}}\end{bmatrix},
\]
where the integer $N$ is proportional to the separation between the two hypersurfaces.
For arrays, such as $\hat{\Phi}^{\dagger}_{{\mathbf{j}}_\parallel}$ and
$\hat{\Phi}_{{\mathbf{j}}_\parallel}^{\;}$,
we shall follow the convention that the arrays appearing on the left (right)
of a matrix are row (column) arrays.
In the above basis, the many-body Hamiltonian of the system, subject to
open BCs on the hypersurfaces, can be expressed as \cite{PRB1}
\begin{eqnarray*}
\widehat{H}_N = \hspace{-3mm}
\sum_{{\mathbf{j}}_\parallel,{\mathbf{r}}_\parallel \in \Lambda_{D-1}} \hspace{-3mm}
\Big[\hat{\Phi}^\dagger_{{\mathbf{j}}_\parallel}K_{{\mathbf{r}}_\parallel}\hat{\Phi}^{\;}_{{\mathbf{j}}_\parallel+{\mathbf{r}}_\parallel}
+\frac{1}{2}(\hat{\Phi}^\dagger_{{\mathbf{j}}_\parallel}\Delta_{{\mathbf{r}}_\parallel}
\hat{\Phi}_{{\mathbf{j}}_\parallel+{\mathbf{r}}_\parallel}^\dagger +\text{H.c.})\Big],
\label{Hamtransinv}
\end{eqnarray*}
where ${\mathbf{j}}_\parallel, {\mathbf{r}}_\parallel$ are vectors in the surface mesh, and
$K_{{\mathbf{r}}_\parallel}$, $\Delta_{{\mathbf{r}}_\parallel}$ are $N\bar{d}_{\rm int}\times N\bar{d}_{\rm int}$
hopping and pairing matrices that satisfy
\( K_{-{\mathbf{r}}_\parallel}=K_{{\mathbf{r}}_\parallel}^\dagger,\;
\Delta_{-{\mathbf{r}}_\parallel}=-\Delta_{{\mathbf{r}}_\parallel}^{\rm T} \)
by virtue of fermionic statistics, with the superscript ${\rm T}$ denoting
the transpose operation. Thanks to the assumption of a clean, finite-range system,
these are {\em banded block-Toeplitz matrices} \cite{JPA}: explicitly, if $R \geq 1$ is
the range of hopping and pairing, we may write
$[S_{{\mathbf{r}}_\parallel}]_{jj'} \equiv S_{{\mathbf{r}}_\parallel,j'-j} \equiv S_{{\mathbf{r}}_\parallel,r}$,
with
\[ S_{{\mathbf{r}}_\parallel,r} =0 \quad \text{if} \quad |r|>R, \;\; \forall {\mathbf{r}}_\parallel,\quad \mbox{where } S=K,\Delta . \]
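To make this structure concrete, the sketch below (Python; the block values are placeholders) assembles such a banded block-Toeplitz matrix for $R=1$ and $\bar d_{\rm int}=2$: the block $[S_{{\mathbf{r}}_\parallel}]_{jj'}$ depends only on $j'-j$ and vanishes for $|j'-j|>R$. Note that for a fixed ${\mathbf{r}}_\parallel$ the matrix need not be Hermitian; Hermiticity relates $K_{-{\mathbf{r}}_\parallel}$ to $K_{{\mathbf{r}}_\parallel}^\dagger$.

```python
import numpy as np

N, d = 5, 2                                 # number of cells, internal dimension
# Placeholder blocks for a range-1 (R = 1) model.
K0 = np.array([[1.0, 0.2], [0.2, -1.0]])    # block  S_{r,0}   (j' - j = 0)
K1 = np.array([[0.5, 0.0], [0.3, 0.5]])     # block  S_{r,1}   (j' - j = +1)
Km1 = np.array([[0.4, 0.1], [0.0, 0.4]])    # block  S_{r,-1}  (j' - j = -1)

K = np.zeros((N * d, N * d))
for j in range(N):
    K[j*d:(j+1)*d, j*d:(j+1)*d] = K0
    if j + 1 < N:
        K[j*d:(j+1)*d, (j+1)*d:(j+2)*d] = K1    # [S]_{j,j+1} = S_{r,1}
        K[(j+1)*d:(j+2)*d, j*d:(j+1)*d] = Km1   # [S]_{j+1,j} = S_{r,-1}

# Banded block-Toeplitz structure: constant along block diagonals,
# zero beyond the band |j' - j| > R = 1.
blk = lambda j, jp: K[j*d:(j+1)*d, jp*d:(jp+1)*d]
print(all(np.allclose(blk(j, j), K0) for j in range(N)))   # True
print(np.allclose(blk(0, 2), 0))                           # True
```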
\begin{figure}[t]
\begin{center}
\hspace*{3mm}\includegraphics[width=8cm]{surface3.pdf}
\end{center}
\vspace*{-2mm}
\caption{
(Color online) Sketch of a $D=2$ lattice system with nearest-neighbor (NN) hopping
subject to arbitrary BCs.
The filled and hollow circles together form a Bravais lattice $\Lambda_2$.
The two $D=1$ edges of the system are shown by horizontal lines decorated by pattern.
The surface mesh $\Lambda_1$ is generated by ${\mathbf{m}}_1$, and consists of all points
connected by dashed black lines. ${\mathbf{m}}_1$ and ${\mathbf{s}}$ generate the Bravais lattice $\bar{\Lambda}_2$,
formed only by the filled circles. A primitive cell of $\bar{\Lambda}_2$ (shaded brown)
encloses two points of $\Lambda_2$. In this case, assuming there are $d_{\rm int}$
internal degrees of freedom associated to each point of $\Lambda_2$, we get
$\bar{d}_{\rm int} = 2d_{\rm int}$. Since, for NN hopping, $R=1$, the operator $W$ that
implements the BCs has its support on single-particle states in the boundary region
(shaded gray).
\label{fig_surface}}
\end{figure}
Next, we enforce periodic BCs along the directions ${\mathbf{m}}_1,\dots,{\mathbf{m}}_{D-1}$ in which translation
invariance is retained, by restricting to
those lattice points ${\mathbf{j}}_\parallel = \sum_{\mu=1}^{D-1} j_\mu {\mathbf{m}}_\mu$ where
for each $\mu$, $j_\mu$ takes values from $\{1,\dots,N_\mu\}$,
$N_\mu$ being a positive integer.
Let ${\mathbf{n}}_1,\dots,{\mathbf{n}}_{D-1}$ denote the primitive vectors of the surface reciprocal
lattice, which is the $(D-1)$-dimensional lattice reciprocal to the surface mesh $\Lambda_{D-1}$,
satisfying ${\mathbf{m}}_\mu \cdot {\mathbf{n}}_\nu = 2\pi \delta_{\mu\nu}$ for $\mu,\nu=1,\dots,D-1$.
The Wigner-Seitz cell of the surface reciprocal lattice is the {\it surface Brillouin zone},
denoted by SBZ. In the Fourier-transformed basis defined by
\begin{equation}
\label{phikperp}
\hat{\Phi}_{{\mathbf{k}}_\parallel}^\dagger \equiv
\sum_{{\mathbf{j}}_\parallel}^{\Lambda_{D-1}} \frac{e^{i{\mathbf{k}}_\parallel\cdot {\mathbf{j}}_\parallel}}{\sqrt{N_S}}\hat{\Phi}_{{\mathbf{j}}_\parallel}^\dagger,
\quad N_S = N_1\dots N_{D-1},
\end{equation}
where ${\mathbf{k}}_\parallel = \sum_{\mu=1}^{D-1} \frac{k_\mu}{N_\mu}{\mathbf{n}}_\mu$, with integer
$k_\mu$, are the crystal momenta in the SBZ, we can then express the relevant many-body Hamiltonian
in terms of ``virtual wires'' labeled by ${\mathbf{k}}_\parallel$.
That is,
\begin{eqnarray}
\widehat{H}_N &\equiv& \sum_{{\mathbf{k}}_\parallel \in \text{SBZ}}
\widehat{H}_{{\mathbf{k}}_\parallel,N} ,\quad \text{where} \label{HamOBC} \\
\widehat{H}_{{\mathbf{k}}_\parallel,N} &=& \frac{1}{2}(\hat{\Phi}_{{\mathbf{k}}_\parallel}^\dagger K_{{\mathbf{k}}_\parallel}\hat{\Phi}^{\;}_{{\mathbf{k}}_\parallel}
- \hat{\Phi}^{\;}_{-{\mathbf{k}}_\parallel} K_{-{\mathbf{k}}_\parallel}^{*} \hat{\Phi}_{-{\mathbf{k}}_\parallel}^\dagger \notag \\
&+& \hat{\Phi}_{{\mathbf{k}}_\parallel}^\dagger \Delta_{{\mathbf{k}}_\parallel}\hat{\Phi}_{-{\mathbf{k}}_\parallel}^\dagger
- \hat{\Phi}_{-{\mathbf{k}}_\parallel}\Delta_{-{\mathbf{k}}_\parallel}^{*} \hat{\Phi}_{{\mathbf{k}}_\parallel}) + \frac{1}{2}\text{Tr } K_{{\mathbf{k}}_\parallel}.
\notag
\end{eqnarray}
Here, Tr denotes trace and the $N\bar{d}_{\rm int} \times N\bar{d}_{\rm int}$ matrices $S_{{\mathbf{k}}_\parallel}$,
for $S = K,\Delta$, have entries
\[[S_{{\mathbf{k}}_\parallel}]_{jj'} \equiv S_{{\mathbf{k}}_\parallel,j'-j} \equiv S_{{\mathbf{k}}_\parallel,r}\equiv
\sum_{{\mathbf{r}}_\parallel} e^{i{\mathbf{k}}_\parallel\cdot {\mathbf{r}}_\parallel}S_{{\mathbf{r}}_\parallel,r}, \]
and the finite-range assumption requires that
\begin{equation}
\label{Range}
S_{{\mathbf{k}}_\parallel,r} =0 \ \, \text{if} \ |r|>R, \;\; \forall {\mathbf{k}}_\parallel \in \text{SBZ},\ \ \mbox{where } S=K,\Delta .
\end{equation}
\subsection{Arbitrary boundary conditions}
\label{sub:abc}
Physically, non-ideal surfaces may result from processes such as
surface relaxation or reconstruction, as well as from the presence of
surface disorder (see Fig. \ref{fig_reconst}). In our setting, these may be
described as effective BCs, modeled by a Hermitian operator of the form
\[
\widehat{W} \equiv \sum_{{\mathbf{j}}_\parallel,{\mathbf{j}}'_\parallel} \Big[
\hat{\Phi}^\dagger_{{\mathbf{j}}_\parallel}W^{(K)}_{{\mathbf{j}}_\parallel,{\mathbf{j}}'_\parallel}\hat{\Phi}^{\;}_{{\mathbf{j}}'_\parallel}
+\frac{1}{2}(
\hat{\Phi}^\dagger_{{\mathbf{j}}_\parallel}W^{(\Delta)}_{{\mathbf{j}}_\parallel,{\mathbf{j}}'_\parallel}\hat{\Phi}^{\dagger}_{{\mathbf{j}}_\parallel'}
+\text{H.c.})\Big], \]
subject to the constraints from fermionic statistics,
\begin{eqnarray*}
W^{(K)}_{{\mathbf{j}}'_\parallel,{\mathbf{j}}_\parallel}=\big[W^{(K)}_{{\mathbf{j}}_\parallel,{\mathbf{j}}'_\parallel}\big]^\dagger,\quad
W^{(\Delta)}_{{\mathbf{j}}'_\parallel,{\mathbf{j}}_\parallel}=-\big[W^{(\Delta)}_{{\mathbf{j}}_\parallel,{\mathbf{j}}'_\parallel}\big]^{\rm T}.
\end{eqnarray*}
Since such non-idealities at the surface are known to influence
only the first few atomic layers near the surfaces, we assume that $\widehat{W}$
affects only the first $R$ boundary slabs of the lattice, so that (see also Fig. \ref{fig_surface})
\[ \big[W^{(S)}_{{\mathbf{j}}_\parallel,{\mathbf{j}}'_\parallel}\big]_{jj'} = 0 \quad \forall {\mathbf{j}}_\parallel,{\mathbf{j}}'_\parallel,\quad S=K,\Delta, \]
if $j$ or $j'$ take values in $\{R+1,\dots,N-R\}$.
The total Hamiltonian subject to arbitrary BCs is
\[ \widehat{H} \equiv \widehat{H}_N+\widehat{W}. \]
Let $j\equiv b =1, \ldots, R; N-R+1, \ldots, N$ label boundary lattice sites.
While in Part I we also assumed $\widehat{W}$ to be periodic along ${\mathbf{m}}_1,\dots,{\mathbf{m}}_{D-1}$
[case (a) in Fig. \ref{fig_reconst}], in general only $\widehat{H}_N$ can be
decoupled by Fourier transform, whereas $\widehat{W}$ retains cross-terms of the form
\begin{eqnarray*}
&&[W_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}^{(S)}]_{bb'} = \sum_{{\mathbf{j}}_\parallel,{\mathbf{j}}'_\parallel}
e^{i({\mathbf{k}}_\parallel\cdot {\mathbf{j}}'_\parallel-{\mathbf{q}}_\parallel\cdot{\mathbf{j}}_\parallel)}\big[W_{{\mathbf{j}}_\parallel,{\mathbf{j}}'_\parallel}^{(S)}\big]_{bb'},
\quad S=K,\Delta.
\end{eqnarray*}
If the system is not particle-conserving, let us reorder the
fermionic operator basis according to \cite{PRB1}
\begin{eqnarray*}
\hat{\Psi}_{{\mathbf{k}}_\parallel}^\dagger \equiv \begin{bmatrix}
\hat{\Psi}_{{\mathbf{k}}_\parallel,1}^\dagger & \cdots & \hat{\Psi}_{{\mathbf{k}}_\parallel,N}^\dagger
\end{bmatrix},\quad
\hat{\Psi}_{{\mathbf{k}}_\parallel,j}^\dagger \equiv \begin{bmatrix}\hat{\Phi}_{{\mathbf{k}}_\parallel,j}^\dagger &
\hat{\Phi}^{\;}_{-{\mathbf{k}}_\parallel,j}
\end{bmatrix} .
\end{eqnarray*}
The single-particle Hamiltonian can then be expressed as
\begin{align}
& H=H_N+W= \label{spHam} \\
&=\sum_{{\mathbf{k}}_\parallel}|{\mathbf{k}}_\parallel\rangle\langle {\mathbf{k}}_\parallel|\otimes H_{{\mathbf{k}}_\parallel,N}
+\sum_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}|{\mathbf{q}}_\parallel\rangle\langle {\mathbf{k}}_\parallel|\otimes W_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel},
\notag
\end{align}
where $H_{{\mathbf{k}}_\parallel,N}$ is the single-particle (BdG) Hamiltonian corresponding to
Eq. \eqref{HamOBC}. In terms of the shift matrix
$T\equiv \sum_{j=1}^{N-1}|j\rangle\langle j+1|$
implementing a shift along the direction ${\mathbf{s}}$, and letting $r=j'-j$ as before, we have
\begin{eqnarray}
&&H_{{\mathbf{k}}_\parallel,N}=\mathds{1}_N\otimes h_{{\mathbf{k}}_\parallel,0}+
\sum_{r=1}^R \, [T^r\otimes h_{{\mathbf{k}}_\parallel,r}+\text{H.c.}],
\label{spHkN} \\
&&h_{{\mathbf{k}}_\parallel,r} = \sum_{{\mathbf{r}}_\parallel}e^{i {\mathbf{k}}_\parallel\cdot{\mathbf{r}}_\parallel }h_{{\mathbf{r}}_\parallel,r},\quad
h_{{\mathbf{r}}_\parallel,r} =
\begin{bmatrix}
K_{{\mathbf{r}}_\parallel,r} & \Delta_{{\mathbf{r}}_\parallel,r} \\ -\Delta_{{\mathbf{r}}_\parallel,r}^* & -K_{{\mathbf{r}}_\parallel,r}^*
\end{bmatrix}, \notag
\end{eqnarray}
whereas the single-particle boundary modification $W_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}$ in Eq. \eqref{spHam} is given by
\begin{eqnarray*}
W_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel} &=& \begin{bmatrix}
W^{(K)}_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel} & W^{(\Delta)}_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel} \\
-[{W^{(\Delta)}_{-{\mathbf{q}}_\parallel,-{\mathbf{k}}_\parallel}}]^* & -[{W^{(K)}_{-{\mathbf{q}}_\parallel,-{\mathbf{k}}_\parallel}}]^*
\end{bmatrix} .
\end{eqnarray*}
In the simpler case where the system is particle-conserving, then
$h_{{\mathbf{r}}_\parallel,r} = K_{{\mathbf{r}}_\parallel,r}$ and $W_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel} =
W^{(K)}_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}$.
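As a concrete illustration of Eq. \eqref{spHkN}, the following sketch assembles $H_{{\mathbf{k}}_\parallel,N}$ from the shift matrix $T$ and the BdG blocks $h_r$, for a fixed ${\mathbf{k}}_\parallel$. The numerical values are placeholders (for $\bar{d}_{\rm int}=1$, $R=1$, and the choices below they happen to coincide with a Kitaev-type wire at zero chemical potential); the assertions verify Hermiticity and the $\pm\epsilon$ pairing of the BdG spectrum:

```python
import numpy as np

# Sketch of Eq. (spHkN) for a single virtual wire at fixed k_parallel.
# K_r and D_r (Delta_r) are placeholder blocks; the BdG blocks h_r and the
# shift matrix T are assembled exactly as in the text.
R, N = 1, 8
K = {0: np.array([[0.0]]), 1: np.array([[-1.0]])}     # hypothetical hopping
D = {0: np.array([[0.0]]), 1: np.array([[0.3]])}      # hypothetical pairing

def h_block(Kr, Dr):
    # BdG block [[K_r, Delta_r], [-Delta_r^*, -K_r^*]]
    return np.block([[Kr, Dr], [-Dr.conj(), -Kr.conj()]])

T = np.diag(np.ones(N - 1), k=1)                      # shift sum_j |j><j+1|
H = np.kron(np.eye(N), h_block(K[0], D[0]))
for r in range(1, R + 1):
    hr = h_block(K[r], D[r])
    H += np.kron(np.linalg.matrix_power(T, r), hr)
    H += np.kron(np.linalg.matrix_power(T, r), hr).conj().T

assert np.allclose(H, H.conj().T)
# Particle-hole symmetry of the BdG spectrum: eigenvalues come in +/- pairs.
ev = np.linalg.eigvalsh(H)
assert np.allclose(ev, -ev[::-1])
```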
\begin{figure}
\includegraphics[width=8cm]{Reconstruction.pdf}
\caption{(a) Sketch of a $D=2$ crystal with ideal surface. The remaining panels
show the same crystal with (b) relaxed, (c) reconstructed, and (d) disordered surface.
The unfilled circle in panel (d) shows a surface impurity atom.
\label{fig_reconst}}
\end{figure}
Reflecting the different ways in which a surface may deviate from its ideal structure (Fig. \ref{fig_reconst}),
we may consider BCs as belonging to three different categories of increasing complexity:
\begin{itemize}
\item {\it Relaxed BCs---}
In the process of surface relaxation, the atoms in the surface slab displace from their ideal position
in such a way that the surface (and the bulk) layers remain translation invariant
along ${\mathbf{m}}_1,\dots,{\mathbf{m}}_{D-1}$, as assumed in Part I. Therefore, ${\mathbf{k}}_\parallel$ remains a good
quantum number, and
$W_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}=\delta_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}W_{{\mathbf{k}}_\parallel,{\mathbf{k}}_\parallel}$.
In particular, open BCs, for which \(W_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}=0\)
for all ${\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel$, fall in this category.
\item {\it Reconstructed BCs---}
If the surfaces undergo reconstruction, then the total system can have lower periodicity
than the one with ideal surfaces. This scenario is also referred to
as {\it commensurate} surface reconstruction \cite{bechstedt}. In this case, $W$ may retain some
cross-terms of the form $W_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}$.
However, only a limited set of values of ${\mathbf{k}}_\parallel$ are coupled by such cross-terms, and the system can still
be block-diagonalized. For example, for the $2\times 1$ reconstruction of the (111) surface of silicon crystals,
each block of the Hamiltonian will consist of only $2\times 1=2$ values of ${\mathbf{k}}_\parallel$, whereas
for its $7\times7$ reconstruction, each block includes $49$ values of ${\mathbf{k}}_\parallel$ \cite{bechstedt}.
\item {\it Disordered BCs---}
If the surface reconstruction is {\it non-commensurate}, or if the
surface suffers from disorder, then the Hamiltonian cannot be block-diagonalized
any further in general. Non-commensurate reconstruction of a surface is likely to
happen in the case of adsorption.
\end{itemize}
Our setting is general enough to model adsorption as well as thin layer deposition
up to a few atomic layers. In the following, unless otherwise stated, we will assume that the
system is subject to the most general type of disordered BCs.
\subsection{Generalized Bloch theorem}
\label{sub:gbt}
The first needed ingredient toward formulating the generalized Bloch theorem is a
description of the eigenstates of the single-particle Hamiltonian $H_{{\mathbf{k}}_\parallel, N}$
of the virtual wire labeled by ${\mathbf{k}}_\parallel$, given in Eq. \eqref{spHkN}. Let
\[ d\equiv \left\{ \begin{array}{lcl}
\bar{d}_{\rm int} & \text{if} & \Delta=0 =W^{(\Delta)} ,\\
2\bar{d}_{\rm int} & \text{if} & \Delta \ne 0 \; \text{or} \; W^{(\Delta)} \ne 0 .
\end{array}\right. \]
Then, the projector
\begin{eqnarray*}
P_B =\bm{1} \otimes \sum_{j=R+1}^{N-R}|j\rangle\langle j|\otimes \mathds{1}_d ,
\end{eqnarray*}
determined by the range \(R\) of the virtual wires is the
{\it bulk projector}, where we have used the completeness relation
$\bm{1}=\sum_{{\mathbf{k}}_\parallel \in \text{SBZ}}|{\mathbf{k}}_\parallel\rangle\langle {\mathbf{k}}_\parallel|$.
By definition, the matrix $W$ describing BCs satisfies \( P_B W=0 \), whereby it
follows that \(P_{B}H=P_B(H_N+W)=P_B H_N\). Accordingly, building on the exact
bulk-boundary separation also used in Part I, the {\em bulk equation} to be solved reads
\begin{eqnarray}
\label{bulkeq}
P_{B}H_{N}|\psi\rangle=\epsilon P_{B}|\psi\rangle , \quad \epsilon \in {\mathbb R}.
\end{eqnarray}
To proceed, we need to introduce some auxiliary matrices and states. First and foremost there is the
\(d\times d\) analytic continuation of the Bloch Hamiltonian \cite{PRL}, which now takes the form
\begin{equation}
H_{{\mathbf{k}}_\parallel}(z) \equiv h_{{\mathbf{k}}_\parallel,0}+\sum_{r=1}^R \, (z^rh_{{\mathbf{k}}_\parallel,r}+z^{-r}h_{{\mathbf{k}}_\parallel,r}^\dagger),
\quad z\in {\mathbb C},
\label{HBloch}
\end{equation}
acting on a $d$-dimensional internal space spanned by
states $\{|m\rangle,\ m=1,\dots,d\}$.
If the matrix \(h_{{\mathbf{k}}_\parallel,R}\) is {\em invertible},
then \(H_{{\mathbf{k}}_\parallel}(z)\) is essentially everything one needs to proceed.
Otherwise, the related matrix polynomial
\begin{eqnarray}
K_{{\mathbf{k}}_\parallel}^-(\epsilon,z) \equiv z^R(H_{{\mathbf{k}}_\parallel}(z)-\epsilon\mathds{1}_d)
\end{eqnarray}
is of considerable importance. We will also
need the \(dv\times dv\) generalized Bloch Hamiltonians with block entries
\begin{align}
\label{genhb}
&[H_{{\mathbf{k}}_\parallel,v}(z)]_{xx'} \equiv \\
&\frac{\partial_z^{x'-x}H_{{\mathbf{k}}_\parallel}(z)}{(x'-x)!}=
\frac{H_{{\mathbf{k}}_\parallel}^{(x'-x)}(z)}{(x'-x)!},\quad 1\le x\le x'\le v,\nonumber
\end{align}
with \(H_{{\mathbf{k}}_\parallel}^{(0)}(z)=H_{{\mathbf{k}}_\parallel}(z)\) given in Eq. \eqref{HBloch}.
In array form,
\[
H_{{\mathbf{k}}_\parallel,v}(z)=
\begin{bmatrix}
\ \ H^{(0)} & H^{(1)} & \frac{1}{2} H^{(2)} & \cdots & \frac{1}{(v-1)!}H^{(v-1)} \\
0 &\! \ddots &\! \! \ddots &\! \! \ddots & \vdots \\
\vdots &\! \ddots &\! \ddots &\! \! \ddots & \frac{1}{2}H^{(2)} \\
\vdots & &\! \ddots &\! \ddots & H^{(1)} \\
0 & \cdots & \cdots &0 & H^{(0)}
\end{bmatrix},
\]
where the label $(z)$ and the subscript ${\mathbf{k}}_\parallel$ were dropped for brevity.
The \(dv\times dv\) block matrix \(K_{{\mathbf{k}}_\parallel,v}^-(\epsilon,z)\) is
defined by the same formula. The important difference between these
two matrices is that \(K_{{\mathbf{k}}_\parallel,v}^-(\epsilon,z)\) is well defined
at \(z=0\), whereas \(H_{{\mathbf{k}}_\parallel,v}(z)\) is not. These block matrices
act on column arrays of \(v\) internal states, which can be expressed
in the form
\( |u\rangle=
\begin{bmatrix}
|u_1\rangle& \dots& |u_{v}\rangle
\end{bmatrix}^{\rm T}, \)
where each of the entries is an internal state.
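The nested structure of $H_{{\mathbf{k}}_\parallel,v}(z)$ can be generated programmatically. The sketch below (scalar case $d=1$, $R=1$, placeholder entries) fills the blocks of Eq. \eqref{genhb} from analytic $z$-derivatives of $H(z)$ and cross-checks the first derivative against a finite difference:

```python
import numpy as np
from math import factorial

# Sketch of Eq. (genhb): the dv x dv generalized Bloch Hamiltonian built
# from derivatives of H(z) = h_0 + sum_r (z^r h_r + z^{-r} h_r^dagger).
# The blocks h_r are placeholders (d = 1, R = 1 for brevity).
h = {0: np.array([[0.0]]), 1: np.array([[-1.0 + 0.5j]])}
R, d, v = 1, 1, 3

def H_deriv(z, p):
    """p-th z-derivative of H(z), valid for z != 0."""
    out = np.zeros((d, d), dtype=complex)
    for r in range(0, R + 1):
        terms = [(r, h[r])] if r == 0 else [(r, h[r]), (-r, h[r].conj().T)]
        for e, c in terms:
            coef = 1.0
            for q in range(p):          # derivative of z^e: e(e-1)...(e-p+1) z^(e-p)
                coef *= (e - q)
            out += coef * z**(e - p) * c
    return out

z0 = 0.7 - 0.2j
Hv = np.zeros((d * v, d * v), dtype=complex)
for x in range(v):
    for xp in range(x, v):
        Hv[x*d:(x+1)*d, xp*d:(xp+1)*d] = H_deriv(z0, xp - x) / factorial(xp - x)

# Upper block-triangular with H(z) on the diagonal:
assert np.allclose(Hv[d:, :d], 0)
assert np.allclose(Hv[:d, :d], H_deriv(z0, 0))
# Cross-check the first derivative against a finite difference:
eps = 1e-6
fd = (H_deriv(z0 + eps, 0) - H_deriv(z0 - eps, 0)) / (2 * eps)
assert np.allclose(fd, H_deriv(z0, 1), atol=1e-6)
```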
For fixed but arbitrary \(\epsilon\), the expression
\begin{eqnarray}
\label{thepoly}
P_{{\mathbf{k}}_\parallel}(\epsilon,z)\equiv \det K_{{\mathbf{k}}_\parallel}^-(\epsilon,z)
\end{eqnarray}
defines a family of polynomials in \(z\). We call a given
value of \(\epsilon\) {\it singular} \cite{JPA} if \( P_{{\mathbf{k}}_\parallel}(\epsilon,z)\)
vanishes identically for all $z$ for some value of \({\mathbf{k}}_\parallel\).
Otherwise, \(\epsilon\) is {\it regular}. At a singular value
of the energy, \(z\) becomes independent of \(\epsilon\)
for some \({\mathbf{k}}_\parallel\). Physically, singular energies correspond
to {\em flat bands}, at fixed \({\mathbf{k}}_\parallel\). As explained in Part I,
flat bands are not covered by the generalized Bloch theorem and
require separate treatment \cite{FBRemark}. In the following, we will
concentrate on the {\em generic case where \(\epsilon\) is regular}.
For regular energies, \(P_{{\mathbf{k}}_\parallel}(\epsilon,z)\) can be factorized
in terms of its {\em distinct} roots as
\[ P_{{\mathbf{k}}_\parallel}(\epsilon,z)=c\prod_{\ell=0}^n(z-z_\ell)^{s_\ell},
\quad c\in {\mathbb C}, \]
with \(c\) a non-vanishing constant and \(z_0=0\) by convention.
If zero is not a root, then \(s_0=0\). The \(z_\ell,\ \ell=1,\dots,n\),
are the distinct non-zero roots, of multiplicity \(s_\ell\geq 1\).
It was shown in Ref.\,[\onlinecite{JPA}] that the number of solutions
of the kernel equation
\begin{eqnarray}
\label{amps}
(H_{{\mathbf{k}}_\parallel,s_\ell}(z_\ell)-\epsilon\mathds{1}_{ds_\ell})|u\rangle=0
\end{eqnarray}
coincides with the multiplicity \(s_\ell\) of \(z_\ell\).
We will denote a complete set of independent solutions of
Eq.\,\eqref{amps} by $|u_{\ell s}\rangle,$ $s=1,\dots,s_\ell$,
where each $|u_{\ell s}\rangle$ has $d \times 1$ block-entries
\[
|u_{\ell s}\rangle = \begin{bmatrix}|u_{\ell s1}\rangle & \dots & |u_{\ell s s_\ell}\rangle\end{bmatrix}^{\rm T}.
\]
Moreover, if we define
\[ K_{{\mathbf{k}}_\parallel}^-(\epsilon)\equiv
K_{{\mathbf{k}}_\parallel,s_0}^-(\epsilon,z_0=0)\equiv
K_{{\mathbf{k}}_\parallel}^+(\epsilon)^\dagger, \]
then it is also the case that the kernel equations
\begin{eqnarray*}
K_{{\mathbf{k}}_\parallel}^-(\epsilon)|u\rangle=0,\quad K_{{\mathbf{k}}_\parallel}^+(\epsilon)|u\rangle=0
\end{eqnarray*}
have each \(s_0\) solutions. We will denote a basis of solutions
of these kernel equations by \(|u^\pm_s\rangle,\) \(s=1,\dots,s_0\),
each with block entries
\[ |u^\pm_{s}\rangle = \begin{bmatrix}|u^\pm_{s1}\rangle & \dots & |u^\pm_{s s_0}\rangle\end{bmatrix}^{\rm T}.
\]
In order to make the connection to the lattice degrees of freedom,
let us introduce the lattice states
\begin{eqnarray}
\label{zstate}
\!\!|z,v\rangle\equiv\!
\sum_{j=1}^N \frac{j^{(v-1)}}{(v-1)!}z^{j-v+1}|j\rangle\!=\!
\frac{1}{(v-1)!}\partial_z^{v-1}|z,1\rangle, \quad
\end{eqnarray}
with \(j^{(0)}=1\) and \(j^{(v)}=(j-v+1)(j-v+2)\dots j\) for \(v\) a positive integer. The states
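As a quick numerical consistency check of Eq. \eqref{zstate} (with placeholder parameters): the components of $|z,v\rangle$ built from falling factorials must agree with the $(v-1)$-th $z$-derivative of $|z,1\rangle$ divided by $(v-1)!$, here verified for $v=3$ via a second finite difference:

```python
import numpy as np
from math import factorial

# Sketch of Eq. (zstate): components of |z,v> are j^{(v-1)} z^{j-v+1} / (v-1)!,
# with j^{(p)} = j(j-1)...(j-p+1) the falling factorial.
N = 10

def falling(j, p):
    out = 1
    for q in range(p):
        out *= (j - q)
    return out

def ket_zv(z, v):
    return np.array([falling(j, v - 1) * z**(j - v + 1) / factorial(v - 1)
                     for j in range(1, N + 1)])

z0 = 0.6
# |z,1>_j = z^j; its second derivative / 2! must reproduce |z,3>.
eps = 1e-5
second_fd = (ket_zv(z0 + eps, 1) - 2 * ket_zv(z0, 1) + ket_zv(z0 - eps, 1)) / eps**2
assert np.allclose(second_fd / factorial(2), ket_zv(z0, 3), atol=1e-4)
```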
\begin{align}
|{\mathbf{k}}_\parallel\rangle|\psi_{{\mathbf{k}}_\parallel \ell s}\rangle
\equiv & \sum_{v=1}^{s_\ell}|{\mathbf{k}}_\parallel\rangle|z_\ell,v\rangle|u_{\ell s v}\rangle,
\quad s=1,\dots, s_\ell,\nonumber\\
|{\mathbf{k}}_\parallel\rangle|\psi^-_{{\mathbf{k}}_\parallel s}\rangle\equiv &
\sum_{j=1}^{s_0}|{\mathbf{k}}_\parallel\rangle|j\rangle|u_{sj}^-\rangle,\quad s=1,\dots,s_0,\nonumber\\
|{\mathbf{k}}_\parallel\rangle|\psi^+_{{\mathbf{k}}_\parallel s}\rangle\equiv
&\sum_{j=1}^{s_0}|{\mathbf{k}}_\parallel\rangle|N-j+s_0\rangle|u_{sj}^+\rangle,
\quad s=1,\dots,s_0,
\label{asin}
\end{align}
form a complete set of independent solutions of the {bulk equation}, Eq. \eqref{bulkeq}.
Intuitively speaking, these states are eigenstates of the Hamiltonian
``up to BCs''. For regular energies, as assumed, there are
exactly \(2Rd=2s_0+\sum_{\ell=1}^ns_\ell\) solutions of the bulk
equation for each value of \({\mathbf{k}}_\parallel\) \cite{JPA,PRB1}.
The solutions associated to the non-zero roots are {\it extended
bulk solutions}, and the ones associated to \(z_0=0\) are
{\it emergent}. Emergent bulk solutions are perfectly localized around
the edges of the system in the direction perpendicular to the hypersurfaces.
It is convenient to obtain a more uniform description of solutions
of the bulk equation by letting
\begin{eqnarray}
\label{asinto}
\!\! |\psi_{{\mathbf{k}}_\parallel \ell s}\rangle=
\left\{
\begin{array}{lcl}
|\psi_{{\mathbf{k}}_\parallel s}^-\rangle &\mbox{if} & \ell=0;\ s=1,\dots,s_0,\\
|\psi_{{\mathbf{k}}_\parallel \ell s}\rangle &\mbox{if}& \ell=1,\dots,n;\ s=1,\dots,s_\ell,\\
|\psi_{{\mathbf{k}}_\parallel s}^+\rangle &\mbox{if} & \ell=n+1;\ s=1,\dots,s_0,
\end{array}\right. \quad
\end{eqnarray}
Also, let \(s_{n+1}\equiv s_0\). Then, the {\it ansatz}
\[ |\epsilon,\bm{\alpha}\rangle
\equiv \sum_{{\mathbf{k}}_\parallel \in \text{SBZ}}\sum_{\ell=0}^{n+1}\sum_{s=1}^{s_\ell}
\alpha_{{\mathbf{k}}_\parallel\ell s}|{\mathbf{k}}_\parallel\rangle|\psi_{{\mathbf{k}}_\parallel\ell s}\rangle
\]
describes the most general solution of the bulk equation
in terms of \(2Rd\) amplitudes \(\bm{\alpha}\) for each
value of \({\mathbf{k}}_\parallel\). We call it an ansatz because the states
\(|\epsilon,\bm{\alpha}\rangle\) provide the appropriate search space for
determining the energy eigenstate of the full Hamiltonian \(H=H_N+W\).
As a direct by-product of the above analysis, it is interesting to
note that a {\em necessary} condition for $H$ to admit an eigenstate of
exponential behavior localized on the left (right) edge is that some of the roots $\{z_\ell\}$
of the equation $\det K^-_{{\mathbf{k}}_\parallel}(\epsilon,z)=0$ be inside (outside)
the unit circle. Therefore, one simply needs to compute all roots of
$\det K^-_{{\mathbf{k}}_\parallel}(\epsilon,z)$ to know whether localized edge states
may exist in principle.
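For a single-band toy symbol $H(z)=-z-1/z$ (a hypothetical example, not a model from the text), this criterion takes only a few lines: sample $\det K^-_{{\mathbf{k}}_\parallel}(\epsilon,z)$ at $2Rd+1$ points, recover the polynomial coefficients, and classify the roots relative to the unit circle:

```python
import numpy as np

# Sketch of the edge-state criterion: compute all roots of
# det K^-(eps, z) = det[z^R (H(z) - eps)] and classify them with respect
# to the unit circle. Single-band toy model (d = 1, R = 1): H(z) = -z - 1/z.
def roots_of_K(eps, R=1, d=1):
    H = lambda z: -z - 1.0 / z                  # hypothetical Bloch symbol
    deg = 2 * R * d                             # degree of det K^-(eps, z) in z
    zs = np.exp(2j * np.pi * np.arange(deg + 1) / (deg + 1)) * 1.3
    vals = np.array([z**R * (H(z) - eps) for z in zs])
    coeffs = np.linalg.solve(np.vander(zs, deg + 1), vals)  # exact interpolation
    return np.roots(coeffs)

# Energy inside the band [-2, 2]: both roots sit on the unit circle
# (extended solutions). Outside the band, one root moves inside and one
# outside the unit circle, as needed for exponentially localized solutions.
r_band = roots_of_K(1.0)
assert np.allclose(np.abs(r_band), 1.0, atol=1e-8)
r_gap = roots_of_K(3.0)
assert min(np.abs(r_gap)) < 1.0 < max(np.abs(r_gap))
```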
We are finally in a position to impose arbitrary BCs. As before, let \(b=1,\dots,R; N-R+1,\dots,N\)
be a variable for the boundary sites. Then the {\it boundary matrix} \cite{PRL,JPA,PRB1}
is the block matrix
\begin{align*}
&[B(\epsilon)]_{{\mathbf{q}}_\parallel b,{\mathbf{k}}_\parallel\ell s}=\\
&=\delta_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}\langle b|(H_{{\mathbf{k}}_\parallel,N}-\epsilon\mathds{1}_{dN})|\psi_{{\mathbf{k}}_\parallel\ell s}\rangle
+\langle b|W_{{\mathbf{q}}_\parallel,{\mathbf{k}}_\parallel}|\psi_{{\mathbf{k}}_\parallel\ell s}\rangle ,
\end{align*}
with non-square \(d\times 1\) blocks (one block per boundary site
\(b\) and crystal momentum \({\mathbf{k}}_\parallel\)). By construction,
\begin{align*}
(H-\epsilon\mathds{1})|\epsilon,\bm{\alpha}\rangle=
\sum_{{\mathbf{q}}_\parallel,b}\sum_{{\mathbf{k}}_\parallel,\ell,s}
|{\mathbf{q}}_\parallel\rangle|b\rangle [B(\epsilon)]_{{\mathbf{q}}_\parallel b,{\mathbf{k}}_\parallel \ell s}\alpha_{{\mathbf{k}}_\parallel\ell s},
\end{align*}
for {\it any} regular value of \(\epsilon\in \mathds{C}\). Hence, an
ansatz state represents an energy eigenstate if and only if
\[
\sum_{{\mathbf{k}}_\parallel,\ell,s}[B(\epsilon)]_{{\mathbf{q}}_\parallel b,{\mathbf{k}}_\parallel \ell s}\alpha_{{\mathbf{k}}_\parallel \ell s}=0
\quad \forall \,{\mathbf{q}}_\parallel, b ,
\]
for all boundary sites \(b\) and crystal momenta \({\mathbf{q}}_\parallel\), or,
more compactly, \(B(\epsilon)\bm{\alpha}=0\). We are now in a
position to state our generalized Bloch theorem for clean systems
subject to arbitrary BCs on two parallel hyperplanes, and extending
Theorem 3 in Part I:
\medskip
{\it Theorem (Generalized Bloch theorem).}
Let $H=H_N+W$ denote a single-particle Hamiltonian
as specified above [Eq. \eqref{spHam}],
for a slab of thickness \(N>2Rd\). Let \(B(\epsilon)\) be
the associated boundary matrix. If \(\epsilon\) is an eigenvalue
of \(H\) and a regular energy of \(H_{{\mathbf{k}}_\parallel}(z)\), the corresponding
eigenstates of \(H\) are of the form
\[ |\epsilon,\bm{\alpha}_\kappa\rangle
= \sum_{{\mathbf{k}}_\parallel}\sum_{\ell=0}^{n+1}\sum_{s=1}^{s_\ell}
\alpha_{{\mathbf{k}}_\parallel \ell s}^{(\kappa)} \, |{\mathbf{k}}_\parallel\rangle|\psi_{{\mathbf{k}}_\parallel\ell s}\rangle,\quad
\kappa=1,\dots,\mathcal{K}, \]
where the amplitudes
\( \bm{\alpha}_\kappa\) are determined as a complete set of
independent solutions of the kernel equation
\( B(\epsilon)\bm{\alpha}_{\kappa}=0\),
and
the degeneracy \(\mathcal{K}\) of the energy level \(\epsilon\)
coincides with the dimension of the kernel of the boundary matrix,
\(\mathcal{K}=\dim {\rm Ker\,}B(\epsilon)\).
\medskip
In the above statement, the lower bound \(N>2dR\) on the
thickness of the lattice is imposed in order to ensure that the emergent
solutions on opposite edges of the system have
zero overlap and are thus necessarily independent. It can be
weakened to \(N>2R\) in the generic case where \(\det h_{{\mathbf{k}}_\parallel,R}\neq 0\), because
in this case \(s_0=0\) and there are no emergent solutions.
Based on the generalized Bloch theorem, an algorithm for numerical
computation of the electronic structure was given in Part I, directly applicable to the
case of relaxed BCs. In particular, it was shown that the complexity of the algorithm
is independent of the size $N$ of each virtual wire. In the most general case of disordered
BCs we consider here, however, since the boundary matrix can have
cross-terms between the virtual wires, we correspondingly have to deal with a single
(non-decoupled) boundary matrix of size $2RdN^{D-1} \times 2RdN^{D-1}$. Finding
the kernel of this boundary matrix has time complexity $\mathcal{O}(N^{3D-3})$, which will
be reflected in the performance of the overall algorithm.
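In practice, $\dim {\rm Ker}\,B(\epsilon)$ and a basis of the kernel may be obtained from a singular value decomposition, whose $\mathcal{O}(n^3)$ cost in the matrix size $n$ is the source of the scaling quoted above. A minimal sketch (on a toy matrix, not an actual boundary matrix):

```python
import numpy as np

# Sketch of the numerical kernel computation for a boundary matrix B(eps):
# the kernel, and hence the degeneracy K = dim Ker B, is read off from the
# singular value decomposition, at O(n^3) cost in the matrix size n.
def kernel(B, tol=1e-10):
    U, s, Vh = np.linalg.svd(B)
    null_mask = np.concatenate([s, np.zeros(B.shape[1] - len(s))]) < tol
    return Vh.conj().T[:, null_mask]      # columns span Ker B

# Toy check on a matrix with a known two-dimensional kernel:
B = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0]])
ker = kernel(B)
assert ker.shape[1] == 2
assert np.allclose(B @ ker, 0)
```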
The generalized Bloch theorem relies on the complete solution of the bulk equation, given
in Eq. (\ref{bulkeq}).
Since the latter describes an unconventional {\em relative} eigenvalue problem for
the (generally) {\em non-Hermitian} operator \(P_B H_N\), the standard symmetry analysis
of quantum mechanics does not immediately apply.
It is nonetheless possible to decompose the solution spaces of the bulk equation
into symmetry sectors, if the Hamiltonian
obeys unitary symmetries that also commute with the bulk projector $P_B$.
Assume that a unitary operator $\mathcal{S}$ commutes with {\em both} $H=H_N+W$ and
$P_B$. Then any vector in the bulk solution space satisfies
\[
P_B(H_N+W-\epsilon\mathds{1})|\psi\rangle = 0 \Rightarrow
P_B(H_N+W-\epsilon\mathds{1})\,\mathcal{S}|\psi\rangle =
\mathcal{S}\,P_B(H_N+W-\epsilon\mathds{1})|\psi\rangle = 0 .
\]
This implies that the bulk solution space is invariant under the action of $\mathcal{S}$.
Therefore, there exists a basis of the bulk solution space in which the action of $\mathcal{S}$
is block-diagonal. This leads to multiple eigenstate ans\"{a}tze, each labeled by an eigenvalue
of $\mathcal{S}$. Further, $\mathcal{S}^\dagger P_B \mathcal{S}=P_B$ implies
that the boundary subspace (i.e., the kernel of $P_B$) is also invariant under $\mathcal{S}$.
After finding a basis of the boundary subspace in which $\mathcal{S}$ is block-diagonal,
the boundary matrix itself splits into several matrices, each labeled by an eigenvalue
of $\mathcal{S}$. We will use this strategy in some of the applications in
Sec.\,\ref{interfaces} and Sec.\,\ref{high_dim}. We also discuss in Appendix
\ref{app:condition} how symmetry conditions can help in identifying a criterion
for the absence of localized edge modes, which may be of independent
interest.
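The block-diagonalization strategy just described can be sketched numerically: for a unitary $\mathcal{S}$ commuting with $H$ (here, a reflection acting on a toy open chain, not one of the models treated later), rotating to the eigenbasis of $\mathcal{S}$ leaves no matrix elements between distinct symmetry sectors:

```python
import numpy as np

# Sketch of the symmetry-based reduction: a unitary S commuting with H
# lets one rotate H into the eigenbasis of S, where H is block-diagonal,
# one block per eigenvalue of S. Toy example: reflection symmetry.
n = 6
S = np.fliplr(np.eye(n))                      # reflection j -> n+1-j
H = np.diag(np.full(n - 1, -1.0), 1)
H = H + H.T                                   # reflection-symmetric open chain
assert np.allclose(S @ H, H @ S)

w, V = np.linalg.eigh(S)                      # eigenvalues are +/- 1
Hrot = V.conj().T @ H @ V
plus = w > 0
# No matrix elements between the S = +1 and S = -1 sectors:
assert np.allclose(Hrot[np.ix_(plus, ~plus)], 0, atol=1e-10)
```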
\section{Interface physics problems}
\label{interfaces}
\subsection{Multi-component generalized Bloch theorem}
As mentioned, a second extension of our theoretical framework
addresses the exact diagonalization of systems with internal boundaries,
namely, interfaces between distinct bulks. In the spirit of keeping technicalities
to a minimum, we focus on the simplest setting whereby two bulks with identical
reduced Brillouin zones are separated by one interface.
The extension to multi-component systems is straightforward, and can be
pursued as needed by mimicking the procedure to be developed next.
Since the lattice vectors for the two bulks forming the interface are the same, the primitive
vectors of the surface mesh $\{{\mathbf{m}}_\mu,\ \mu=1,\dots,D-1\}$, the stacking vector ${\mathbf{s}}$,
and the basis $\{{\mathbf{d}}_{\bar \nu},\ {\bar \nu}=1,\dots,I-1\}$ are shared by both bulks.
Let us further assume that the latter are described by systems that are half-infinite in the
directions $-{\mathbf{s}}$ and ${\mathbf{s}}$, respectively.
The bulk of system number one (left, $i=1$) occupies sites \(\{{\mathbf{j}} = {\mathbf{j}}_\parallel +j{\mathbf{s}} +{\mathbf{d}}_{\bar \nu},\
j =0,-1,\dots,-\infty\}\), whereas the bulk of system number two (right, $i=2$) occupies the
remaining sites, corresponding to \(j=1,\dots,\infty\) in the direction ${\mathbf{s}}$. In analogy to the case
of a single bulk treated in Sec. \ref{theoryrecap}, we may
write single-particle Hamiltonians for the left and right bulks in terms of appropriate shift operators,
namely,
\[
T_{1}\equiv \sum_{j=-\infty}^{-1} |j\rangle\langle j+1|,\quad
T_{2}\equiv \sum_{j=1}^{\infty}|j\rangle\langle j+1|.
\]
Then
$H_i = \sum_{{\mathbf{k}}_\parallel}|{\mathbf{k}}_\parallel\rangle
\langle {\mathbf{k}}_\parallel|\otimes H_{i,{\mathbf{k}}_\parallel}$, where
\begin{eqnarray*}\label{hamsigma}
H_{i{\mathbf{k}}_\parallel}=\mathds{1}\otimes h_{i{\mathbf{k}}_\parallel0}+\sum_{r=1}^{R_i}
\big[T_{i}^r\otimes h_{i{\mathbf{k}}_\parallel r}+\text{H.c.}\big]
\end{eqnarray*}
with the corresponding bulk projectors given by
\begin{eqnarray*}
P_{B_1}\! \equiv \!\!\sum_{j=-\infty}^{-R_1}\!\!\bm{1}\otimes|j\rangle\langle j|\otimes \mathds{1}_d,\;\;
P_{B_2}\!\equiv \!\!\sum_{j=R_2+1}^{\infty}\!\!\bm{1}\otimes|j\rangle\langle j|\otimes \mathds{1}_d.
\end{eqnarray*}
The projector onto the interface is
\(P_\partial=\mathds{1}-P_{B_1}-P_{B_2}\).
The Hamiltonian for the total system is of the form
\[ H=H_1+W+H_2, \]
with \(P_{B_i}W=0,\ i=1,2\). In this context, \(W\) describes an {\em internal BC}, that is,
physically, it accounts for the various possible ways of joining the two bulks. For simplicity,
let us assume that $W$ is translation-invariant in all directions parallel to the interface, so that
we may write $W = \sum_{{\mathbf{k}}_\parallel}|{\mathbf{k}}_\parallel\rangle
\langle {\mathbf{k}}_\parallel|\otimes W_{{\mathbf{k}}_\parallel}$. The next step is to split the Schr\"odinger equation
\((H-\epsilon\mathds{1})|\epsilon\rangle=0\) into a bulk-boundary system of equations \cite{PRL,PRB1}.
This is possible by observing that an arbitrary state of the total system may be decomposed
as \(|\Psi\rangle=P_1|\Psi\rangle+P_2|\Psi\rangle\) in terms of the left and right projectors
\[ P_1 \equiv \sum_{j=-\infty}^{0}\bm{1}\otimes|j\rangle\langle j|\otimes \mathds{1}_d,\quad
P_2 \equiv \sum_{j=1}^\infty\bm{1}\otimes|j\rangle\langle j|\otimes \mathds{1}_d, \]
and that the following identities hold:
\[P_{B_1}(H_1-\epsilon\mathds{1})P_2=0=P_{B_2}(H_2-\epsilon\mathds{1})P_1.\]
Hence, the bulk-boundary system of equations for the interface (or junction) takes the form
\begin{eqnarray*}
\begin{array}{r}
P_{B_1}(H_1-\epsilon\mathds{1})P_1|\epsilon\rangle=0,\\
P_\partial(H_1+W+H_2-\epsilon\mathds{1})|\epsilon\rangle=0,\\
P_{B_2}(H_2-\epsilon\mathds{1})P_2|\epsilon\rangle=0.
\end{array}
\end{eqnarray*}
We may now solve for fixed but arbitrary \(\epsilon\) the bottom and top bulk equations just as
in the previous section. The resulting simultaneous solutions of the two bulk equations are expressible as
\begin{eqnarray}
\label{ansatzmultbulk}
|\epsilon,\bm{\alpha}_{{\mathbf{k}}_\parallel}\rangle&=&
|\epsilon,\bm{\alpha}_{1{\mathbf{k}}_\parallel}\rangle + |\epsilon,\bm{\alpha}_{2{\mathbf{k}}_\parallel}\rangle\nonumber\\
&=&\sum_{i=1,2}|{\mathbf{k}}_\parallel\rangle \otimes
\sum_{\ell=0}^{n_i}\sum_{s=1}^{s_{i \ell}}
\alpha_{i {\mathbf{k}}_\parallel \ell s}|\psi_{i {\mathbf{k}}_\parallel\ell s}\rangle , \quad
\end{eqnarray}
where $\{|\psi_{i {\mathbf{k}}_\parallel \ell s}\rangle = \sum_{v=1}^{s_{i \ell}}P_i |z_{i\ell},v\rangle|u_{i {\mathbf{k}}_\parallel \ell s v}\rangle\}$
are solutions of the bulk equation for the $i$th bulk.
In such situations, we extend the definition of the lattice state
$|z,v\rangle$ to a bi-infinite lattice by allowing the index $j$ in
Eq.\,\eqref{zstate} to take all integer values. We refer to
$|\epsilon,\bm{\alpha}_{i{\mathbf{k}}_\parallel}\rangle,\ i=1,2$, as the eigenstate
ansatz for the $i$th bulk. For $|\epsilon,\bm{\alpha}_{{\mathbf{k}}_\parallel}\rangle$ to be an
eigenstate of the full system, the column array of complex amplitudes
\(
\bm{\alpha}_{{\mathbf{k}}_\parallel}=
\begin{bmatrix}
\bm{\alpha}_{1{\mathbf{k}}_\parallel}& \bm{\alpha}_{2{\mathbf{k}}_\parallel}
\end{bmatrix}^{\rm T}
\)
must satisfy the boundary equation \(B_{{\mathbf{k}}_\parallel}(\epsilon)\bm{\alpha}_{{\mathbf{k}}_\parallel}=0\),
in terms of the interface boundary matrix,
\[
[B_{{\mathbf{k}}_\parallel}(\epsilon)]_{b, i \ell s}=
\langle b|(H_{1{\mathbf{k}}_\parallel}+W+H_{2{\mathbf{k}}_\parallel}-\epsilon\mathds{1})|\psi_{i {\mathbf{k}}_\parallel\ell s}\rangle,
\]
where the boundary index \(b\equiv -R_1+1,\dots,0; 1,\dots,R_2\).
\begin{figure*}
\includegraphics[width=1\textwidth]{figSNS.pdf}
\vspace*{-5mm}
\caption{(Color online) Bound modes of an SNS junction. The figure shows
plots of two independent constraints derived from the boundary matrix
against the ratio $\epsilon/\Delta$, as energy is swept from $-\Delta$ to $\Delta$
(with reference to Appendix\,\ref{snsApp}, these constraints are the functions on the left-hand side (blue solid lines) and
right-hand side (black dotted lines) of Eq.\,\eqref{snsboundaryeq1}
and Eq.\,\eqref{snsboundaryeq2}). Each intersection
of the two distinct sets of lines indicates the emergence of a bound
state. The plots in the top (bottom) panels correspond to $N=7$
($N=15$), respectively.
Parameters $t=1,t'=0.5$ are fixed for all the plots. The number
of intersections shows an expected increment as we increase the
length of the normal region ({\sf N}) from $N=7$ to $15$, without changing other
parameters. For a fixed value of $N$, the number of intersections increases
as we change $\Delta$ from $1$ to $2$, but it stays constant between $\Delta=2$ and
$\Delta=3$, except for additional states pinned near the energies $\epsilon=\pm\Delta$.}
\label{snsplot}
\end{figure*}
\subsection{Application to SNS junctions}
We illustrate the generalized Bloch theorem for interfaces
by outlining an analytical calculation of the Andreev
bound states for an idealized SNS junction.
The equilibrium Josephson effect, namely, the phenomenon
of supercurrent flowing through a junction of two superconducting
leads connected via a normal link, is of great importance for theoretical
understanding of superconductivity, as well as for its applications
in SC circuits. One of the questions this phenomenon poses is to understand
how exactly a weak link with induced band-gap due to superconducting proximity
effect can carry a supercurrent. An answer to this question invokes the formation
of bound states in the band gap of the weak link, known as the ``Andreev
bound states'', that allow transport of Cooper pairs \cite{lesovik11}.
We model a basic $D=1$ SNS junction as a system formed by
attaching a finite metallic chain (a ``normal dot", denoted by {\sf N}) to two
semi-infinite SC chains (``superconducting leads", denoted by {\sf S1} and {\sf S2}).
Following Ref.\,[\onlinecite{bena12}], we describe the SC leads in terms of a $D=1$ BCS
pairing Hamiltonian,
\begin{eqnarray}
\widehat{H}_{\sf S} = - \sum_{j,\sigma}t c^\dagger_{j\sigma}c_{j+1\sigma}
- \sum_j \Delta c_{j\uparrow}^\dagger c_{j\downarrow}^\dagger+\text{H.c.},
\label{BCS}
\end{eqnarray}
where we have assumed zero chemical potential.
This Hamiltonian can be diagonalized analytically for open BCs,
see Appendix\,\ref{appBCS} (see also Refs.\,[\onlinecite{arutyunov08,ortiz14,ortiz16}] for
a critical discussion of $D=1$ models of superconductivity).
The normal dot is modeled by NN hopping of strength $t$.
The links connecting the SC regions to the metallic one
have a weaker hopping strength, $t' <t$.
The Hamiltonian of the full system is thus
\( \widehat{H}_{\sf SNS}=\widehat{H}_{\sf S1}+
\widehat{H}_{\sf S2}+\widehat{H}_{\sf T}+
\widehat{H}_{\sf N},\)
where $\widehat{H}_{\sf S1}$ and $\widehat{H}_{\sf S2}$ denote the SC Hamiltonians
for the leads, $\widehat{H}_{\sf N}$ describes the normal metal, and $\widehat{H}_{\sf T}$ is the
tunneling Hamiltonian, of the form
\begin{equation}
\widehat{H}_{\sf T}=- \!\!
\sum_{\sigma=\pm1} [t' (c_{-2{\sf L} \sigma}^{\dagger}c_{-2{\sf L}+1 \sigma}+
c_{2{\sf L}-1\sigma}^{\dagger}c_{2{\sf L} \sigma})+\text{H.c.}].
\label{Htunnel}
\end{equation}
The region {\sf S1} extends from $j=-\infty$ on the left to $j=-2{\sf L}$, whereas
{\sf S2} extends from $j=2{\sf L}$ to $j=\infty$, so that the length of the metallic
chain is $N\equiv 4{\sf L}-1$.
The technical implementation of our diagonalization procedure for
junctions is described in full detail in Appendix \ref{snsApp}. Let us summarize
the key results here (see also Fig.\,\ref{snsplot} for illustration).
The structure of the boundary
equations makes clear how the number of bound states depends on the
length $N$ of the normal dot and on the pairing amplitude $\Delta$.
When the metal strip is completely disconnected from the SC,
that is, when $t'=0$, the stationary states of the normal dot (standing
waves) are labelled by the quantum numbers
\(k=\frac{\pi q}{2{\sf L}} +\frac{\pi}{4{\sf L}},\) \(q=0,1,\dots,2{\sf L}-1\),
typical of the lattice-regularized infinite square well.
For weak tunneling, each of these states with energy smaller in magnitude
than $\Delta$ turns into a bound state at a slightly shifted energy.
For a fixed value of $\Delta$, increasing $N$
allows for more solutions of the boundary equations, and so for
more Andreev bound states.
Conversely, for fixed \(N\) the number of bound modes does not increase
with the value of $\Delta$ once $|\Delta|>|t|$. Instead, we find pinning
of bound states near energy values $\epsilon=\pm\Delta$ as \(N\) increases.
These pinned states, that appear only if $|\Delta|>|t|$, are characterized physically
by a large penetration depth in the superconducting regions {\sf S1} and {\sf S2}.
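The qualitative trend just described can be checked numerically by brute force. The sketch below (an illustration, not part of the analytical development: the lead length $L_s=80$ is an arbitrary finite stand-in for the semi-infinite leads) builds a BdG matrix for the S-N-S chain with on-site $s$-wave pairing in the leads only, and counts the in-gap levels. Since an isolated uniform lead with open BCs has all BdG eigenvalues of magnitude at least $\Delta$, any level with $|\epsilon|<\Delta$ is junction-induced:

```python
import numpy as np

def sns_ingap_count(Nn, Ls=80, t=1.0, tp=0.5, Delta=1.0):
    """Count BdG eigenvalues with |eps| < Delta for a finite S-N-S chain:
    two s-wave leads of Ls sites flanking a normal dot of Nn sites,
    joined by weak bonds tp (a finite-size stand-in for the junction)."""
    L = Ls + Nn + Ls
    h = np.zeros((L, L))
    for j in range(L - 1):
        h[j, j + 1] = h[j + 1, j] = -t
    for j in (Ls - 1, Ls + Nn - 1):       # weaker dot-lead bonds
        h[j, j + 1] = h[j + 1, j] = -tp
    D = np.zeros(L)
    D[:Ls] = -Delta                       # on-site pairing in the leads only
    D[Ls + Nn:] = -Delta
    H = np.block([[h, np.diag(D)], [np.diag(D), -h]])
    eps = np.linalg.eigvalsh(H)
    return int(np.sum(np.abs(eps) < 0.999 * Delta))

n7, n15 = sns_ingap_count(7), sns_ingap_count(15)
assert n15 > n7 >= 2      # longer normal region -> more Andreev levels
```

The count grows with $N$ at fixed $\Delta$, in agreement with the boundary-equation analysis.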
\section{Surface bands in higher-dimensional systems}
\label{high_dim}
In this section we illustrate the application of the generalized Bloch
theorem to computing surface bands. Our goal is to gain
as much insight as possible on the interplay between bulk
properties -- topological or otherwise -- and BCs toward
establishing the structure of surface bands. We consider first
a prototypical ladder system, the Creutz ladder \cite{Creutz},
as a stepping stone going from one dimension to two. We next examine
a graphene ribbon, partly because there has been a considerable
amount of analytical work on the surface band structure of
this system. Thus, this permits benchmarking
our generalized Bloch theorem against other approaches. In this regard, we
emphasize that our method yields analytically {\it all} of the eigenstates
and eigenvalues of a graphene strip, not just the surface ones.
Our two final illustrative systems are $D=2$ TSCs. Specifically, we
first compute the surface band structure of the chiral \(p+ip\)
TSC analytically, with emphasis on the interplay between the phase
diagram of the {\it lattice} model and its surface physics. A key
point here is to gain physical insight into the emergence of {\em chiral surface
bands} from the point of view of the boundary matrix. We conclude
by providing an exact, albeit non analytical, solution for the {\em Majorana
surface flat bands} of a time-reversal invariant gapless $s$-wave TSC model.
Here, we both revisit the anomalous bulk-boundary correspondence
that this model is known to exhibit \cite{Deng14} through the eyes of the
boundary matrix, and leverage access to the system's eigenstates to
characterize physical equilibrium properties. Notably, we predict that the
presence of a Majorana surface flat band implies a substantial enhancement in the
equilibrium $4\pi$-periodic Josephson supercurrent as compared to a gapped $D=2$ TSC
that hosts only a finite number of Majorana modes.
\subsection{The Creutz ladder}
\label{creutzladder}
\begin{figure*}[t]
\includegraphics[width = 18cm]{FigDuality.pdf}
\caption{(Color online)
Schematic of the duality between the Creutz ladder (left) and cross-linked Majorana chains (right).
\label{fig:duality}}
\end{figure*}
The ladder model described by Hamiltonian
\begin{align}
\label{CreutzHam}
\widehat{H}=&
-\sum_j\big[{\sf K}(e^{i\theta}a_{j}^\dagger a_{j+1}+
e^{-i\theta}b_{j}^\dagger b_{j+1}+\text{H.c.})+\nonumber\\
&+{\sf r} {\sf K}(a_j^\dagger b_{j+1}+b_j^\dagger a_{j+1}+\text{H.c.})+
{\sf M} (a_j^\dagger b_j+b_j^\dagger a_j)\big]
\end{align}
is typically referred to as the Creutz ladder after its proponent
\cite{Creutz,CreutzRMP, CreutzPRD}, and is schematically depicted in
Fig. \ref{fig:duality}(left).
Here, $a_j$ and $b_{j}$ denote
fermionic annihilation operators for fermions at site \(j\)
of two parallel chains visualizable as the sides of a ladder.
Fermions on each side of the ladder are characterized by an
inverse effective mass ${\sf K}$. There is a homogeneous magnetic
field perpendicular to the plane of the ladder, responsible
for the phase $e^{i\theta}$($e^{-i\theta}$) for hopping along the
upper (lower) side of the ladder. Hopping along the rungs of the ladder occurs
with amplitude \({\sf M}\), whereas diagonal hoppings occur with amplitude
${\sf K}{\sf r}$.
The Creutz ladder is known to host mid-gap bound states when $|{\sf M}|<|2{\sf K}{\sf r}|$ and
$\theta\ne0,\pi$. Such states are called {\em domain-wall fermions}
in lattice quantum field theory. The domain-wall fermions of
the Creutz ladder are, for the most part, not topologically
protected or mandated by the bulk-boundary correspondence. If
\(\theta\neq \pm\pi/2\), the Creutz ladder may be classified
as a $D=1$ model in class \(A\), thus the domain-wall
fermions are not protected. However,
if \(\theta=\pm \pi/2\), then the Creutz ladder enjoys a chiral
symmetry, and with a canonical transformation of the fermionic basis,
the single-particle Hamiltonian can be made real (see Appendix \,\ref{creupendix}).
In this parameter regime, the model belongs to class BDI,
which is topologically non-trivial in $D=1$. Interestingly, this was the parameter
regime analyzed in depth in the original work \cite{Creutz}.
We reveal some of these features analytically
for ${\sf r}=\pm1$ in Appendix \ref{creupendix}.
Ladder systems are not quite $D=1$, but are not
$D=2$ either. Ultimately, it is more convenient
to investigate ladders in terms of the basic generalized
Bloch theorem of Part I. For this reason, we have chosen to relegate
a detailed discussion of the diagonalization of the Creutz ladder to
Appendix \ref{creupendix}.
In the following, we highlight two related new results:
a many-body duality transformation that maps the Creutz ladder
to a pair of Majorana chains, and the existence of
edge modes with a power-law prefactor.
\subsubsection{The dual Majorana ladder}
\label{majoranaladder}
The Gaussian duality transformation \cite{equivalence}
\begin{align*}
&a_j\mapsto
\mathcal{U}_{\sf d} a_j\mathcal{U}_{\sf d}^\dagger
={\sf c} \, a_j+i{\sf s}\, a_j^\dagger-i{\sf c} \, b_j
+{\sf s} \ b_j^\dagger ,\\
&b_j\mapsto
\mathcal{U}_{\sf d} b_j\mathcal{U}_{\sf d}^\dagger
={\sf s} \, a_j-i{\sf c}\, a_j^\dagger-i{\sf s} \, b_j
-{\sf c} \ b_j^\dagger,
\end{align*}
with \(\mathcal{U}_{\sf d}\) a unitary transformation in Fock space
and (${\sf c}=\frac{\cos \varphi}{\sqrt{2}}$, ${\sf s}=\frac{\sin \varphi}{\sqrt{2}}$),
transforms the Creutz ladder model to a dual SC.
Specialized to $\varphi=\pi/4$, the dual SC Hamiltonian
is \(
\mathcal{U}_{\sf d}\widehat{H}\mathcal{U}_{\sf d}^\dagger=
\widehat{H}_a+\widehat{H}_b+\widehat{H}_{ab}\),
with
\begin{align*}
&\widehat{H}_a=\!-\sum_{j}\big[t a_{j}^\dagger a_{j+1}+\frac{\mu}{2}a_j^\dagger a_j
+\Delta \, a_ja_{j+1} +\text{H.c.}\big] , \\
&\widehat{H}_b=\!-\sum_{j}\big[t b_{j}^\dagger b_{j+1}+\frac{\mu}{2}b_j^\dagger b_j
+\Delta \, b_jb_{j+1} +\text{H.c.}\big] , \\
t&\equiv {\sf r}{\sf K},\quad \Delta\equiv {\sf K}\sin\theta,\quad \mu\equiv {\sf M},
\end{align*}
and, finally,
\begin{align*}
\widehat{H}_{ab}=\!-\sum_j\big[i {\sf K} \cos \theta (b^\dagger_ja_{j+1}+b^\dagger_{j+1}a_j-\text{H.c.})-{\sf M}\big].
\end{align*}
We conclude that the dual system may be described as a ladder
consisting of Majorana chains on each side, connected by electron
tunneling and with no pairing term associated to the rungs of the
ladder [see Fig. \ref{fig:duality}(right)]. Moreover, the Majorana chains (the sides of the ladder)
decouple if \(\theta=\pm \pi/2\), in which case the Creutz ladder
displays chiral symmetry. Since these two decoupled Majorana chains
have real parameter values, the dual system also belongs to the
topologically non-trivial class D.
The fermion number operator \(\hat{N}_F\equiv \sum_j(a_j^\dagger a_j
+b_j^\dagger b_j)\), regarded as the broken particle conservation
symmetry of the Majorana ladder, maps by the inverse of the duality transformation
to a broken symmetry
\(\hat{N}_{C}\equiv \mathcal{U}^\dagger_{\sf d} \hat{N}_F\mathcal{U}_{\sf d}\)
of the Creutz ladder. In other words, we expect the insulating spectral
gap of the Creutz ladder to close whenever the symmetry \(\hat{N}_{C}\) is
restored,
unless there is a stronger factor at play. This symmetry is restored
for \({\sf K}\sin(\theta)=0\), which is indeed a gapless regime unless \({\sf K}=0\),
in which case the Creutz ladder reaches the atomic limit. A similar
explanation of the insulating gap for the Peierls chain in terms of
a hidden broken symmetry was given in Ref.\,[\onlinecite{equivalence}], where
fermionic Gaussian dualities were investigated in higher dimensions as well.
\subsubsection{Topological power-law modes}
The generalized Bloch theorem identifies regimes
in which the domain-wall fermions of the Creutz ladder may
display power-law behavior. From the analysis in Appendix \ref{creupendix},
power-law modes are forbidden only if ${\sf M}=0,$ $\theta=\pm\pi/2$
and ${\sf K}\ne0,$ ${\sf r}\ne \pm1$. For arbitrary values of ${\sf K},{\sf r},\theta,{\sf M}$,
one can expect in general a finite number of values of $\epsilon$
for which the full solution of the bulk equation includes power-law modes,
potentially compatible with the BCs. Let us point out for illustration the
power-law modes of the Creutz ladder in the parameter regime
$\theta=\pi/2,\ {\sf M}=2{\sf K}\sqrt{{\sf r}^2-1}$, with ${\sf r}>1$. In this
regime the Creutz ladder is dual to two decoupled Kitaev chains,
each individually on its ``circle of oscillations'' in its phase
diagram \cite{hegde16}. The topological power-law modes of
the Kitaev chain have been explicitly described in Part I (see Sec. V C).
Therefore, the power-law topological edge modes of the Creutz ladder may
be found by way of our duality transformation. Alternatively, there is a
shortcut at the single-particle level.
Let us rewrite the Creutz ladder in terms of a new set of fermionic degrees
of freedom
\begin{align}
\label{tildeferms}
\widetilde{a}_j=\frac{1}{\sqrt{2}}(a_j+b_j), \quad \widetilde{b}_j=\frac{i}{\sqrt{2}}(a_j-b_j).
\end{align}
Unlike our previous duality transformation, this one yields another particle-conserving Hamiltonian.
The associated single-particle Hamiltonian is
\begin{eqnarray}
&&\widetilde{H}_N=\mathds{1}_{N}\otimes \tilde{h}_0+
T\otimes \tilde{h}_1+T^\dagger\otimes \tilde{h}_1^\dagger, \label{tildeCreutz} \\
&& \tilde{h}_{0}=-\begin{bmatrix}{\sf M} & 0\\
0 & -{\sf M}
\end{bmatrix}, \nonumber\\
&& \tilde{h}_1=-\begin{bmatrix}{\sf K}({\sf r}+\cos\theta) & {\sf K}\sin\theta\\
-{\sf K}\sin\theta & {\sf K}(-{\sf r}+\cos\theta)
\end{bmatrix}. \nonumber
\end{eqnarray}
For \(\theta=\pi/2\), and with the identifications \(t={\sf K}{\sf r}, \Delta={\sf K}\sin \theta, \mu={\sf M}\) already
introduced, the above \(\widetilde{H}_N\) becomes identical to the single-particle Hamiltonian for the Majorana chain
of Kitaev. Moreover, if \({\sf M}=\mu=2{\sf K}\sqrt{{\sf r}^2-1}\), it follows that \((\mu/2t)^2+(\Delta/t)^2=1\).
This is the aforementioned coupling regime known as the ``circle of oscillations". Hence, by
simply translating the calculations of Part I, Sec. V C, we obtain the topological power-law mode
\begin{align*}
|\epsilon=0\rangle =
\sum_{j=1}^{\infty} j \, w^{j-1}|j\rangle
\begin{bmatrix}
1 \\ -1
\end{bmatrix},\quad
w \equiv -\Big(\frac{{\sf r}-1}{{\sf r}+1}\Big)^{1/2},
\end{align*}
of the Creutz ladder (in the particle-conserving representation of Eq.\,\eqref{tildeferms}).
To our knowledge, this provides the first example of a topological power-law zero mode in a particle-conserving
Hamiltonian in class AIII.
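This zero mode can be verified directly on a truncated chain. The Python sketch below (an illustrative check; the chain length $N=80$ and the values ${\sf K}=1$, ${\sf r}=2$ are arbitrary, and the inter-cell block is oriented so that the stated mode sits on the left edge, the opposite orientation giving its mirror image on the right edge) confirms that $\psi_j = j\,w^{j-1}(1,-1)^{\rm T}$ is annihilated by the single-particle Hamiltonian of Eq.\,\eqref{tildeCreutz} up to an exponentially small truncation residual:

```python
import numpy as np

# Check of the power-law zero mode of the Creutz ladder at theta = pi/2,
# M = 2K*sqrt(r^2-1), in the representation of Eq. (tildeferms).
K, r = 1.0, 2.0
M = 2 * K * np.sqrt(r**2 - 1)
h0 = -np.array([[M, 0.0], [0.0, -M]])
h1 = -np.array([[K * r, K], [-K, -K * r]])    # theta = pi/2

N = 80
H = np.zeros((2 * N, 2 * N))
for j in range(N):
    H[2*j:2*j+2, 2*j:2*j+2] = h0
for j in range(N - 1):
    # inter-cell block oriented so the power-law mode lives on the left edge
    H[2*j:2*j+2, 2*j+2:2*j+4] = h1.T          # h1 is real: h1.T = h1^dagger
    H[2*j+2:2*j+4, 2*j:2*j+2] = h1

w = -np.sqrt((r - 1) / (r + 1))
psi = np.zeros(2 * N)
for j in range(1, N + 1):                     # psi_j = j * w**(j-1) * (1, -1)^T
    psi[2*j-2:2*j] = j * w**(j - 1) * np.array([1.0, -1.0])

residual = np.linalg.norm(H @ psi) / np.linalg.norm(psi)
assert residual < 1e-10
```

The linear-in-$j$ prefactor reflects the double root $z=w$ of the characteristic equation at the circle of oscillations.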
\subsection{Graphene ribbons}
\label{secgraphene}
In this section we investigate NN tight-binding models on the honeycomb (hexagonal) lattice, with graphene as
the prime motivation \cite{castroneto09}. The surface band structure
of graphene sheets or ribbons is well understood, even analytically in limiting cases
\cite{mao10,kohmoto07,delplace11,Iachello}.
As emphasized in Ref. [\onlinecite{yao09}], a perturbation that breaks inversion symmetry can have interesting
effects on these surface bands. With this in mind, in our analysis below we include a sublattice potential and show that the
Hamiltonian for a ribbon subject to zigzag-bearded BCs
can be fully diagonalized in closed form.
\begin{figure*}
\hspace*{-5mm}\includegraphics[angle=0, width=.9\columnwidth]{figgraphene.pdf}
\hspace*{1.5cm}\includegraphics[angle=0, width=6.5cm]{figarmchair.pdf}
\caption{(Color online) Graphene ribbon, periodic or infinite
in the horizontal
\({\mathbf{m}}_1\) direction. Left: The ribbon is terminated in the vertical direction by a zigzag edge
on the bottom and a ``bearded" edge on top. The decoupled \(B\) sites at the
top are auxiliary degrees of freedom.
Right: The ribbon is terminated by armchair edges. The system has mirror
symmetry about the dashed (red) line. In both cases,
on-site potentials \(v_1\) and \(v_2\) are associated with the $A$ and $B$
sublattice, respectively.
}
\label{zigcomb}
\end{figure*}
\subsubsection{Zigzag-bearded boundary conditions}
\label{zbsec}
The honeycomb lattice is bipartite, with triangular sublattices \(A\) and \(B\)
displaced by \({\mathbf{d}}\) relative to each other, see Fig.\,\ref{zigcomb}(left).
We parametrize the lattice sites \(\mathbf{R}\) as
\begin{align*}
\mathbf{R}(j_1,j,m)=
\left\{
\begin{array}{lcl}
j_1{\mathbf{m}}_1+j{\mathbf{s}} +{\mathbf{d}} & \mbox{if} & m=1\\
j_1{\mathbf{m}}_1+j{\mathbf{s}} & \mbox{if} & m=2
\end{array}\right., \quad \\
{\mathbf{m}}_1\equiv a
\begin{bmatrix}1\\0\end{bmatrix},\
{\mathbf{s}}=\frac{a}{2}\begin{bmatrix}1\\\sqrt{3}\end{bmatrix},\
{\mathbf{d}}=-\frac{a}{2\sqrt{3}}\begin{bmatrix}\sqrt{3}\\1\end{bmatrix},
\end{align*}
with \(j_1,j\in\mathds{Z}\), \(a=1\) being the lattice parameter and
$m=1$ ($m=2$) denoting the $A$ ($B$) sublattice.
The localized (basis) states are
\(|{\mathbf{j}}\rangle|m=1\rangle\) and \(|{\mathbf{j}}\rangle|m=2\rangle\), and so the
sublattice label plays the role of a pseudospin-$1/2$ degree of freedom.
The ribbon we consider is translation-invariant in the \({\mathbf{m}}_1\) direction
and terminated along ${\mathbf{s}}$, with single-particle Hamiltonian
$H_N = \sum_{{\mathbf{k}}_\parallel \in \text{SBZ}}|{\mathbf{k}}_\parallel\rangle\langle{\mathbf{k}}_\parallel |
\otimes H_{{\mathbf{k}}_\parallel,N}$, where
\begin{eqnarray*}
H_{{\mathbf{k}}_\parallel,N}
& =& \mathds{1}_N\otimes
\begin{bmatrix}
v_1& -t_0(1+e^{-ik_\parallel})\\
-t_0(1+e^{ik_\parallel})& v_2
\end{bmatrix}\\
& + &
\Big(T\otimes
\begin{bmatrix}
0& 0\\
-t_0& 0
\end{bmatrix} + \text{H.c.}\Big) ,
\end{eqnarray*}
and the \(2\times 2\)
matrices act on the sublattice degree of freedom. Notice that \(H_N\)
is chirally symmetric if the on-site potentials
\(v_1=0=v_2\), and the edges of the ribbon
are of the zigzag type, see Fig.\,\ref{zigcomb}(left).
While in the following we shall set \(v_1=0\) for simplicity, it
is easy to restore \(v_1\) anywhere along the way if desired.
In particular, \(v_1=-v_2\) is an important special case \cite{yao09}.
The analytic continuation of the Bloch Hamiltonian is
\begin{align*}
H_{k_\parallel}(z)
=\begin{bmatrix}
0 & -t_1(k_\parallel)e^{-i\phi_{k_\parallel}}-t_0z^{-1}\\
-t_1(k_\parallel)e^{i\phi_{k_\parallel}}-t_0z & v_2
\end{bmatrix},
\end{align*}
\[ t_1(k_\parallel) \equiv t_0\sqrt{2(1+\cos(k_\parallel))},\ \
e^{i\phi_{k_\parallel}} \equiv t_0(1+e^{ik_\parallel})/t_1(k_\parallel). \]
This analysis reveals the formal connection between graphene and
the Su-Schrieffer-Heeger (SSH) model: just compare the above $H_{k_\parallel}(z)$
with $H(z)$ in Eq.\,\eqref{precisely0}.
We impose BCs in terms of an operator \(W\) such that
\begin{align*}
\langle k_\parallel|W|k_\parallel'\rangle
\!=\!\delta_{k_\parallel,k_\parallel'}
|N\rangle\langle N|
\otimes
\begin{bmatrix}
0& t_1(k_\parallel)e^{-i\phi_{k_\parallel}}\\
t_1(k_\parallel)e^{i\phi_{k_\parallel}}& 0
\end{bmatrix} .
\end{align*}
In real space, this corresponds to
\begin{align*}
W=\,&\mathbf{1}\otimes|N\rangle\langle N|\otimes
\begin{bmatrix}
0& -t_0\\
-t_0& 0
\end{bmatrix}+\\
&\quad \quad \quad +
\Big(
\mathbf{T}\otimes |N\rangle\langle N|\otimes
\begin{bmatrix}
0& 0\\
-t_0& 0
\end{bmatrix}+\text{H.c.}
\Big).
\end{align*}
The meaning of these BCs is as follows:
for the modified ribbon Hamiltonian described by \(H=H_N+W\), the sites
\(|j_1\rangle|j=N\rangle|B\rangle\) are decoupled from the rest of the
system and each other, see Fig.\,\ref{zigcomb}(left). The termination
of the actual ribbon, consisting of the sites connected to each
other, is of the zigzag type on the lower edge, and ``bearded"
on the upper edge. From a geometric perspective, this ribbon
is special because every \(B\) site is connected to exactly three
\(A\) sites, but not the other way around.
At this point we may borrow results from dimerized chains that we
include in Appendix \,\ref{basic_examples}, to which we refer for full detail.
The energy eigenstates that
are perfectly localized on the upper edge (consisting
of decoupled sites) constitute a flat surface band at energy \(v_2\).
For $|k_\parallel|>2\pi/3$, the energy eigenstates
localized on the lower edge constitute a flat
surface band at \(v_1=0\) energy. Explicitly, these zero modes are
\begin{align*}
|\epsilon=0,k_\parallel\rangle=&
|k_\parallel\rangle|z_1(k_\parallel)\rangle
\begin{bmatrix}
(t_1(k_\parallel)^2-t_0^2)e^{-i\phi_{k_\parallel}}/t_1(k_\parallel)\\
0
\end{bmatrix},\\
\quad z_1(k_\parallel)\equiv&-e^{i\phi_{k_\parallel}}\frac{t_1(k_\parallel)}{t_0}=-(1+e^{ik_\parallel}).
\end{align*}
While their energy is insensitive to \(k_\parallel\), their characteristic localization
length is not; specifically,
\begin{eqnarray}
\ell_{\rm loc}(k_\parallel)= -\frac{1}{\ln(|z_1(k_\parallel)|)}=- \frac{2}{\ln(2+2\cos(k_\parallel))}.\quad
\label{locl}
\end{eqnarray}
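Both the flat band and the dispersing localization length are easy to confirm numerically. The sketch below (illustrative only; the values $t_0=1$, $v_2=0.3$, $N=60$, $k_\parallel=0.8\pi$ are arbitrary choices with $|k_\parallel|>2\pi/3$) builds the virtual chain $H_{k_\parallel,N}+\langle k_\parallel|W|k_\parallel\rangle$ for the zigzag-bearded ribbon and checks that the zero mode decays by the factor $|z_1(k_\parallel)|$ per unit cell, consistently with Eq.\,\eqref{locl}:

```python
import numpy as np

# Virtual-chain check for the zigzag-bearded ribbon at fixed k_parallel.
t0, v2, N = 1.0, 0.3, 60
k = 0.8 * np.pi                            # |k| > 2*pi/3: lower-edge mode exists
f = -t0 * (1 + np.exp(1j * k))             # intra-cell A-B hopping amplitude
h0 = np.array([[0, np.conj(f)], [f, v2]])
h1 = np.array([[0, 0], [-t0, 0]], dtype=complex)

H = np.zeros((2 * N, 2 * N), dtype=complex)
for j in range(N):
    H[2*j:2*j+2, 2*j:2*j+2] = h0
for j in range(N - 1):
    H[2*j:2*j+2, 2*j+2:2*j+4] = h1
    H[2*j+2:2*j+4, 2*j:2*j+2] = h1.conj().T
# boundary modification W: decouples the B site of the last cell
H[2*N-2:2*N, 2*N-2:2*N] += np.array([[0, -np.conj(f)], [-f, 0]])

eps, U = np.linalg.eigh(H)
i0 = np.argmin(np.abs(eps))
assert abs(eps[i0]) < 1e-10                # flat zero-energy surface band
assert np.any(np.abs(eps - v2) < 1e-10)    # decoupled B site at energy v2

z1 = abs(1 + np.exp(1j * k))               # |z1(k)| = sqrt(2 + 2 cos k) < 1 here
psi = U[:, i0]
blocks = np.array([np.linalg.norm(psi[2*j:2*j+2]) for j in range(N)])
assert abs(blocks[4] / blocks[3] - z1) < 1e-6
```

With the boundary modification included, the zero mode is exact for any finite $N$, so the numerical eigenvalue vanishes to machine precision.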
For \(k_\parallel\neq\pm\frac{2\pi}{3}\), the bulk states are
\begin{align*}
|\epsilon_n(k_\parallel,q)\rangle
\!=\! |k_\parallel\rangle|\chi_1(q)\rangle \!
\begin{bmatrix} t_1(k_\parallel)e^{-i\phi_{k_\parallel}}\\ -\epsilon_n(k_\parallel,q)\end{bmatrix}
\! + \! |k_\parallel\rangle|\chi_2(q)\rangle \! \begin{bmatrix}t_0\\ 0\end{bmatrix},
\end{align*}
with
\begin{align}
\label{chiwf1}
&|\chi_1(q)\rangle
\equiv 2i\sum_{j=1}^N\sin\!\big(\pi qj/N\big)e^{-i\phi j}|j\rangle,\\
\label{chiwf2}
&|\chi_2(q)\rangle
\equiv 2i\sum_{j=1}^N\sin\!\big(\pi q(j-1)/N\big)e^{-i\phi( j-1)}|j\rangle,
\\
&\epsilon_n(k_\parallel,q)=\frac{v_2}{2}+ \nonumber\\
&\quad+(-1)^n\sqrt{\frac{v_2^2}{4}+t_1(k_\parallel)^2+t_0^2+2t_1(k_\parallel)t_0\cos\!\big(\frac{\pi}{N}q\big)},
\end{align}
for $n=1,2$. Since \(t_1(k_\parallel=\pm\frac{2\pi}{3})=t_0\),
the virtual chains \(H_{k_\parallel,N}\)
are gapless if \(v_2=0\), reflecting the fact that
graphene is a semimetal.
The energy eigenstates
are similar to, but simpler than, the ones just described.
\subsubsection{Armchair terminations}
\label{armpitsec}
The graphene ribbon with zigzag terminations can be
described in terms of smooth terminations of the triangular
Bravais lattice with two atoms per unit cell. In contrast,
armchair terminations require a fairly different description
of the underlying atomic array. Figure\,\ref{zigcomb}(right)
shows how to describe this system in terms of a {\it centered rectangular}
Bravais lattice \cite{bechstedt} with two atoms per unit cell and smooth parallel
terminations. In this case, we parametrize the lattice sites
\(\mathbf{R}\) as
\begin{align*}
\mathbf{R}(j_1,j,m)=
\left\{
\begin{array}{lcl}
j_1{\mathbf{m}}_1+j{\mathbf{s}} +{\mathbf{d}} & \mbox{if} & m=1\\
j_1{\mathbf{m}}_1+j{\mathbf{s}} & \mbox{if} & m=2
\end{array}\right.,\\
{\mathbf{m}}_1\equiv a
\begin{bmatrix}\sqrt{3}\\0\end{bmatrix},\
{\mathbf{s}}=\frac{a}{2}\begin{bmatrix}\sqrt{3}\\1\end{bmatrix},\
{\mathbf{d}}=\frac{a}{\sqrt{3}}\begin{bmatrix}1\\0\end{bmatrix},
\end{align*}
where as before \(j_1,j\in\mathds{Z}\), $a=1$, and $m\in \{1,2\}$ labels the
sublattice. The total single-particle Hamiltonian can now be taken to read $H=H_N+W$, with $W=0$ and
$H_N = \sum_{{\mathbf{k}}_\parallel \in \text{SBZ}}|{\mathbf{k}}_\parallel\rangle\langle{\mathbf{k}}_\parallel |
\otimes H_{{\mathbf{k}}_\parallel,N}$, where
\[
H_{{\mathbf{k}}_\parallel,N}
=\mathds{1}_N\otimes
\begin{bmatrix}
v_1& -t_0\\
-t_0 & v_2
\end{bmatrix}+
\Big(T\otimes
\begin{bmatrix}
0& -t_0e^{-ik_\parallel}\\
-t_0& 0
\end{bmatrix} + \text{H.c.}\Big), \]
and the analytic continuation of the Bloch Hamiltonian for
each $k_\parallel$ is
\[ H_{k_\parallel}(z) \!=\!
\begin{bmatrix} v_1 & -t_0(1+ze^{-ik_\parallel} + z^{-1})\\
-t_0(1+z^{-1}e^{ik_\parallel} + z) & v_2
\end{bmatrix}. \]
The diagonalization of the Hamiltonian proceeds from here on
as before. There is, however, a shortcut based on Appendix \ref{app:condition},
which in addition explains the absence of edge modes in this system. Let
\(
T_{k_\parallel}\equiv e^{-ik_\parallel/2}T
\). In terms of this \(k_\parallel\)-dependent matrix,
\[
H_{{\mathbf{k}}_\parallel,N}\!=\!
\mathds{1}_N\otimes
\begin{bmatrix}
v_1& \!\!-t_0\\
-t_0 &\!\! v_2
\end{bmatrix}
-t_0
\Big(T_{k_\parallel}+T_{k_\parallel}^\dagger\Big)\otimes
\begin{bmatrix}
0 & e^{-ik_\parallel/2}\\
e^{ik_\parallel/2}& 0
\end{bmatrix} \!.\]
\begin{widetext}
It follows that the (unnormalized) energy eigenstates of the graphene
ribbon with armchair terminations are
\begin{eqnarray*}
\label{armpit}
|\epsilon_{q,\pm}\rangle =
|k_\parallel\rangle
\sum_{j=1}^{N}|j\rangle e^{ik_\parallel j/2}\sin [\pi qj/(N+1)]
\begin{bmatrix}-t_0(1+2e^{-ik_\parallel/2}\cos[\pi q/(N+1)])\\ \epsilon_{q,\pm} \end{bmatrix}
,\qquad q=1,\dots,N,
\end{eqnarray*}
where $\epsilon_{q,+}$ and $\epsilon_{q,-}$ are the two roots (in $\epsilon$)
of the quadratic equation
\[ \epsilon^2-v_2\epsilon-
t_0^2 - 4t_0^2\cos (k_\parallel/2)\cos[\pi q/(N+1)]
- 4t_0^2\cos^2[\pi q/(N+1)] =0. \]
These are the $2N$ energy eigenstates of the system for each value
of $k_\parallel$.
\end{widetext}
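The closed-form armchair spectrum can be cross-checked against brute-force diagonalization. The Python sketch below (illustrative; $t_0=1$, $v_2=0.4$, $N=8$, $k_\parallel=0.5$, and $v_1=0$ are arbitrary choices consistent with the text) compares the $2N$ eigenvalues of the virtual chain with the roots $\epsilon_{q,\pm}$ of the quadratic:

```python
import numpy as np

# Armchair virtual chain: 2N numerical eigenvalues vs the closed-form roots.
t0, v2, N, k = 1.0, 0.4, 8, 0.5
h0 = np.array([[0, -t0], [-t0, v2]], dtype=complex)
h1 = np.array([[0, -t0 * np.exp(-1j * k)], [-t0, 0]])

H = np.zeros((2 * N, 2 * N), dtype=complex)
for j in range(N):
    H[2*j:2*j+2, 2*j:2*j+2] = h0
for j in range(N - 1):
    H[2*j:2*j+2, 2*j+2:2*j+4] = h1
    H[2*j+2:2*j+4, 2*j:2*j+2] = h1.conj().T

analytic = []
for q in range(1, N + 1):
    c = np.cos(np.pi * q / (N + 1))
    # roots of eps^2 - v2*eps - t0^2*(1 + 4*cos(k/2)*c + 4*c^2) = 0
    root = np.sqrt(v2**2 / 4 + t0**2 * (1 + 4 * np.cos(k / 2) * c + 4 * c**2))
    analytic += [v2 / 2 - root, v2 / 2 + root]

spec_num = np.sort(np.linalg.eigvalsh(H))
spec_an = np.sort(np.array(analytic))
assert np.allclose(spec_num, spec_an, atol=1e-10)
```

The agreement is exact to machine precision, confirming that the sine-wave ansatz exhausts the spectrum for armchair terminations.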
\subsection{A chiral \(p+ip\) superconductor}
\label{pwavetoponductor}
The spinless \(p+ip\) SC of Ref.\,[\onlinecite{read00}] is the prototype of spinless
superconductivity in $D=2$. The model may be regarded as the mean-field approximation
to an exactly-solvable (by the algebraic Bethe ansatz)
pairing Hamiltonian \cite{rombouts10}. It belongs to class D in the Altland-Zirnbauer
classification, and thus, according to the ten-fold way, it admits an integer ($\mathbb{Z}$)
topological invariant. There has been hope for some time that the related phenomenon
of triplet superconductivity is realized in layered perovskite strontium
ruthenate \(\rm Sr_2RuO_4\), but the matter remains controversial \cite{Sr2RuO4}.
The many-body model Hamiltonian can be taken to be
\begin{align*}
&\hat{H}=
-t\sum_{{\mathbf{r}}}(c_{{\mathbf{r}}+{\mathbf{s}}}^\dagger c_{{\mathbf{r}}}+
c_{{\mathbf{r}}+{\mathbf{m}}}^\dagger c_{{\mathbf{r}}}+\text{H.c.})\\
&-\Delta\sum_{{\mathbf{r}}}(c_{{\mathbf{r}}}c_{{\mathbf{r}}+{\mathbf{s}}}-ic_{{\mathbf{r}}}c_{{\mathbf{r}}+{\mathbf{m}}}+\text{H.c.})-
(\mu-4t)\sum_{\mathbf{r}} c^\dagger_{\mathbf{r}} c_{\mathbf{r}},
\end{align*}
on the square lattice of unit lattice spacing and with standard
unit vectors \({\mathbf{s}},{\mathbf{m}}\) pointing in the \(x\) and \(y\)
directions, respectively. The parameters \(t,\Delta\) are real numbers.
The corresponding single-particle Hamiltonian is
\begin{align*}
H&=- [(\mu-4t)\mathds{1}+t (T_{\mathbf{s}}+T_{\mathbf{s}}^\dagger)+t(T_{\mathbf{m}}+T_{\mathbf{m}}^\dagger)]\otimes \tau_z+\\
&+i\Delta(T_{\mathbf{s}}-T_{\mathbf{s}}^\dagger)\otimes \tau_y+i\Delta(T_{\mathbf{m}}-T_{\mathbf{m}}^\dagger)\otimes\tau_x ,
\end{align*}
in terms of shift operators \(T_{\mathbf{s}} \equiv \sum_{\mathbf{r}} |{\mathbf{r}}\rangle\langle {\mathbf{r}}+{\mathbf{s}}|,$ $T_{\mathbf{m}}\equiv \sum_{\mathbf{r}} |{\mathbf{r}}\rangle\langle {\mathbf{r}}+{\mathbf{m}}|\)
which
can be adjusted to describe relevant BCs
(open-open, open-periodic, periodic-open, and periodic-periodic).
\subsubsection{Closed-form chiral edge states}
If energy is measured in units of \(t\), then the parameter
space of the model can be taken to be two-dimensional after
a gauge transformation that renders \(\Delta>0\). We shall
focus on the line \(\Delta=1=t \), in which \(\mu\) is the
only variable parameter. The Bloch Hamiltonian is
\begin{align*}
H({\mathbf{k}})&=
\begin{bmatrix}
e({\mathbf{k}})& \Delta({\mathbf{k}})\\
\Delta({\mathbf{k}})^*& -e({\mathbf{k}})
\end{bmatrix},\\
\Delta({\mathbf{k}})&\equiv 2i\sin k_1 -2\sin k_2,\\
e({\mathbf{k}})& \equiv -2 \cos k_1 - 2\cos k_2-\mu +4,
\end{align*}
for \( {\mathbf{k}}=(k_1,k_2)\in [-\pi,\pi)\times [-\pi,\pi)\).
The resulting single-particle bulk dispersion then reads
\begin{eqnarray*}
\epsilon(k_1,k_2)^2 &=& \mu^2 -8\mu +24 + 4(\mu-4)(\cos k_1+\cos k_2)\\
&+&8\cos k_1 \cos k_2,
\end{eqnarray*}
and it is fully gapped unless \(\mu=0,4,8\). The gap closes at \({\mathbf{k}}=0\) if \(\mu=0\),
\({\mathbf{k}}=(-\pi,0)\) and \({\mathbf{k}}=(0,-\pi)\) if \(\mu=4\), and at
\({\mathbf{k}}=(-\pi,-\pi)\) if \(\mu=8\). For $0<\mu <8$, the system is in the
weak-pairing topologically non-trivial phase with odd fermion number parity
in the ground state. The phase transition to the trivial strong-pairing phase happens
at $\mu=0$ \cite{read00}.
We now impose open BCs in the \(x\) direction while keeping
the \(y\) direction translation invariant, that is, \(k_2=k_\parallel\). Accordingly, we need the analytic
continuation of the Bloch Hamiltonian in \(k_1\). Let us introduce the compact notation
\[
\omega \equiv -2\cos k_\parallel-\mu+4,\quad
\xi \equiv -2\sin k_\parallel , \]
so that \( H_{k_\parallel}(z)=h_{k_\parallel,0}+zh_1+z^{-1}h_1^\dagger
\), with
\begin{align}
\label{h1here}
h_{k_\parallel,0}=
\begin{bmatrix}
\omega & \xi \\
\xi & -\omega
\end{bmatrix},\quad
h_1=
\begin{bmatrix}
-1& 1\\
-1& 1
\end{bmatrix}.
\end{align}
The condition
\(\det(H_{k_\parallel}(z)-\epsilon\mathds{1}_2)=0\) is then equivalent to
the equation
\begin{align}
\label{thisfirst}
\epsilon^2&=\omega^2+\xi^2+4-2\omega\, (z+z^{-1}).
\end{align}
Note that the replacement \(z+z^{-1}\mapsto 2\cos k_1\) recovers
the bulk dispersion relation. Moreover, if \(2<\mu<6\), there are values of \(k_\parallel\)
for which \(\omega=0\) and the dispersion relation becomes flat. From \(H_{k_\parallel}(z)\) it is
immediate to reconstruct the family of virtual chain Hamiltonians
\begin{align*}
H_{k_\parallel,N}&=\mathds{1}_N\otimes h_{k_\parallel,0}+T\otimes h_1+ T^\dagger\otimes h_1^\dagger.
\end{align*}
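Two elementary properties of Eq.\,\eqref{thisfirst} are worth checking explicitly: its two roots pair up as $(z,1/z)$ by reciprocity, and each root yields a kernel vector of $H_{k_\parallel}(z)-\epsilon\mathds{1}_2$. The following sketch verifies both for arbitrary illustrative values of $\mu$, $k_\parallel$, and $\epsilon$ (our choices, not singled out by the model):

```python
import numpy as np

# Roots of Eq. (thisfirst) for the p+ip virtual chain (t = Delta = 1).
mu, kpar, eps = 1.5, 0.3, 0.5
omega = -2 * np.cos(kpar) - mu + 4
xi = -2 * np.sin(kpar)

# eps^2 = omega^2 + xi^2 + 4 - 2*omega*(z + 1/z)   <=>
# 2*omega*z^2 - (omega^2 + xi^2 + 4 - eps^2)*z + 2*omega = 0
z1, z2 = np.roots([2 * omega, -(omega**2 + xi**2 + 4 - eps**2), 2 * omega])
assert abs(z1 * z2 - 1) < 1e-12            # roots pair up as z and 1/z

h0 = np.array([[omega, xi], [xi, -omega]])
h1 = np.array([[-1.0, 1.0], [-1.0, 1.0]])
for z in (z1, z2):
    Hz = h0 + z * h1 + h1.T / z            # h1 real: h1.T = h1^dagger
    assert abs(np.linalg.det(Hz - eps * np.eye(2))) < 1e-10
```

The reciprocity $z_1 z_2 = 1$ is the reason surface states on opposite edges involve mutually inverse decay factors.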
From the point of view of any one of these chains, mirror symmetry is
broken by the NN pairing terms.
This fact is important, because then the boundary matrix
is {\em not} mirror-symmetric either, which will ultimately lead to
surface states of opposite chirality on the left and right edges.
The number of edge degrees of freedom is \(2Rd=4\) for each
value of \(k_\parallel\). Since \(h_1\) [Eq. \eqref{h1here}] is not invertible,
and Eq.\,\eqref{thisfirst} is a polynomial
of degree \(2\) in \(z\), the complete eigenstate
ansatz is formed out of four independent states (one ansatz state
for each \(k_\parallel\)): two extended states associated to the roots
\(z_\ell=z_\ell(\epsilon,k_\parallel),\) \(\ell=1,2,\)
of Eq.\,\eqref{thisfirst},
and two emergent states of finite
support localized on the edges of the virtual chains \(H_{k_\parallel,N}\).
With hindsight, we will ignore the emergent states and focus on
the reduced ansatz, namely,
\[
|\epsilon\rangle=\alpha_1|z_1,1\rangle|u_1\rangle+\alpha_2z_2^{-N+1}|z_2,1\rangle|u_2\rangle.
\]
The state \(|z_1,1\rangle|u_1\rangle\) should represent a
surface state for the left edge, \(z_2^{-N+1}|z_2,1\rangle|u_2\rangle\)
one for the right edge, with
\begin{eqnarray}\label{upip}
|u_\ell\rangle=
\begin{bmatrix}
\xi +z_\ell-z_\ell^{-1}\\
-\omega +\epsilon+z_\ell+z_\ell^{-1}
\end{bmatrix}
\end{eqnarray}
satisfying the equation
\(H_{k_\parallel}(z_\ell)|u_\ell\rangle=\epsilon|u_\ell\rangle\). The boundary equations
\(P_\partial(H_{k_\parallel,N}-\epsilon\mathds{1}_{2N})|\epsilon\rangle=0\)
are encoded in the boundary matrices
\begin{eqnarray*}
B_{k_\parallel}(\epsilon)=-
\begin{bmatrix}
h_1^\dagger|u_1\rangle & z_2^{-N-1}h_1^\dagger |u_2\rangle\\
z_1^{N+1}h_1|u_1\rangle& h_1|u_2\rangle
\end{bmatrix},
\end{eqnarray*}
which are, however, non-square \(4\times 2\) matrices as we have
ignored the two emergent states that in principle appear in
the ansatz. Nonetheless, since \(h_1\) is a matrix of rank one,
we can extract a square boundary matrix, namely,
\begin{align*}
\tilde{B}_{k_\parallel}(\epsilon) \!=\!\begin{bmatrix}
z_1(\xi -\omega+\epsilon+2z_1)& \! z_2^{-N}(\xi-\omega+\epsilon+2z_2)\\
z_1^N(\xi+\omega-\epsilon-2z_1^{-1})& \! z_2(\xi+\omega-\epsilon-2z_2^{-1})
\end{bmatrix} ,
\end{align*}
that properly captures the BCs for our reduced trial states.
Surface states are characterized
by the condition \(|z_1|=|z_2^{-1}|<1\). Hence, in the large-\(N\) limit,
one may set \(z_1^N=z_2^{-N}=0\). Within this approximation,
the left and right edges are effectively decoupled by virtue of their large
spatial separation.
\begin{figure}[t]
\includegraphics[width=\columnwidth]{pwave.pdf}
\caption{(Color online) Surface bands for \(\mu=1.5\), centered at \(k_\parallel=0\) (top left panel),
and \(\mu=6.5\), centered at \(k_\parallel=-\pi\) (top right panel). The shaded (gray) region
shows the bulk bands. The electrons on the right edge (dashed red curve) propagate
to the right only, and those on the left edge (solid blue curve) to the left only, that
is, the surface bands are chiral. The lower panels show the
behavior of $z_1$ (solid blue curve) and $z_2$ (dashed red curve) with $k_\parallel$.
Notice how \(z_1\) (\(z_2\)) enters (exits) the unit circle precisely when the
surface bands touch the bulk bands, as marked by vertical solid black lines.
\label{fig:pip}}
\end{figure}
In summary, the left surface band is determined by the polynomial system
\begin{align}
\label{rightedge}
\left\{
\begin{array}{l}
0=\xi -\omega +\epsilon+2z_1\\
0=\epsilon^2+2\omega \, (z_1+z_1^{-1})-(\omega^2+\xi^2+4)
\end{array}\right..
\end{align}
In the following, we will focus on the cases \(0<\mu<2\) or \(6<\mu<8\) for simplicity
(these parameter regimes are in the weak pairing phase and satisfy \(\omega\neq 0\)
for all values of \(k_\parallel\)). Notice that
\begin{eqnarray}
\label{upipeasy}
|u_1\rangle=(\xi+z_1-z_1^{-1})\begin{bmatrix} 1\\ -1\end{bmatrix}
\end{eqnarray}
due to the (top) boundary equation in Eq.\,\eqref{rightedge} (recall
also Eq.\,\eqref{upip}).
The physical solutions\cite{footpip} are surprisingly simple. They are
\begin{align*}
\epsilon&\equiv \epsilon_{\rm left}(k_\parallel)=-\xi=2\sin k_\parallel,\\
z_1&=z_1(k_\parallel)=\frac{\omega}{2}=2-\frac{\mu}{2}-\cos k_\parallel.
\end{align*}
These functions of \(k_\parallel\) represent the dispersion relation
and ``complex momentum" of surface excitations on the left
edge
for those values of \(k_\parallel\) (and {\em only} those values) such that
\(|z_1(k_\parallel)|<1\) (see Fig. \ref{fig:pip}). Notice that {\em the edge band is chiral}.
The surface band touches the bulk band at the two values of
\(k_\parallel\) such that \(|z_1(k_\parallel)|=1\). The (unnormalized) surface states
are, for large \(N\),
\[
| \epsilon_{\rm left}(k_\parallel)\rangle=\sum_{j=1}^N\Big(2-\frac{\mu}{2}-\cos k_\parallel \Big)^j|k_\parallel\rangle
|j\rangle\begin{bmatrix} 1\\ -1\end{bmatrix}.
\]
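As a consistency check, one can verify numerically that the quoted closed form solves both equations of the system in Eq.\,\eqref{rightedge}. Below is a minimal Python sketch, assuming the parametrizations \(\xi=-2\sin k_\parallel\) and \(\omega=4-\mu-2\cos k_\parallel\) read off from the closed-form expressions above:

```python
import numpy as np

def left_edge_residuals(mu, k):
    """Residuals of the left-edge system at the closed-form solution
    eps = -xi, z1 = omega/2.  The forms of xi and omega are read off
    from the closed-form expressions quoted in the text."""
    xi = -2.0 * np.sin(k)
    omega = 4.0 - mu - 2.0 * np.cos(k)
    eps, z1 = -xi, omega / 2.0
    r1 = xi - omega + eps + 2.0 * z1                                        # boundary equation
    r2 = eps**2 + 2.0 * omega * (z1 + 1.0 / z1) - (omega**2 + xi**2 + 4.0)  # dispersion
    return r1, r2

# both residuals vanish identically for generic parameters with omega != 0
for mu in (0.5, 1.5):
    for k in np.linspace(-3.0, 3.0, 7):
        r1, r2 = left_edge_residuals(mu, k)
        assert abs(r1) < 1e-12 and abs(r2) < 1e-10
```

The cancellation is exact: substituting \(z_1=\omega/2\) gives \(2\omega(z_1+z_1^{-1})=\omega^2+4\), so the second residual reduces to \(\epsilon^2-\xi^2=0\).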
Similarly, the right surface band is determined by the polynomial system
\begin{align}
\label{leftedge}
\left\{
\begin{array}{l}
0=\xi+\omega-\epsilon-2z_2^{-1}\\
0=\epsilon^2+2\omega\,(z_2+z_2^{-1})-(\omega^2+\xi^2+4)
\end{array}\right..
\end{align}
Due to the boundary equation,
\begin{eqnarray}
\label{upipeasyright}
|u_2\rangle=(\xi+z_2-z_2^{-1})\begin{bmatrix} 1\\ 1\end{bmatrix} ,
\end{eqnarray}
the physical solutions are
\begin{align*}
\epsilon&\equiv \epsilon_{\rm right}(k_\parallel)=\xi=-2\sin k_\parallel,\\
z_2&=z_2(k_\parallel)=\frac{2}{\omega}= \Big(2-\frac{\mu}{2}-\cos k_\parallel \Big)^{-1}.
\end{align*}
This surface band is also chiral, but with the {\it opposite}
chirality to that of the left edge. The right surface band
touches the bulk band at the pair of values of \(k_\parallel\) such
that \(|z_2(k_\parallel)|=1\). These values of \(k_\parallel\) are the same
as those computed for the surface band on the
left edge, due
to the fact that \(z_1(k_\parallel)=z_2(k_\parallel)^{-1}\). It is not obvious
from comparing Eqs.\,\eqref{rightedge} and \eqref{leftedge}
that this basic relationship should hold, but the actual
solutions do satisfy it. The (unnormalized) surface states
are, for large \(N\),
\[
| \epsilon_{\rm right}(k_\parallel)\rangle=\sum_{j=1}^N
\Big(2-\frac{\mu}{2}-\cos k_\parallel \Big)^{-(j-N+1)}|k_\parallel\rangle |j\rangle\begin{bmatrix} 1\\ 1\end{bmatrix}.
\]
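The reciprocity \(z_1(k_\parallel)=z_2(k_\parallel)^{-1}\), together with the right-edge closed form, can also be confirmed numerically. A minimal sketch, again assuming \(\xi=-2\sin k_\parallel\) and \(\omega=4-\mu-2\cos k_\parallel\) as read off from the closed-form solutions:

```python
import numpy as np

def right_edge_check(mu, k):
    """Residuals of the right-edge system at eps = xi, z2 = 2/omega,
    plus the product z1*z2 for the reciprocity check."""
    xi = -2.0 * np.sin(k)
    omega = 4.0 - mu - 2.0 * np.cos(k)
    eps, z2 = xi, 2.0 / omega
    r1 = xi + omega - eps - 2.0 / z2                                        # boundary equation
    r2 = eps**2 + 2.0 * omega * (z2 + 1.0 / z2) - (omega**2 + xi**2 + 4.0)  # dispersion
    return r1, r2, (omega / 2.0) * z2   # last entry should equal 1

for mu in (0.5, 1.5):
    for k in np.linspace(-3.0, 3.0, 7):
        r1, r2, prod = right_edge_check(mu, k)
        assert abs(r1) < 1e-12 and abs(r2) < 1e-10 and abs(prod - 1.0) < 1e-12
```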
The root \(z_1(k_\parallel)\) (\(z_2(k_\parallel)\)) is entirely outside
(inside) the unit circle if \(\mu<0\) or \(\mu>8\). This is a direct
indication that the system does not host surface bands in these parameter regimes.
In Fig.\,\ref{fig:pip}, we show the surface bands for two
values of the chemical potential, one for each topologically
non-trivial phase. The location of the surface bands in the
Brillouin zone is not determined by the dispersion relation,
which is itself independent of \(\mu\), but by the behavior of
the wavefunctions as witnessed by \(z_1(k_\parallel)=z_2(k_\parallel)^{-1}\).
\subsubsection{Power-law zero modes}
Here we return to the basic model Hamiltonian with three parameters \(t,\Delta, \mu\).
We consider a sheet of material rolled into a cylinder along the \(y\)-direction and half-infinite
in the \(x\)-direction. The virtual wires are
\begin{align*}
H_{k_\parallel}=&\, \mathds{1}\otimes h_{k_\parallel,0}+T\otimes h_{k_\parallel,1}
+T^\dagger \otimes h_{k_\parallel,1}^\dagger, \\
h_{k_\parallel,0}=&\begin{bmatrix}
-(\mu-4t)-2t\cos k_\parallel & -2\Delta \sin k_\parallel\\
-2\Delta \sin k_\parallel & (\mu-4t)+2t\cos k_\parallel
\end{bmatrix}, \\
h_{k_\parallel,1}=&
\begin{bmatrix}
-t& \Delta\\
-\Delta & t
\end{bmatrix}.
\end{align*}
The crystal momenta \(k_\parallel=-\pi,0\) have special significance. Since
the off-diagonal entries of \(h_{k_\parallel,0}\) vanish at these momenta, the virtual $D=1$ systems
can be interpreted as one-dimensional SCs. In particular,
\begin{align*}
h_{0,0}=
\begin{bmatrix}
-(\mu-2t)& 0\\
0 & \mu-2t
\end{bmatrix}, \
h_{-\pi,0}=
\begin{bmatrix}
-(\mu-6t) & 0\\
0 & \mu-6t
\end{bmatrix}
\end{align*}
and so the virtual chains \(H_{-\pi}\)
and \(H_{0}\) are precisely the Majorana chain of Kitaev,
at two distinct values of an effective chemical potential \(\mu'=-(\mu-4t)\mp 2t\)
{\it for the chain}. We have investigated this paradigmatic system
by analytic continuation in Refs.\,[\onlinecite{PRL,JPA,PRB1}].
If \(\mu<0\) or \(\mu>8t\), both chains are in their topologically
trivial regime. If \(0<\mu<4t\), then \(H_{0}\) is in the
non-trivial regime, but not \(H_{-\pi}\). The opposite is
true if \(4t<\mu<8t\). This analysis explains why the fermionic parity of
the ground state of the \(p+ip\) SC is odd in the weak pairing phase \cite{read00},
and suggests that one should expect surface bands crossing zero energy at
\(k_\parallel=0\) (\(k_\parallel=-\pi\)) for \( 0<\mu<4t\) (\(4t<\mu<8t\)). We already saw
some of these bands in the previous section.
Let us focus here on the virtual Kitaev chain at \(k_\parallel=0\). Its effective
chemical potential is \(\mu'=\mu-2t\). Suppose we are in a parameter regime
\[
4\Delta^2=\mu(4t-\mu),\quad 0<\mu<4t,
\]
of the full two-dimensional model. Then the \(H_{k_\parallel=0}\) virtual Kitaev chain
is in the topologically nontrivial parameter regime
\[\left (\frac{\mu'}{2t}\right)^2+\left(\frac{\Delta}{t}\right)^2=1, \quad -2t<\mu'<2t.\]
It is shown in Part I that the Majorana zero modes display an exotic power-law
profile in this regime. For the \(p+ip\) TSC these remarks imply the
following power-law zero-energy surface mode:
\[
|\epsilon=0, k_\parallel=0\rangle=\sum_{j=1}^\infty\sum_{j_1=1}^{N_1}
j\left(\frac{-2(t-\Delta)}{\mu-2t}\right)^{j}|j_1\rangle|j\rangle.
\]
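The statement that the fine-tuned curve \(4\Delta^2=\mu(4t-\mu)\) places the \(k_\parallel=0\) virtual chain exactly on the circle \((\mu'/2t)^2+(\Delta/t)^2=1\), with \(\mu'=\mu-2t\), is a short algebraic identity that can be checked symbolically:

```python
import sympy as sp

# Symbolic check: on the curve 4*Delta**2 = mu*(4t - mu), the effective
# chain parameters with mu' = mu - 2t lie exactly on the unit circle
# (mu'/2t)**2 + (Delta/t)**2 = 1.
mu, t = sp.symbols('mu t', positive=True)
Delta_sq = mu * (4 * t - mu) / 4       # Delta**2 on the fine-tuned curve
mu_eff = mu - 2 * t                    # effective chemical potential mu'
circle = (mu_eff / (2 * t))**2 + Delta_sq / t**2
assert sp.simplify(circle - 1) == 0
```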
\subsection{Majorana flat bands in a gapless $s$-wave topological superconductor}
\label{2Dswavetoponductor}
A {\em gapless} SC is characterized by a vanishing single-particle excitation gap at particular
${\mathbf{k}}$-points (or regions) of the Brillouin zone, whereas the SC order parameter
remains non-vanishing. An example in $D=2$ was
analyzed in Ref.\, [\onlinecite{Deng14}], where the nodeless
character of the $s$-wave pairing in a two-band system was tuned
to a gapless SC phase by introducing a suitable spin-orbit coupling. A remarkable
feature of this system is the presence of zero-energy Majorana
modes whose number grows with system size -- a {\em continuum} in the thermodynamic
limit, namely, a Majorana flat band (MFB) --
as long as the system is subject to open BCs along one of the two spatial
directions, but {\em not} the other. This anomalous bulk-boundary correspondence was
attributed to an asymmetric (quadratic vs. linear) closing of the bulk excitation gap near
the critical momenta. In this section, we revisit this phenomenon and show that the
indicator of bulk-boundary correspondence we introduced in Ref.\,[\onlinecite{PRL}]
captures it precisely. Furthermore, in the phase hosting a MFB, we demonstrate
by combining our Bloch ansatz with numerical root evaluation, that the characteristic
length of the MFB wavefunctions diverges as we approach the critical values of
momentum, similarly to what was observed in graphene [Eq. (\ref{locl})].
Finally, by comparing
the equilibrium Josephson current in the gapless TSC to the one of a corresponding
gapped model, we show how, similar to the case of the local DOS at the surface \cite{Deng14},
the presence of a MFB translates in principle into a substantial enhancement of the
$4\pi$-periodic supercurrent.
\begin{figure*}
\includegraphics[width = 17.5cm]{s-waveDegeneracy.pdf}
\vspace*{-3mm}
\caption{(Color online)
Energy spectrum (blue scatter plot) and degeneracy indicator $\mathcal{K}_{k_z}(0)$ for the zero
energy level (red solid line) in the large-$N$ limit for BC1 (top panel) vs.
BC2 (bottom panel) for various values of the SC pairing $\Delta$. The other
parameters are $\mu=0$, $t=\lambda=u_{cd}=1$,
$N_x = 120,$ $N_y=30$.
\label{s-wavefig}}
\end{figure*}
\subsubsection{Analysis of anomalous bulk-boundary correspondence\\ via boundary matrix}
The relevant model Hamiltonian in real space is
\[ \widehat{H}\! = \!\frac{1}{2}\sum_{{\mathbf{j}}}\Big(\hat{\Psi}^\dagger_{{\mathbf{j}}} h_{\bm{0}}\hat{\Psi}_{{\mathbf{j}}}-4\mu\Big) +
\frac{1}{2}\sum_{{\mathbf{r}}=\hat{x},\hat{z}}\!\Big(\sum_{{\mathbf{j}}}\hat{\Psi}^\dagger_{{\mathbf{j}}} h_{{\mathbf{r}}}\hat{\Psi}_{{\mathbf{j}}+{\mathbf{r}}}+ \,\text{H.c.}\Big),
\]
with respect to a local basis of fermionic operators given by
$\hat{\Psi}^\dagger_{\mathbf{j}} \equiv \begin{bmatrix}c^\dagger_{{\mathbf{j}},\uparrow} & c^\dagger_{{\mathbf{j}},\downarrow}&
d^\dagger_{{\mathbf{j}},\uparrow} & d^\dagger_{{\mathbf{j}},\downarrow}&
c_{{\mathbf{j}},\uparrow} & c_{{\mathbf{j}},\downarrow}&
d_{{\mathbf{j}},\uparrow} & d_{{\mathbf{j}},\downarrow}\end{bmatrix}$. Here,
\begin{eqnarray*}
h_{\bm{0}} &=& -\mu\tau_z + u_{cd}\tau_z\nu_z -\Delta \tau_x \nu_y \sigma_x,\\
h_{\hat{x}(\hat{z})} &=& -t\tau_z\nu_z +i\lambda \nu_x\sigma_{x(z)} ,
\end{eqnarray*}
with Pauli matrices $\tau_v, \nu_v, \sigma_v,\ v= x,y,z$
for the Nambu, orbital, and spin space, respectively. This Hamiltonian
can be verified to obey time-reversal and particle-hole symmetry, as well as
a chiral symmetry $U_K \equiv \tau_x\nu_z$.
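The quoted chiral symmetry can be verified directly by assembling the \(8\times 8\) internal matrices as Kronecker products. A minimal numerical sketch, assuming the ordering Nambu (\(\tau\)) \(\otimes\) orbital (\(\nu\)) \(\otimes\) spin (\(\sigma\)) implied by the basis \(\hat{\Psi}_{\mathbf{j}}\), with arbitrary placeholder parameter values:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):
    """Nambu (tau) x orbital (nu) x spin (sigma) ordering."""
    return np.kron(np.kron(a, b), c)

# placeholder parameter values (the symmetry holds for any of them)
mu, ucd, Delta, t, lam = 0.3, 1.0, 4.0, 1.0, 1.0
h0 = -mu * kron3(sz, s0, s0) + ucd * kron3(sz, sz, s0) - Delta * kron3(sx, sy, sx)
hx = -t * kron3(sz, sz, s0) + 1j * lam * kron3(s0, sx, sx)
hz = -t * kron3(sz, sz, s0) + 1j * lam * kron3(s0, sx, sz)
UK = kron3(sx, sz, s0)                 # chiral symmetry U_K = tau_x nu_z

# U_K is Hermitian and squares to one; all three matrices anticommute with it
assert np.allclose(UK @ UK, np.eye(8))
for h in (h0, hx, hz):
    assert np.allclose(UK @ h @ UK, -h)
```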
The topological response of the system was studied in Ref.\,[\onlinecite{Deng14}] using a
$\mathds{Z}_2 \times \mathds{Z}_2$ indicator
$(Q_{k_\parallel=0}, Q_{k_\parallel=\pi})$, where $Q_{k_\parallel}$ stands for the parity
of the partial Berry phase sum for the value of transverse momentum \cite{Notenotation} $k_\parallel$.
The bulk-boundary correspondence of the system was studied subject to two
different configurations: BC1, in which the system is periodic along $\hat{z}$ and
open along $\hat{x}$, and BC2, in which the system is periodic along $\hat{x}$
and open along $\hat{z}$. A MFB emerges along the open edges for BC1 in the
phase characterized by $(Q_{k_z=0}, Q_{k_z=\pi})=(1,1)$. No MFB exists in
the configuration BC2.
To shed light on this anomalous bulk-boundary correspondence using our
generalized Bloch theorem framework, consider first the configuration BC1.
Then, if $N_x$ denotes the size of the lattice along the $\hat{x}$ direction,
$\widehat{H}$ decouples into $N_x$ virtual wires, parametrized by the transverse
momentum $k_z$. These virtual $D=1$ Hamiltonians have the form
\begin{eqnarray*}
H_{k_z,N_x} & =& \frac{1}{2}\sum_{j=1}^{N_x}\Big(\hat{\Psi}^\dagger_{j,k_z}
h_{k_z,0}\hat{\Psi}_{j,k_z}-4\mu\Big) \\
&+& \frac{1}{2} \sum_{j=1}^{N_x-1} \! \Big(\hat{\Psi}^\dagger_{j,k_z} h_{k_z,1}\hat{\Psi}_{j+1,k_z}+\text{H.c.}\Big),
\end{eqnarray*}
where \(h_{k_z,0} \equiv h_{\bm{0}} + (e^{ik_z}h_{\hat{z}} + \text{H.c.})\) and \(h_{k_z,1} \equiv h_{\hat{x}}\).
The {\em total} number of Majorana modes hosted by each such chain (on its two ends)
is given by the degeneracy indicator introduced in Part I [Sec. VI], namely,
\( \mathcal{K} (0) \equiv \text{dim}\ \text{ker} [{B}_\infty(0)], \)
where ${B}_\infty(0)$ is the boundary matrix in the large-$N$
limit that we obtain after appropriately rescaling the extended bulk solutions
corresponding to $|z_\ell|>1$, and removing the un-normalizable extended
solutions corresponding to $|z_\ell|=1$.
We calculate the above degeneracy indicator $\mathcal{K}(0) \equiv \mathcal{K}_{k_z}(0)$ for each wire
parametrized by $k_z$, by evaluating the boundary matrix numerically.
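A practical way to evaluate \(\mathcal{K}_{k_z}(0)\) is to count the singular values of the boundary matrix that vanish to numerical precision. A minimal sketch; the input below is a stand-in matrix of known rank, while the actual \(B_\infty(0)\) must be assembled separately for each \(k_z\):

```python
import numpy as np

def kernel_dim(B, tol=1e-8):
    """Numerical dim ker(B) for a square matrix B: count singular values
    below tol relative to the largest one."""
    s = np.linalg.svd(np.asarray(B, dtype=complex), compute_uv=False)
    cutoff = tol * (s[0] if s[0] > 0 else 1.0)
    return int(np.sum(s < cutoff))

# sanity checks on matrices of known rank
assert kernel_dim(np.eye(4)) == 0
v = np.array([[1.0], [2.0], [3.0]])
assert kernel_dim(v @ v.T) == 2        # rank-one 3x3 matrix
```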
Representative results are shown in the top panel of Fig.\,\ref{s-wavefig}.
When the system is in a phase characterized by $(Q_{k_z=0}, Q_{k_z=\pi})=(1,-1)$ ($\Delta=2$)
or $(Q_{k_z=0}, Q_{k_z=\pi})=(-1,-1)$ ($\Delta=4$), there are $\mathcal{O}(N)$ chains, each of
them hosting four Majoranas (two pairs per edge). This is reflected in the four-fold degeneracy
for a continuum of values of $k_z$. The values of $k_z$ at which the excitation gap closes are also the points
at which the indicator changes its value.
The same analysis may be repeated for BC2, in which case periodic BCs are imposed
along $\hat{x}$ instead.
The resulting virtual $D=1$ systems are now parametrized by $k_x$,
with explicit expressions for the internal matrices given by
\(h_{k_x,0} = h_{\bm{0}} + (e^{ik_x}h_{\hat{x}} + \text{H.c.})\) and \(h_{k_x,1} = h_{\hat{z}}.\)
In the BC2 configuration, the degeneracy indicator remains zero, reflecting
the absence of MFBs; see the bottom panel of Fig.\,\ref{s-wavefig}.
\subsubsection{Penetration depth of flat-band Majorana modes}
Whether and how far the Majorana modes in the flat band penetrate into
the bulk is important from the point of view of scattering. Our generalized Bloch theorem
allows us to obtain a good estimate of the penetration depth without diagonalizing the system.
In the large-$N$ limit, the wavefunction corresponding to a Majorana mode for a single wire described by
$H_{k_z,N_x}$ must include left emergent solutions and decaying extended solutions, so that
\[ |\epsilon=0\rangle
=
\sum_{s=1}^{s_0}\alpha_s^-|\psi_{k_z s}^{-}\rangle+
\sum_{|z_\ell|<1}\sum_{s=1}^{s_\ell}\alpha_{\ell s}|\psi_{k_z\ell s}\rangle,
\]
for complex amplitudes $\{\alpha_s^-, \alpha_{\ell s}\}$.
The emergent solutions are perfectly localized, and so the penetration depth
is determined by the extended solutions only. The latter are labeled by the roots $\{z_\ell\}$,
computed at $\epsilon=0$, of the polynomial equation
$z^{dR}\det (H_{k_z}(z)-\epsilon\mathds{1}_8)=0$,
which is the dispersion relation. Each extended solution $|\psi_{k_z\ell s}\rangle$
corresponding to the root $z_\ell,\ |z_\ell|<1$ has penetration depth $(-\ln |z_\ell|)^{-1}$.
A useful estimate of the penetration depth $\delta_p$ of a zero energy mode may then
be obtained by taking the maximum of the individual penetration depths of the bulk solutions
\cite{Lee81}, leading to the expression
\[\delta_p \equiv (-\ln |z_p|)^{-1},\quad |z_p| \equiv \max\,\{|z_\ell|,\ |z_\ell|<1 \} . \]
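This estimate is straightforward to evaluate once the roots at \(\epsilon=0\) are known. A minimal numerical sketch using a generic polynomial root finder; the coefficient list below is a toy stand-in, and for the actual model the coefficients of \(z^{dR}\det(H_{k_z}(z))\) at each \(k_z\) must be supplied:

```python
import numpy as np

def penetration_depth(coeffs):
    """delta_p from polynomial coefficients (highest degree first, as in
    numpy.roots): take the root of largest modulus strictly inside the
    unit circle."""
    z = np.roots(coeffs)
    inside = np.abs(z[np.abs(z) < 1.0 - 1e-12])
    zp = inside.max()                  # slowest-decaying bulk solution
    return -1.0 / np.log(zp)

# toy polynomial with reciprocal roots 1/2 and 2:  z**2 - (5/2) z + 1,
# so z_p = 1/2 and delta_p = 1/ln(2)
assert abs(penetration_depth([1.0, -2.5, 1.0]) - 1.0 / np.log(2.0)) < 1e-9
```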
Since the roots $\{z_\ell\}$
depend on the value of the transverse momentum $k_z$, so does
the penetration depth $\delta_p$. As seen in Fig.\,\ref{fig:pendep},
the Majoranas penetrate more inside the bulk near the critical values of the transverse
momentum, where the excitation gap closes. At these points, the penetration
depth diverges, signifying that the corresponding Majorana excitations become
part of the bulk bands.
\begin{figure}
\includegraphics[width=0.85\columnwidth]{figpendep.pdf}
\vspace*{-2mm}
\caption{(Color online) Penetration depth (in units of the lattice constant) of
flat-band Majoranas as a function of $k_z$.
The parameters are
$\mu=0$, $u_{cd}=t=\lambda=1$, $\Delta=4$.
\label{fig:pendep}}
\end{figure}
\subsubsection{Impact of a Majorana flat band on Josephson current}
Besides resulting in an enhanced local DOS at the surface \cite{Deng14},
one expects that the MFB may impact the nature of the equilibrium (DC)
Josephson current at zero temperature. We now show (numerically) that the
Josephson current flowing through a strip of finite width is $4\pi$-periodic,
irrespective of the width of the strip. This is at variance with the behavior
expected for a gapped $D=2$ $s$-wave TSC, in which case the $4\pi$-periodic
contribution resulting from a fixed number of Majorana modes is washed
away once the strip width becomes large.
We model a SNS
junction of the SC under investigation by letting
the normal part be a weak link with the same type of hopping, spin-orbit coupling
and hybridization as the SC, but weaker by a factor of $w=0.2$.
The DC Josephson current can be calculated using the formula \cite{lesovik11}
\[
I(\phi) = \frac{2e}{\hbar}\frac{\partial E_0}{\partial \phi} =
-\frac{2e}{\hbar}\sum_{\epsilon_n>0}\frac{\partial \epsilon_n}{\partial \phi},
\]
where $E_0$ is the energy of the many-body ground state,
$\epsilon_n$ are single-particle energy levels, and $\phi$ is the SC phase
difference (or flux). As $\phi$ is varied, at the level crossings
between the many-body ground state and the low-lying energy levels associated
with the $4\pi$-periodic effect, the system continues in the state which respects
fermionic parity and time-reversal symmetry in all the virtual wires.
The upper panels of Fig.\,\ref{Josephson} show the Josephson response $I(\phi)$
of the gapless TSC under the two BCs. While in the
BC1 configuration the behavior of the current $I(\phi)$ (solid black line)
is $4\pi$-periodic, the BC2 configuration displays standard $2\pi$-periodicity, reflecting
the presence of the MFB {\em only} under BC1. The lower panels of Fig.\,\ref{Josephson}
show the Josephson response of the gapped $s$-wave TSC model
introduced and analyzed in Refs.\,[\onlinecite{swavePRL,swavePRB}]. It can be seen that
the Josephson current is now identical under BC1 and BC2, as expected
from the fact that a standard bulk-boundary correspondence is in place.
\begin{figure}
\centering
\includegraphics[width = 8cm]{figjosephson.pdf}
\vspace*{-2mm}
\caption{(Color online) Total Josephson current $I(\phi)$ (black solid line),
$2\pi$-periodic component $I_{2\pi}(\phi)$ (blue dotted line) and
$4\pi$-periodic component $I_{4\pi}(\phi)$ (red dashed line) in units of $2e/\hbar$,
as a function of flux $\phi$. Top (bottom) panels correspond to the gapless (gapped) model of
a $D=2$ $s$-wave TSC, whereas left (right) panels correspond to BC1 (BC2), respectively.
The parameters used for both models
are $\mu=0$, $u_{cd}=t=\lambda=1$, $\Delta=4$, $N_x=N_z=60$.
\label{Josephson}}
\end{figure}
Let us separate the total Josephson current $I(\phi)$ into
$2\pi$- and $4\pi$-periodic components by letting $I(\phi) = I_{2\pi}(\phi) + I_{4\pi}(\phi)$,
with
\[I_{2\pi}(\phi) \equiv \left\{
\begin{array}{lcl}
\frac{1}{2}[I(\phi)+I(\phi+2\pi)] & \text{if} & 0\le \phi < 2\pi\\[2pt]
\frac{1}{2}[I(\phi)+I(\phi-2\pi)] & \text{if} & 2\pi \le \phi < 4\pi
\end{array} \right.,\]
\vspace{-0.4cm}
\[I_{4\pi}(\phi) \equiv \left\{
\begin{array}{lcl}
\frac{1}{2}[I(\phi)-I(\phi+2\pi)] & \text{if} & 0\le \phi < 2\pi\\[2pt]
\frac{1}{2}[I(\phi)-I(\phi-2\pi)] & \text{if} & 2\pi \le \phi < 4\pi
\end{array} \right..\]
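On a uniform grid covering one \(4\pi\) period, this decomposition amounts to symmetrizing and antisymmetrizing the sampled current under a shift by \(2\pi\); a minimal sketch:

```python
import numpy as np

def split_current(I):
    """Split samples of I(phi) on a uniform grid over [0, 4pi) into the
    2pi- and 4pi-periodic components defined in the text.  The number of
    samples must be even, so that phi +/- 2pi is again a grid point."""
    I = np.asarray(I)
    shifted = np.roll(I, -len(I) // 2)   # I(phi +/- 2pi), using 4pi-periodicity
    return 0.5 * (I + shifted), 0.5 * (I - shifted)

phi = np.linspace(0.0, 4.0 * np.pi, 400, endpoint=False)
I2, I4 = split_current(np.sin(phi))        # purely 2pi-periodic input
assert np.allclose(I4, 0.0, atol=1e-12)
I2, I4 = split_current(np.sin(phi / 2.0))  # purely 4pi-periodic input
assert np.allclose(I2, 0.0, atol=1e-12)
```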
In the four panels of Fig.\,\ref{Josephson}, the $2\pi$- and $4\pi$-periodic
components are individually shown by (blue) dotted and (red) dashed lines,
respectively.
The nature of the supercurrent in the gapped TSC
(lower panels) is predominantly $2\pi$-periodic, with only a small
$4\pi$-periodic component due to the presence of a finite number of
Majoranas (two per edge). Further numerical simulations (data not shown)
reveal that the amplitude of the $2\pi$-periodic current relative to the
$4\pi$-periodic current increases linearly with the width of the strip,
so that for large strip width, the Josephson current is essentially
$2\pi$-periodic. The origin of such a degradation of the $4\pi$-periodicity
lies in the fact that the number of Majorana modes is constant,
irrespective of the width of the strip, as only one virtual wire hosts
Majorana modes in this gapped model. Since only the Majorana modes can support
$4\pi$-periodic current, their contribution relative to the extensive $2\pi$-periodic
current arising from the bulk states diminishes as the strip width becomes large.
In contrast, for the gapless TSC in the MFB phase (top panels), the number of virtual wires hosting
Majorana modes grows linearly with the width of the strip in the BC1 configuration.
This leads to an {\em extensive contribution from the $4\pi$-periodic component},
which may be easier to detect in experiments.
\section{Summary and Outlook}
\label{outlook}
As mentioned in the Introduction, this paper constitutes the sequel, Part II, to
Ref.\,[\onlinecite{PRB1}], where we introduced a generalization of Bloch's theorem
for arbitrary boundary conditions. In clean
systems translation symmetry is only broken by surface terminations and boundary constraints
that encode physical or
experimental conditions. The conventional Bloch theorem is not in force
because translational symmetry is explicitly broken. However, since such a symmetry is only
mildly broken, one wonders whether one can continue to label single-particle electronic excitations
in terms of some kind of ``generalized momenta". Our generalized Bloch theorem \cite{PRL,PRB1}
provides a precise answer to that question. The mathematical framework makes the idea of
approximate translation precise by relating the spectral properties of certain shift operators
to non-unitary representations of the group of translations \cite{JPA}. According to the
generalized Bloch theorem, the exact eigenstates of a clean system of independent
fermions with terminations are linear combinations of eigenstates of non-unitarily represented translations.
It is because of this lack of unitarity that complex momenta arise. The latter leads to the emergence of
localized edge modes and more involved power-law corrections to the Bloch-like wavefunctions.
The amplitudes that weigh the relative contribution of the generalized Bloch states to the
exact energy eigenstates are determined by a boundary matrix. This piece of our formalism,
the {\em boundary matrix}, optimally combines information about the translation-invariant bulk
and the boundary conditions: it allows one to compactly parametrize the manifold of boundary
conditions and may eventually suggest new ways of accessing effective edge theories.
Part II focused on presenting two new theoretical developments and several non-trivial
applications to higher-dimensional systems. New developments include the extension of the
generalized Bloch theorem formalism to incorporate:
(1) Surface reconstruction and surface disorder; and (2) Interface physics involving multiple bulks.
Within our framework, boundary conditions for \(D\)-dimensional systems must be imposed
on two parallel hyperplanes, but are otherwise arbitrary. Thus, the generalized Bloch theorem
yields highly-effective tools for diagonalizing systems subject to anything from pristine
terminations to surface relaxation, reconstruction and disorder. The extension to interfaces
between multiple bulks allows us to study arbitrary junctions, including interface modes
resulting from putting in contact two exotic topologically non-trivial bulks.
It is interesting to digress on what happens when one tries to formulate a generalized Bloch
theorem for clean systems cut into hypercubes. The bulk-boundary separation goes through
essentially unchanged: for example, the range of the boundary projector consists of a
hypercubic surface layer of thickness determined by the bulk structure of the system. The
challenge in higher dimensions is solving the bulk equation explicitly and in full generality.
It is a worthy challenge, because it would yield insight into the plethora of corner states that
can appear in such systems \cite{benalcazar17,hashimoto17,flore18}. While special
cases may still be handled on a case-by-case basis, in general we see little hope
of using the same mathematical techniques (crucially, the Smith decomposition \cite{JPA})
that work so well in our setup. In general, the analytic continuation of the Bloch Hamiltonian becomes
a matrix-valued analytic function of \(D\) complex variables. The passage from one complex
variable to several makes a critical difference.
We have illustrated our formalism with several applications to models of
current interest in condensed matter physics. Table \ref{MainTable}
summarizes all systems that we have solved so far by our techniques,
where exact analytic solutions were unknown prior to our
findings, to the best of our knowledge.
For example, we showed that it is possible to {\em analytically} determine
Andreev bound states for an idealized SNS junction.
More importantly, the existence of power-law modes
would not have been unveiled without our mathematical formalism.
Among the challenging applications presented in this paper, we investigated
in detail the Creutz ladder system, where thanks to a
Gaussian duality \cite{equivalence}, we can map this topological insulator
to a pair of coupled Kitaev Majorana chains. The presence
of power-law topological modes in the Creutz ladder insulator is noteworthy,
see Sec.\,\ref{creutzladder}. We also find power-law modes on the surface of
the \(p+ip\) chiral superconductor as part of our closed-form full calculation
of the surface states of this system, see Sec.\,\ref{pwavetoponductor}. It
seems reasonable now to accept that power-law modes, topological or otherwise,
are a general, if fine-tuned, feature of {\it short range} tight-binding models.
We have also included applications
to other $D=2$ systems, such as the full closed-form diagonalization of graphene
ribbons for zigzag-bearded and armchair surface terminations. While the edge
modes for zigzag-bearded graphene have been computed before in closed form,
the closed-form band states appear to be new in the literature. It seems a distinctive
feature of the generalized Bloch theorem that {\em both} edge and bulk bands can
be treated analytically on equal footing. Finally, we investigated in detail the Majorana
flat bands of the gapless $s$-wave topological superconductor we previously introduced
\cite{Deng14}. There, we find an extensive contribution of the surface Majorana flat band to the
$4\pi$-periodic component of the Josephson current, which would serve as a smoking gun
for experimental detection should a candidate material realization be identified.
In view of these results it seems fair to grant that
the generalized Bloch theorem bestows a higher level of control over surface and interface
physics, and opens the door for a deeper investigation of the interplay between surface/interface
and bulk critical phenomena \cite{book8,quelle15,kempkes16}. Let us conclude by recalling a main
motivation behind the formulation of our generalized Bloch theorem. That motivation was to investigate
the bulk-boundary correspondence in {\it boundary space}, that is, the space of boundary conditions,
as opposed to the usual parameter space, in order to quantitatively express stability and
robustness in this new space that clearly affects boundary invariants most directly
\cite{prodanBook}. Physically, boundary conditions are idealized representations of interfaces
between the system of interest and an ``environment" that we choose not to characterize,
and so they capture matching conditions that can have a big impact on the
energy spectrum of the system. This interpretation suggests that it might be very
illuminating to bring closer together precise mathematical ideas of stability and robustness from quantum
information processing and control engineering, and more qualitative concepts in condensed matter physics.
We have not carried out this systematic task in this paper which is, strictly speaking, still an exploration of the
power of the generalized Bloch theorem. We will return to the study of the relation between boundary and bulk
topological invariants in future publications.
\section*{Acknowledgements}
Work at Dartmouth was partially supported by the NSF through Grant No.
PHY-1066293 and the Constance and Walter Burke Special Projects Fund in Quantum
Information Science.
\section{Introduction}
\label{Introduction}
Type Iax supernovae (SNe) are low luminosity and less energetic cousins of Type Ia SNe \citep{2003PASP..115..453L,2013ApJ...767...57F}. Type Iax SNe are known to have a wide range of luminosities (M$_{r}$ = $-$12.7 mag, \citealt{Karambelkar_2021} to M$_{V}$ = $-$18.4 mag, \citealt{2011ApJ...731L..11N}). There are bright members such as SNe 2011ay \citep{2015MNRAS.453.2103S,2017MNRAS.471.4865B}, 2012Z \citep{2015A&A...573A...2S} and faint members like SNe~2008ha \citep{2009AJ....138..376F, 2009Natur.459..674V}, 2010ae \citep{2014A&A...561A.146S}, 2019gsc \citep{2020ApJ...892L..24S, 2020MNRAS.496.1132T} and 2021fcg \citep{Karambelkar_2021}. However, relatively faint Type Iax SNe dominate over the bright ones \citep{2011MNRAS.412.1441L, 2017ApJ...837..121G}. Though the sample size of Type Iax SNe is increasing with new discoveries by ongoing transient surveys, the progenitor and explosion mechanism of these peculiar objects are still debated. In order to understand them better, a detailed study of individual candidates is important.
The pre-maximum spectra of Type Iax SNe are dominated by Intermediate Mass Elements (IMEs) and Iron Group Elements (IGEs), along with C and O features. The pre-maximum spectral features are similar to those of SN 1991T-like Type Ia SNe \citep{2013ApJ...767...57F, 2014ApJ...786..134M}, with weak Si {\sc II}, S {\sc II}, Ca {\sc II} lines and strong high excitation features such as Fe {\sc III}. Measured expansion velocities of Type Iax SNe close to maximum lie between 2000 km s$^{-1}$ and 8000 km s$^{-1}$ \citep{2009AJ....138..376F,2014A&A...561A.146S}, which is significantly less than the expansion velocities associated with Type Ia SNe ($\sim$ 11000 km s$^{-1}$, \citealt{Wang_2009,2013ApJ...767...57F}). Type Iax SNe show different spectroscopic behaviour, especially at the nebular phase, with the presence of permitted Fe {\sc II} lines \citep{2008ApJ...680..580S,2017hsn..book..375J}.
The progenitor systems of these explosions are not yet fully understood. Deep pre-explosion images are available for a few Type Iax SNe. In the case of SN 2012Z, the analysis of the pre-explosion image led \cite{2014Natur.512...54M} to suggest that the most favoured progenitor of this class could be a white dwarf in a binary system with a Helium star as a companion. Nevertheless, the possibility of a single star as the progenitor was not completely ruled out in their work. Based on the pre-explosion images of SN 2014dt, \cite{2015ApJ...798L..37F} suggested that a C-O white dwarf in association with a Helium star can be a plausible progenitor system. Moreover, possible detections of Helium features in SNe 2004cs and 2007J were presented by \cite{2013ApJ...767...57F}. Detailed spectroscopic studies for a sample of Type Iax SNe, however, resulted in a null detection of Helium features \citep{2015ApJ...799...52W,2019MNRAS.487.2538J,2019A&A...622A.102M}. Hence, a binary system with a Helium star companion of the progenitor white dwarf is still debated.
The low luminosity and less energetic nature of Type Iax SNe suggests an incomplete disruption of the white dwarf, which could leave a bound remnant.
The presence of P-Cygni lines and forbidden lines in the late phase spectra has been attributed to a centrally located optically thick high density region and optically thin SN ejecta, respectively \citep{2006AJ....132..189J,2008ApJ...680..580S}, suggesting a two-component structure of the ejecta. \cite{2014ApJ...792...29F} presented late time observations of SN 2008ha and discussed the possibility of detecting the remnant. The observed IR excess seen in the late time light curves of SN 2014dt \citep{2016ApJ...816L..13F} was explained as arising from a bound remnant with an extended optically thick super-Eddington wind. Based on a late phase spectroscopic study of a larger sample, \cite{2016MNRAS.461..433F} have also proposed a two-component model for the ejecta of SNe Iax. The possibility of the presence of a bound remnant in these explosions has also been discussed in \cite{2014ApJ...786..134M,Shen_2017,Vennes680,2018PASJ...70..111K,Shen_2018,2019MNRAS.489.1489R,10.1093/pasj/psab075} and \cite{2022ApJ...925..138M}.
\cite{2012ApJ...761L..23J}, \cite{2013MNRAS.429.2287K} and \cite{2014MNRAS.438.1762F} presented deflagration models of C-O white dwarfs that reproduce most of the observed features of relatively bright Type Iax SNe. A disk detonation associated with the merger of a white dwarf with a neutron star or black hole \citep{Fern_ndez_2013} can account for some properties seen in Type Iax SNe. On the other hand, to explain the observed properties of faint Type Iax SNe, several channels have been proposed, e.g. mergers involving C-O and O-Ne white dwarfs \citep{2018ApJ...869..140K}, partial deflagration of a hybrid C-O-Ne white dwarf \citep{2015MNRAS.447.2696D,2015MNRAS.450.3045K,2016A&A...589A..38B}, deflagrations of C-O white dwarfs \citep{2022A&A...658A.179L}, a core collapse scenario \citep{2010ApJ...719.1445M}, O-Ne white dwarf and neutron star/black hole mergers \citep{2022MNRAS.510.3758B}, and an electron capture SN scenario \citep{Pumo_2009}.
In this paper we present photometric and spectroscopic analysis of a bright Type Iax SN~2020rea. Section \ref{Discovery, observation and data reduction} mentions the discovery, follow-up and procedures used to reduce the data of SN~2020rea. A short description on the adopted distance and extinction is presented in Section \ref{distance_extinction}. In Section \ref{analaysis_light_curve}, the photometric properties of SN~2020rea are discussed. The bolometric light curve, its fitting with analytical models to infer the explosion parameters, and the comparison with deflagration models are presented in Section \ref{bolometric_light_curve}.
Section \ref{spectral_properties} provides spectral studies of SN~2020rea and its host galaxy. A comparison of the observed features of SN~2020rea with the proposed explosion scenario for SNe Type Iax is made in Section \ref{explosion_sscenario}. Finally, a summary of this study is presented at the end of the paper in Section \ref{summary}.
\input{Table1}
\section{Discovery, observation and data reduction}
\label{Discovery, observation and data reduction}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{ds9_subplot_2020rea.pdf}
\end{center}
\caption{Location of SN~2020rea in UGC 10655. This image was acquired on August 22, 2020 in the {\it V}-band with the 1m LCO telescope.}
\label{fig:ds9_2020rea}
\end{figure}
\input{phot_table_new_copy_w}
\input{Table4}
SN~2020rea was spotted by the Supernova and Gravitational Lenses Follow up (SGLF) team in the Zwicky Transient Facility (ZTF) data \citep{2020TNSTR2463....1P} on August 11, 2020 (JD=2459072.702) in the host galaxy UGC 10655 at a redshift of 0.02869$\pm$0.00015 \citep{1999PASP..111..438F}. It was classified as a Type Ia-pec SN by \cite{2020TNSCR2512....1P}. Figure \ref{fig:ds9_2020rea} shows the location of SN~2020rea in UGC 10655. The details of SN~2020rea and its host galaxy are given in Table \ref{tab:SN2020rea_detail}.
Optical photometric follow-up of SN~2020rea was initiated $\sim$ 6 days after discovery with the telescopes of the Las Cumbres Observatory (LCO; \citealt{2013PASP..125.1031B})
under the Global Supernova Project (GSP) in {\it BgVri} bands. SN~2020rea is located close to its host galaxy, hence we performed template subtraction to estimate the true SN flux. The templates were observed in {\it BgVri} bands on May 27, 2021, $\sim$ 8 months after the discovery. The template subtraction was performed using \texttt{PyZOGY} \citep{2017zndo...1043973G}. The \texttt{lcogtsnpipe} pipeline \citep{2016MNRAS.459.3939V} was used to estimate the SN magnitudes. Calibration of the instrumental magnitudes was done using the APASS catalog. The calibrated photometric magnitudes of SN~2020rea are listed in Table \ref{tab:photometric_observational_log_2020rea}.
Spectroscopic follow up of SN~2020rea was initiated $\sim$ 5 days after discovery and lasted $\sim$ 1 month, using the FLOYDS spectrograph on the 2m Faulkes Telescope North (FTN). The FLOYDS spectrograph covers a wavelength range of 3300--11000 \AA\ with a resolving power between 400 and 700. We have used the \texttt{floydsspec}\footnote{https://www.authorea.com/users/598/articles/6566} pipeline to perform the spectral reduction. Finally, the spectra were scaled with respect to the photometry and corrected for redshift. The log of spectroscopic observations is presented in Table \ref{tab:spectroscopic_observations_20rea}.
\section{Distance and extinction}
\label{distance_extinction}
Assuming $H_0$ = 73 km s$^{-1}$ Mpc$^{-1}$, $\Omega_m$ = 0.27, $\Omega_v$ = 0.73 and a redshift of 0.02869$\pm$0.00015, we estimate the luminosity distance of SN~2020rea to be 120.5$\pm$6.7 Mpc, corresponding to a distance modulus of 35.40 $\pm$ 0.12 mag. We quote the error from the HyperLeda database \citep{2014A&A...570A..13M}. The Galactic extinction along the line of sight to SN~2020rea is {\it E(B-V)} = 0.02 mag \citep{2011ApJ...737..103S}. SN~2020rea lies close to its host galaxy and hence extinction due to the host galaxy is also expected. To estimate the host galaxy extinction, we used the equivalent width of the Na {\sc i}D line in the spectra. The initial spectral sequence of SN~2020rea shows the presence of a strong Na {\sc i}D line. We measured its equivalent width in a spectrum obtained by combining two spectra of SN~2020rea close to maximum (Figure \ref{fig:SN 2020rea_spectra_plot}). The estimated equivalent width is 0.66$\pm$0.06 \AA, which translates to {\it E(B-V)} = $0.08\pm 0.02$ mag using the relation given in \cite{2012MNRAS.426.1465P}. Thus, the total extinction due to the combination of the Galactic and host components is {\it E(B-V)} = $0.10\pm0.02$ mag ({A${_V}$} = 0.31 mag assuming R${_V}$ = 3.1).
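As a cross-check, the distance modulus and total extinction quoted above follow from a few lines of arithmetic. The sketch below assumes the \cite{2012MNRAS.426.1465P} relation for the blended Na {\sc i}D doublet in the approximate form $\log_{10} E(B-V) = 1.17\,\mathrm{EW} - 1.85$; the coefficients are quoted from memory and should be checked against the original calibration.

```python
import math

d_L = 120.5e6                            # adopted luminosity distance in pc
mu = 5.0 * math.log10(d_L) - 5.0         # distance modulus, ~35.40 mag

# Host reddening from the Na I D equivalent width (blended doublet),
# assuming log10 E(B-V) = 1.17*EW - 1.85 (Poznanski et al. 2012, approximate)
ew = 0.66                                # measured equivalent width (Angstrom)
ebv_host = 10.0**(1.17 * ew - 1.85)      # ~0.08 mag
ebv_total = 0.02 + ebv_host              # add the Galactic component
A_V = 3.1 * ebv_total                    # ~0.31-0.32 mag for R_V = 3.1
```

The numbers recovered here match the values quoted in the text to within rounding.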
\section{Analysis of the light curve}
\label{analaysis_light_curve}
Figure \ref{fig:SN 2020rea_light_curve} shows the light curve evolution of SN 2020rea in {\it BgVri} bands. The peak phase is well covered in all the bands except the {\it B}-band. To estimate the peak time and peak magnitude in the {\it B}-band, a chi-square minimization based template fitting method was used, and the best match was found with SN 2005hk. The best fit indicates that SN~2020rea peaked at JD = 2459083.5$\pm$1 with a peak magnitude of 17.33$\pm$0.07 mag in the {\it B}-band. With these estimates, the light curve decline rate ($\Delta$m$_{15}$) of SN~2020rea in the {\it B}-band is 1.61$\pm$0.14 mag. In the other bands, the peak magnitude and peak time are estimated by fitting a low order spline to the light curve.
The respective decline rates ($\Delta$m$_{15}$) in the {\it g}, {\it V}, {\it r} and {\it i}-bands are 1.31$\pm$0.08 mag, 0.54$\pm$0.05 mag, 0.46$\pm$0.05 mag and 0.50$\pm$0.04 mag. The peaks in the {\it g} and {\it V} bands occur on JD = 2459084.74 and 2459084.77 at magnitudes of 17.34$\pm$0.03 mag and 17.40$\pm$0.03 mag, respectively. We use the epoch of {\it g}-band maximum as the reference throughout the paper.
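The peak and decline-rate measurements described above amount to fitting a smooth curve through the photometry and reading off the minimum. A minimal sketch with toy data (a simple quadratic stand-in, not the actual SN~2020rea photometry, and a low-order polynomial in place of a spline):

```python
import numpy as np

# toy g-band light curve: quadratic around peak with Delta m15 = 1.31 mag
phase = np.linspace(-8.0, 25.0, 18)            # days from true peak
mag = 17.34 + (1.31 / 15.0**2) * phase**2      # hypothetical photometry

# fit a low-order polynomial (stand-in for the low-order spline)
coeffs = np.polyfit(phase, mag, 4)
grid = np.linspace(phase.min(), phase.max(), 4000)
fit = np.polyval(coeffs, grid)

i_pk = np.argmin(fit)                          # epoch of minimum magnitude
t_peak, m_peak = grid[i_pk], fit[i_pk]
dm15 = np.polyval(coeffs, t_peak + 15.0) - m_peak   # decline in 15 days
```

On real data the same procedure is applied to each band separately, with the fit restricted to the epochs around maximum.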
We compare the light curve characteristics of SN~2020rea with other well studied Type Iax SNe. The comparison sample is chosen to represent the wide luminosity range of the class and includes SNe 2002cx \citep{2003PASP..115..453L}, 2005hk \citep{2008ApJ...680..580S}, 2008ha \citep{2009AJ....138..376F}, 2010ae \citep{2014A&A...561A.146S}, 2011ay \citep{2015MNRAS.453.2103S}, 2012Z \citep{2015A&A...573A...2S,2015ApJ...806..191Y}, 2019muj \citep{2021MNRAS.501.1078B,10.1093/pasj/psab075} and 2019gsc \citep{2020ApJ...892L..24S}. Figure \ref{fig:comp_light_curve_BgVri_2020rea} presents the magnitudes of each SN normalized with respect to the peak magnitude in the respective bands. In the {\it B}-band, SN~2020rea declines faster than SNe 2002cx and 2011ay and follows a similar evolution to SNe 2005hk and 2012Z up to $\sim$ 20 days after maximum, whereas at later epochs it declines faster than SN 2005hk and shows similarity with SN 2019muj. In the {\it V}-band, SN 2020rea shows resemblance to SNe 2005hk and 2012Z. The early time evolution of the {\it g}-band light curve of SN~2020rea ($\Delta$m$_{15}$(g) = 1.31$\pm$0.08 mag) is similar to SNe 2005hk ($\Delta$m$_{15}$(g) = 1.36$\pm$0.01 mag, \citealt{2015A&A...573A...2S}) and 2012Z ($\Delta$m$_{15}$(g) = 1.31$\pm$0.01 mag, \citealt{2015A&A...573A...2S}), whereas in the {\it r}-band SN 2020rea ($\Delta$m$_{15}$(r) = 0.46$\pm$0.05 mag) declines slightly slower than SNe 2005hk ($\Delta$m$_{15}$(r) = 0.70$\pm$0.02 mag, \citealt{2015A&A...573A...2S}) and 2012Z ($\Delta$m$_{15}$(r) = 0.66$\pm$0.02 mag, \citealt{2015A&A...573A...2S}) (Figure \ref{fig:comp_light_curve_BgVri_2020rea}). In the {\it i}-band, SN~2020rea ($\Delta$m$_{15}$(i) = 0.50$\pm$0.04 mag) shows similarity with SN 2012Z ($\Delta$m$_{15}$(i) = 0.54$\pm$0.04 mag, \citealt{2015A&A...573A...2S}) and declines slower than SN 2005hk ($\Delta$m$_{15}$(i) = 0.60$\pm$0.01 mag, \citealt{2015A&A...573A...2S}).
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{20rea_light_curve_updated.pdf}
\end{center}
\caption{Light curve evolution of SN~2020rea in {\it BgVri} bands. The light curves in all bands are shifted for clarity. In the right Y axis, corresponding absolute magnitudes for each band are presented. The template light curve of SN 2005hk used for estimating the peak magnitude and time of SN 2020rea in {\it B} band is also shown in the figure with dashed line.}
\label{fig:SN 2020rea_light_curve}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{comp_light_curve_BgVri_2020rea.pdf}
\end{center}
\caption{Light curves of SN~2020rea in the {\it BgVri} bands and its comparison with other Type Iax SNe. Here, comparison plots in {\it B} and {\it V} bands are made with respect to maximum in {\it B} band while in {\it gri} bands comparison plots are constructed with respect to {\it g} band maximum.}
\label{fig:comp_light_curve_BgVri_2020rea}
\end{figure}
Figure \ref{fig:SN 2020rea_colour_curve} presents the reddening corrected {\it (B-V)}, {\it (V-I)}, {\it (V-R)} and {\it (R-I)} colour evolution of SN~2020rea and its comparison with other Type Iax SNe. For SNe 2020rea and 2010ae, we have used the formulations given in \cite{2006A&A...460..339J} to convert {\it ri} magnitudes into {\it RI} magnitudes. The {\it (B-V)}, {\it (V-I)}, {\it (V-R)} and {\it (R-I)} colour evolution of SN~2020rea follows a trend similar to the other Type Iax SNe used for comparison. We have used the date of {\it B} band maximum as reference for SNe 2002cx and 2011ay and the date of {\it g} band maximum for all the other SNe used for comparison.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{colorcurve_2020rea.pdf}
\end{center}
\caption{The colour evolution of SN~2020rea and its comparison with colours of other well studied Type Iax SNe.}
\label{fig:SN 2020rea_colour_curve}
\end{figure}
Using the distance and extinction given in Section \ref{distance_extinction}, we estimate the peak absolute magnitude of SN 2020rea in the {\it V}-band to be $-$18.30$\pm$0.12 mag. This is similar to SNe 2011ay \citep{2015A&A...573A...2S} and 2012Z \citep{2015A&A...573A...2S}, and brighter than SNe 2002cx \citep{2003PASP..115..453L}, 2005hk \citep{2008ApJ...680..580S} and 2014dt \citep{2018MNRAS.474.2551S}. Absolute magnitudes of SN~2020rea in {\it BgVri} bands are presented in Figure \ref{fig:SN 2020rea_light_curve}.
\section{Light curve modelling}
\label{bolometric_light_curve}
We construct the pseudo-bolometric light curve of SN~2020rea using extinction corrected magnitudes in {\it BgVri} bands. For the epoch JD~2459078.8, the $B$-band photometry is missing and hence we fit the $B$-band light curve with the template of SN~2005hk \citep{2008ApJ...680..580S} to estimate the magnitude. The extinction corrected magnitudes were converted to fluxes using zero points from the SVO filter profile service\footnote{\url{http://svo2.cab.inta-csic.es/theory/fps/index.php?mode=browse&gname=LCO&asttype=}} \citep{2020sea..confE.182R}. These fluxes were used to generate the spectral energy distribution (SED) at each epoch, which was then integrated using the trapezoidal rule between 4000 and 9000 \AA\ to get the pseudo-bolometric flux. The contribution of UV and IR flux to the total bolometric flux is not well constrained for Type Iax SNe; it is typically estimated to lie in the range 10\% to 53\% \citep{2007PASP..119..360P,2015ApJ...806..191Y,2016MNRAS.459.1018T,2020MNRAS.496.1132T,2020ApJ...892L..24S,2022ApJ...925..217D}. Due to the unavailability of data in the UV and IR bands, we use pseudo-bolometric fluxes and report lower limits on the explosion parameters.
The integrated fluxes are converted to luminosity using the distance modulus $\mu$ = 35.40 $\pm$ 0.12 mag. The peak pseudo-bolometric luminosity of SN~2020rea is (3.09 $\pm$ 0.27) $\times$ 10$^{42}$ erg s$^{-1}$ and it occurred at JD 2459087.26 about 2.52 days after maximum in $g$-band. For direct comparison, we also estimate the pseudo-bolometric light curve of SN~2012Z using {\it BgVri} data with $E(B-V)$ = 0.11 $\pm$ 0.03 mag \citep{2015A&A...573A...2S} and distance modulus of 32.34 $\pm$ 0.28 mag, obtained using the luminosity distance of 29.4$\pm$3.8 Mpc. The peak pseudo-bolometric luminosity of SN~2012Z is (2.82 $\pm$ 0.58) $\times$ 10$^{42}$ erg s$^{-1}$ at JD 2455972.0. The peak pseudo-bolometric luminosity of SN~2020rea is slightly higher than SN 2012Z and lies towards the brighter end of the luminosity distribution of Type Iax SNe. Figure \ref{fig:2020rea_2012Z_deflag_bol_light_curve} shows the pseudo-bolometric light curves of SNe~2020rea and 2012Z.
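The construction of the pseudo-bolometric luminosity described above can be sketched numerically. The zero points and magnitudes below are illustrative placeholders only (the actual values come from the SVO filter profile service and the measured photometry):

```python
import numpy as np

# effective wavelengths (Angstrom) and zero points (erg/s/cm^2/A) for BgVri;
# illustrative stand-ins, not the actual SVO filter-service values
wave = np.array([4361., 4770., 5448., 6215., 7545.])
zp   = np.array([6.32e-9, 5.06e-9, 3.63e-9, 2.50e-9, 1.30e-9])
mags = np.array([17.33, 17.34, 17.40, 17.20, 17.30])   # extinction corrected

flux = zp * 10.0**(-0.4 * mags)         # monochromatic flux in each band
f_pseudo = np.trapz(flux, wave)         # trapezoidal SED integration

d_cm = 120.5 * 3.086e24                 # adopted distance in cm
L_pseudo = 4.0 * np.pi * d_cm**2 * f_pseudo   # erg/s, of order 10^42
```

With realistic peak magnitudes this recipe yields a luminosity of a few $\times 10^{42}$ erg s$^{-1}$, consistent with the peak value quoted above.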
To constrain the amount of $^{56}$Ni synthesized during the explosion we used a radiation diffusion model (\citealt{1982ApJ...253..785A, 2008MNRAS.383.1485V,2012ApJ...746..121C}) which takes into account energy generated through radioactive decay from $^{56}$Ni $\rightarrow$ $^{56}$Co $\rightarrow$ $^{56}$Fe and also includes $\gamma$-ray escape from the ejecta.
The output luminosity is expressed as
\begin{equation}
\begin{split}
\label{eq:Arnett}
L(t) = M_{\rm{Ni}} \mathrm{e}^{-x^{2}} [(\epsilon_{\rm{Ni}}-\epsilon_{\rm{Co}})\int_0^x 2 z \mathrm{e}^{z^{2}-2zy}\,\mathrm{d}z\\
+ \epsilon_{\rm{Co}}\int_0^x 2 z \mathrm{e}^{z^{2}-2yz+2zs}\,\mathrm{d}z]( 1 - \mathrm{e}^{-{(\frac{t_{\gamma}}{t}})^{2}})
\end{split}
\end{equation}
\noindent
where $t$ (days) is the time since explosion, \(t_{\rm{lc}}\) is time scale of the light curve, $t_\gamma$ is gamma ray time scale, \(M_{\rm{Ni}}\) is initial mass of $^{56}$Ni, $x$ $\equiv$ $t$/$t$\(_{\rm{lc}}\), $y$ $\equiv$ \(t_{\rm{lc}}\)/(2\(t_{\rm{Ni}}\)) and $s$ $\equiv$ [\(t_{\rm{lc}}\)(\(t_{\rm{Co}}\) - \(t_{\rm{Ni}}\))/(2\(t_{\rm{Co}}\)\(t_{\rm{Ni}}\))] with \(t_{\rm{Ni}}\) = 8.8~d and \(t_{\rm{Co}}\) = 111.3~d, respectively. The rate of energy generation due to Ni and Co decay are \(\rm \epsilon_{Ni} = 3.9 \times 10^{10}\ erg\ s^{-1}\ g^{-1}\) and \(\rm \epsilon_{Co} = 6.8 \times 10^{9}\ erg\ s^{-1}\ g^{-1}\), respectively. The free parameters in the model are epoch of explosion $t_{expl}$, $M_{\rm{Ni}}$, $t_\gamma$ and $t_{\rm{lc}}$.
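A direct numerical implementation of Equation \ref{eq:Arnett} is straightforward; the sketch below evaluates the integrals with simple trapezoidal quadrature rather than an adaptive scheme, and takes $M_{\rm Ni}$ in solar masses for convenience:

```python
import numpy as np

M_SUN = 1.989e33                  # g
EPS_NI, EPS_CO = 3.9e10, 6.8e9    # energy generation rates (erg/s/g)
T_NI, T_CO = 8.8, 111.3           # decay time scales (days)

def arnett_lum(t, m_ni, t_lc, t_gamma):
    """Radiation-diffusion luminosity in erg/s; times in days, m_ni in M_sun."""
    x = t / t_lc
    y = t_lc / (2.0 * T_NI)
    s = t_lc * (T_CO - T_NI) / (2.0 * T_CO * T_NI)
    z = np.linspace(0.0, x, 2000)
    i_ni = np.trapz(2.0 * z * np.exp(z**2 - 2.0 * z * y), z)
    i_co = np.trapz(2.0 * z * np.exp(z**2 - 2.0 * y * z + 2.0 * z * s), z)
    trap = 1.0 - np.exp(-(t_gamma / t)**2)        # gamma-ray trapping factor
    return m_ni * M_SUN * np.exp(-x**2) * (
        (EPS_NI - EPS_CO) * i_ni + EPS_CO * i_co) * trap

# evaluated with the best-fit parameters quoted below for SN 2020rea
t = np.linspace(1.0, 60.0, 300)
L = np.array([arnett_lum(ti, 0.13, 12.36, 43.6) for ti in t])
```

With these parameters the model peaks at a few $\times 10^{42}$ erg s$^{-1}$ roughly two weeks after explosion, as expected from Arnett's rule.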
The mass of ejecta (\(M_{\rm{ej}}\)) and kinetic energy (\(E_{\rm{K}}\)) are expressed as
\begin{equation}
\label{eq:EjectaMass}
M_{\rm{ej}} = 0.5 \frac{\beta c}{\kappa} v_{exp}t_{lc}^{2}
\end{equation}
\begin{equation}
\label{eq:KineticEnergy}
E_{\rm{K}} = 0.3 M_{ej} v_{exp}^{2}
\end{equation}
\noindent
where $v_{exp}$, $c$ and $\beta$ (= 13.8) are the expansion velocity of the ejecta, the speed of light, and the constant of integration, respectively.
The fit of the radiation diffusion model to the pseudo-bolometric light curve of SN~2020rea gives $^{56}$Ni = 0.13$^{+0.01}_{-0.01}$ M$_{\odot}$, $t_{\rm lc}$ = 12.36$^{+0.9}_{-1.75}$ days, $t_{\rm \gamma}$ = 43.60$^{+2.4}_{-1.7}$ days and $JD_{\rm exp}$ = 2459070.64$^{+1.45}_{-0.76}$. The ejecta mass for SN 2020rea is estimated as $M_{\rm ej}$ = 0.77$^{+0.11}_{-0.21}$ M$_{\odot}$ and the kinetic energy as $E_{\rm K}$ = 0.19$^{+0.02}_{-0.06}$ $\times$ 10$^{51}$ erg, using a constant opacity $\kappa_{\rm opt}$ = 0.1 cm$^{2}$g$^{-1}$ and $v_{\rm exp}$ of 6500 km s$^{-1}$, close to maximum light.
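As an arithmetic check, Equations \ref{eq:EjectaMass} and \ref{eq:KineticEnergy} with the fitted light-curve time scale and the adopted opacity and velocity reproduce the quoted values:

```python
M_SUN = 1.989e33          # g
c = 3.0e10                # speed of light (cm/s)
beta, kappa = 13.8, 0.1   # integration constant; opacity (cm^2/g)
v_exp = 6500.0e5          # expansion velocity (cm/s)
t_lc = 12.36 * 86400.0    # light-curve time scale (s)

M_ej = 0.5 * beta * c / kappa * v_exp * t_lc**2   # ejecta mass in g
E_K = 0.3 * M_ej * v_exp**2                       # kinetic energy in erg
# M_ej / M_SUN ~ 0.77 and E_K ~ 0.19e51, matching the quoted estimates
```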
We repeat the same exercise for the pseudo-bolometric light curve of SN~2012Z. We get $^{56}$Ni = 0.12$^{+0.01}_{-0.01}$ M$_{\odot}$, $t_{\rm lc}$ = 14.19$^{+0.8}_{-1.2}$ days, $t_{\rm \gamma}$ = 43.68$^{+1.2}_{-1.5}$ days and $JD_{\rm exp}$ = 2455954.39$^{+0.5}_{-0.37}$. Using an expansion velocity of 7000 km~s$^{-1}$ and the same constant optical opacity, we get $M_{\rm ej}$ = 1.09$^{+0.12}_{-0.19}$ M$_{\odot}$ and $E_{\rm K}$ = 0.32$^{+0.04}_{-0.05}$ $\times$ 10$^{51}$ erg. The values of $^{56}$Ni mass, ejecta mass and kinetic energy estimated by \cite{2014A&A...561A.146S} for SN 2012Z are 0.25--0.29 $M_{\odot}$, 1.4--2.6 $M_{\odot}$ and 0.7--2.8 $\times$ 10$^{51}$ erg, respectively, which are higher than our estimates. The difference is mostly due to the adopted distance modulus, the wavelength range of the spectral energy distribution and the velocity used for estimating the explosion parameters. The faster rise of SN~2020rea as compared to SN 2012Z could be attributed to a different amount of $^{56}$Ni mixing in the ejecta.
We compare the pseudo-bolometric light curves of SN~2020rea and SN~2012Z with optical bolometric light curves of pure deflagration models of $M_{\rm ch}$ white dwarfs \citep{2014MNRAS.438.1762F}. For each model shown in Figure \ref{fig:2020rea_2012Z_deflag_bol_light_curve}, we integrate the model optical spectrum at each epoch available in the \texttt{HESMA} database over the same wavelength range as for SN~2020rea to obtain the model pseudo-bolometric luminosity. In the deflagration models, the explosion strength is characterized by the number of ignition spots: with more ignition spots, more material burns, which leads to an increase in the luminosity, explosion energy and ejecta velocity. The model light curves for N1-def, N3-def, N5-def and N10-def, with 1, 3, 5 and 10 ignition spots, respectively, are shown in Figure \ref{fig:2020rea_2012Z_deflag_bol_light_curve}.
The early photospheric phase of the light curve of SN 2020rea falls between the N3-def and N5-def models. However, the observed light curves of both SNe~2012Z and 2020rea decline more slowly than the N5-def and N10-def model bolometric light curves. This is because the ejecta masses in the N5-def and N10-def models, the parameter that governs the decline rate, are 0.372 and 0.478 M$_{\odot}$, respectively \citep{2014MNRAS.438.1762F}, which are less than the ejecta masses estimated for SNe 2012Z and 2020rea.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{deflagration_2020rea_2012Z_revision.pdf}
\end{center}
\caption{Pseudo-bolometric light curves of SNe~2020rea and 2012Z fitted with the radiation diffusion model are shown. The pseudo-bolometric light curves are compared with optical bolometric light curves of the pure deflagration of $M_{\rm ch}$ white dwarf \citep{2014MNRAS.438.1762F}.}
\label{fig:2020rea_2012Z_deflag_bol_light_curve}
\end{figure}
\section{Spectral properties}
\label{spectral_properties}
Figure \ref{fig:SN 2020rea_spectra_plot} presents the spectral evolution of SN 2020rea from $\sim$ $-$7 days to +21 days. The early time spectra are dominated by a blue continuum along with well developed P-Cygni profiles with relatively broad absorption features. The pre-maximum spectra of SN 2020rea show the Si {\sc II}/Ca {\sc II} feature in the blue region, Fe {\sc III}, Si {\sc III}, S {\sc II} and a relatively weak Si {\sc II} feature around 6000 \AA. The spectrum around maximum is similar to the pre-maximum spectra with an evolved Si {\sc II} feature. After maximum, a feature at $\sim$ 6000 \AA\ grows stronger and can be associated with Fe {\sc II}. In the 8000 \AA\ to 9000 \AA\ region, the Ca {\sc II} NIR triplet starts developing. A clear absorption feature due to Co {\sc II} at $\sim$ 9000 \AA\ is also present. The spectral region between 5500 \AA\ and 7000 \AA\ is dominated by Fe {\sc II} lines. By +21 days the continuum becomes redder and Co {\sc II} around 6600 \AA\ starts developing. In addition, the Fe {\sc II} feature in the blue region, the Ca {\sc II} NIR triplet and Co {\sc II} at $\sim$ 9000 \AA\ become stronger.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{test.pdf}
\end{center}
\caption{Spectral evolution of SN~2020rea from $-$7.0 days to +20.9 days since maximum in the {\it g}-band. Prominent lines are marked with shaded bars. The inset shows a zoom of the Na {\sc i}D feature and the associated fit.}
\label{fig:SN 2020rea_spectra_plot}
\end{figure}
\subsection{Comparison with other Type Iax SNe}
\label{comparion_spectral_features_other_SNe}
To investigate the nature of spectral lines we compare the pre-maximum, near maximum and post-maximum spectra of SN 2020rea with other well studied Type Iax SNe such as SNe 2002cx \citep{2003PASP..115..453L}, 2005hk \citep{2007PASP..119..360P,2008ApJ...680..580S}, 2008ha \citep{2009Natur.459..674V,2009AJ....138..376F}, 2010ae \citep{2014A&A...561A.146S}, 2011ay \citep{2013ApJ...767...57F}, 2012Z \citep{2013ApJ...767...57F,2015A&A...573A...2S} and 2019muj \citep{2021MNRAS.501.1078B}.
Figure \ref{fig:SN 2020rea_spectra_comp_pre_peak} presents the pre-maximum spectra of SN 2020rea and other Type Iax SNe. The Fe {\sc III} features near 4000 \AA\ and 5000 \AA\ are seen in all the SNe having coverage in the bluer region. The C {\sc II} feature is prominent in the fainter and intermediate luminosity Type Iax SNe 2008ha, 2010ae and 2019muj; however, in SN 2020rea and other bright Type Iax SNe this feature is very weak. The Ca {\sc II} NIR triplet can only be seen in SNe 2008ha and 2010ae. Overall, the pre-maximum spectroscopic features of SN~2020rea are typical of brighter Type Iax SNe. In the spectral comparison near maximum, we find that the prominent spectral lines such as Fe {\sc III}, Fe {\sc II} and Si {\sc II} are present in all the SNe, as shown in Figure \ref{fig:SN 2020rea_spectra_comp_peak}. In the post maximum spectra (Figure \ref{fig:SN 2020rea_spectra_comp_post_peak}), the Ca {\sc II} NIR feature is clearly seen in SNe 2005hk, 2010ae, 2011ay, 2012Z and 2019muj, while SN~2020rea has a weak Ca {\sc II} NIR triplet. The Fe {\sc III}, Fe {\sc II} multiplets and Cr {\sc II} lines are clearly visible in all the SNe. At the post maximum phase, SNe 2020rea and 2012Z resemble each other in their spectral properties. For a detailed spectral comparison between SNe 2012Z and 2020rea, spectra obtained $\sim$ 20 days after maximum for both SNe are plotted in Figure \ref{fig:SN 2020rea_spectra_comp_21_day}. Both SNe show similar, relatively broad spectral features.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{pre_peak_comp2020rea.pdf}
\end{center}
\caption{Comparison of pre-maximum spectrum of SN~2020rea with other well studied Type Iax SNe.}
\label{fig:SN 2020rea_spectra_comp_pre_peak}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{peak_comp2020rea.pdf}
\end{center}
\caption{Near maximum spectrum of SN~2020rea is shown with spectra of other Type Iax SNe at comparable epochs.}
\label{fig:SN 2020rea_spectra_comp_peak}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{post_peak_comp2020rea.pdf}
\end{center}
\caption{The post-maximum spectrum of
SN~2020rea compared with spectra of other Type Iax SNe at similar epoch.}
\label{fig:SN 2020rea_spectra_comp_post_peak}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{21day_comp2020rea.pdf}
\end{center}
\caption{Comparison of spectral features of SN~2020rea at +21 day with SN 2012Z.}
\label{fig:SN 2020rea_spectra_comp_21_day}
\end{figure}
Figure \ref{fig:velocity_plot} shows the velocity evolution of the Si {\sc II} 6355 \AA\ feature of SN~2020rea and other Type Iax SNe. The line velocities are measured by fitting Gaussian profiles to the absorption minima of the P-Cygni profiles associated with the Si {\sc II} line. The error bars associated with the velocities of SN~2020rea are measurement errors only. In the pre-maximum phase, the line velocity of the Si {\sc II} feature in SN~2020rea is lower than that of SN 2002cx and higher than that of SN 2005hk. In the post-maximum phases, the Si {\sc II} line velocity of SN~2020rea is lower than those of SNe 2011ay and 2012Z and higher than those of the other comparison SNe. In the late post-maximum phase, the identification of Si {\sc II} becomes uncertain as Fe {\sc II} lines (at 6149 \AA\ and 6247 \AA) start appearing close to the Si {\sc II} line.
The velocities of the Fe {\sc II} 5156 \AA\ line in the pre-maximum and near maximum spectra are estimated as $\sim$ 10000 km s$^{-1}$ and 8570 km s$^{-1}$, respectively, which are around 3500 km s$^{-1}$ and 2000 km s$^{-1}$ higher than the Si {\sc II} velocity at similar phases. This trend of higher Fe {\sc II} velocities compared to the Si {\sc II} line indicates significant mixing of burned material \citep{2007PASP..119..360P}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{velocity_plot_com_werr.pdf}
\end{center}
\caption{Velocity evolution of Si {\sc II} line of SN~2020rea and its comparison with other well studied Type Iax SNe. Error bars associated with velocity estimation of SN~2020rea are also plotted in the figure.}
\label{fig:velocity_plot}
\end{figure}
\subsection{Spectral modelling}
\label{spectral_modelling}
We perform modelling of a few spectra of SN~2020rea using \texttt{TARDIS} (a one dimensional radiative transfer code, \citealt{2014MNRAS.440..387K,kerzendorf_wolfgang_2018_1292315}). \texttt{TARDIS} assumes an opaque core with a sharp boundary, or photosphere, that emits a blackbody continuum. The ejecta is divided into spherical shells and is assumed to be undergoing homologous expansion. \texttt{TARDIS} allows the user to supply custom density and abundance profiles for the SN ejecta as input. In this work, we assume a uniform abundance profile for each element. The other input parameters are the time since explosion and the luminosity at the epoch of the spectrum. The photospheric approximation used in \texttt{TARDIS} means that it is only applicable at early times. To generate the synthetic spectrum, we use as input the bolometric luminosity at the corresponding epoch. The mass fractions of radioactive isotopes are varied to improve the fit. For the SN ejecta we adopt an exponential density profile of the form
\noindent
\begin{equation}
\rho(v,t_{exp}) = \rho_{0}(\frac{t_{0}}{t_{exp}})^{3}e^{-v/v_{0}}
\end{equation}
\noindent
where $t_{0}$ = 2 days, $\rho_{0}$ is the reference density (= 6$\times$10$^{-11}$ g cm$^{-3}$), $t_{exp}$ is the time since explosion, $v$ is the velocity and $v_{0}$ is the reference velocity.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{tardis_plot.pdf}
\end{center}
\caption{Spectra of SN~2020rea during the photospheric phase, overplotted are the model spectra generated using TARDIS.}
\label{fig:tardis_plot}
\end{figure}
In order to perform the \texttt{TARDIS} spectral fitting we adopt $v_{0}$ = 7000 km s$^{-1}$ and an epoch of explosion $t_{exp}$ = JD 2459070 (see Section \ref{bolometric_light_curve} for details). The outer velocity of the ejecta was fixed at 11500 km s$^{-1}$ and the inner velocity was varied between 6800 and 6000 km s$^{-1}$. Since there is degeneracy in the parameters used in the \texttt{TARDIS} fit, the spectral model presented in this paper is not unique. The modelled spectra for $-$4.0, 0.0 and +9.9 days with respect to {\it g}-band maximum are overplotted on the observed spectra in Figure \ref{fig:tardis_plot}. To model the observed spectra, species of carbon, oxygen, iron, cobalt, calcium, chromium, titanium and other ions usually present in SN ejecta are used. As we do not detect helium lines in the spectra, helium is not included in the model.
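For reference, the density structure supplied to \texttt{TARDIS} follows directly from the exponential profile above with the adopted parameters. A minimal sketch, evaluated roughly two weeks after explosion (near {\it g}-band maximum):

```python
import numpy as np

RHO_0, T_0, V_0 = 6.0e-11, 2.0, 7000.0    # g/cm^3, days, km/s (adopted values)

def rho(v_kms, t_exp_days):
    """Exponential density profile used as the TARDIS ejecta input."""
    return RHO_0 * (T_0 / t_exp_days)**3 * np.exp(-v_kms / V_0)

v = np.linspace(6000.0, 11500.0, 20)      # inner to outer boundary, km/s
density = rho(v, 13.0)                    # ~13 d after explosion
```

The $(t_{0}/t_{exp})^{3}$ factor encodes homologous dilution, so the same profile can be evaluated at each spectral epoch.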
\input{table_tardis}
Table \ref{tab:tardis_20rea} presents the mass fractions of the dominant elements used to generate the model spectra (Figure \ref{fig:tardis_plot}). In the modelled spectrum at $-$4.0 days, the Fe features between 4000 \AA\ and 5000 \AA\ are well reproduced, the Si {\sc II} line is weak, and the continuum matches well with the observed spectrum. To constrain the mass fraction of Si, synthetic spectra were generated by varying the Si mass fraction at different epochs. It is found that increasing the Si mass fraction beyond 1\% for the pre-peak spectrum and 3\% for the post-peak spectrum degrades the fit. Hence, we have used 2\% of Si for the spectral fitting at all three epochs. We do not see strong features due to C and O in the spectra; they are mainly used as filler elements. However, we do see a weak O {\sc I} line in the spectra obtained at maximum and at +9.9 days. We have used a significant amount of Ni for fitting all three spectra of SN~2020rea presented in Figure \ref{fig:tardis_plot}. In the synthetic spectra at pre-maximum and at maximum, a very low amount of Fe is used, as introducing more Fe resulted in overly strong Fe features. We have included $\sim$ 20\% of Ne as a filler element for fitting the first two epochs and $\sim$ 2\% of Ne for fitting the last spectrum at +9.9 days since maximum. IMEs such as Mg, Ca and S are also used to fit the spectra. In the modelled spectrum around maximum, the region between 4000 \AA\ and 5200 \AA\ is similar to the observed spectrum. In the +9.9 days spectrum, the observed spectral features and continuum are well reproduced by the model with a significant amount of IGEs. However, the `W' feature at $\sim$ 6000 \AA\ could not be reproduced. This feature is attributed to S lines during the early phase of evolution, which are later replaced by Fe lines as the SN enters the Fe-dominated phase.
The fact that a model with a uniform abundance profile for each element gives a fairly good fit to the +9.9 days spectrum points to a well-mixed ejecta, which is expected in a deflagration scenario \citep{2003Sci...299...77G}.
\subsection{Host galaxy metallicity}
\label{host_galaxy_metallicity}
We have calculated the metallicity of the host galaxy of SN~2020rea using narrow emission line fluxes in the host galaxy spectrum taken on August 15, 2020 with LCO's FLOYDS spectrograph at the Faulkes Telescope North (FTN). Prominent lines of H$\alpha$, [N {\sc II}], etc. are present in the host spectrum. Several methods, all based on flux measurements of various emission lines, are available to estimate the metallicity \citep{1991ApJ...380..140M, Kewley_2002, 10.1111/j.1365-2966.2004.07591.x, Pilyugin_2005}. Using the N2 index calibration of \cite{10.1111/j.1365-2966.2004.07591.x}, we estimate the metallicity of the host galaxy as 12+log(O/H) = 8.56$\pm$0.18 dex. This is comparable to the host galaxy metallicities of SNe 2012Z (8.51$\pm$0.31 dex; \citealt{2015ApJ...806..191Y}) and 2020sck (8.54$\pm$0.05 dex; \citealt{2022ApJ...925..217D}). The metallicity measurements for the host galaxies of faint Type Iax SNe such as SNe 2008ha, 2010ae, 2019gsc and 2020kyg are 8.16$\pm$0.15 dex \citep{2009AJ....138..376F}, 8.40$\pm$0.18 dex \citep{2014A&A...561A.146S}, 8.10$\pm$0.06 dex \citep{2020ApJ...892L..24S} and 8.68$\pm$0.04 dex \citep{2022MNRAS.511.2708S}, respectively. \cite{2017A&A...601A..62M} demonstrated that there is no clear correlation between host galaxy metallicity and SN luminosity for Type Iax SNe; however, with the increased sample we do see a tendency of Type Iax SNe to prefer metal-poor hosts.
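The N2 calibration used above is a linear relation in the logarithmic [N {\sc II}]$\lambda$6584/H$\alpha$ flux ratio, 12+log(O/H) = 8.90 + 0.57 N2. A sketch (the measured line fluxes are not listed here, so the ratio below is the one implied by the quoted metallicity, not the measured value):

```python
import math

def oh_n2(f_nii, f_halpha):
    """Pettini & Pagel (2004) N2 calibration: 12+log(O/H) = 8.90 + 0.57*N2."""
    return 8.90 + 0.57 * math.log10(f_nii / f_halpha)

# [N II]/H-alpha ratio implied by 12+log(O/H) = 8.56 (illustrative)
ratio = 10.0**((8.56 - 8.90) / 0.57)      # ~0.25
metallicity = oh_n2(ratio, 1.0)           # recovers 8.56
```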
\section{Explosion scenario}
\label{explosion_sscenario}
SN~2020rea is one of the brightest members of Type Iax sub-class. In order to understand the most favorable explosion scenario for SN~2020rea, we compare the observational properties of SN~2020rea with different models one by one.
First, we consider the pulsational delayed detonation (PDD)
model. In the PDD scenario, the white dwarf remains bound while expanding due to a slow deflagration; detonation then occurs during the pulsation because of the compression and ignition caused by the infalling C-O layers \citep{1974Ap&SS..31..497I,1991A&A...245L..25K,1991A&A...246..383K,1993A&A...270..223K,1995ApJ...444..831H,1996ApJ...457..500H,2006ApJ...642L.157B,Baron_2012,2014MNRAS.441..532D}. In the PDD explosion of a M$_{ch}$ C-O white dwarf, Fe group elements are produced in the deflagration phase. The mass of $^{56}$Ni produced in these models lies between 0.12 and 0.66 M$_{\odot}$ \citep{1995ApJ...444..831H}. The estimated $^{56}$Ni mass for SN~2020rea matches the PDD5 model \citep{1995ApJ...444..831H}, but the ejecta velocity of SN~2020rea ($\sim$ 6500 km s$^{-1}$) is lower than that predicted by the PDD5 model (8400 km s$^{-1}$). Also, the observed {\it (B-V)$_{0}$} colour at maximum ($-$0.01 mag) for SN~2020rea does not match the {\it (B-V)$_{0}$} colour of the PDD5 model (0.44 mag, \citealt{1995ApJ...444..831H}).
Second, we consider a low energy core-collapse explosion model of a massive star, which has been used to explain the observational features of some faint Type Iax SNe such as SN 2008ha \citep{2009Natur.459..674V,2009AJ....138..376F,2010ApJ...719.1445M}. Because of the low energy budget of faint SNe, a considerable amount of the ejecta falls back onto the remnant. This core-collapse scenario predicts a kinetic energy of 1.2$\times$10$^{48}$ erg, an ejecta mass of 0.074 M$_{\odot}$ and a $^{56}$Ni mass of 0.003 M$_{\odot}$ \citep{2010ApJ...719.1445M}. Thus the predicted parameters of the core-collapse scenario are in disagreement with those of SN~2020rea.
Next, we investigate the deflagration-to-detonation transition (DDT) model \citep{1991A&A...245L..25K,1991A&A...245..114K,1993A&A...270..223K,1995ApJ...444..831H,1996ApJ...457..500H,2002ApJ...568..791H,2013MNRAS.429.1156S,2013MNRAS.436..333S}, which has been used to explain several observational properties of Type Ia SNe by varying the central density of the white dwarf and the strength of the deflagration. The basic assumption in the deflagration-to-detonation models is that at a late stage of the explosion the deflagration flame transitions into a detonation front. DDT models \citep{2013MNRAS.429.1156S,2013MNRAS.436..333S} are generated by varying the number of ignition points.
The mass of $^{56}$Ni produced by these models (0.32 to 1.1 M$_{\odot}$, \citealt{2013MNRAS.436..333S}) is very high compared to the $^{56}$Ni produced in the SN~2020rea explosion. The range of kinetic energy (E$_{k}$ = 1.20-1.67 $\times$10$^{51}$ erg), absolute magnitude in the {\it B}-band ($-$19.93 to $-$18.16 mag) and the redder {\it (B-V)$_{0}$} colour at maximum (0.15 to 0.56 mag) of the DDT models \citep{2013MNRAS.436..333S} do not agree with the estimated parameters of SN~2020rea.
Finally, we take into account the three-dimensional pure deflagration of a C-O white dwarf \citep{2014MNRAS.438.1762F}, which can successfully explain the observed properties of bright and intermediate-luminosity Type Iax SNe. These models provide a wide range of $^{56}$Ni mass between 0.03 and 0.38 M$_{\odot}$, rise times between 7.6 and 14.4 days, and peak {\it V}-band absolute magnitudes spanning $-$16.84 to $-$18.96 mag \citep{2014MNRAS.438.1762F}. The observed parameters of SN~2020rea ($^{56}$Ni mass = 0.13$\pm$0.01 M$_{\odot}$, rise time $\sim$ 16 days, {\it V}-band peak absolute magnitude = $-$18.30$\pm$0.12 mag) fall within the range prescribed by these models. In Section \ref{bolometric_light_curve} we compared the pseudo-bolometric light curve of SN~2020rea with the optical bolometric light curves presented in \cite{2014MNRAS.438.1762F}. The mixed abundance distribution given by these models is consistent with SN~2020rea. The expansion velocity inferred from the Fe line is higher than that from the Si lines, indicating significant mixing in the ejecta. Furthermore, modelling the spectra of SN~2020rea with TARDIS (Section \ref{spectral_modelling}) suggests a mixed distribution of elements, consistent with the deflagration scenario.
\section{Summary}
\label{summary}
The photometric and spectroscopic investigations of SN~2020rea at optical wavelengths show that it lies at the brighter end of the Type Iax luminosity distribution. The light curve decline rates in the {\it B} and {\it g}-bands are $\Delta$m$_{15}$(B) = 1.61$\pm$0.14 mag and $\Delta$m$_{15}$(g) = 1.31$\pm$0.08 mag, respectively, indicating its similarity with SNe 2005hk and 2012Z. The colour evolution of SN~2020rea is analogous to that of other Type Iax SNe. Modelling of the pseudo-bolometric light curve (constructed using {\it BgVri} bands) places SN~2020rea in the category of relatively bright Type Iax SNe, with a rise time of $\sim$ 16 days and a $^{56}$Ni mass of 0.13$\pm$0.01 M$_{\odot}$. Assuming a photospheric velocity of 6500 km s$^{-1}$, the ejecta mass and kinetic energy are estimated to be 0.77$^{+0.11}_{-0.21}$ M$_{\odot}$ and 0.19$^{+0.02}_{-0.06}$ $\times$ 10$^{51}$ erg, respectively. The comparison of the pseudo-bolometric light curve of SN~2020rea with optical bolometric light curves representing deflagration models of varying strength shows that the light curve of SN~2020rea is situated between the N3-def and N5-def models during the early photospheric phase. The post-peak decline of the pseudo-bolometric light curve is slower than that of the deflagration model light curves. The spectroscopic features of SN~2020rea are typical of Type Iax SNe. The Si {\sc II} line velocities of SN~2020rea are generally higher than those of other Type Iax SNe except for SNe 2011ay and 2012Z. The higher Fe line velocity compared to the Si line velocity around maximum indicates mixing of fully burned material. Spectral modelling of SN~2020rea shows a weak Si {\sc II} feature in the early photospheric phase, an IGE-dominated ejecta $\sim$ 10 days after maximum, and hints at a mixed ejecta. The host galaxy metallicity of SN~2020rea (8.56$\pm$0.18 dex) is similar to that of the host galaxy of SN 2012Z (8.51$\pm$0.31 dex). 
Out of the several proposed explosion scenarios for Type Iax SNe, pure deflagration of a white dwarf emerges as a promising one to explain the observed properties of SN~2020rea.
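The quoted kinetic energy follows directly from the ejecta mass and photospheric velocity via the standard relation $E_{k} = \frac{3}{10} M_{\rm ej} v_{\rm ph}^{2}$ for homologously expanding, uniform-density ejecta. A minimal sketch of this arithmetic (the uniform-density assumption and the constant values are ours; the paper quotes only the results):

```python
M_SUN_G = 1.989e33   # solar mass in grams

def kinetic_energy_erg(m_ej_msun, v_ph_kms):
    """E_k = (3/10) * M_ej * v_ph^2 for uniform-density, homologous ejecta."""
    m = m_ej_msun * M_SUN_G
    v = v_ph_kms * 1.0e5          # km/s -> cm/s
    return 0.3 * m * v ** 2

ek = kinetic_energy_erg(0.77, 6500.0)   # ~0.19e51 erg, matching the quoted value
```

The quoted uncertainties on $E_k$ propagate from the asymmetric errors on the ejecta mass.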
\section*{Acknowledgments}
We thank the anonymous referee for constructive comments which have improved the presentation of the paper. We acknowledge the Weizmann Interactive Supernova data REPository, http://wiserep.weizmann.ac.il (WISeREP) \citep{2012PASP..124..668Y}. This research has made use of the CfA Supernova Archive, which is funded in part by the National Science Foundation through grant AST 0907903. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work makes use of data obtained with the LCO Network. RD acknowledges funds from ANID grant FONDECYT Postdoctorado Nº 3220449. KM acknowledges BRICS grant DST/IMRCD/BRICS/Pilotcall/ProFCheap/2017(G) for the present work. The LCO group was supported by NSF grants AST-1911151 and AST-1911225. This research made use of TARDIS, a community-developed software package for spectral synthesis in supernovae \citep{kerzendorf_wolfgang_2018_1292315, kerzendorf_wolfgang_2019_2590539}. The development of TARDIS received support from the Google Summer of Code initiative and from ESA's Summer of Code in Space program. TARDIS makes extensive use of Astropy and PyNE. This work made use of the Heidelberg Supernova Model Archive (HESMA)\footnote{\url{https://hesma.h-its.org}}.
\section*{ Data availability} The photometric and spectroscopic data of SN~2020rea presented in this paper will be made available by the corresponding author on request.
\bibliographystyle{mnras}
\section{Introduction}
In December 2018, New York became the first city in the U.S. to adopt a minimum wage for drivers working for app-based transportation network companies (TNCs) like Uber and Lyft. The New York City Taxi and Limousine Commission (NYTLC) established a ``minimum per-trip payment formula'' that guarantees estimated gross hourly driver earnings before expenses of at least \$26.51 per hour and a net income of
\$17.22 per hour after expenses, equivalent to the minimum wage of \$15 per hour because, as ``independent contractors,'' drivers pay additional payroll taxes and get no paid time off.
The NYTLC formula for non-wheelchair accessible vehicles is
\begin{equation}\label{utilization}
\mbox{Driver pay per trip} =
\Bigg( \frac{\$0.631 \times \text{Trip Miles}}{\text{Company Utilization Rate}}\Bigg)
+ \Bigg( \frac{\$0.287 \times \text{Trip Minutes}}{\text{Company Utilization Rate}}\Bigg)
+ \text{Shared Ride Bonus},
\end{equation}
amounting to \$23 for a 30-min, 7.5-mile ride.\footnote{The utilization rate is calculated by dividing the total amount of time drivers spend transporting passengers on trips dispatched by the base by the total amount of time drivers are available to accept dispatches from the base \cite{ban2018gan}.}
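The per-trip formula can be evaluated directly. A minimal sketch, which reproduces the \$23 figure for a 30-minute, 7.5-mile ride assuming, for illustration, a company utilization rate of 58\% (the utilization value is our assumption, not a number stated in the formula):

```python
def driver_pay_per_trip(trip_miles, trip_minutes, utilization, shared_ride_bonus=0.0):
    """NYTLC minimum per-trip payment for non-wheelchair-accessible vehicles:
    per-mile and per-minute components, each divided by the company utilization rate."""
    per_mile = 0.631 * trip_miles / utilization
    per_minute = 0.287 * trip_minutes / utilization
    return per_mile + per_minute + shared_ride_bonus

# A 30-minute, 7.5-mile ride at an assumed 58% utilization rate
pay = driver_pay_per_trip(7.5, 30.0, 0.58)   # ~$23, as in the text
```

Because the utilization rate appears in the denominator, the same trip pays drivers more when the company keeps a larger fraction of its fleet idle, which is the mechanism the formula uses to discourage oversupply.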
The Commission imposed the wage floor based on testimony on driver expenses, meetings with stakeholders, and on the report of labor economists J.A. Parrott and M. Reich which showed that
median driver earnings had declined almost \$3.00 per hour, from \$25.78 in September 2016 to \$22.90 in October 2017, a decrease of 11.17\%. The TNCs imposed the \$3.00 per hour wage cut
during a period when the number of drivers in the largest four TNCs (Uber, Lyft, Gett/Juno, and Via) had grown by 80,000 \cite{ban2018gan}. Uber would be the largest for-profit private employer in New York City if its drivers were classified as employees rather than independent contractors \cite{parrott2018earning}.
The subminimum wage of Uber drivers also prompted the Seattle City Council in April 2018 to pass a unanimous resolution to explore setting a minimum base rate of \$2.40 per mile for TNCs compared with the prevailing rate of \$1.35 per mile and the rate of \$2.70 charged by taxis. The resolution also asked TNCs to voluntarily hand over anonymous data on hours, trips, fares and compensation. Unlike NYTLC, however, no other US city has access to TNC data to estimate what their drivers are paid.
In December 2018, Uber lost its case at the U.K.'s Court of Appeal against the October, 2016, ruling that its drivers should be classified as workers entitled to rights such as minimum wage and paid holidays. The Court ruled against Uber's claim that its drivers were just self-employed contractors who use its app in exchange for a share of their fares at the level dictated by Uber \cite{Uber_London}. The case can be used to challenge the self-employed status of millions of gig-economy workers who work for companies like Airbnb and Deliveroo on a freelance basis without fixed contracts.
New York and London are the largest Uber markets in the U.S. and E.U.
Uber's reaction to these three adverse decisions was predictable.
Responding to the NYTLC ruling Uber's director of public affairs stated, ``legislation to increase driver earnings will lead to higher than necessary fare increases for riders while missing an opportunity to deal with congestion in Manhattan's central business district.''\footnote{Lyft echoed the Uber response
stating, ``These rules would be a step backward for New Yorkers, and we urge the TLC to reconsider them \cite{Uber_USA}.''} Uber challenged the Seattle resolution: its general manager for Seattle said, ``we are generally unclear how nearly doubling per-mile rider rates would not result in an increased cost for riders.'' Uber also declared it would fight the U.K. Appeal Court's decision in the Supreme Court. Contradicting Uber's claims, this study shows that, for a large range of parameter values, raising driver wages will \textit{increase} the number of drivers at the same time that passengers enjoy \textit{faster} and \textit{cheaper} rides, while platform rents are reduced.
The aforementioned regulations are partly motivated by public concern over the disruption to the urban transportation system caused by the rapid growth of TNCs. Worldwide, the monthly number of Uber users was forecast to reach 100 million in 2018, up from 75 million in 2017. In New York, Uber, Lyft, Juno and Via combined dispatched nearly 600,000 rides per day in the first quarter of 2018, increasing their annual trip totals by over 100 percent in 2016 and by 71 percent in 2017. About 80,000 vehicles are affiliated with these four companies \cite{parrott2018earning}. In San Francisco, 5,700 TNC vehicles operate at peak times. They make over 170,000 vehicle trips daily, approximately 12 times the number of taxi trips and 15 percent of all intra-San Francisco trips, comprising at least 9 percent of all San Francisco person trips \cite{castiglione2016tncs}. This explosive growth of TNCs has raised two public concerns.
As noted above, one concern is with the working conditions of TNC drivers. The TNC business model places much of the economic risk associated with the app sector on drivers, who are classified as independent contractors. Furthermore, the model relies on having many idle cars and drivers, resulting in low driver pay per hour and high TNC platform rents.\footnote{TNC expenditures comprise a fixed initial cost for setting up the platform and a small variable cost as the company grows. Thus the average cost per trip falls and its profit margin increases as the TNC grows.} Uber's annual revenue from passenger fares in New York City amounts to about \$2
billion, of which it keeps about \$375 million in commissions and fees, for a markup estimated at six times its variable operating cost or 600 percent \cite{parrott2018earning}.
One common understanding is that ``Uber's driver-partners are attracted to the flexible schedules that driving on the Uber platform affords \ldots because the nature of the work, the flexibility, and the compensation appeals to them compared with other available options \cite{Hall_Kreuger}.''
In fact, more than 60 percent of New York City drivers work full-time and provide 80 percent of all rides; their work hours are not flexible \cite{parrott2018earning}.
The second concern is with the negative impact of the explosive TNC growth on a city's traffic congestion as well as public transit ridership.
A detailed 2017 report \cite{schaller2017empty} examined the impact of TNC growth on traffic conditions in Manhattan's CBD. The analysis shows that, from 2013 to 2017, TNC trips increased 15 percent, VMT increased 36 percent, traffic speed declined 15 percent, the number of vehicles increased 59 percent, and the number of unoccupied vehicles increased 81 percent. The report suggested reducing the unoccupied time of TNC vehicles as a means of congestion control. Responding to the increased congestion, the New York City Council in 2018 passed a regulation freezing the number of TNC vehicles on the road for one year. Supporters of the cap, including Mayor Bill de Blasio, said the regulation will protect drivers, fairly regulate the industry and reduce congestion \cite{cnbc_NY}. However, our analysis shows that imposing a cap hurts drivers, because the TNC retains as profit the benefits of limiting supply.
Another detailed report \cite{castiglione2016tncs} by San Francisco Transportation Authority provides information on the size, location, and time-of-day characteristics of TNC activities in San Francisco. The follow-up report \cite{castiglione2018tncs} identifies the impact of TNC activities on roadway congestion in San Francisco County. It shows that after subtracting the impact of employment growth, population change, and network capacity change, TNCs accounted for 51 percent of the increase in vehicle hours of delay, 47 percent of increase in VMT, and 55 percent of the average speed decline between 2010 and 2016. Moreover, ``TNC trips are concentrated in the densest and most congested parts of San Francisco including the downtown and northeastern core of the city. At peak periods, TNCs are estimated to comprise 25 percent of vehicle trips in South of Market.'' The report cites studies showing that ``between 43 percent and 61 percent of TNC trips substitute for transit, walk, or bike travel or would not have been made at all.''
This paper evaluates two policies regulating TNCs: a minimum driver wage, and a cap on the number of drivers or vehicles. We analyze the impacts of these policies on several aspects of the app-based
ride-hailing market, including ride prices and driver wages established by the platform, the incentives of passengers and drivers, vehicle occupancy rate, and platform rent or profit. We use a model to determine
the arrival of passengers, the number of drivers, ride prices and platform commissions, conditioned on the exogenous regulatory policy. The framework combines
a queuing theoretic model with dynamic matching of passengers and drivers, an equilibrium model that predicts the long-term average arrivals of passengers and drivers, and an optimization model that captures platform decision-making. We summarize the key results.
\begin{itemize}
\item Imposing a minimum wage will motivate TNCs to hire {\em more} drivers, and passengers to enjoy {\em faster} and {\em cheaper} rides, while TNC rent or profit shrinks. In contrast with the traditional economic model \cite{Neumark}, raising the minimum wage will benefit both drivers and passengers, while TNC rents will decline. This counter-intuitive result holds for a large regime of model parameters, and it occurs because the quality of service increases with the number of drivers. Consequently, by hiring more drivers the platform improves the quality of service (pickup time) and attracts more passengers. For certain values of the minimum wage, the increased sales volume outweighs the increased labor cost.
\item Contrary to common belief, a cap on the number of drivers will hurt driver earnings. This is because when fewer drivers are permitted, the platform will hire cheaper labor by reducing driver pay. In other words, the benefit of limiting the driver supply is retained by the platform.
\end{itemize}
Aside from these results, we also present variants of our model to analyze platform subsidy, platform competition and autonomous vehicles.
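To make the mechanism behind the first result concrete, the following toy model (every functional form and parameter value below is our illustrative choice, not the paper's calibrated model) lets a profit-maximizing platform choose a price and a fleet size, with demand that falls in both price and pickup time, and a pickup time that shrinks as the idle-driver pool grows:

```python
import math

# Illustrative parameters (ours): demand scale, price sensitivity,
# pickup-time sensitivity, pickup-time constant, trip duration (hours),
# and the slope of the inverse driver-supply curve.
A, a, b = 100.0, 0.05, 2.0
k, L, s = 1.0, 0.3, 4.0

def equilibrium_demand(price, drivers, max_iter=500, tol=1e-9):
    """Damped fixed-point iteration for lam = A*exp(-a*price - b*k/sqrt(idle)),
    where idle = drivers - lam*L. Returns None if the fleet is infeasibly small."""
    lam = 0.0
    for _ in range(max_iter):
        idle = drivers - lam * L
        if idle <= 0:
            return None
        lam_new = A * math.exp(-a * price - b * k / math.sqrt(idle))
        if abs(lam_new - lam) < tol:
            return lam_new
        lam = 0.5 * (lam + lam_new)
    return lam

def best_response(wage_floor):
    """Grid-search the platform's profit-maximizing price and fleet size,
    paying wage = max(wage_floor, inverse-supply wage drivers/s)."""
    best = (float("-inf"), None, None)
    for price in [p / 2 for p in range(10, 80)]:        # $5.0 ... $39.5
        for drivers in range(5, 120):
            lam = equilibrium_demand(price, drivers)
            if lam is None:
                continue
            wage = max(wage_floor, drivers / s)
            profit = lam * price - drivers * wage
            if profit > best[0]:
                best = (profit, price, drivers)
    return best

baseline = best_response(0.0)   # (profit, price, fleet) with no wage floor
floored = best_response(8.0)    # same, under a wage floor of $8/hour
```

Comparing the unconstrained and floored optima illustrates how a binding floor can push the optimizer toward a larger fleet to cut pickup times and recover sales volume; whether this happens, as in the paper, depends on the parameter regime.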
*************************
{\bf Related Works}: There are many studies of ride-hailing platforms. A major consideration is to evaluate decisions that maximize platform profit. A queuing theoretic model is proposed in \cite{banerjee2015pricing} to study the optimal pricing of ride-hailing platforms. It shows that the throughput and profit under a dynamic pricing strategy cannot exceed those under the optimal static pricing strategy that is agnostic to the stochastic dynamics of demand. On the other hand, it also shows that dynamic pricing is much more robust to fluctuations in system parameters than static pricing. Therefore, the platform can use dynamic pricing to realize the benefits of optimal static pricing without perfect knowledge of system parameters. A similar problem is studied by \cite{cachon2017role}, with a focus on the self-scheduling capacity of for-hire drivers. It shows that the additional flexibility of drivers is beneficial to the platform, consumers and drivers. It also suggests that when some periods have predictably higher demand than others (e.g., a rainy evening), it is hard to find service at peak demand times under static pricing, so that surge pricing is likely to benefit all stakeholders in this case. In the same vein, Bai et al. \cite{bai2018coordinating} suggest dynamic pricing for the platform to maximize profit across different time periods when the underlying operating characteristics can change significantly. Taylor (2018) shows in \cite{taylor2018demand} that platform pricing can be more complicated when there is uncertainty in passengers' valuation or drivers' opportunity cost.
Ban developed a general economic equilibrium model to evaluate the impacts of ride-hailing services on the deadhead miles and traffic congestion \cite{ban2018gan}.
In addition, ride-hailing platforms are also closely inspected in the economic literature as a special case of two-sided platform. See \cite{rysman2009economics} and \cite{rochet2006two} for a summary of literature in two sided-platforms, and \cite{weyl2010price} for a general theory of monopoly pricing in multi-sided platforms.
The literature on regulatory policies for the app-based ride-hailing marketplace is relatively limited. Gurvich et al. (2016) \cite{gurvich2016operations} considered a ride-hailing platform that manages a group of self-scheduling drivers to serve time-varying demand. It shows that under a wage floor, the platform starts to limit agent flexibility by restricting the number of agents that can work in some time intervals. Optimal pricing under an exogenous wage is considered in \cite{hu2017price}. It shows that when the platform sets the trip price under an exogenously given wage, the optimal price has a U-shaped relation with respect to the wage: as the driver wage increases, the ride price first decreases and then increases. However, the key limitation of this result is the assumption that the platform always pays drivers the exogenous wage, even if this wage is below the profit-maximizing level. In practice, the platform optimizes {\em both} price and wage under a wage floor, and the first half of the U-shape never appears based on the model of \cite{hu2017price} (see more discussion in Section \ref{maintheoremsec}).
The closest work to ours is reported in Parrott and Reich \cite{parrott2018earning}. The authors use the administrative data of TNCs collected by the New York City Taxi and Limousine Commission (TLC) to examine the likely impact of the TLC's proposed regulatory policies \cite{nycdoc}. By numerical simulation, they show that the proposed policy will increase driver earnings by 22.5 percent, while passengers will only experience a moderate increase in trip fare (less than 5 percent) and waiting times (12 to 15 seconds). However, our analysis shows that both the trip cost and the waiting time will decrease. This is because our model assumes that passengers are sensitive to the pickup time of ride-hailing services, which is not captured in \cite{parrott2018earning}.
*****************
\section{Introduction}
{
Transportation network companies (TNCs) like Uber, Lyft and Didi have dramatically changed urban transportation. While the emergence of TNCs significantly benefits passengers and drivers, it also brings negative externalities that have to be addressed by regulatory intervention. In recent years, this concern has prompted several cities to take action to regulate TNCs \cite{NYC2019surcharge, ban2018gan, SFSPUR, seattleregulation2020}. Despite numerous works on the operation and management strategies of TNC platforms, only a handful have considered mathematical models for policy analysis of the ride-hailing market. This paper aims to formulate an economic equilibrium model to evaluate the impacts of various regulations on the TNC economy.
{\bf Background and Motivation} \\
TNCs are disrupting urban transportation systems. On the one hand, they offer on-demand ride services at prices that many riders can afford. On the other hand, they create numerous job opportunities for drivers working as independent contractors. These favorable demand and supply factors led to the TNCs' explosive growth.} However, the resulting growth has raised two public concerns in large metropolitan areas. The first is increased traffic congestion. In New York City, Uber, Lyft, Juno and Via together dispatch nearly 600,000 rides per day, involving about 80,000 vehicles. Schaller \cite{schaller2017empty} estimates that from 2013 to 2017 TNC trips in NYC increased by 15\%, traffic speed dropped by 15\%, VMT increased by 36\%, and the number of TNC vehicles increased by 59\%. He suggested regulation to reduce TNC vehicle deadhead time (when vehicles carry no passengers) in order to limit congestion. Two reports \cite{castiglione2016tncs,castiglione2018tncs} by the San Francisco County Transportation Authority identified the TNC impact on traffic congestion and estimated that TNCs account for approximately 50 percent of the increase in congestion in San Francisco between 2010 and 2016. More recently, Uber and Lyft commissioned Fehr \& Peers to estimate the TNC share of VMT in six US metropolitan regions: Boston, Chicago, Los Angeles, Seattle, San Francisco and Washington. Their report \cite{balding2019} concludes that Uber and Lyft have a nontrivial impact in core urban areas such as San Francisco County, where they account for 12.8\% of total VMT.
The second concern is provoked by the very low earnings of TNC drivers. The success of the on-demand ride-hailing business relies on short passenger waiting times, which require a large pool of available but idle TNC drivers. This pushes down driver wages. Parrott and Reich \cite{parrott2018earning} revealed that the majority of for-hire vehicle drivers in NYC work full-time. They found that median driver earnings declined almost $\$$3 per hour, from $\$$25.67 in September 2016 to $\$$22.90 in October 2017, and that 85 percent of drivers made less than the minimum wage after deducting vehicle expenses. A follow-up study \cite{parrott2020minimum} examined the payments of drivers working for TNCs in Seattle and discovered that their average net earnings are \$9.73/hour (after expenses), well below the \$16.39/hour minimum wage. Further, more than four-fifths of full-time drivers purchased their vehicle primarily or partly to provide TNC services, and nearly three-fourths rely on TNC driving as their sole source of income. These drivers are hired as independent contractors, who cannot unionize to negotiate for labor rights such as a minimum wage, overtime compensation, and paid time off.
These concerns have prompted cities to regulate TNCs. To address congestion, New York City Taxi and Limousine Commission (NYCTLC) introduced a \$2.75 charge on all for-hire vehicle trips that pass through the ``congestion zone'' of the city \cite{NYC2019surcharge}. The congestion zone is the area south of 96th Street in Manhattan, and the charge is assessed on each trip that starts from, ends in, or passes through the congestion area. To protect TNC drivers, NYCTLC imposed a minimum per-trip wage for drivers amounting to a wage floor of \$25.76/hour or \$17.22/hour after vehicle expenses \cite{ban2018gan}. This is equivalent to the \$15/hour minimum wage after deducting a paid-time off supplement of \$2.22/hour.
In addition to NYC, similar regulations are being considered by other U.S. cities. In November 2019 Chicago approved a congestion tax on ride-hailing services for weekday single-passenger trips (and lowered the tax on shared trips) in the downtown area to raise \$40 million per year \cite{chicago_surcharge}. Also in November 2019 San Francisco passed a special 3.25\% excise tax on TNC rides to raise \$30-\$35 million per year for congestion mitigation projects
\cite{SFSPUR}. At around the same time, the Seattle City Council unanimously approved the ``Fare Share'' plan, which provides TNC driver protections including a fair wage after expenses and a first-in-the-nation Driver Resolution Center to offer support services for drivers to fight against unwarranted deactivations \cite{seattleregulation2020}. In September 2019
California passed bill AB5 \cite{vox_gig} which classifies hundreds of thousands of independent contractors (gig workers) including TNC drivers as employees to protect them with minimum wage and other employee benefits. These actions imply a changing regulatory environment to address TNC-provoked concerns in large cities.
{ {\bf Research Problem and Contribution} }\\
This paper presents a study calculating the impact on TNCs of the joint imposition of a congestion charge and a driver minimum wage. The impact is formulated within a framework comprised of a queuing theoretic model of the arrivals of passengers and drivers, a general equilibrium model that predicts market prices, passenger
demand and driver supply, and a profit maximizing model of the TNC platform decisions. This framework enables the assessment of the impact in terms of changes in ride prices, passenger waiting time, driver wage, numbers of passengers and drivers, vehicle occupancy rate, platform rent, and city tax revenue. The key conclusions of this study are:
\begin{itemize}
\item The congestion charge does not significantly affect TNC ridership. It does not directly curb traffic congestion by reducing the number of TNC vehicles on the road. This is because the impact of the surcharge is mitigated by the wage floor on TNC drivers.
\item The time-based congestion charge is preferable to the trip-based charge because the former penalizes idle vehicle hours, thereby increasing vehicle occupancy (we use the terms congestion charge and tax interchangeably.) Furthermore, the increased occupancy generates a surplus that offers a Pareto improvement in a certain regime, bringing higher consumer surplus, higher platform profit and higher tax revenue for the city.
\item The case study for San Francisco employs a model whose parameters are calibrated to match reported San Francisco TNC data, and the model is used to predict the likely effect of regulatory policies on the San Francisco TNC market.
\item Through numerical simulation, we show that the tax burden mainly falls on the ride-hailing platform as opposed to passengers and drivers. Under a trip-based tax of \$2/trip (with average trip fare of \$11.6), passenger travel cost increases by 0.6\%, driver wage remains unchanged, while the platform profit is reduced by 59.5\%. Under a time-based tax in the regime of practical interest, both passengers and drivers are unaffected, while the platform assumes all of the tax burden.
\end{itemize}
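The intuition for why a time-based charge rewards occupancy can be sketched with simple arithmetic: each trip of duration $T$ consumes $T/\mathrm{occupancy}$ vehicle-hours (occupied time plus a pro-rata share of idle time), so a per-vehicle-hour charge maps into a per-trip burden that falls as occupancy rises. The rate, trip duration and occupancy values below are our illustrative choices, not the calibrated San Francisco numbers:

```python
def per_trip_equivalent(rate_per_vehicle_hour, trip_hours, occupancy):
    """A trip of duration trip_hours consumes trip_hours/occupancy vehicle-hours,
    so a time-based charge maps to rate * trip_hours / occupancy per trip."""
    return rate_per_vehicle_hour * trip_hours / occupancy

# Illustrative: a $4/vehicle-hour charge on an 18-minute (0.3 h) trip
low_occ = per_trip_equivalent(4.0, 0.3, 0.5)    # $2.40/trip at 50% occupancy
high_occ = per_trip_equivalent(4.0, 0.3, 0.8)   # $1.50/trip at 80% occupancy
```

A trip-based tax, by contrast, charges the same amount per trip regardless of how many idle vehicle-hours stand behind it, which is why it lacks the occupancy incentive.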
\section{Related Works}
There is an extensive literature on app-based ride-hailing platforms. Many studies investigated the platform pricing strategy under various interacting factors. Zha et al \cite{zha2016economic}
developed an aggregate model to capture the interactions among passengers, drivers and the platform, and found that the first-best solution is not sustainable when the matching function exhibits increasing returns to scale and the cost function of the platform is subject to economies of scale.
Bai et al \cite{Bai2018coordinating} considered an on-demand service platform using earning-sensitive independent providers with heterogeneous reservation price, and concluded that it is optimal to charge a higher price when demand increases, and that the platform should offer a higher payout ratio as demand increases, capacity decreases, or customers become more sensitive to waiting time. Taylor \cite{taylor2018demand} examined how delay sensitivity and agent independence affect the platform's optimal price and wage and identified the complexity caused by uncertainty in customer valuation. Hu and Zhou \cite{hu2019price} studied the commission setting of the ride-sourcing platform and showed that an optimal fixed-commission contract can achieve at least 75\% of the optimal profit when there is no pre-committed relationship between price and wage.
Platform pricing has also been studied with temporal and spatial considerations. From the temporal aspect, Cachon et al \cite{cachon2017role} showed that surge pricing can significantly increase platform profit relative to contracts that have a fixed price or fixed wage, and that all stakeholders can benefit from the use of surge pricing on a platform with driver self-scheduling capacity. Castillo et al \cite{castillo2017surge} showed that surge pricing can avoid cases where vehicles are sent on a wild goose chase to pick up distant customers, wasting driver time and reducing earnings. Zha et al \cite{zha2017surge} investigated the impact of surge pricing using a bi-level programming framework, and showed that compared to static pricing, the platform and drivers are found generally to enjoy higher revenue while customers may be made worse off during highly surged periods.
Banerjee et al \cite{banerjee2015pricing} developed a queuing theoretic model to study the optimal (profit-maximizing) pricing of ride-sharing platforms. They show that the performance of a dynamic price (in terms of revenue and throughput) does not exceed that of a static price, but it is more robust to fluctuations of model parameters. From the spatial aspect, Bimpikis et al \cite{bimpikis2019spatial} considered the price discrimination of a ride-sourcing platform over a transportation network and established that profits and consumer surplus at the equilibrium corresponding to the platform's optimal pricing are maximized when the demand pattern is ``balanced'' across the network's locations. Guda and Subramanian \cite{guda2019your} studied the spatial pricing of a ride-sourcing platform over a transportation network and showed that surge pricing can be useful even in zones where supply exceeds demand. Zha et al \cite{zha2018geometric} developed a model to investigate the effects of spatial pricing on ride-sourcing markets and found that the platform may resort to relatively higher price to avoid an inefficient supply state if spatial price differentiation is not allowed.
In addition to platform pricing, studies also touch upon driver supply \cite{Hall_Kreuger}, \cite{gurvich2019operations}, platform operations \cite{yang2020optimizing}, \cite{vazifeh2018addressing}, platform competition \cite{nikzad2017thickness}, \cite{bernstein2019competition}, and regulations \cite{li2019regulating}, \cite{benjaafar2018labor}, \cite{yu2019balancing}, \cite{vignon2020regulating}. Please see \cite{wang2019ridesourcing} for a comprehensive literature review.
Road pricing has attracted substantial research attention for decades. The idea was initially proposed by Pigou \cite{pigou2017economics} and inspired several seminal works, including Vickrey \cite{vickrey1955some}, Walters \cite{walters1961theory} and Beckmann \cite{beckmann1967optimal}. Since then, various tolling schemes have been proposed in the literature, including charges based on cordon crossing, distance traveled, time spent traveling, or time spent in congestion \cite{may2000effects}. For instance, Zhang and Yang \cite{zhang2004optimal} investigated the cordon-based second-best congestion pricing problem on road networks, jointly considering toll levels and toll locations. Yang et al \cite{yang2010road} studied road pricing for effective congestion control without knowing the link travel time and travel demand. Liu and Li \cite{liu2017pricing} derived a time-varying toll combined with a flat ride-sharing price to nudge morning travelers to depart in off-peak hours. Despite this large literature in transportation economics, research on congestion charges for TNCs is relatively scarce. A TNC congestion charge is distinctive since it involves decisions of the profit-maximizing platform as well as the passengers and drivers in the two-sided ride-hailing market. Li et al \cite{li2019regulating} proposed a market equilibrium model to evaluate the impact of various regulatory policies and analyzed the incidence of a TNC tax on passengers, drivers, and the TNC platform. Schaller \cite{schaller2018making} conducted an in-depth analysis of how to apply pricing to new mobility services, and recommended that a surcharge on taxi/for-hire trips in central Manhattan be applied as an hourly charge. Recent work of Vignon and Yin \cite{vignon2020regulating} investigated the performance of various regulation policies on ride-sourcing platforms, taking congestion externality and product differentiation into account.
They compared a uniform toll that treats all vehicles identically with a differentiated toll that treats idle vehicles, solo rides and pooled rides differently, and showed that a differentiated toll offers little advantage over a uniform one.
Only a handful of studies considered wage regulation of TNCs. Gurvich \cite{gurvich2016operations}
studied the platform's profit-maximizing wage level for self-scheduling drivers, and showed that under a minimum wage, the platform limits agent flexibility by restricting the number of agents that can work during some time intervals. Parrott and Reich \cite{parrott2018earning} used administrative data from New York City to show by simulation that the proposed minimum wage standard would increase driver wages by 22.5 percent while hurting passengers through slightly higher ride fares and waiting times. Li et al. \cite{li2019regulating} and Benjaafar et al. \cite{benjaafar2018labor} developed market equilibrium models to show that wage regulations on TNCs benefit both passengers and drivers, because wage regulation curbs the TNC's labor market power \cite{li2019regulating}.
Zhang and Nie \cite{zhang2019pool} proposed a market equilibrium model for ride-sourcing platforms that offers a mix of solo and pooled rides. They showed that a wage floor on TNC drivers will force the platform to hire more drivers, which will reduce the appeal of collective modes and the supply efficiency and is likely to worsen traffic congestion.
This paper differs from the aforementioned works in that we explore the {\em joint} impact of a congestion charge and a driver minimum wage on the TNC market. We are the first to point out that distinct regulatory policies on TNCs interfere with each other when they are jointly implemented, which may produce surprising market outcomes that deviate from the expectation of the policy maker. We are also the first to establish models that compare the trip-based and time-based congestion charges and to identify the superiority of the time-based charge in certain regimes of practical interest. These results will provide valuable insights for city planners who are considering implementing (e.g., San Francisco), or have already implemented (e.g., NYC and Seattle), a congestion charge and a minimum wage to address TNC externalities.
\section{Market Equilibrium Model}
\label{lowerlevel}
We consider a transportation system comprised of a city council, a TNC platform, and a group of passengers and drivers. The city council approves legislation (e.g., cap on the total number of vehicles, minimum wage for TNC drivers, congestion charge on TNC trips) to regulate the operations of the TNC platform. The platform sets fares and wages and hires drivers to maximize its profit under these regulations. The pricing decisions affect the choices of passengers and drivers, and these choices collectively determine the platform's profit. We will describe a market equilibrium model to capture the decisions of passengers, drivers, and the TNC platform. The model will be used to investigate how TNC market outcomes are affected by regulation.
\subsection{Matching passengers and drivers}
The TNC platform matches randomly arriving passengers to idle TNC drivers. Upon arrival, each passenger joins a queue and waits until she or he is matched to an idle driver\footnote{For simplicity, we do not consider the case of multiple passengers sharing the same vehicle.}. This matching is modeled as a continuous-time queuing process, in which each passenger defines a ``job'' and each driver is a ``server''. The server is ``idle'' if the vehicle is not occupied, and it is ``busy'' if a passenger is on board or if the vehicle is dispatched and on its way to pick up a passenger. Assume that passenger arrivals form a Poisson process with rate $\lambda > 0$, and denote $N$ as the total number of TNC drivers. This matching process forms an M/G/N queue, and the expected number of idle servers (vehicles) is ${N_I} = N - \lambda /\mu $, with $\mu $ being the service rate ($1/\mu$ is the amount of time a passenger occupies a vehicle on average). We assume that $N > \lambda /\mu $. { Given the model parameters, the average waiting time for the M/G/N queue can be derived approximately in analytical form. We comment that this is the ride confirmation time, which represents the time elapsed after the ride is requested and before the ride is confirmed. It differs from the pickup time (from ride confirmation to pickup), which will be treated below. }
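As a numerical sketch of this matching model (not part of the calibrated analysis), the idle-vehicle count and the queueing delay $t_m$ can be computed with the standard Erlang C formula, using the M/M/N queue as a proxy for the M/G/N queue, since the exact M/G/N delay has no simple closed form; all parameter values below are illustrative:

```python
import math

def idle_vehicles(lam, mu, N):
    """Expected number of idle vehicles N_I = N - lam/mu (requires N > lam/mu)."""
    assert N > lam / mu
    return N - lam / mu

def confirmation_delay(lam, mu, N):
    """Mean queueing delay t_m, using the M/M/N (Erlang C) formula as a proxy
    for the M/G/N queue.  lam: arrival rate, 1/mu: mean service time."""
    a = lam / mu                     # offered load
    term, partial = 1.0, 0.0
    for k in range(N):
        partial += term
        term *= a / (k + 1)          # term = a^(k+1)/(k+1)! after this line
    blocked = term / (1.0 - a / N)   # a^N / (N! * (1 - rho))
    p_wait = blocked / (partial + blocked)
    return p_wait / (N * mu - lam)   # Erlang C mean waiting time
```

For city-scale fleets the resulting $t_m$ is tiny, consistent with the observation in the text that the confirmation time is negligible relative to the pickup time.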
\subsection{Passenger incentives}
The total travel cost of the TNC passenger consists of the waiting time for pickup, the travel time during the trip, and the monetary payment for the ride service. We refer to this total travel cost as the ``generalized cost'' and define it as the weighted sum of waiting time, travel time, and trip fare. It may differ for distinct passengers due to the randomness in trip length, trip duration, and the matching process of the TNC platform. Since we primarily focus on aggregate market outcomes, we define the average generalized cost as:
\begin{equation}
\label{cost_definition}
c = \alpha {t_w} + \beta t_0+ {p_f},
\end{equation}
where ${t_w}$ is the average waiting time, $t_0$ is the average trip duration (in minutes), and $p_f$ is the average price of a TNC ride. The parameters $\alpha$ and $\beta$ specify the passenger trade-off between time and money. Note that $\alpha$ is generally larger than $\beta$ since empirical studies suggest that the value of time while waiting is larger than the value of time while traveling in the vehicle.
It is important to emphasize that we do not need to assume that all passengers have the same travel cost. The heterogeneity in passengers is irrelevant as we focus on the aggregate market outcome, which typically depends on the average cost $c$. A widely-studied example is the logit choice model, where the total number of agents choosing a particular mode only depends on the average cost of each mode. In this spirit, we define a demand function that determines the arrival rate of TNC passengers as a function of the average generalized cost:
\begin{equation}
\label{demand_function}
\lambda = {\lambda _0}{F_p}(c),
\end{equation}
where \({\lambda _0}\) is the arrival rate of potential passengers (total travel demand in the city), and \({F_p}( \cdot )\) is the proportion of potential passengers who choose a TNC ride. We assume that \({F_p}( \cdot )\) is a strictly decreasing and continuously differentiable function so that a higher TNC travel cost $c$ will lead to fewer TNC passengers. The logit model is a special case of (\ref{demand_function}).
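A minimal sketch of the generalized cost (\ref{cost_definition}) and of the logit special case of the demand function (\ref{demand_function}); the numerical values used in any call are placeholders, not the calibrated ones:

```python
import math

def generalized_cost(t_w, t_0, p_f, alpha, beta):
    """Average generalized cost c = alpha*t_w + beta*t_0 + p_f."""
    return alpha * t_w + beta * t_0 + p_f

def logit_demand(c, lam0, eps, c0):
    """Logit special case of lambda = lam0 * F_p(c); eps > 0 and c0 are parameters."""
    e = math.exp(-eps * c)
    return lam0 * e / (e + math.exp(-eps * c0))
```

As required of $F_p$, `logit_demand` is strictly decreasing in $c$, and it equals $\lambda_0/2$ exactly when $c = c_0$.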
The passenger waiting time $t_w$ intimately interacts with other endogenous decision variables $\lambda$ and $N$. To delineate this relation, we divide a TNC ride into three time periods: (1) from ride being requested to the ride being confirmed, (2) from the ride being confirmed to passenger pickup, (3) from passenger pickup to drop-off. Let ${t_m}$, ${t_p}$, and ${t_0}$ represent the length of these three periods, respectively, then we have ${t_w} = {t_m} + {t_p}$, and ${t_0}$ as the average trip distance $L$ divided by traffic speed $v$, i.e., ${t_0} = L/v$. Since the platform immediately matches each newly arrived passenger to the nearest idle vehicle, ${t_m}$ is the average waiting time in the queue, and $t_p$ depends on the traffic speed $v$ and the distance of the passenger to the nearest idle vehicle, which further depends on the number of idle vehicles ${N_I}$. Therefore, we write $t_p$ as a function of $N_I$ and $v$, i.e., $t_p(N_I, v)$. The following assumption is imposed on $t_p(\cdot)$:
\begin{assumption}
\({t_p}({N_I},v)\) is twice differentiable with respect to \({N_I}\) and \(v.\) It is decreasing and strictly convex with respect to \({N_I},\) and it is decreasing with respect to traffic speed \(v.\)
\label{assumption1}
\end{assumption}
Assumption \ref{assumption1} requires that the pickup time decreases with respect to the number of idle vehicles and the traffic speed. We suppose traffic speed $v(N)$ is a function of the total number $N$ of vehicles and impose the following assumption on $v(\cdot)$:
\begin{assumption}
$v(N)$ is decreasing and continuously differentiable with respect to $N$.
\label{assumption2}
\end{assumption}
{ Using data from San Francisco and New York City for the M/G/N queue, we find that the ride confirmation time $t_m$ is very short, i.e., less than one second. This is negligible compared to the pickup time \({t_p}\), which is typically around 3-5 minutes.} Therefore we ignore \({t_m}\) and express the total waiting time \({t_w}\) as\footnote{The waiting time can be significantly larger in rush hours. In this case, one can add $t_m$ to $t_w$ as the waiting time in the queue. We believe that this will not affect our conclusion, but we neglect this term in this paper for analytic tractability.}
\begin{equation}
{t_w} = {t_p}({N_I},v).
\end{equation}
The number of idle vehicles $N_I$ depends on $\lambda$ and $N$, whereas the average traffic speed $v $ depends on $N$.
\subsection{ Driver incentives}
In the TNC market, drivers can decide whether to remain subscribed to the TNC platform depending on the long-term average earnings offered by the platform. The average hourly wage of drivers depends on the ride fare of the TNC trip, the commission rate set by the platform, and the occupancy rate of the vehicles. It can be described as:
\begin{equation}
\label{driver_wage_def}
w = \frac{{\lambda {p_d}}}{N},
\end{equation}
where $p_d$ is the average per-trip payment to drivers. The driver payment \({p_d}\) differs from the passenger trip fare $p_f$. The difference $p_f-p_d$ is kept by the platform as profit. Therefore, the commission rate of the platform (typically 25\%-40\%) can be written as $(p_f-p_d)/p_f$. The average hourly wage (\ref{driver_wage_def}) is just the total platform payment to all drivers $\lambda {p_d}$ divided by the total number of drivers $N$. Each driver may have an hourly earning that differs from that of others due to the randomness in work schedule, driver location, and repositioning strategy. However, as we primarily focus on the aggregate market outcome, the heterogeneity in driver earnings is irrelevant insofar as the aggregate market outcome (e.g., the total number of TNC passengers or drivers) depends only on the average hourly earning over all TNC drivers. Note that this is the case for the well-established logit choice model. More generally, we define a supply function that determines the total number of TNC drivers as a function of the average hourly wage:
\begin{equation}
\label{supply_function}
N = {N_0}{F_d}(w),
\end{equation}
where $N_0$ is the number of potential drivers (all drivers seeking a job), and $F_d(w)$ is a strictly increasing and continuously differentiable function that gives the proportion of drivers willing to join the TNC. Note that the logit model is a special case of (\ref{supply_function}).
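Because the wage $w = \lambda p_d / N$ itself depends on $N$, the supply relation (\ref{supply_function}) is a fixed-point condition in $N$. A minimal sketch (logit supply, hypothetical parameters, with the logit location parameter written as a reservation wage $w_r$ to keep it distinct from the minimum wage notation) solves it by bisection:

```python
import math

def logit_supply_share(w, sigma, w_r):
    """F_d(w): share of potential drivers who join at hourly wage w (logit form)."""
    return 1.0 / (1.0 + math.exp(-sigma * (w - w_r)))

def equilibrium_drivers(lam, p_d, N0, sigma, w_r, tol=1e-9):
    """Solve N = N0 * F_d(lam * p_d / N) by bisection.  The root is unique
    because the left side is increasing in N while the right side decreases
    (a larger fleet dilutes the per-driver wage)."""
    lo, hi = 1e-6, N0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid < N0 * logit_supply_share(lam * p_d / mid, sigma, w_r):
            lo = mid           # supply exceeds mid, root lies above
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The bisection converges because the residual $N - N_0 F_d(\lambda p_d / N)$ changes sign exactly once on $(0, N_0)$.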
\subsection{ Platform decisions in absence of regulation}
The TNC platform determines the ride price and the driver payment to steer passengers and drivers so as to maximize its profit. In each time period, the platform revenue is the total ride fare received from passengers, i.e., $\lambda p_f$, and the platform cost is the total payment made to the drivers, i.e., $\lambda p_d$. The profit of the platform can thus be written as the difference between the revenue and the cost:
\begin{equation}
\label{optimalpricing}
\hspace{-1.5cm} \mathop {\max }\limits_{{p_f}, {p_d}} \quad \lambda ({p_f} - {p_d})
\end{equation}
\begin{subnumcases}{\label{constraint_optimapricing_TNC}}
\lambda = {\lambda _0}{F_p}\left(\alpha {t_p} + \beta t_0+ {p_f} \right) \label{demand_constraint}\\
N = {N_0}{F_d}\left(\frac{{\lambda {p_d}}}{N}\right) \label{supply_constraint}
\end{subnumcases}
where (\ref{demand_constraint}) is the demand function and (\ref{supply_constraint}) is the supply function. Note that $t_p$ depends on $\lambda$ and $N$, and $t_0$ depends on the traffic speed which is a function of $N$. The overall problem not only involves $p_f$ and $p_d$ as decision variables, but also involves $N$, $\lambda$, $t_p$, $v$ and $t_0$ as endogenous variables. The optimal solution to (\ref{optimalpricing}) represents the platform's profit-maximizing pricing decision in absence of the regulatory intervention.
{ The profit maximization problem (\ref{optimalpricing}) is a constrained optimization which can be solved by various gradient-based algorithms \cite{bertsekas1997nonlinear}. However, since the problem is non-concave with respect to $p_d$ and $p_f$, it is difficult to assert whether the obtained solution is globally optimal. To address this concern, we apply a change of variable and treat $\lambda$ and $N$ as the new decision variables. More specifically, given $\lambda$ and $N$, we can use (\ref{demand_constraint})-(\ref{supply_constraint}) to uniquely determine $p_f$ and $p_d$ as follows:
\begin{subnumcases}{\label{changeofvariable}}
p_f= F_p^{-1} \left(\dfrac{\lambda}{\lambda_0}\right) - \alpha t_p(N_I, v)-\beta t_0 \label{changeofvariable1}\\
p_d= \dfrac{N}{\lambda} F_d^{-1}\left( \dfrac{N}{N_0}\right) \label{changeofvariable2}
\end{subnumcases}
where (\ref{changeofvariable1}) is derived from (\ref{demand_constraint}), and (\ref{changeofvariable2}) is derived from (\ref{supply_constraint}). Note that the right-hand sides of (\ref{changeofvariable1}) and (\ref{changeofvariable2}) are both functions of $\lambda$ and $N$. By plugging (\ref{changeofvariable1}) and (\ref{changeofvariable2}) into (\ref{optimalpricing}), we can transform the profit maximization problem (\ref{optimalpricing}) into the following unconstrained optimization:
\begin{equation}
\label{optimalpricing_transformed}
\hspace{-1.5cm} \mathop {\max }\limits_{\lambda, N} \quad \lambda \left(F_p^{-1} \left(\dfrac{\lambda}{\lambda_0}\right)- \alpha t_p(N_I, v)-\beta t_0 \right)- N F_d^{-1}\left( \dfrac{N}{N_0}\right)
\end{equation}
where $\lambda$ and $N$ are decision variables. Clearly, (\ref{optimalpricing_transformed}) is equivalent to (\ref{optimalpricing}). We note that although (\ref{optimalpricing_transformed}) is non-concave with respect to $\lambda$ and $N$, under certain mild conditions, it is concave with respect to $\lambda$ for fixed $N$. We formally summarize this result as the following proposition:
\begin{proposition}
\label{prop_concave}
Assume the demand function $F_p(\cdot)$ is a logit model represented as:
\begin{equation}
\label{logit_demand_prop}
\lambda =\lambda_0 \frac{e^{-\epsilon c}}{e^{-\epsilon c}+e^{-\epsilon c_0}},
\end{equation}
where $\epsilon>0$ and $c_0$ are parameters. Further assume that given $v$, the waiting time function $t_p(N_I, v)$ is convex with respect to $N_I$, then we have the following results: \\
(1) the profit maximization problem (\ref{optimalpricing_transformed}) is concave with respect to $\lambda$ under a fixed $N$, \\
(2) Given $N$, there exists a unique $\lambda$ that maximizes the platform profit (\ref{optimalpricing_transformed}).
\end{proposition}
The proof can be found in Appendix A. Proposition \ref{prop_concave} suggests that for any fixed $N$, we can efficiently derive the unique optimal $\lambda$ that maximizes the profit by solving a concave program. This result is based on a few mild assumptions: (a) the logit model (\ref{logit_demand_prop}) is used for studying customer discrete choice, (b) the convexity of $t_p(\cdot)$ simply requires that the marginal benefit of adding extra idle vehicles in reducing passenger waiting time decreases with respect to $N_I$, which is consistent with intuition. Based on this result, we can obtain the optimal combination of $(\lambda, N)$ by enumerating over $N$. This provides the globally optimal solution to (\ref{optimalpricing_transformed}).
}
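A minimal sketch of this enumeration scheme, with a logit demand/supply pair and purely hypothetical parameter values (not the calibrated ones used later): invert the logit forms in closed form, evaluate the unconstrained objective (\ref{optimalpricing_transformed}), and locate the unique maximizing $\lambda$ at each fixed $N$ by ternary search, which is valid for a concave one-dimensional function.

```python
import math

# Hypothetical placeholder parameters, for illustration only.
LAM0, N0 = 600.0, 2000.0        # potential demand (trips/hr) and potential drivers
EPS, C0 = 0.33, 31.2            # demand logit parameters
SIG, WR = 0.089, 20.0           # supply logit slope and assumed reservation wage
ALPHA, BETA = 2.33, 1.0         # value-of-time weights
M_CONST, L_TRIP = 41.18, 2.6    # pickup-time constant and trip length
VF, KAPPA = 15.0, 0.0003        # free-flow speed and congestion slope
MU = 4.0                        # service rate: four trips per vehicle-hour

def profit(lam, N):
    """Objective of (optimalpricing_transformed) at a given (lam, N)."""
    v = VF - KAPPA * N
    idle = N - lam / MU
    if idle <= 0 or v <= 0 or not 0 < lam < LAM0 or not 0 < N < N0:
        return -math.inf
    t_p = M_CONST / (v * math.sqrt(idle))     # square-root pickup law
    t_0 = L_TRIP / v
    # closed-form logit inverses F_p^{-1} and F_d^{-1}
    p_f = C0 - math.log((lam / LAM0) / (1 - lam / LAM0)) / EPS - ALPHA * t_p - BETA * t_0
    w = WR + math.log((N / N0) / (1 - N / N0)) / SIG
    return lam * p_f - N * w

def best_lambda(N, iters=200):
    """Ternary search for the unique profit-maximizing lam at fixed N
    (unimodality follows from the concavity result)."""
    lo, hi = 1e-6, min(LAM0 - 1e-6, MU * N - 1e-6)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if profit(m1, N) < profit(m2, N):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)
```

Enumerating `best_lambda(N)` over a grid of $N$ and keeping the most profitable pair then yields the global optimum.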
\begin{remark}
Many works study the spatial and temporal aspects of the TNC market. These aspects are neglected in our model since we primarily focus on the evaluation of regulatory policies (e.g., minimum wage) that are imposed on a uniform basis regardless of the time of the day or the location of the driver. This makes it legitimate to consider the impact of these policies at the aggregate scale, which suffices to provide valuable insights for city planners to assess their policies.
A spatial-temporal analysis is necessary if policy makers further consider fine-tuning these policies so that they differentiate trips at different time instances or different locations. This is left for future work.
\end{remark}
\subsection{Modeling regulation policies}
Regulation policies, such as congestion charge and driver minimum wage, modify the incentives of passengers and drivers and affect the pricing decision of the TNC platform. To capture this effect, we formulate the platform pricing problem under the minimum wage, trip-based congestion charge, and time-based congestion charge.
{\bf Minimum wage:} To capture the impact of a driver wage floor $w_0$, we impose the constraint that requires the driver hourly earning to be greater than $w_0$. The optimal pricing problem under minimum wage regulation can be formulated as:
\begin{equation}
\label{optimalpricing_wage}
\hspace{-1.5cm} \mathop {\max }\limits_{{p_f}, {p_d}, N} \quad \lambda ({p_f} - {p_d})
\end{equation}
\begin{subnumcases}{\label{constraint_optimapricing_TNC_wage}}
\lambda = {\lambda _0}{F_p}\left(\alpha {t_p} + \beta t_0+ {p_f} \right) \label{demand_constraint_wage}\\
N \leq {N_0}{F_d}\left(\frac{{\lambda {p_d}}}{N}\right) \label{supply_constraint_wage} \\
\frac{{\lambda {p_d}}}{N} \ge {w_0} \label{min_wage_wage}
\end{subnumcases}
where constraint (\ref{min_wage_wage}) captures the wage floor on TNC driver earnings. Note that we relax the equality constraint (\ref{supply_constraint}) to inequality constraint (\ref{supply_constraint_wage}). This permits the TNC platform to hire a subset of drivers who are willing to work for TNC in case the minimum wage is set so high that it is unprofitable for the platform to hire all the willing drivers in the market.
{
\begin{remark}
Note that the minimum wage constraint (\ref{min_wage_wage}) places a lower bound on the {\em average} driver wage $w$. Since the hourly wage may differ from one driver to another, when (\ref{min_wage_wage}) is satisfied, it does not necessarily mean that all drivers earn at least the minimum wage. Instead, it only indicates that drivers can earn more than the minimum wage on average. We emphasize that this formulation is consistent with the practice: the minimum wage for TNC drivers in New York City and Seattle are both implemented on a platform-wide average basis, instead of an individual driver basis \cite{ban2018gan}, \cite{ban2018sea}.
\end{remark}}
{\bf Trip-based congestion charge:} Many existing congestion charge schemes are trip-based (e.g., New York City, Seattle, Chicago). The trip-based congestion charge assesses an extra fee of $p_t$ on each TNC trip in the congestion area. When a congestion charge $p_t$ and a minimum wage $w_0$ are imposed concurrently, the optimal pricing problem can be formulated as follows:
\begin{equation}
\label{optimalpricing_trip}
\hspace{-2cm} \mathop {\max }\limits_{{p_f}, {p_d}, N} \quad \lambda ({p_f} - {p_d})
\end{equation}
\begin{subnumcases}{\label{constraint_optimapricing_trip}}
\lambda = {\lambda _0}{F_p}\left(\alpha {t_p} + \beta t_0+ {p_f}+ p_t \right) \label{demand_constraint_trip}\\
N \le {N_0}{F_d}\left(\frac{{\lambda {p_d}}}{N}\right) \label{supply_constraint_trip} \\
\frac{{\lambda {p_d}}}{N} \ge {w_0}
\label{min_wage_const}
\end{subnumcases}
where the per-trip congestion charge $p_t$ is incorporated into the passenger travel cost within the demand function (\ref{demand_constraint_trip}). Another way to formulate the congestion charge is by adding it to the cost of the platform. This is easier to implement as it only requires the platform to transfer the accumulated congestion charge of all trips within a certain period to the city. In this case, the optimal pricing problem can be written as:
\begin{equation}
\label{optimalpricing_trip2}
\hspace{-1cm} \mathop {\max }\limits_{{p_f}, {p_d}, N} \quad \lambda ({p_f} - {p_d})- \lambda p_t
\end{equation}
\begin{subnumcases}{\label{constraint_optimapricing2}}
\lambda = {\lambda_0}{F_p}\left(\alpha {t_p} + \beta t_0+ {p_f}\right) \label{demand_constraint_wage2}\\
N \le {N_0}{F_d}\left(\frac{{\lambda {p_d}}}{N}\right) \label{supply_constraint_trip2} \\
\frac{{\lambda {p_d}}}{N} \ge {w_0}
\label{min_wage_const2}
\end{subnumcases}
where $p_t$ is incorporated into the profit of the platform instead of the travel cost of the passengers. Economists find that whether a tax is levied on the buyer or seller of the good does not matter because they always share the tax burden based on their elasticities \cite[Chap. 16]{varian2014intermediate}. This principle also applies here:
\begin{proposition}
\label{prop1}
Let $(p_f^*,p_d^*,N^*,\lambda^*)$ and $(p_f^{**},p_d^{**},N^{**},\lambda^{**})$ denote the optimal solutions to (\ref{optimalpricing_trip}) and (\ref{optimalpricing_trip2}), respectively, then we have $p_f^*+p_t=p_f^{**}, p_d^*=p_d^{**}, \lambda^*=\lambda^{**},$ and $N^*=N^{**}$.
\end{proposition}
Proposition \ref{prop1} states that the two formulations of trip-based congestion charge, i.e., (\ref{optimalpricing_trip}) and (\ref{optimalpricing_trip2}), lead to the same market outcome. The proof is omitted since it can be simply derived by a change of variable.
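The equivalence behind Proposition \ref{prop1} can also be checked numerically. In the toy check below (a hypothetical exponential demand curve, with pickup-time and supply effects suppressed), the objective of (\ref{optimalpricing_trip}) at $(p_f, p_d)$ equals the objective of (\ref{optimalpricing_trip2}) at $(p_f + p_t, p_d)$, which is exactly the change of variable underlying the proposition:

```python
import math

def lam_of(total_price):
    """Toy demand: passengers respond to their total out-of-pocket price."""
    return 100.0 * math.exp(-0.1 * total_price)

P_T = 2.0   # per-trip congestion charge

def profit_passenger_side(p_f, p_d):
    # charge added to the passenger cost, as in the first formulation
    return lam_of(p_f + P_T) * (p_f - p_d)

def profit_platform_side(p_f, p_d):
    # charge remitted by the platform, as in the second formulation
    lam = lam_of(p_f)
    return lam * (p_f - p_d) - lam * P_T

# The two objectives coincide under the shift p_f -> p_f + P_T:
for p_f in (5.0, 10.0, 20.0):
    for p_d in (3.0, 8.0):
        assert abs(profit_passenger_side(p_f, p_d)
                   - profit_platform_side(p_f + P_T, p_d)) < 1e-9
```

The identity holds pointwise, so the maximizers correspond one-to-one; the same argument goes through in the full model because demand there also depends only on the total passenger cost.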
{\bf Time-based congestion charge:} Distinct from the trip-based congestion charge, the time-based charge is levied on TNC vehicles based on vehicle hours instead of trip volumes. The key difference between the two congestion charge schemes is that time-based congestion charge not only penalizes TNC trips, but also penalizes idle TNC hours and thus incentivizes the platform to increase vehicle utilization. When the time-based congestion charge $p_h$ and a minimum wage $w_0$ are concurrently levied on TNC drivers, we have the following formulation:
\begin{equation}
\label{optimalpricing_time}
\hspace{-2.5cm} \mathop {\max }\limits_{{p_f}, {p_d}, N} \quad \lambda ({p_f} - {p_d})
\end{equation}
\begin{subnumcases}{\label{constraint_optimapricing_time}}
\lambda = {\lambda _0}{F_p}\left(\alpha {t_p} + \beta t_0+ {p_f} \right) \label{demand_constraint_wage_time}\\
N \le {N_0}{F_d}\left(\frac{{\lambda {p_d}}}{N}-p_h\right) \label{supply_constraint_time} \\
\frac{{\lambda {p_d}}}{N} -p_h\ge {w_0}
\label{min_wage_const_time}
\end{subnumcases}
When the time-based congestion charge is levied on the TNC platform, we have the following formulation:
\begin{equation}
\label{optimalpricing_time2}
\hspace{-0.5cm} \mathop {\max }\limits_{{p_f}, {p_d}, N} \quad \lambda ({p_f} - {p_d}) -Np_h
\end{equation}
\begin{subnumcases}{\label{constraint_optimapricing_time2}}
\lambda = {\lambda _0}{F_p}\left(\alpha {t_p} + \beta t_0+ {p_f} \right) \label{demand_constraint_wage_time2}\\
N \le {N_0}{F_d}\left(\frac{{\lambda {p_d}}}{N}\right) \label{supply_constraint_time2} \\
\frac{{\lambda {p_d}}}{N} \ge {w_0}
\label{min_wage_const_time2}
\end{subnumcases}
Similar to the trip-based congestion charge, these two formulations are equivalent.
\begin{proposition}
\label{prop2}
Let $(\bar{p}_f,\bar{p}_d,\bar{N},\bar{\lambda})$ and $(\tilde{p}_f,\tilde{p}_d,\tilde{N},\tilde{\lambda})$ denote the optimal solutions to (\ref{optimalpricing_time}) and (\ref{optimalpricing_time2}), respectively, then we have $\bar{p}_f=\tilde{p}_f, \dfrac{\bar{\lambda}\bar{p}_d}{\bar{N}}-p_h=\dfrac{\tilde{\lambda}\tilde{p}_d}{\tilde{N}}, \bar{\lambda}=\tilde{\lambda},$ and $\bar{N}=\tilde{N}$.
\end{proposition}
Proposition \ref{prop2} states that the two formulations of time-based congestion charge, i.e., (\ref{optimalpricing_time}) and (\ref{optimalpricing_time2}), lead to the same market outcome. The proof is omitted since it is similar to that of Proposition \ref{prop1}.
\section{Profit maximization under trip-based congestion charge}
This section analyzes the joint impact of a trip-based congestion charge and a minimum wage for TNC drivers. We consider a platform that determines the ride fare $p_f$ and the per-trip driver payment $p_d$ to maximize its profit $\lambda(p_f-p_d)$ under the trip-based congestion charge $p_t$ and a minimum wage $w_0$. The optimal pricing problem can be formulated as (\ref{optimalpricing_trip}) or (\ref{optimalpricing_trip2}). For the sake of exposition, we start with a realistic numerical example for San Francisco, complemented by a theoretical analysis presented later that shows that the insights derived from the numerical example generalize.
\subsection{Numerical example}
\label{parameter_section}
We investigate the impact of the proposed regulations via a case study for San Francisco (followed by theoretical analysis in the next subsection). Assume that passengers choose their transport mode based on the total travel cost. We use a logit model so the demand function for TNC rides is
\begin{equation}
\label{logit_demand}
\lambda =\lambda_0 \frac{e^{-\epsilon c}}{e^{-\epsilon c}+e^{-\epsilon c_0}},
\end{equation}
where $c$ is the total travel cost of a TNC trip, and $\epsilon>0$ and $c_0$ are parameters. Similarly, drivers choose to work for the TNC depending on its wage. Under a logit model, the supply function is
\begin{equation}
\label{logit_supply}
N =N_0 \frac{e^{\sigma w}}{e^{\sigma w}+e^{\sigma w_0}},
\end{equation}
where $\sigma$ is a parameter. We note that (\ref{logit_demand}) is a special case of the general demand function (\ref{demand_function}), and (\ref{logit_supply}) is a special case of the general supply function (\ref{supply_function}).
Passenger pickup time $t_p$ follows the ``square root law'' established in \cite{arnott1996taxi} and \cite{li2019regulating}:
\begin{equation}
{t_p}({N_I},v) = \frac{M}{{v\sqrt {N - \lambda /\mu } }},
\label{pickuptime_func}
\end{equation}
where the constant $M$ depends on the travel times in the city. The square root law establishes that the average pickup time is inversely proportional to the square root of the number of idle vehicles in the city, $(N - \lambda /\mu)$. The intuition behind (\ref{pickuptime_func}) is straightforward. Suppose all idle vehicles are uniformly distributed throughout the city, then the distance between any two nearby idle vehicles is inversely proportional to the square root of the total number of idle vehicles. This distance is proportional to that between the passenger and the closest idle vehicle, which determines the pickup time. A justification of the square root law can be found in \cite{li2019regulating}.
The average traffic speed $v$ is a function of the total traffic. Using the Greenshields model \cite{greenshields1953study} gives the linear speed-density relation\footnote{Since TNC vehicles only account for a small percentage of the overall traffic, the Greenshields model can be regarded as a linear approximation in a small neighborhood of a nonlinear speed-density function.},
\begin{equation}
\label{greens_model}
v = {v_0} - \kappa (N+N_b),
\end{equation}
where $N_b$ is the background traffic\footnote{TNC trips may substitute for taxis or private vehicles. This may introduce coupling between the TNC demand and the background traffic $N_b$. For simplicity, we neglect this substitution effect and assume $N_b$ is exogenous. We leave it for future work to investigate how the coupling between $\lambda$ and $N_b$ affects the conclusions of this paper. }, $N$ is the number of TNC vehicles, and $v_0$ and $\kappa$ are model parameters. Assuming that $N_b$ is constant and writing ${v_f} = {v_0} - \kappa {N_b}$, (\ref{greens_model}) is equivalent to
\begin{equation}
v = {v_f} - \kappa N.
\label{greenshiledmodel}
\end{equation}
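The two ingredients (\ref{pickuptime_func}) and (\ref{greenshiledmodel}) can be sketched directly; the parameter defaults echo the calibrated values listed below, while the trip rate and service rate used in any call are illustrative:

```python
import math

def traffic_speed(N, v_f=15.0, kappa=0.0003):
    """Linear Greenshields-type relation v = v_f - kappa * N."""
    return v_f - kappa * N

def pickup_time(N, lam, mu, v, M=41.18):
    """Square-root law t_p = M / (v * sqrt(N - lam/mu))."""
    idle = N - lam / mu              # expected number of idle vehicles
    assert idle > 0, "need a positive expected number of idle vehicles"
    return M / (v * math.sqrt(idle))
```

At fixed speed, quadrupling the number of idle vehicles halves the pickup time, which is the defining property of the square-root law.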
In summary, the model parameters are
\[{\Theta}=\{\lambda_0, N_0, M, L, v_f, \kappa, \alpha, \epsilon, c_0, \sigma, w_0\}.
\]
In the numerical study we set the parameter values so that the optimal solution to (\ref{optimalpricing}) matches the real data of San Francisco. The values of these model parameters are summarized below:
\[ \lambda_0=1049/\text{min}, \quad N_0=10000, \quad M = 41.18, \quad L=2.6 \text{ mile}, \quad {v_f} = 15 \text{ mph},\quad \kappa = 0.0003,\]
\[\alpha=2.33, \quad \epsilon=0.33, \quad c_0=31.2, \quad \sigma=0.089, \quad w_0=\$31.04/\text{hour}.\]
For the data source (from San Francisco) and justification of these parameter values, please refer to Appendix B.
We solve the profit maximizing problem (\ref{optimalpricing_trip}) for different values of congestion charge $p_t$ under a fixed wage floor $w_0$, and plot all the variables as a function of $p_t$. The minimum wage of TNC drivers in San Francisco is set in a way similar to that in NYC. Under current NYC regulations, the TNC driver minimum wage is $\$25.76$/hour, which is equivalent to the $\$15$/hour minimum wage of NYC after deducting vehicle expenses such as insurance, maintenance and taxes. Since the hourly minimum wage of San Francisco is $\$0.59$ higher than in NYC, we set $w_0=\$25.76+\$0.59=\$26.35$/hour to compensate for this difference.
\begin{figure*}[bt]
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure1_trip}
\vspace*{-0.3in}
\caption{Number of drivers under different trip-based congestion charge. }
\label{figure1_trip}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure2_trip}
\vspace*{-0.3in}
\caption{Passenger arrivals under different trip-based congestion charge.}
\label{figure2_trip}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure3_trip}
\vspace*{-0.3in}
\caption{Occupancy rate under different trip-based congestion charge.}
\label{figure3_trip}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure4_trip}
\vspace*{-0.3in}
\caption{Per-trip ride price and driver payment under different trip-based congestion charge.}
\label{figure4_trip}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure5_trip}
\vspace*{-0.3in}
\caption{Passenger pickup time in minutes under different trip-based congestion charge.}
\label{figure5_trip}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure6_trip}
\vspace*{-0.3in}
\caption{Passenger travel cost in \$ per trip under different trip-based congestion charge.}
\label{figure6_trip}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure7_trip}
\vspace*{-0.3in}
\caption{Per-hour driver wage under different trip-based congestion charge.}
\label{figure7_trip}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure8_trip}
\vspace*{-0.3in}
\caption{Per-hour platform profit under different trip-based congestion charge.}
\label{figure8_trip}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure9_trip}
\vspace*{-0.3in}
\caption{Per-hour tax revenue under different trip-based congestion charge.}
\label{figure9_trip}
\end{minipage}
\end{figure*}
\subsection{Analysis}
\label{trip_analysis_sec}
Figures \ref{figure1_trip}--\ref{figure3_trip} show the number of drivers, passenger arrival rate, and the occupancy rate of TNC vehicles as a function of the congestion charge $p_t$ when the minimum wage is set at $w_0=\$26.35$/hour. Figure \ref{figure4_trip} shows the per-trip ride fare $p_f$ and the driver payment $p_d$. Figures \ref{figure5_trip}--\ref{figure6_trip} show the passenger pickup time and travel cost. Figure \ref{figure7_trip} shows the driver wage (which equals the minimum wage). Figure \ref{figure8_trip} and Figure \ref{figure9_trip} show the platform profit and the city's tax revenue under different values of ${p_t}$, respectively.
Clearly, the optimal solution as a function of $p_t$ has two distinct regimes:
\begin{itemize}
\item when ${p_t} \leq \$2.1$/trip, the number of drivers remains constant while the number of passengers decreases; vehicle occupancy drops, the passenger pickup time decreases, the ride fare increases, and the passenger's total travel cost increases. At the same time, the driver wage remains constant and equals the minimum wage, the platform profit decreases, and the tax revenue increases.
\item when ${p_t} > \$2.1$/trip, both the passenger arrival rate and the number of TNC drivers decline sharply; vehicle occupancy decreases, the ride fare and pickup time increase, and the total travel cost increases. The driver wage remains constant and equals the minimum wage, while the platform profit declines and the tax revenue increases.
\end{itemize}
This is a surprising result: the number of drivers is unaffected by the congestion charge $p_t$ when $p_t\leq \$2.1$/trip. It is in contrast with the case where there is only a congestion charge and no minimum wage (see \cite{li2019regulating}). Therefore, this set of results indicates that the effect of a congestion charge on congestion relief is mitigated by the wage floor on TNC drivers. In certain regimes, the congestion charge cannot directly curb traffic congestion by reducing the number of TNC vehicles.
The reason behind this surprising result is rooted in the platform's power in the labor market. The platform is a monopoly in the labor market and sets driver wages. When there is no regulation (i.e., ${p_t} = 0$ and ${w_0} =0$), the platform hires fewer drivers to maximize its profit compared to a competitive labor market where the TNC faces the competitive driver wage. In a certain regime, the minimum wage squeezes the platform's market power and induces it to hire more drivers \cite{li2019regulating}. This indicates that the marginal profit of hiring additional drivers under the minimum wage regulation is positive. When the congestion charge is insignificant, this marginal profit reduces but remains positive, and thus the platform still hires all drivers available in the labor market. The number of drivers is upper bounded by $N\leq N_0F_d(w_0)$. Therefore, in the first regime, $N$ remains constant and satisfies $N=N_0F_d(w_0)$. If the congestion charge is further increased, the marginal profit of hiring an additional driver reduces to zero, and the system enters the second regime.
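The regime-switching logic above can be illustrated with a deliberately simplified toy, not the paper's equilibrium model: a profit concave in $N$, a linear per-trip charge term, and the labor-supply cap $N_{\max} = N_0 F_d(w_0)$. All coefficients below are hypothetical.

```python
import numpy as np

A, B, K = 5.0, 2e-4, 2.0   # hypothetical toy-profit coefficients
N_MAX = 8000               # stand-in for the labor-supply cap N0 * F_d(w0)

def optimal_N(p_t):
    """Argmax of the toy concave profit A*N - B*N**2 - K*p_t*N,
    clipped to the labor-supply bound [0, N_MAX]."""
    unconstrained = (A - K * p_t) / (2 * B)
    return float(np.clip(unconstrained, 0.0, N_MAX))

# First regime: the cap binds, so dN*/dp_t = 0 for small charges.
# Second regime: once marginal profit at N_MAX turns negative, N* falls.
```

In these toy units the cap binds for all $p_t < 0.9$, reproducing the flat first regime of Figure \ref{figure1_trip}: the charge lowers the marginal profit of a driver but, as long as that marginal profit stays positive at the cap, the optimal fleet size does not move.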
{
\begin{remark}
We would like to clarify that the aforementioned result relies on the assumption that the TNC platform has market power that can influence the driver wage\footnote{In a competitive labor market where the driver wage is given, the conclusions of this numerical study no longer hold.}, but does not rely on the assumption that the TNC is a monopolistic wage-setter. To validate this, we considered the duopolistic setting, where two symmetric TNCs compete against each other on both the passenger and driver sides to maximize their own profits. The numerical study reveals that the number of drivers and the number of passengers at the Nash equilibrium demonstrate the same properties as shown in Figure \ref{figure1_trip} and Figure \ref{figure2_trip}, respectively. We believe that this can be further extended to the case of more than two competing TNCs.
\end{remark}
}
Figures \ref{figure6_trip}--\ref{figure8_trip} show that the tax burden primarily falls on the ride-hailing platform as opposed to passengers and drivers. As the trip-based charge increases, the passenger cost increases slightly and the driver wage remains unchanged, while the platform profit decreases significantly. In particular, under a trip-based tax of \$2/trip, the passenger cost increases by 0.6\%, the driver wage remains constant, and the platform profit declines by 59.5\%. This is because drivers are protected by the minimum wage, and the passenger's price elasticity\footnote{The passenger price elasticity is $\dfrac{\partial \lambda}{\partial p_f}\dfrac{p_f}{\lambda}$. We calculate $\dfrac{\partial \lambda}{\partial p_f}$ assuming that the waiting time $t_w$ is fixed under different $p_t$. This is a reasonable approximation since $t_w$ does not change significantly under distinct $p_t$ (Figure \ref{figure5_trip}).} is relatively high (Figure \ref{elasticity}), so the platform has to refrain from significantly increasing the ride fare.
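The elasticity in the footnote can be estimated by a central finite difference. The demand curve below is a hypothetical stand-in (the calibrated $F_p$ is specified elsewhere in the paper), chosen because its elasticity is known in closed form, $-0.33\,p_f$, which makes the estimate easy to check.

```python
import math

def price_elasticity(demand, p_f, dp=1e-5):
    """Finite-difference estimate of (dlambda/dp_f) * (p_f / lambda),
    holding the waiting time fixed as in the footnote."""
    lam = demand(p_f)
    dlam = (demand(p_f + dp) - demand(p_f - dp)) / (2 * dp)
    return dlam * p_f / lam

# Hypothetical exponential demand: its elasticity is exactly -0.33 * p_f.
demand = lambda p: 1049.0 * math.exp(-0.33 * p)
```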
\begin{figure}[ht!]
\centering
\input{elasticity}
\caption{Absolute value of passenger price elasticity and driver wage elasticity under different trip-based tax.}
\label{elasticity}
\end{figure}
We show that the results reported in Figures \ref{figure1_trip}--\ref{figure9_trip} (including the number of drivers, number of passengers, platform profit, and tax revenue) are robust for a large range of model parameters. For notational convenience, let $\tilde w$ be the optimal driver wage set by the platform in the absence of any regulation (i.e., ${p_t} = {w_0} = 0$), and denote by ${N^*_t}({p_t})$ the optimal number of drivers for (\ref{optimalpricing_trip}) under a fixed wage floor, which depends on $p_t$. We then have the following result.
\begin{theorem}
Assume that (\ref{optimalpricing_trip}) has a unique solution. For any model parameters $\lambda_0, N_0$ and $\alpha$, any strictly decreasing function ${F_p}(c)$, any strictly increasing function ${F_d}(w)$, any pickup time function ${t_p}$ that satisfies Assumption \ref{assumption1}, and any speed-density relation $v(N)$ that satisfies Assumption \ref{assumption2}, there exists ${w_1} > \tilde{w},$ such that for any $\tilde{w} < {w_0} < {w_1}$, there exists ${\bar p_t} > 0$, so that $\partial {N^*_t}/\partial{p_t} = 0$ for $ {p_t} \in (0,{\bar p_t})$.
\label{theorem1}
\end{theorem}
The proof of Theorem \ref{theorem1} can be found in Appendix C. It states that for any wage floor in an appropriate range, there is always a regime in which the congestion charge does not affect the number of TNC vehicles or drivers. In this case, the congestion charge will not directly curb congestion by reducing the number of TNC vehicles on the city's streets. Instead, it can only indirectly mitigate traffic congestion by collecting taxes to subsidize public transit to attract passengers. Note that $\tilde{w}$ and $w_1$ can be calculated numerically, and ${\bar p_t}$ depends on the wage floor ${w_0}$. For the case of San Francisco, we calculate that $\tilde{w}=\$21.55$/hour, $w_1=\$29.20$/hour, and $\bar{p}_t=\$2.1$/trip when $w_0=\$26.35$/hour.
\section{Profit maximization under time-based congestion surcharge}
\label{time-based}
This section considers the profit maximization problem under a wage floor and a time-based congestion charge. Under the time-based charge, each vehicle is penalized based on the total time it stays active on the platform (whether there is a passenger on board or not). Let $p_h$ denote the per-vehicle per-unit-time congestion charge. The total charge (per unit time) is $N p_h$, and the profit maximization problem is cast as
(\ref{optimalpricing_time}). For the sake of exposition, we first present a numerical example for San Francisco. The insights derived from the numerical study are then examined by theoretical analysis to demonstrate their independence of the model parameters.
In the numerical study, we will solve the profit maximization problem (\ref{optimalpricing_time}) for different time-based congestion charge $p_h$ under a fixed wage floor $w_0=\$26.35$/hour. The model parameters of (\ref{optimalpricing_time}) are the same as those in Section \ref{parameter_section}.
\begin{figure*}[bt]%
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure1_time}
\vspace*{-0.3in}
\caption{Number of drivers under different time-based congestion surcharge. }
\label{figure1_time}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure2_time}
\vspace*{-0.3in}
\caption{Passengers arrival rate under different time-based congestion charge.}
\label{figure2_time}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure3_time}
\vspace*{-0.3in}
\caption{Occupancy rate under different time-based congestion charge.}
\label{figure3_time}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure4_time}
\vspace*{-0.3in}
\caption{Per-trip ride price and driver payment under different time-based congestion charge.}
\label{figure4_time}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure5_time}
\vspace*{-0.3in}
\caption{Passenger pickup time in minutes under different time-based congestion charge.}
\label{figure5_time}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure6_time}
\vspace*{-0.3in}
\caption{Passenger travel cost in \$ under different time-based congestion charge.}
\label{figure6_time}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure7_time}
\vspace*{-0.3in}
\caption{Per-hour driver wage under different time-based congestion charge.}
\label{figure7_time}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure8_time}
\vspace*{-0.3in}
\caption{Per-hour TNC profit under different time-based congestion charge.}
\label{figure8_time}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure9_time}
\vspace*{-0.3in}
\caption{Per-hour tax revenue under different time-based congestion charge.}
\label{figure9_time}
\end{minipage}
\end{figure*}
Figures \ref{figure1_time}--\ref{figure3_time} display the number of drivers, the passenger arrival rate, and vehicle occupancy as a function of the time-based congestion charge. Figure \ref{figure4_time} shows the ride fare and per-trip driver payment. Figure \ref{figure5_time} and Figure \ref{figure6_time} show the passenger pickup time and total travel cost. Figure \ref{figure7_time} shows the driver wage. Figure \ref{figure8_time} and Figure \ref{figure9_time} present the platform profit and tax revenue, respectively. Clearly, the plots in Figures \ref{figure1_time}--\ref{figure9_time} have two distinct regimes:
\begin{itemize}
\item when $p_h\leq \$6.2$/hour, the number of TNC drivers and the passenger arrival rate remain constant. So do the occupancy rate, ride fare, per-trip driver payment, pickup time, passenger travel cost, and driver wage. The platform profit decreases linearly, and the tax revenue increases linearly.
\item when $p_h> \$6.2$/hour, the numbers of drivers and passengers decline. Vehicle occupancy, the ride fare ($p_f$), and the per-trip driver payment ($p_d$) also decline. The pickup time and passenger travel cost increase. The driver wage is constant and equals the minimum wage. The platform profit decreases and the tax revenue increases.
\end{itemize}
Simulation results suggest that the time-based congestion charge does not affect the number of TNC vehicles unless the charge is greater than $\$6.2$/hour. In this regime, the effect of the congestion charge on congestion relief is mitigated by the minimum wage on TNC drivers. This observation is consistent with the results in Section \ref{trip_analysis_sec}, and for the same reason. However, in contrast with the trip-based charge, the time-based charge does not affect passenger arrivals (Figure \ref{figure2_time}). This indicates that in the first regime the time-based charge leads to a direct money transfer from the platform to the city without affecting the passengers or drivers. This is evidenced by the linear curves in the first regime of Figures \ref{figure8_time}--\ref{figure9_time}.
The quantitative results in Figures \ref{figure1_time}--\ref{figure9_time} are robust with respect to variations of the model parameters. Formally, denote by ${N^*_h}({p_h})$ and $\lambda^*_h(p_h)$ the optimal number of drivers and the optimal passenger arrival rate for (\ref{optimalpricing_time}) under a fixed wage floor. We have the following result.
\begin{theorem}
Assume that (\ref{optimalpricing_time}) has a unique solution. For any model parameters $\lambda_0, N_0$, and $\alpha$, any strictly decreasing function ${F_p}(c)$, any strictly increasing function ${F_d}(w)$, any pickup time function ${t_p}$ that satisfies Assumption \ref{assumption1}, and any speed-density relation $v(N)$ that satisfies Assumption \ref{assumption2}, there exists ${w_2} > \tilde{w}$ such that for any $\tilde{w} < {w_0} < {w_2}$, there exists ${\bar p_h} > 0$ so that $\partial {N^*_h}/\partial{p_h} = 0$ and $\partial {\lambda^*_h}/\partial{p_h} = 0$ for all ${p_h} \in (0,{\bar p_h})$.
\label{theorem2}
\end{theorem}
The proof of Theorem \ref{theorem2} can be found in Appendix D. Theorem \ref{theorem2} states that there exists a regime in which both the number of TNC drivers and the passenger arrival rate are unaffected by the congestion charge. This indicates that the ride fare, driver wage, and passenger cost remain constant in this regime, and the congestion charge is entirely imposed on the platform through a direct money transfer from the platform to the city. In this scheme, the congestion charge will not directly curb congestion by reducing traffic in the city. Instead, it can only indirectly mitigate traffic congestion by collecting taxes to subsidize public transit.
\section{Comparison between time-based and trip-based charges}
\begin{figure*}[bt]%
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure1_compare}
\vspace*{-0.3in}
\caption{Number of drivers under different schemes of congestion surcharge. }
\label{figure1_compare}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure2_compare}
\vspace*{-0.3in}
\caption{Comparison of passenger arrival rate (per minute).}
\label{figure2_compare}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure3_compare}
\vspace*{-0.3in}
\caption{Occupancy rate under different congestion surcharges.}
\label{figure3_compare}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure4_compare}
\vspace*{-0.3in}
\caption{Per-trip ride price under different congestion surcharges.}
\label{figure4_compare}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure5_compare}
\vspace*{-0.3in}
\caption{Comparison of passenger pickup time (minute).}
\label{figure5_compare}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{figure6_compare}
\vspace*{-0.3in}
\caption{Platform revenue under different congestion surcharges.}
\label{figure6_compare}
\end{minipage}
\end{figure*}
This section provides a comparison of the trip-based and time-based congestion charges. To ensure a meaningful comparison, we first set a target for the city's tax revenue, which can be achieved by setting the appropriate charge. For each scheme, we find the charge that exactly attains the targeted tax revenue, and we compare the two schemes for the same target. The model parameters are consistent with the previous case studies in Section \ref{parameter_section} and Section \ref{time-based}.
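Finding, for each scheme, the charge that exactly attains the targeted tax revenue is a one-dimensional root find. A bisection sketch, assuming only that the realized tax revenue is continuous and increasing in the charge over the search interval (consistent with Figures \ref{figure9_trip} and \ref{figure9_time}):

```python
def match_charge(tax_revenue, target, lo=0.0, hi=50.0, tol=1e-8):
    """Bisection for the charge level whose realized tax revenue equals
    `target`, assuming tax_revenue() is continuous and increasing on
    [lo, hi].  tax_revenue() is a caller-supplied stand-in that evaluates
    the equilibrium tax revenue at a given charge level."""
    assert tax_revenue(lo) <= target <= tax_revenue(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tax_revenue(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```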
Figures \ref{figure1_compare}--\ref{figure3_compare} compare the number of drivers, passenger arrival rate, and vehicle occupancy of the two schemes for different targets for the city's tax revenue. Figure \ref{figure4_compare} and Figure \ref{figure5_compare} compare the ride fare and the pickup time for the two schemes. Figure \ref{figure6_compare} compares the platform profit under the trip-based and time-based charges. These results reveal that for the same realized tax revenue, the time-based charge is Pareto superior to the trip-based charge (as currently implemented in NYC).
Under the time-based charge, the TNC platform earns a higher profit. For drivers, the time-based congestion charge does not affect their surplus in the first regime, as the same number of drivers are hired at the same wage. For passengers, the time-based charge leads to a lower ride fare but a longer waiting time. Nevertheless, the time-based charge also yields a higher passenger arrival rate (Figure \ref{figure2_compare}). Since the demand function $F_p(c)$ is monotonic, this implies that the total travel cost $c$ is lower and the passenger surplus is higher under the time-based congestion charge.
In summary, the time-based congestion charge leads to higher passenger surplus and higher platform profit (Figure \ref{figure6_compare}), which benefits all participants of the transportation system. This is because the time-based congestion surcharge penalizes idle vehicle hours and motivates the TNC to increase the occupancy rate of the vehicles (see Figure \ref{figure3_compare}). Based on the data for San Francisco, the surplus resulting from increased vehicle occupancy will be distributed to all market participants, including the passengers, the TNC platform, and the city.
While the aforementioned results do not necessarily hold for all levels of targeted tax revenues, the conclusion is indeed applicable for a large range of model parameters in the regime of practical interest. To formally present this claim, we define $N_t^*, w_t^*, \lambda_t^*, c_t^*, P_t^*, Tr_t^*$ as the optimal solution to (\ref{optimalpricing_trip}) and denote $N_h^*, w_h^*, \lambda_h^*, c_h^*, P_h^*, Tr_h^*$ as the optimal solution to (\ref{optimalpricing_time}). They are respectively the optimal number of drivers, driver wage, passenger arrival rate, total travel cost, platform profit, and city tax revenue. Note that all variables with subscript $t$ depend on $p_t$ and $w_0$, and all variables with subscript $h$ depend on $p_h$ and $w_0$. We suppress this dependence to simplify the notation whenever it is clear from the context.
\begin{theorem}
\label{theorem3}
Assume that the profit optimization problems (\ref{optimalpricing_trip}) and (\ref{optimalpricing_time}) both have unique solutions. Assume that ${F_p}(c)$ and ${F_d}(w)$ satisfy the logit model as specified in (\ref{logit_demand}) and (\ref{logit_supply}), respectively. For any pickup time function ${t_p}$ that satisfies Assumption \ref{assumption1}, any speed-density relation $v(N)$ that satisfies Assumption \ref{assumption2}, and any model parameters ${\Theta}=\{\lambda_0, N_0, M, L, v_f, \kappa, \alpha, \epsilon, c_0, \sigma, w_0\}$, there exists $w_3>\tilde{w}$, such that for any $\tilde{w}\leq {w_0}\leq w_3$, there exists $\bar{p}_t$ so that for any trip-based congestion surcharge ${p_t}\in [0,\bar{p}_t]$, there exists a time-based congestion surcharge ${p_h}$ that offers a Pareto improvement, i.e.
\[N_h^* = N_t^*,\quad w_h^* = w_t^* = {w_0},\quad \lambda _h^* > \lambda _t^*,\quad c_h^* < c_t^*,\quad P_h^* > P_t^*,\quad Tr_h^* > Tr_t^*.\]
\end{theorem}
The proof of Theorem \ref{theorem3} can be found in Appendix E. It shows that there exists a regime where a time-based charge offers a Pareto improvement over a trip-based one. In this regime, for any trip-based charge, one can find an appropriate time-based charge for which the same number of drivers is hired, more passengers take TNC rides at a lower cost, the platform earns more profit, and the city collects more tax revenues to subsidize public transit. For the case of San Francisco, we calculate $w_3=\$29.20$/hour, and $\bar{p}_t=\$2.1$/trip when $w_0=\$26.35$/hour.
\begin{remark}
Theorem \ref{theorem3} identifies a regime in which a time-based congestion charge offers a Pareto improvement. The caveat is that this regime only applies to wage floor and congestion charge levels within a certain range, i.e., $w_0\in [\tilde{w},w_3]$, ${p_t}\in [0,\bar{p}_t]$. Outside of this range, the comparison between the two congestion charge schemes may depend on the model parameters. However, we emphasize that it is unlikely for cities to impose very stringent policies that substantially raise the driver payment (or surcharge) level, since this may drive the TNCs out of business. In practice, regulatory policies are likely to reside in, or stay close to, the regime identified by this paper.
\end{remark}
\begin{figure*}[bt]%
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{s1}
\vspace*{-0.3in}
\caption{Number of drivers as a function of $p_h$ under distinct $\lambda_0$. }
\label{figures1}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{s2}
\vspace*{-0.3in}
\caption{Passenger arrival rate (/min) as a function of $p_h$ under distinct $\lambda_0$. }
\label{figures2}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{s3}
\vspace*{-0.3in}
\caption{Platform profit (per hour) as a function of $p_h$ under distinct $\lambda_0$. }
\label{figures3}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{s4}
\vspace*{-0.3in}
\caption{Number of drivers as a function of $p_h$ under distinct $N_0$. }
\label{figures4}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{s5}
\vspace*{-0.3in}
\caption{Passenger arrival rate (/min) as a function of $p_h$ under distinct $N_0$. }
\label{figures5}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{s6}
\vspace*{-0.3in}
\caption{Platform profit (per hour) as a function of $p_h$ under distinct $N_0$. }
\label{figures6}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{s7}
\vspace*{-0.3in}
\caption{Number of drivers as a function of $p_h$ under distinct $\alpha$. }
\label{figures7}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{s8}
\vspace*{-0.3in}
\caption{Passenger arrival rate (/min) as a function of $p_h$ under distinct $\alpha$. }
\label{figures8}
\end{minipage}
\begin{minipage}[b]{0.005\linewidth}
\hfill
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\include{s9}
\vspace*{-0.3in}
\caption{Platform profit (per hour) as a function of $p_h$ under distinct $\alpha$. }
\label{figures9}
\end{minipage}
\end{figure*}
\section{Sensitivity Analysis}
This section reports a sensitivity analysis to test the robustness of our results with respect to the model parameters. We vary the model parameters of (\ref{optimalpricing_time}) and evaluate the impact of the time-based congestion charge under distinct parameter values. The nominal values of the parameters are set to be the same as in Section \ref{parameter_section}. We perturb $\lambda_0, N_0$ and $\alpha$ by $5\%$ and investigate how these perturbations affect passengers, drivers, and the TNC platform under the time-based charge.
Figures \ref{figures1}--\ref{figures3} show the number of drivers, passenger arrival rate, and the platform profit as functions of the time-based congestion charge under different $\lambda_0$ (the nominal value is 1049/min). Clearly, there are two regimes. When $\lambda_0$ increases, the TNC platform has more passengers and therefore enjoys a higher profit. However, we note that in the first regime, the number of drivers is not affected by $\lambda_0$. This is because in the first regime, both (\ref{supply_constraint_time}) and (\ref{min_wage_const_time}) are active, which determines $N$ as $N=N_0F_d(w_0)$.
Figures \ref{figures4}--\ref{figures6} show the number of drivers, passenger arrival rate, and the platform profit as functions of the time-based charge for different $N_0$ (the nominal value is 10000). There are clearly two regimes for the three values of $N_0$. When $N_0$ increases, the platform hires more drivers, attracts more passengers, and collects a higher profit, although the platform profit is relatively insensitive to the number of potential drivers.
Figures \ref{figures7}--\ref{figures9} show the number of drivers, passenger arrival rate, and the platform profit as functions of the time-based charge for different $\alpha$ (the nominal value is 2.33). When $\alpha$ increases, both the passenger arrival rate and the platform profit drop. We note that the platform profit is much more sensitive to $\alpha$ than it is to $\lambda_0$ and $N_0$.
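The sweep behind Figures \ref{figures1}--\ref{figures9} is mechanical once a solver for (\ref{optimalpricing_time}) is available. A generic sketch, where \texttt{solve} is a caller-supplied stand-in for the profit optimization and returns the optimal profit:

```python
def sensitivity(solve, base_params, keys=("lambda0", "N0", "alpha"), rel=0.05):
    """Perturb each listed parameter by +/- rel (here 5%), re-solve, and
    report the relative change in the objective returned by solve()."""
    base = solve(base_params)
    changes = {}
    for k in keys:
        for sign in (+1, -1):
            perturbed = dict(base_params)
            perturbed[k] *= 1.0 + sign * rel
            changes[(k, sign)] = (solve(perturbed) - base) / base
    return changes
```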
\section{Conclusion}
This paper describes the impact of two proposed congestion charges on TNCs: (a) a charge based on vehicle trips, and (b) a charge based on vehicle hours. We used a market equilibrium model to assess the joint effect of a minimum wage with either of these two charges. Surprisingly, we find that in a broad regime neither charging scheme significantly affects the number of TNC vehicles, since their effects are mitigated by the wage floor on TNC drivers. Furthermore, we find that the time-based charge is Pareto superior to the trip-based charge that is currently imposed in New York City. Under the time-based charge, more passengers take TNC rides at a lower overall travel cost, drivers remain unaffected, the platform earns a higher profit, and the city collects more tax revenue from the TNC system to subsidize public transit.
The policy implications of these results are profound. First, our results imply that the TNC driver minimum wage mitigates the effectiveness of the congestion charge (either time-based or trip-based) in reducing TNC traffic. Therefore, when a driver minimum wage is imposed, the city cannot merely count on the congestion charge to reduce the number of TNC vehicles on the city's streets, unless the charge is significant and exceeds a certain threshold. Second, { the TNC profit is rather sensitive to regulations such as minimum wages and congestion charges. Based on calibrated model parameters, we showed that the tax burden mainly falls on the ride-hailing platform as opposed to passengers and drivers. We argue that this effect should be taken into account in policy formulation, and an interesting research direction is to synthesize more effective policies that achieve the regulatory objective without jeopardizing the TNC business model, e.g., \cite{li2020off}. } Third,
our results suggest that the time-based congestion charge is superior to the trip-based congestion charge. While most cities select the trip-based congestion charge as the natural candidate for their charging scheme (e.g., NYC, Chicago, Seattle), a shift to the time-based congestion charge is not difficult to implement: the city only needs to periodically audit the operations data of the TNC and collect the charge based on the accumulated vehicle hours on the platform.
Future research directions include determining the optimal level of congestion charge that maximizes social welfare, extending the model to capture the temporal and spatial aspects of the TNC market, and characterizing the impact of regulatory policies on TNC competition.
\section*{Acknowledgments}
This research was supported by the Hong Kong Research Grant Council project HKUST26200420 and National Science Foundation EAGER
award 1839843.
\bibliographystyle{unsrt}
\section{Introduction}
Word embedding models such as Skip-gram \cite{DBLP:conf/nips/MikolovSCCD13} and GloVe \cite{glove2014} use fixed-dimensional vectors to represent the meaning of words. These word vectors essentially capture a kind of similarity structure, which has proven to be useful in a wide range of Natural Language Processing (NLP) tasks. Today, one of the major applications of word embeddings is their interaction with neural network architectures, enabling a kind of generalization to words beyond those observed during training. For example, if a classification model has learned that news stories containing words such as `cinema', `restaurant' and `zoo' tend to be categorized as `entertainment', it may predict this latter label also for stories about theme parks due to the shared semantic properties encoded in word vectors. Word embeddings thus endow neural models with some form of world knowledge, without which they would be far less effective. This has prompted a prolific line of research focused on improving word embeddings not only with algorithmic sophistication, but also via explicit incorporation of external knowledge sources such as WordNet \cite{DBLP:conf/naacl/FaruquiDJDHS15}, BabelNet \cite{camacho2016nasari} or ConceptNet \cite{Speeretal2016}.
Regardless of how word vectors are learned, however, the use of fixed-dimensional representations constrains the kind of knowledge they can encode. Essentially, we can think of a word vector as a compact encoding of the salient attributes of the given word. For instance, the vector representation of \textit{lion} might implicitly encode that this word is a noun, and that lions have attributes such as `dangerous', `predator' and `carnivorous'. Beyond these properties, word embeddings can also encode \emph{relational knowledge}. For instance, the embedding might tell us that the words `lion' and `zebra' are semantically related, which together with the attributional knowledge that lions are predators and zebras are prey may allow us to plausibly infer that `lions eat zebras'. However, the way in which relational knowledge can be encoded in word embeddings is inherently limited. One issue is that only relationships which are sufficiently salient can affect the vector representations of their arguments; e.g.\ the fact that Trump has visited France is perhaps not important enough to be encoded in the embeddings of the words `Trump' and `France' (i.e.\ there may be insufficient corpus-based evidence on this fact). Note that this is not a matter of how the embedding is learned; forcing the vector representations to encode this fact would distort the similarity structure of the embedding. From a formal point of view, there are also severe limitations to what can be encoded \cite{gutierrez2018knowledge}. As a simple example, methods based on vector translations cannot model symmetric relations, and they are limited in the kind of many-to-many relations that can be encoded \cite{DBLP:conf/aaai/LinLSLZ15}.
Like word embeddings, semantic networks such as WordNet \cite{Miller1995}, BabelNet \cite{navigli2012babelnet} or ConceptNet \cite{Speeretal2016} also encode lexical and world knowledge. They use a graph representation in which nodes correspond to words, phrases, entities or word senses. Edge labels are typically chosen from a small set of discrete and well-defined lexical and ontological relationships. Compared to word embeddings, the knowledge captured in such resources is more explicit, and more focused on relational knowledge (although attributional knowledge can be encoded as well, e.g., by using edge labels modeling the \texttt{has\_property} relation). The use of discrete labels for encoding relation types, however, makes such representations too coarse-grained for many applications (e.g., a large proportion of the edges in ConceptNet are labelled with the generic `related to' relationship). It also means that subjective knowledge cannot be modeled in an adequate way (e.g., forcing us to make a hard choice between which animals are considered to have the property `dangerous' and which ones do not).
In this paper, we propose a hybrid representation, which we call SeVeN (Semantic Vector Networks). Similar to semantic networks, we use a graph based representation in which nodes are associated with words and edges connect related words. In contrast to semantic networks, however, these edges are labelled with a vector, meaning that relation types are modeled in a continuous space.
To obtain a suitable \emph{relation vector} for two given words $a$ and $b$, we start by averaging the vector representations (from a pre-trained word embedding) of the words that appear in sentences that mention both $a$ and $b$. The resulting vectors have two main disadvantages, however. First, they are high-dimensional, as they are constructed as the concatenation of several averaged word vectors.
Second, the relation vectors are influenced by words that describe the relationship between $a$ and $b$, but also by words that rather relate to the individual words $a$ or $b$ (as well as some non-informative words). Intuitively we want to obtain a vector representation which only reflects the words that relate to the relationship. For example, the relation vector for (paris,france) should ideally be the same as the vector for (rome,italy), but this will not be the case for the averaged word vectors, as the former relation vector will also reflect the fact that these words represent places and that they relate to France. To address both issues, we introduce an autoencoder architecture in which the input to the decoder comprises both the encoded relation vector and the word vectors for $a$ and $b$. By explicitly feeding the word vectors for $a$ and $b$ into the decoder, we effectively encourage the encoder to focus on words that describe the relationship between $a$ and $b$.
Once the semantic vector network has been learned, it can be used in various ways. For instance, the relation vectors could be used for measuring relational similarity \cite{DBLP:conf/semeval/JurgensMTH12}, for identifying words that have a specific lexical relationship such as hypernyms \cite{Vylomova2016}, or complementing open information extraction systems \cite{dellibovietal:2015}. In this paper, however, we will assess the potential of SeVeN in terms of two tasks, namely using it for (1) unsupervised semantic similarity modeling, and for (2) enriching word vectors as the input to neural network architectures. The overarching idea in the latter case is that, instead of simply representing each word by its vector representation, the representation for each word position will be composed of (i) the vector representation of the word, (ii) the vector representations of the adjacent words in the semantic vector network, and (iii) the corresponding relation vectors (i.e.\ the edge labels).
\section{Related Work}
Related work broadly falls in two categories: methods which aim to improve word embeddings using relational knowledge, and methods which aim to learn relation vectors. To the best of our knowledge, there is no previous work which uses relation vectors with the aim of enriching word embeddings.\smallskip
\noindent \textbf{Improving Word Embeddings.} One of the most notable features of word embedding models, such as Skip-gram \cite{mikolov2013linguistic} and GloVe \cite{glove2014}, is the fact that various syntactic and semantic relationships approximately correspond to vector translations. One limitation of vector translations is that they are not well-suited for modeling transitive relations, which is problematic for, among others, the is-a relationship. To address this limitation, a number of alternative vector space representations have been proposed, which are specifically aimed at modeling taxonomic relationships \cite{DBLP:journals/corr/VendrovKFU15,yu2015learning,nickel2017poincare}. Note that while such alternative embedding spaces can solve some of the limitations of standard embeddings w.r.t.\ modeling taxonomic relationships, there are many other types of relations that cannot be faithfully modeled in these representations. Moreover, these alternative embeddings are not necessarily well-suited for modeling word similarity. More generally, various authors have explored the idea of adapting word embeddings to fit the needs of specific tasks, e.g.\ aiming to make embeddings better suited for capturing antonyms \cite{ono2015word}, hypernyms \cite{DBLP:journals/corr/abs-1710-06371} or sentiment \cite{tang2014learning}.
As mentioned in the introduction, the use of semantic networks for improving word embeddings, based on the idea that words which are similar in the semantic network should have a similar embedding, has been explored by various authors \cite{DBLP:conf/naacl/FaruquiDJDHS15,camacho2016nasari,Speeretal2016}. Another possibility is to use a semantic network to decompose word embeddings into sense embeddings by imposing the constraint that the word vector is a convex combination of the corresponding sense vectors, as well as forcing similarity of the sense vector with the vector representations of its neighbors in the semantic network \cite{DBLP:conf/naacl/JohanssonP15}. Finally, let us refer to work that learns additional embeddings that coexist in the same space as lexemes, e.g., WordNet synsets \cite{rothe2015autoextend} or BabelNet synsets \cite{mancini2017embedding}. \smallskip
\noindent \textbf{Relation Vectors.}
The idea of learning a relation vector for two words $a$ and $b$, based on the words that appear in their context, goes back at least to the Latent Relational Analysis (LRA) method from \cite{Turney:2005:MSS:1642293.1642475}. In that work, a matrix is constructed with one row for each considered word pair, where columns correspond to lexical patterns that have been extracted from sentences containing these words. The relation vectors are then obtained by applying Singular Value Decomposition (SVD) on that matrix. Along similar lines, in \cite{DBLP:conf/naacl/RiedelYMM13} relation vectors are learned by factorizing a matrix whose rows correspond to entity pairs and whose columns correspond to properties (in this case comprising both lexical patterns from a corpus and triples from a knowledge graph). More recently, several methods have been proposed that learn a vector describing the relationship between two words by averaging the embeddings of the words that appear in between them in a given corpus \cite{DBLP:conf/emnlp/WestonBYU13,DBLP:conf/conll/HashimotoSMT15,DBLP:conf/ranlp/FanCHG15}, or by learning a vector representation from PMI-like statistics on how strongly different words are associated with the considered word pair \cite{DBLP:journals/corr/abs-1711-05294}. Beyond these unsupervised methods, a wide variety of supervised neural network based architectures have been proposed for learning relation vectors that are predictive of a given relation type \cite{zeng2014relation,DBLP:conf/acl/SantosXZ15,xu2015classifying}.
\section{Constructing Semantic Vector Networks}
Our aim is to construct a graph whose nodes correspond to words, whose edges indicate which words are related, and whose edge labels are vectors that encode the specific relationship between the corresponding words. We will refer to this representation as a semantic vector network. In this section, we describe our methodology for constructing such semantic vector networks. First, in Section \ref{secStructure}, we provide details about the source corpus and explain how the structure of the network is chosen. In Section \ref{secLearningVectors} we then discuss how suitable relation vectors can be constructed.
\subsection{Defining the Network Structure}\label{secStructure}
Our source corpus is a dump of the English Wikipedia from January 2018. We opted to keep preprocessing at a minimum to ensure that any emergent linguistic or relational regularity is captured during the network construction stages. Specifically, we applied sentence segmentation and word tokenization using \textit{nltk}\footnote{\url{nltk.org}}. We also single-tokenized multiword expressions based on several lexicons \cite{schneider2014discriminative}, and finally removed stopwords using the CoreNLP list\footnote{\url{github.com/stanfordnlp/CoreNLP/blob/master/data/edu/stanford/nlp/patterns/surface/stopwords.txt}}.
After the above steps, we selected the $10^5$ most frequent words as our vocabulary. To determine which words should be connected with an edge, we rely on Pointwise Mutual Information (PMI), which measures the strength of association between two random variables. It is commonly used in NLP as a method for identifying related words, e.g.\ in factorization based methods for learning word embeddings \cite{turney2010frequency}. Specifically, we express the strength of association between words $w_i$ and $w_j$ as follows:
$$
\textit{pmi}(w_i,w_j) = \log\left(\frac{x_{ij} x_* }{x_{i}x_j} \right)
$$
In our case, $x_{ij}$ is the number of times word $w_i$ appears near word $w_j$, weighted by the nearness of their co-occurrences, and $x_i=\sum_j x_{ij}$, $x_j=\sum_i x_{ij}$ and $x_*=\sum_i\sum_j x_{ij}$. Specifically, let $I_{w_i}$ be the word positions in the corpus at which $w_i$ occurs, then we define:
\begin{align*}
x_{ij} = \sum_{p \in I_{w_i}}\sum_{q \in I_{w_j}} n(p,q)
\end{align*}
where $n(p,q)=0$ if the word positions $p$ and $q$ belong to a different sentence, or if $|p-q|>10$, i.e.\ if there are at least 10 words in between them. Otherwise we define $n(p,q) = \frac{1}{|p-q|}$.
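The weighted co-occurrence counts and the resulting PMI scores can be computed as in the following minimal sketch (the function names and the in-memory corpus representation are our own; the actual pipeline would stream over the tokenized Wikipedia dump rather than hold it in memory):

```python
import math
from collections import defaultdict

def weighted_cooccurrences(sentences, window=10):
    """Accumulate x_ij as the sum of n(p, q) = 1/|p - q| over all
    co-occurrences within the same sentence and within `window`
    positions of each other."""
    x = defaultdict(float)
    for sent in sentences:
        for p, wi in enumerate(sent):
            for q in range(p + 1, min(p + window + 1, len(sent))):
                w = 1.0 / (q - p)
                x[(wi, sent[q])] += w
                x[(sent[q], wi)] += w  # counts are symmetric
    return x

def pmi(x):
    """pmi(w_i, w_j) = log(x_ij * x_star / (x_i * x_j))."""
    xi = defaultdict(float)
    for (wi, _), v in x.items():
        xi[wi] += v
    x_star = sum(x.values())
    return {pair: math.log(v * x_star / (xi[pair[0]] * xi[pair[1]]))
            for pair, v in x.items()}
```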
\begin{table}[!h]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{cc|cc|cc|cc}
\multicolumn{2}{c|}{\texttt{\textbf{sorrow}}} & \multicolumn{2}{c|}{\texttt{\textbf{tournament}}} & \multicolumn{2}{c|}{\texttt{\textbf{videogame}}} & \multicolumn{2}{c}{\texttt{\textbf{riverbank}}} \\ \hline \hline
\texttt{ppmi} & \texttt{w2v} & \texttt{ppmi} & \texttt{w2v} & \texttt{ppmi} & \texttt{w2v} & \texttt{ppmi} & \texttt{w2v} \\ \hline
contrition & sadness & scotties & tourney & lego & videogames & danube & riverbanks \\
lamentation & anguish & double-elimination & tournaments & consoles & videogaming & erosion & river \\
woe & grief & single-elimination & Tournament & villains & next\_gen\_consoles & laboratories & creek \\
savior & profound\_sorrow & pre-olympic & tournment & arcade & Videogame & opposite & riverbed \\
everlasting & deepest\_sorrow & 4-day & tourament & sega & gamers & vegetation & riverside \\
anguish & heartfelt\_sorrow & eight-team & tourneys & ea & MMOG & tales & lake \\
grief & profound\_sadness & winnings & touranment & playstation & PS2 & washed & shoreline \\
\end{tabular}
}
\caption{Examples of the highest scoring (i.e.\ most strongly associated by \texttt{pmi}) words, as well as their nearest neighbors in the pretrained word2vec (\texttt{w2v}) Google news vector space.}
\label{tab:bestedges}
\end{table}
To choose the edges of the semantic vector network, we only consider word pairs which co-occur at least 10 times in the corpus. Among such pairs, for each word $w_i$, we first select the 10 words $w_j$ whose score $\textit{pmi}(w_i,w_j)$ is highest. This resulted in a total of about 900\,000 pairs. Then, we added the overall highest scoring pairs $(w_i,w_j)$ which had not yet been selected, until we ended up with a total of approximately $10^6$ edges involving the initial vocabulary of $10^5$ words. In the following we will write $N_w$ for the neighbors of $w$, i.e.\ the set of words $n$ such that $\{w,n\}$ was selected as an edge.
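The two-step edge selection above can be sketched as follows (the function \texttt{select\_edges} is a hypothetical name, and pairs co-occurring fewer than 10 times are assumed to have been filtered out of the input already):

```python
from collections import defaultdict

def select_edges(pmi_scores, per_word=10, total=10**6):
    """Choose undirected graph edges from a dict mapping ordered
    word pairs (w_i, w_j) to their PMI score."""
    by_word = defaultdict(list)
    for (wi, wj), s in pmi_scores.items():
        by_word[wi].append((s, wj))
    edges = set()
    # step 1: for each word, keep its `per_word` strongest neighbors
    for wi, scored in by_word.items():
        for s, wj in sorted(scored, reverse=True)[:per_word]:
            edges.add(frozenset((wi, wj)))
    # step 2: pad with the overall highest scoring remaining pairs
    for (wi, wj), s in sorted(pmi_scores.items(),
                              key=lambda kv: kv[1], reverse=True):
        if len(edges) >= total:
            break
        edges.add(frozenset((wi, wj)))
    return edges
```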
Note that by capturing pairs of words strongly connected by PMI, we encode a different type of relatedness than proximity in word embeddings.
To illustrate this, in Table \ref{tab:bestedges} we compare the most closely related words in our PMI graph, for some selected target words, with their nearest neighbors in the \textit{word2vec} Google News word embedding space\footnote{\url{https://code.google.com/archive/p/word2vec/}} (measured by cosine similarity).
While the word2vec neighbors mostly consist of near-synonyms and other paradigmatic relationships, the chosen PMI pairs include a wide variety of topically related linguistic items.
A semantic network based on such PMI pairs should thus capture information which is complementary to what is captured in word embeddings.
\subsection{Learning relation vectors}\label{secLearningVectors}
Our general strategy for learning relation vectors is based on averaging word vectors. Specifically, for each sentence $s$ in which $w_i$ occurs before $w_j$ (within a distance of at most 10), we construct three vectors, based on the words $a_1,...,a_k$ which appear before $w_i$, the words $b_1,...,b_l$ which appear in between $w_i$ and $w_j$ and the words $c_1,...,c_q$ which appear after $w_j$:
\begin{align*}
\textit{pre}^s_{w_iw_j} &= \frac{1}{k} \sum_{r=1}^k \mathbf{v}_{a_r} &
\textit{mid}^s_{w_iw_j} &= \frac{1}{l} \sum_{r=1}^l \mathbf{v}_{b_r} &
\textit{post}^s_{w_iw_j} &= \frac{1}{q} \sum_{r=1}^q \mathbf{v}_{c_r}
\end{align*}
where we write $\mathbf{v}_w$ for the vector representation of the word $w$. These vectors are then averaged over all sentences $S_{ij}$ where $w_i$ occurs before $w_j$:
\begin{align*}
\textit{pre}_{w_iw_j} &= \frac{1}{|S_{ij}|} \sum_{s\in S_{ij}} \textit{pre}^s_{w_iw_j} &
\textit{mid}_{w_iw_j} &= \frac{1}{|S_{ij}|} \sum_{s\in S_{ij}} \textit{mid}^s_{w_iw_j} &
\textit{post}_{w_iw_j} &= \frac{1}{|S_{ij}|} \sum_{s\in S_{ij}} \textit{post}^s_{w_iw_j}
\end{align*}
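A sketch of this averaging step is shown below (word vectors are assumed to be given as a dict of numpy arrays; for brevity only the triple for the order where $w_i$ precedes $w_j$ is shown, the reverse order being handled analogously before all six vectors are concatenated):

```python
import numpy as np

def avg(words, vecs, dim):
    """Average the available word vectors, or return zeros."""
    vs = [vecs[w] for w in words if w in vecs]
    return np.mean(vs, axis=0) if vs else np.zeros(dim)

def pre_mid_post(sentence, i, j, vecs, dim):
    """Per-sentence (pre, mid, post) vectors for w_i at position i
    and w_j at position j, with i < j."""
    return (avg(sentence[:i], vecs, dim),
            avg(sentence[i + 1:j], vecs, dim),
            avg(sentence[j + 1:], vecs, dim))

def relation_vector(occurrences, vecs, dim):
    """Average the per-sentence triples over all sentences S_ij in
    which w_i occurs before w_j, then concatenate the averages."""
    triples = [pre_mid_post(s, i, j, vecs, dim)
               for s, i, j in occurrences]
    return np.concatenate([np.mean([t[k] for t in triples], axis=0)
                           for k in range(3)])
```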
Since we can similarly obtain such vectors from sentences where $w_j$ appears before $w_i$, we end up with a relation vector whose dimensionality is six times higher than the dimensionality of the word embedding, which would be impractical in the kind of applications we envisage (see Section \ref{secEvaluation}). Another problem with these vectors is that they do not only reflect the relationship between $w_i$ and $w_j$, but also the words $w_i$ and $w_j$ themselves. For instance, suppose we want to model the relationship between the words `movie' and `popcorn'. A sentence mentioning these two words could be:
\begin{quote}
\textit{Buttered popcorn is commonly eaten at movie theatres.}
\end{quote}
The most relevant words for describing the relationship are `eaten at'. In contrast, however, `buttered' is mostly related to the word `popcorn' itself rather than describing its relationship with `movie'. Similarly, `theatres' is related to `movie', but not relevant for characterizing the relationship.
To solve both issues, we propose to use an autoencoder architecture, in which the decoder has access to the word vectors $\mathbf{v}_{w_i}$ and $\mathbf{v}_{w_j}$, in addition to the encoded version of the relation vector. Let us write $\mathbf{z}_{w_iw_j}$ for the concatenation of $\textit{pre}_{w_iw_j}$, $\textit{mid}_{w_iw_j}$, $\textit{post}_{w_iw_j}$, $\textit{pre}_{w_jw_i}$, $\textit{mid}_{w_jw_i}$, $\textit{post}_{w_jw_i}$. Then the encoder is given by:
$$
\mathbf{r}_{w_iw_j} = A\, \mathbf{z}_{w_iw_j} + \mathbf{b}
$$
where $A \in \mathbb{R}^{m \times 6d}$ and $\mathbf{b}\in \mathbb{R}^m$, with $d$ the dimensionality of the word vectors and $m$ the dimensionality of the encoded relation vectors. In our experiments we consider different values for $m$. Empirically, we find that as the dimensionality of the compressed representations becomes smaller, the importance of word semantics gradually fades away in favor of their corresponding relational properties. The decoder is then defined as:
$$
\mathbf{z}_{w_iw_j}^* = B (\mathbf{v}_{w_i} \oplus \mathbf{r}_{w_iw_j} \oplus \mathbf{v}_{w_j}) + \mathbf{c}
$$
where $\oplus$ denotes vector concatenation, $B \in \mathbb{R}^{6d \times (m+2d)}$ and $\mathbf{c}\in \mathbb{R}^{6d}$.
To train the autoencoder, we use the following L2-regularized reconstruction loss:
$$
\mathcal{L} = \|\mathbf{z}_{w_iw_j} - \mathbf{z}_{w_iw_j}^* \|_2^2 + \lambda \|\mathbf{r}_{w_iw_j}\|_2^2
$$
with $\lambda>0$ a regularization parameter. This loss function balances two objectives: minimizing the reconstruction error and keeping the L2 norms of the encoded relation vectors as small as possible. Because of this latter part, we can think of the norm of the relation vectors $\mathbf{r}_{w_iw_j}$ as a measure of how strongly the words $w_i$ and $w_j$ are related. In particular, if sentences mentioning $w_i$ and $w_j$ contain few or no words that describe their relationship, we might expect $\mathbf{r}_{w_iw_j}$ to be close to the 0 vector.\footnote{We also conducted experiments without the regularization term, with slightly worse results across all evaluations.}
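The architecture and loss can be summarized with the following numpy sketch (forward pass and loss only; the random initialization, the dimensionalities and the regularization weight are placeholders, and training by gradient descent is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 300, 10  # word vector and relation vector dimensionality

A = rng.normal(size=(m, 6 * d)) * 0.01          # encoder weights
b = np.zeros(m)
B = rng.normal(size=(6 * d, m + 2 * d)) * 0.01  # decoder weights
c = np.zeros(6 * d)

def encode(z):
    # r = A z + b : compress the 6d-dimensional context vector
    return A @ z + b

def decode(r, v_i, v_j):
    # z* = B (v_i + r + v_j concatenated) + c : the decoder also
    # receives both word vectors, so r only needs to encode the
    # relationship itself
    return B @ np.concatenate([v_i, r, v_j]) + c

def loss(z, v_i, v_j, lam=0.1):
    # reconstruction error plus L2 penalty on the relation vector
    r = encode(z)
    return np.sum((z - decode(r, v_i, v_j)) ** 2) + lam * np.sum(r ** 2)
```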
\section{Evaluation}\label{secEvaluation}
We propose to evaluate our semantic vector networks from three different standpoints. First, we provide a qualitative evaluation by exploring relation network spaces (both compressed and uncompressed) and discussing meaningful properties. Second, we perform experiments in word similarity where we compare against the standard approach of measuring the similarity of two words by means of the cosine distance between their corresponding word vectors. This evaluation serves as an illustration of how semantic vector networks could be used in an unsupervised application setting. Third, as a prototypical example of a supervised application setting, we analyze the impact of leveraging the enriched representation these networks provide in neural text classification, in particular topic categorization and sentiment analysis. In all experiments, the pretrained embeddings we use (both for baselines and for constructing the relation networks) are the \textit{word2vec} Google News embeddings \cite{mikolov2013linguistic}.
\subsection{Qualitative Evaluation}
One of the strongest selling points of word embeddings is that they enable inference of relational properties, which can be obtained by simple vector arithmetic such as summation and subtraction \cite{levy2015improving}. The basic idea is that the relationship between two words $w_i$ and $w_j$ is characterized by the vector difference $\mathbf{v}_{w_i} - \mathbf{v}_{w_j}$. Such vector differences, however, encode relations in a noisy way. For instance, while the differences $\mathbf{v}_{\textit{rome}} - \mathbf{v}_{\textit{italy}}$, $\mathbf{v}_{\textit{paris}} - \mathbf{v}_{\textit{france}}$ and $\mathbf{v}_{\textit{dublin}} - \mathbf{v}_{\textit{ireland}}$ are all rather similar, there are in fact many other word pairs (not in a capital-of relationship) whose difference is also similar to these differences \cite{ziedcoling}. Accordingly, it was found in \cite{Vylomova2016} that a relation classifier which is trained on word vector differences is prone to predicting many false positives. In contrast, we can expect that our relation vectors are modeling relations in a far less ambiguous way. On the other hand, these relation vectors are limited to word pairs that co-occur sufficiently frequently. Apart from the associated sparsity issues, this also suggests that relation vectors are not suitable for characterizing paradigmatic relationships and several types of syntactic relationships. We thus view these relation vectors as complementary to word vector differences.
\begin{table}[!th]
\centering
\renewcommand{\arraystretch}{1.075}
\footnotesize
\begin{tabular}{cccc}
\multicolumn{4}{c}{\textbf{lime\_juice}}
\\ \hline \hline
\texttt{original} & \texttt{compressed-10d} & \texttt{compressed-50d} & \texttt{diffvec} \\ \hline
lemon\_juice & lemon\_juice & lemon\_juice & lime\_soda \\
juice\_lemon & coconut\_milk & juice\_lemon & lime\_lemon \\
juice\_lime & marzipan\_paste & juice\_lime & lemon\_juice \\
lime\_lemon & juice\_lime & vinegar\_sour & citric\_juice \\
lemon\_lime & noodles\_egg & lemon\_lime & tamarind\_juice \\
pineapple\_juice & lime\_lemon & vinegar\_sauce & lime\_pie \\
orange\_juice & marinated\_beef & lime\_lemon & pineapple\_juice \\\Xhline{3\arrayrulewidth}
\multicolumn{4}{c}{\textbf{nintendo\_console}} \\ \hline \hline
\texttt{original} & \texttt{compressed-10d} & \texttt{compressed-50d} & \texttt{diffvec} \\ \hline
wii\_console & wii\_console & wii\_console & nintendo\_consoles \\
playstation\_console & playstation\_console & nintendo\_nes & nintendo\_handheld \\
nintendo\_nes & nintendo\_nes & playstation\_console & gamecube\_console \\
xbox\_console & witcher\_2 & xbox\_console & wii\_console \\
nintendo\_consoles & itunes\_download & nintendo\_consoles & dreamcast\_console \\
famicom\_console & imax\_2d & sega\_consoles & nintendo\_switch \\
nintendo\_64 & netflix\_streaming & nintendo\_handheld & 3ds\_console \\\Xhline{3\arrayrulewidth}
\multicolumn{4}{c}{\textbf{gmail\_email}} \\ \hline \hline
\texttt{original} & \texttt{compressed-10d} & \texttt{compressed-50d} & \texttt{diffvec} \\ \hline
yahoo\_email & renders\_firefox & yahoo\_email & gmail\_emails \\
inbox\_email & ie\_browser & gmail\_e-mail & yahoo\_email \\
hotmail\_email & infinitive\_suffix & inbox\_email & hotmail\_email \\
email\_yahoo & firefox\_browser & gmail\_emails & addy\_email \\
gmail\_e-mail & carnap\_semantics & email\_yahoo & imap\_email \\
sending\_email & helvetica\_font & hotmail\_email & smtp\_email \\
send\_email & cv\_syllable & google\_search & bugzilla\_email \\\Xhline{3\arrayrulewidth}
\multicolumn{4}{c}{\textbf{roman\_numerals}} \\ \hline \hline
\texttt{original} & \texttt{compressed-10d} & \texttt{compressed-50d} & \texttt{diffvec} \\ \hline
arabic\_numerals & arabic\_alphabet & arabic\_numerals & cyrillic\_numerals \\
letters\_numerals & greek\_alphabet & letters\_numerals & indic\_numerals \\
letters\_alphabet & 10-inch\_discs & uppercase\_letters & georgian\_numerals \\
lowercase\_letters & latin\_alphabet & lowercase\_letters & hieratic\_numerals \\
arabic\_alphabet & yemenite\_pronunciation & uppercase\_characters & brahmi\_numerals \\
latin\_alphabet & standard\_orthography & latin\_alphabet & sinhala\_numerals \\
symbols\_numerals & wii\_remote & alphabetic\_numerals & quantifiers\_numerals \\\Xhline{3\arrayrulewidth}
\multicolumn{4}{c}{\textbf{heavy\_metal}} \\\hline \hline
\texttt{original} & \texttt{compressed-10d} & \texttt{compressed-50d} & \texttt{diffvec} \\ \hline
thrash\_metal & metal\_heavy & thrash\_metal & heavy\_metals \\
glam\_metal & karma\_dharma & doom\_metal & cky\_metal \\
doom\_metal & techno\_rave & glam\_metal & manilla\_metal \\
symphonic\_metal & psychedelic\_garage & thrash\_slayer & annihilator\_metal \\
nu\_metal & cooking\_recipes & punk\_rock & heaviness\_metal \\
sludge\_metal & gita\_yoga & hardcore\_punk & doro\_metal \\
glam\_rock & post-punk\_punk & sludge\_metal & behemoth\_metal
\end{tabular}
\caption{Nearest neighbors (by cosine) for selected relation vectors under the four models under consideration.}
\label{tab:neighbors}
\end{table}
In this section we illustrate the semantic properties of different versions of SeVeN. To this end, we show the nearest neighbors of selected target relation vectors for a number of different representations: (1) the original 1800d SeVeN network (\texttt{original}), (2) an autoencoded 10-dimensional space (\texttt{compressed-10d}), (3) a slightly higher-dimensional version (\texttt{compressed-50d}), and finally (4) a baseline model according to which the relation between two words is modeled as the vector difference of the corresponding word vectors
(\texttt{diffvec}).
The five selected target relation vectors, along with their nearest neighbors, are shown in Table \ref{tab:neighbors}. These target relation vectors were chosen to capture a range of different types of relationships, including hypernymic (`nintendo - console' and `gmail - email') and attributional (`roman - numerals') relations.
One immediate observation is that, in most cases, the \texttt{diffvec} neighbors remain very close to the given word pair, where each word from the given pair is either preserved or replaced by a closely related word. The \texttt{original} and \texttt{compressed-50d} relation vectors largely follow a similar trend, although a few more interesting analogies are also found in these cases (e.g. \textit{arabic - alphabet} as a neighbor of \textit{roman - numerals}). The results for the \texttt{compressed-10d} vectors, however, follow a markedly different pattern. For these low-dimensional vectors, our autoencoder forces the relation vectors to focus on modeling the relationship between the two words, while abstracting away from the initial domain. This leads to several interesting neighbors, although this seems to come at the cost of some added noise.
Let us now analyze more closely the results of the \texttt{compressed-10d} vectors. If we read the first example along the lines of ``juice can be made from limes'', similar relations are found close in the space, such as `coconut - milk' and `marzipan - paste'. Note that the relation `noodles - egg' is also similar, although the two words appear in the incorrect order (i.e.\ noodles can be made from eggs rather than the other way around). As another example where the directionality of this pattern is not captured correctly, we also find the pair `juice - lime'. It would be interesting to analyze in future work whether such issues can be avoided by using features from a dependency parser, e.g.\ following a similar strategy as in \cite{levy2014dependency}. Note that while all the \texttt{compressed-10d} neighbors are still related to food, these vectors have generalized beyond the domain of citrus fruits (see e.g., `lime', `tamarind' or `lemon' in \texttt{diffvec}, or `lemon' and `orange' in \texttt{original}). A similar phenomenon occurs in some of the other examples. In the `nintendo-console' case, after interpreting the relation as ``major supplier of'' or ``entity which popularized'', we find nearest neighbors in the \texttt{compressed-10d} space where the same relation holds, but which do not belong to the video games domain, such as `itunes-download' or `netflix-streaming'. Next, we find that the relation holding between `gmail' and `email' is similar to those between `ie' or `firefox' and `browser', and even between `helvetica' and `font'. The relation between `google' and `search', found for \texttt{compressed-50d}, is also of this kind. In contrast, the \texttt{diffvec} neighbors in this case all have \textit{email} as the second word. In the `roman-numerals' example, the \texttt{diffvec} neighbors similarly have `numerals' as the second word, while for \texttt{compressed-10d} we see more interesting neighbors such as `arabic-alphabet' and `yemenite-pronunciation'.
We also find the seemingly unrelated `wii-remote' pair, although we may consider that the Nintendo Wii console introduced a fundamentally new type of remote, which at an abstract level is similar to the fact that the Romans introduced a fundamentally new way of writing numbers. This example also suggests, however, that the way in which relations are modeled in the 10-dimensional space might be too abstract for some applications. Finally, the `heavy-metal' case is a paramount example of how the relation vectors may capture information which is fundamentally different from what is encoded by word vectors. In particular, the \texttt{diffvec} vectors all express relationships from the metalwork domain (e.g., `heavy-metals' or `annihilator-metal'), which reflects the fact that the music-related interpretation of the word `metal' is not its dominant sense. In contrast, since our relation vectors are exclusively learned from sentences where both words co-occur (`heavy' and `metal' in this example), the vector for `heavy metal' clearly captures the musical sense (see e.g., `thrash-metal' or `glam-metal' in the \texttt{original} space).
\subsection{Modeling Similarity}
The capacity to capture and \textit{embed} nuances of word meaning is one of the most celebrated features of word embeddings. The task of semantic similarity measurement, therefore, has been adopted as a \textit{de-facto} testbed for measuring the quality of representations of linguistic items. The standard practice is to consider a distance (or similarity) metric such as cosine similarity and compare the similarity in a given vector space model with respect to human judgement. We note, however, that there exist other similarity metrics discussed in the literature, e.g., Weighted Overlap \cite{pilehvar2013align} or Tanimoto Distance \cite{iacobacci2015sensembed}. Our proposed similarity measure departs from the idea of improving the representation of individual words, and instead seeks to refine their meaning by incorporating complementary cues via relation vectors, as well as the corresponding neighborhood structure. There are many possible ways in which this could be done, but we restrict ourselves here to a simple strategy, based on identifying the closest neighbors of the two words $w_1$ and $w_2$. The main intuition is that when $w_1$ and $w_2$ are similar, they should also be related to similar words.
Specifically, we first determine the closest match between the neighbors of $w_1$ and the neighbors of $w_2$, as follows:
\begin{align*}
(n_1,n_2) = \argmax_{(n_1,n_2) \in N_{w_1} \times N_{w_2}} \cos(\mathbf{v}_{n_1},\mathbf{v}_{n_2})+\cos(\mathbf{r}_{w_1n_1},\mathbf{r}_{w_2n_2})
\end{align*}
Note that to identify these neighbors, we compare both their word vectors $\mathbf{v}_{n_1}$ and $\mathbf{v}_{n_2}$, and their relationships to the target words, $\mathbf{r}_{w_1n_1}$ and $\mathbf{r}_{w_2n_2}$. Once these neighbors have been identified, we compute the similarity between $w_1$ and $w_2$ as follows:
\begin{align*}
\textit{sim}(w_1,w_2) = \cos(\mathbf{v}_{w_1}\oplus \mu \mathbf{v}_{n_1} \oplus \mathbf{r}_{w_1n_1},\mathbf{v}_{w_2}\oplus \mu\mathbf{v}_{n_2}\oplus \mathbf{r}_{w_2n_2})
\end{align*}
where $0<\mu\leq 1$ is a scaling factor which is aimed at reducing the impact of the neighbors $n_1$ and $n_2$ on the overall similarity computation. The fact that $\mathbf{v}_{n_1}$ is similar to $\mathbf{v}_{n_2}$ is an important indicator for the similarity between $w_1$ and $w_2$, but it should not influence the resulting similarity score as much as the similarity of the word vectors of $w_1$ and $w_2$ themselves. Rather than tuning this value, in the experiments we have fixed it as $\mu=0.5$, which was found to give better results than $\mu=1$ (i.e.\ no scaling). Note that the proposed way of computing similarities favours words of the same type. For example, we may expect `Spain' and `France' to be more similar than `Spain' and `Barcelona', when this metric is used, since `Spain' and `France' are associated with the neighbors `Madrid' and `Paris' which are similar, and which are related in a similar way to the target words. In our experiments, we also consider a variant in which the relation vectors are only used for selecting the neighbors. The similarity itself is then calculated as:
\begin{align*}
\textit{sim}(w_1,w_2) = \cos(\mathbf{v}_{w_1}\oplus \mu \mathbf{v}_{n_1},\mathbf{v}_{w_2}\oplus \mu\mathbf{v}_{n_2})
\end{align*}
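The two steps above can be sketched as follows; this is a toy implementation, assuming $\oplus$ denotes vector concatenation and that word vectors, relation vectors and neighbor lists are supplied as plain lookups (all names are illustrative):

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def seven_sim(w1, w2, V, R, N, mu=0.5, use_relation=True):
    """V: word -> vector; R: (word, neighbor) -> relation vector;
    N: word -> list of neighbor words; mu: neighbor scaling factor."""
    # Step 1: pick the neighbor pair (n1, n2) maximizing the combined
    # word-vector and relation-vector similarity.
    pairs = [(a, b) for a in N[w1] for b in N[w2]]
    n1, n2 = max(pairs, key=lambda p: cos(V[p[0]], V[p[1]])
                 + cos(R[(w1, p[0])], R[(w2, p[1])]))
    # Step 2: concatenate word vector, scaled neighbor vector and,
    # optionally, the relation vector, then compare by cosine.
    def enrich(w, n):
        parts = [V[w], mu * V[n]]
        if use_relation:
            parts.append(R[(w, n)])
        return np.concatenate(parts)
    return cos(enrich(w1, n1), enrich(w2, n2))
```

With `Spain'/`France'-style toy lookups, the measure rewards word pairs whose closest neighbors are themselves similar and similarly related to the targets.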
\noindent We evaluate the proposed similarity measure on four well-known benchmarking datasets for word representation learning. These are: (1) \texttt{rg65} \cite{rubenstein1965contextual}; (2) \texttt{wordsim} \cite{finkelstein2001placing}; (3) \texttt{mc} \cite{miller1991contextual}; and (4) the English portion of \texttt{semeval17} \cite{camacho2017semeval}. We restrict our experiment to single words, and do not consider multiword expressions (e.g., named entities), as this would require a different approach for compositional meaning representation. We compare against a baseline model based on cosine similarity between the vectors of the target words (\texttt{cosine}). As for our proposed models, following the similarity measure described above, we consider a 10-dimensional relation space, without (10rv$_{w}$) and with (10rv$_{r}$) the relation vector as part of the similarity computation. We also provide results stemming from using the original 1800-dimensional relation vector model. As is customary in the literature, we use Pearson's (\textbf{p}) and Spearman's (\textbf{s}) correlation coefficients as evaluation metrics, as well as their average (\textbf{avg.}). Table \ref{tab:wordsim} shows that the 10rv$_{w}$ variant consistently outperforms the word-level baseline. Somewhat surprisingly, the variant 10rv$_{r}$ (which uses the relation vector also in the similarity computation) performs consistently worse than the variant 10rv$_{w}$. When using the original 1800-dimensional vectors, however, the situation is reversed, with 1800rv$_{r}$ outperforming 1800rv$_{w}$, and achieving the best results overall (with the exception of \texttt{mc}). These results clearly show that the relation vectors capture valuable information for measuring word similarity, although the information captured by the 10-dimensional vectors may in some cases be too abstract for this purpose.
\begin{table}[!h]
\centering
\small
\renewcommand{\arraystretch}{1.2}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lrrr|rrr|rrr|rrr}
& \multicolumn{3}{c|}{\texttt{rg}} & \multicolumn{3}{c|}{\texttt{wordsim}} & \multicolumn{3}{c|}{\texttt{mc}} & \multicolumn{3}{c}{\texttt{semeval17}} \\ \cline{2-13}
& \multicolumn{1}{c}{\textbf{p}} & \multicolumn{1}{c}{\textbf{s}} & \multicolumn{1}{c|}{\textbf{avg.}} & \multicolumn{1}{c}{\textbf{p}} & \multicolumn{1}{c}{\textbf{s}} & \multicolumn{1}{c|}{\textbf{avg.}} & \multicolumn{1}{c}{\textbf{p}} & \multicolumn{1}{c}{\textbf{s}} & \multicolumn{1}{c|}{\textbf{avg.}} & \multicolumn{1}{c}{\textbf{p}} & \multicolumn{1}{c}{\textbf{s}} & \multicolumn{1}{c}{\textbf{avg.}} \\ \hline \hline
\multicolumn{1}{l|}{\texttt{cosine}} & 77.2 & 76.0 & 76.6 & 64.9 & 69.4 & 67.1 & 79.2 & 80.0 & 79.6 & 69.4 & 70.0 & 69.7 \\ \hline
\multicolumn{1}{l|}{10rv$_{w}$} & 78.1 & 77.0 & 77.5 & 66.0 & 69.6 & 67.8 & 79.7 & 80.7 & \textbf{80.2} & 70.2 & 70.8 & 70.5 \\
\multicolumn{1}{l|}{10rv$_{r}$} & 77.4 & 75.5 & 76.4 & 65.8 & 69.5 & 67.6 & 78.8 & 77.9 & 78.3 & 70.0 & 70.7 & 70.3 \\
\multicolumn{1}{l|}{1800rv$_{w}$} & 79.5 & 80.6 & \textbf{80.0} & 67.4 & 69.8 & 68.6 & 79.4 & 79.0 & 79.2 & 71.4 & 71.8 & 71.6 \\
\multicolumn{1}{l|}{1800rv$_{r}$} & 78.9 & 80.2 & 79.5 & 68.1 & 70.1 & \textbf{69.1} & 79.2 & 79.7 & 79.4 & 72.2 & 73.0 & \textbf{72.6}
\end{tabular}
}
\caption{Correlation results for different configurations of our proposed approach and a competitor baseline based on cosine similarity of word embeddings.}
\label{tab:wordsim}
\end{table}
\subsection{Text Classification}
Semantic Vector Networks may be thought of as a natural way of enriching word-level semantic representations, which may in turn be useful for informing a neural architecture with relational (e.g., commonsense or lexical) knowledge. We will focus on two well known tasks, namely text categorization and sentiment analysis. Our goal is to examine the extent to which the performance of a vanilla neural network increases when vector graph information is injected as a complement to the information encoded in each individual word embedding. The strength of our proposal lies in the fact that this information comes exclusively from corpora, and thus the need to rely on often incomplete, costly and language-dependent ontological or lexical resources is avoided.
As evaluation benchmarks we use
three text categorization datasets, namely \texttt{20news} \cite{lang1995newsweeder}, \texttt{bbc} \cite{greene2006practical} and \texttt{reuters} \cite{lewis2004rcv1}. We also consider two polarity detection datasets (positive or negative), namely the Polarity04 (\texttt{pol.04}) \cite{pang2004sentimental} and Polarity05 (\texttt{pol.05}) \cite{pang2005seeing} datasets, and finally a 10k document subset of the \textit{apps for android} (\texttt{apps4and.}) corpus\footnote{Obtained from \url{http://jmcauley.ucsd.edu/data/amazon/}.} \cite{he2016ups}, which features reviews and associated ratings on a scale from 1 to 5. The neural network model we use for our experiments is a combination of a CNN \cite{LeCunnetal1998} and a bidirectional LSTM \cite{HochreiterandSchmidhuber1997}. CNNs have been evaluated extensively in text classification \cite{johnson2014effective,tang2015document,xiao2016efficient,conneau2017very} and sentiment analysis \cite{kalchbrenner2014convolutional,Kim2014,dos2014deep,yin2017comparative}, and this specific model (CNN+BLSTM) has been explored in different NLP benchmarks \cite{Kim2014}. Finally, as evaluation metrics we use precision (\textbf{p}), recall (\textbf{r}) and f-score (\textbf{f}), as well as accuracy (\textbf{acc.}).
To use SeVeN for text classification, we keep the exact same neural network architecture, but use enriched vector representations for each word. As a proof of principle, in this paper this enriched vector representation is simply obtained by concatenating the word vector of that word with vector representations of its top-10 neighbors according to PMI (ordered by this PMI score), together with the corresponding relation vectors.
For example, with word embeddings of 300 dimensions and relation vectors of 10 dimensions, the input for each word is given by a 3,400-dimensional vector. We list experimental results for several configurations, where the number of neighbors stays fixed, but the relation vector (rv) changes in dimensionality (10, 20 or 50). Experimental results are provided in Table \ref{tab:classification}. We can see that for the 20-dimensional vectors, the results are consistently better, or at least as good as the \texttt{baseline}. The results for the 10-dimensional and 50-dimensional vectors are similar, although these configurations perform slightly worse than the baseline for \texttt{pol.05}.
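The dimensionality bookkeeping of this input construction can be sketched as follows (a toy snippet; the helper name is illustrative):

```python
import numpy as np

D_WORD, D_REL, K = 300, 10, 10  # word dim, relation dim, number of neighbors

def enriched_input(word_vec, neighbor_vecs, relation_vecs):
    """Concatenate a word vector with its K PMI-ranked neighbor vectors and
    the corresponding relation vectors, as in the proof of principle above."""
    assert len(neighbor_vecs) == K and len(relation_vecs) == K
    return np.concatenate([word_vec, *neighbor_vecs, *relation_vecs])
```

With 300-dimensional word vectors and 10-dimensional relation vectors this yields the 3,400-dimensional per-word input quoted above.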
Overall, these results show the usefulness of the relation vectors and neighborhood structure, despite the rather naive way in which this information is used. It seems plausible to assume that performance may be further improved by using network architectures which exploit the graph structure in a more direct way.
\begin{table}[!t]
\centering
\resizebox{\textwidth}{!}{
\renewcommand{\arraystretch}{1.15}
\begin{tabular}{lrrr|rrr|rrr|rrr|r|r}
& \multicolumn{3}{c|}{\texttt{bbc}} & \multicolumn{3}{c|}{\texttt{20news}} & \multicolumn{3}{c|}{\texttt{reuters-r56}} & \multicolumn{3}{c|}{\texttt{apps4and.}} & \multicolumn{1}{c|}{\texttt{pol.04}} & \multicolumn{1}{c}{\texttt{pol.05}} \\ \cline{2-15}
& \multicolumn{1}{c}{\textbf{p}} & \multicolumn{1}{c}{\textbf{r}} & \multicolumn{1}{c|}{\textbf{f}} & \multicolumn{1}{c}{\textbf{p}} & \multicolumn{1}{c}{\textbf{r}} & \multicolumn{1}{c|}{\textbf{f}} & \multicolumn{1}{c}{\textbf{p}} & \multicolumn{1}{c}{\textbf{r}} & \multicolumn{1}{c|}{\textbf{f}} & \multicolumn{1}{c}{\textbf{p}} & \multicolumn{1}{c}{\textbf{r}} & \multicolumn{1}{c|}{\textbf{f}} & \multicolumn{1}{c|}{\textbf{acc.}} & \multicolumn{1}{c}{\textbf{acc.}} \\ \hline \hline
\texttt{baseline} & 0.95 & 0.95 & \textbf{0.95} & 0.86 & 0.85 & 0.86 & 0.85 & 0.88 & 0.86 & 0.39 & 0.48 & 0.38 & 0.54 & \textbf{0.78} \\ \hline
10rv & 0.95 & 0.95 & \textbf{0.95} & 0.88 & 0.87 & 0.87 & 0.89 & 0.91 & \textbf{0.90} & 0.40 & 0.44 & \textbf{0.40} & 0.56 & 0.75 \\
20rv & 0.96 & 0.95 & \textbf{0.95} & 0.89 & 0.89 & \textbf{0.89} & 0.89 & 0.92 & \textbf{0.90} & 0.38 & 0.48 & \textbf{0.40} & 0.59 & \textbf{0.78} \\
50rv & 0.94 & 0.94 & 0.94 & 0.88 & 0.87 & 0.88 & 0.89 & 0.91 & \textbf{0.90} & 0.35 & 0.46 & 0.38 & \textbf{0.60} & 0.77
\end{tabular}
}
\caption{Experimental results on six benchmarking datasets for text classification.}
\label{tab:classification}
\end{table}
\section{Conclusions and Future Work}
In this paper we have presented SeVeN, a dedicated vector space model for relational knowledge. These relation vectors encode corpus-based evidence capturing the different contexts in which a pair of words may occur. An initially high-dimensional relation vector is further ``purified'' thanks to a simple \textit{ad-hoc} autoencoder architecture, designed to only retain relational knowledge. We have explored the characteristics of these vector networks qualitatively, by showing highly correlated word pairs, as opposed to, for example, difference vectors. While the latter are often assumed to capture relational properties, we found that the relational similarities they capture largely reflect the similarities of the individual words, with little relational generalization capability. In addition, we have evaluated our SeVeN vectors in terms of their usefulness in two standard NLP tasks: word similarity and text classification. In both cases we obtained better results than baselines that use standard word vectors alone.
There are several interesting avenues for future work. First, an obvious way to improve these unsupervised representations would be to leverage structured knowledge retrieved from knowledge graphs and/or Open Information Extraction systems. Such knowledge could easily be exploited by feeding any available structured knowledge as additional inputs to the autoencoder. Another way in which structured knowledge could be harnessed would simply be to label relation vectors, i.e.\ to identify regions in the relation vector space that correspond to particular relation types (e.g.\ hypernymy). Another possibility would be to improve SeVeN by aggregating relation vectors along paths in the graph. In this way, we may learn to predict missing edges (or to smooth relation vectors that were learned from too few or too uninformative sentences), similarly to the random walk based strategies that have been developed for completing traditional semantic networks and knowledge graphs \cite{DBLP:conf/emnlp/GardnerTKM14}.
\section*{Acknowledgments}
We would like to thank the anonymous reviewers for their helpful comments. This work was supported by ERC Starting Grant 637277.
\bibliographystyle{acl}
\section{Introduction}
Thermoelectric (TE) devices are highly desirable since they can directly convert between thermal and electrical energy. Electrical power can be supplied to such a device to either heat or cool adjoining reservoirs (Peltier effect) or alternatively, the flow of heat (e.g. from a factory or car exhaust) can be converted into usable electrical power (Seebeck effect). Often, the efficiency of a TE device is characterized by the dimensionless figure-of-merit $ZT$=$S^2GT/\kappa$, constructed with the rationale that an efficient TE device should simultaneously: maximize the electrical conductance $G$ so that current can flow without much Joule heating, minimize the thermal conductance $\kappa$ in order to maintain a temperature gradient across the device, and maximize the Seebeck coefficient $S$ to ensure that the coupling between the electronic and thermal currents is as large as possible.\cite{Bell08,DiSalvo99} Generally, however, $ZT$ is difficult to maximize because these properties are {\em highly correlated} with one another,\cite{Hochbaum08, Majumbdar04, Snyder08} a fact that becomes more pronounced at the nanoscale where the number of degrees of freedom available is small.
If a TE material were found exhibiting $ZT$$\geq$4 it would constitute a commercially viable solution for many heating and cooling problems at both the macro- and nano-scales, with no operational carbon footprint.\cite{DiSalvo99} Currently, the best TE materials available in the laboratory exhibit $ZT$$\approx$3, whereas for commercially available TE {\em devices} $ZT$$\approx$1, owing to various packaging and fabrication challenges.\cite{Bell08,Harman02}
In a previous article, enhanced thermoelectric effects were found in the vicinity of a transmission node of a quantum tunneling device. Generically, the transmission probability vanishes quadratically as a function of energy at such a transmission node.\cite{Bergfield09b} Here we present results for a class of two-terminal single-molecule junctions (SMJ) with higher-order `supernodes' in their transmission spectra. In the vicinity of a 2$n$$^{\rm th}$ order supernode:
\begin{equation}
{\gcal T}(E) \propto (E-\mu_{\rm node})^{2n},
\label{eq:T_supernode}
\end{equation}
where $\mu_{\rm node}$ is the energy of the node. We find that junctions possessing such supernodes exhibit a scalable order-dependent quantum-enhanced thermoelectric response.
As an example, $ZT$ of a supernode-possessing polyphenyl ether (PPE)-based SMJ is shown as a function of repeated phenyl unit number $n$ in Fig.~(\ref{fig:ZT_vs_n}). As illustrated in the figure, $ZT_{\rm peak}$ scales super-linearly in $n$, reaching $ZT_{\rm peak}$=4.1 in a junction composed of just four phenyl groups ($n$=4). Although we focus on molecular junctions in this article, it should be stressed that our results are applicable to any device with transmission nodes arising from coherent electronic transport.
\begin{figure}[b]
\centering
\includegraphics[width=\mycolumnwidth]{ZT_vs_n_fig1.eps}
\caption{Near a 2$n$$^{\rm th}$ order {\em supernode} in a device's transmission spectrum, we find an order-dependent enhancement of the thermoelectric response which is limited only by the electronic coherence length. Calculations were performed for a polyphenyl ether (PPE) SMJ with $n$ repeated phenyl groups at room temperature ($T$=300K) with $\Gamma$=0.5eV. Notice that the enhancement is super-linear in $n$. Inset: $ZT$ as a function of $\mu$ for $n$=$1\ldots5$.}
\label{fig:ZT_vs_n}
\end{figure}
As an engineering rule-of-thumb, $ZT$ has been widely used to characterize the bulk thermoelectric response of materials.\cite{Bell08,DiSalvo99,Snyder08} At the nanoscale, however, it is unclear to what extent $ZT$ remains applicable, since bulk scaling relations for transport may break down due to quantum effects.\cite{Datta95} Moreover, $ZT$ is a linear response metric, and cannot {\em a priori} predict nonequilibrium thermoelectric response.
We investigate the efficacy of $ZT$ as a predictor of nonequilibrium device {\em performance} at the nanoscale by calculating the thermodynamic efficiency and power of an interacting quantum system using both nonequilibrium many-body\cite{Bergfield09} and H\"uckel theories. We discover that in both theories, variations of $ZT$ and thermodynamic efficiency are in good qualitative agreement. However, large discrepancies between thermoelectric effects calculated within many-body and H\"uckel theory are found in the resonant tunneling regime, indicating the essential role of electron-electron interactions in nanoscale thermoelectricity. For a thermoelectric quantum tunneling device, we find that the power output can be changed significantly by varying an external parameter, such
as a gate voltage, and that this variation is {\it not correlated} with the variation of $ZT$.
Neglecting inelastic processes, which are strongly suppressed at room temperature in SMJs, the current flowing into lead $1$ of a two-terminal junction may be written as follows:\cite{Bergfield09b}
\begin{equation}
\label{eq:Iq_ButtikerForm}
I^{(\nu)}_1=\frac{1}{h} \int_{-\infty}^\infty dE\; (E-\mu_1)^\nu \,{\gcal T}(E)\left[f_2(E)-f_1(E)\right],
\end{equation}
where $\nu=0$ ($\nu$=1) for the number (heat) current, $f_\alpha(E)$ is the Fermi function for lead $\alpha$ with chemical potential $\mu_\alpha$ and inverse temperature $\beta_\alpha$, and ${\gcal T}(E)$ is the transmission probability for an electron of energy $E$ to tunnel across the junction. This transmission function may be expressed in terms of the junction's Green's functions as:\cite{Datta95}
\begin{equation}
{{\gcal T}}(E)={\rm Tr}\left\{ \Gamma^1(E) G(E) \Gamma^2(E) G^\dagger(E)\right\},
\label{eq:transmission_prob}
\end{equation}
where $\Gamma^\alpha(E)$ is the tunneling-width matrix for lead $\alpha$
and $G(E)$ is the retarded Green's function of the SMJ.
In organic molecules, such as those considered here, electron-phonon coupling is weak, allowing $ZT$ to be expressed as follows:
\begin{equation}
ZT = \left. ZT \right|_{el}\left(\frac{1}{1+\kappa^{ph}/\kappa^{el}}\right),
\label{eq:ZT_full}
\end{equation}
where\cite{Finch09}
\begin{equation}
\left. ZT \right|_{el} = \left(\frac{\myL{0}\myL{2}}{\left[\myL{1}\right]^2}-1\right)^{-1}
\label{eq:ZT_in_L}
\end{equation}
and
\begin{equation}
\myL{\nu}\left(\mu,T\right) = \int dE (E-\mu)^{\nu}\,{\gcal T}(E) \left(-\frac{\partial f_0}{\partial E}\right).
\label{eq:Lnu}
\end{equation}
Here $f_0$ is the equilibrium Fermi function and $\kappa^{ph}$=$\kappa_0 {\gcal T}^{ph}$ is the phonon's thermal conductance, where $\kappa_0$=$(\pi^2/3)(k_{\rm B}^2 T/h)$ is the thermal conductance quantum\cite{Rego99} and ${\gcal T}^{ph}$ is the phonon transmission probability. Since the Debye frequency in the metal lead is typically smaller than the lowest vibrational mode of a small organic molecule, the spectral overlap of phonon modes between the two is small, implying ${\gcal T}^{ph}$$\ll$1 and consequently that $ZT$$\approx$$\left.ZT\right|_{el}$.
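For concreteness, Eqs.~(\ref{eq:ZT_full})--(\ref{eq:Lnu}) can be evaluated numerically for any given transmission function. The following sketch (energies in eV, trapezoidal quadrature; the integration window and grid are illustrative choices) computes $\left. ZT\right|_{el}$ and applies the phonon suppression factor:

```python
import numpy as np

kB = 8.617333262e-5  # Boltzmann constant in eV/K

def L_nu(nu, trans, mu, T, half_width=2.0, npts=8001):
    """Trapezoidal estimate of the integral defining L^(nu)(mu, T)."""
    E = np.linspace(mu - half_width, mu + half_width, npts)
    x = (E - mu) / (kB * T)
    minus_df = 1.0 / (4.0 * kB * T * np.cosh(x / 2.0)**2)  # -df0/dE
    return np.trapz((E - mu)**nu * trans(E) * minus_df, E)

def ZT(trans, mu, T, kappa_ph_over_el=0.0):
    """Electronic ZT times the phonon suppression factor."""
    L0, L1, L2 = (L_nu(v, trans, mu, T) for v in range(3))
    ZT_el = 1.0 / (L0 * L2 / L1**2 - 1.0)
    return ZT_el / (1.0 + kappa_ph_over_el)
```

For a purely quadratic transmission node, this machinery gives a peak $ZT$ near 0.75, in line with the meta-benzene result below.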
\begin{figure}[htb]
\centering
\includegraphics[width=2.5in]{heat_pump7.eps}
\captionsetup{singlelinecheck=off,justification=RaggedRight}
\caption[Heat pump]{Schematic diagram of a thermoelectric device, where $I^{(1)}_\alpha$ is the heat current flowing into lead $\alpha$, $T_\alpha$ is the temperature and ${\gcal P}$ is the power output.
}
\label{fig:thermo_diagram}
\vspace{-.5cm}
\end{figure}
Thermodynamically, a system's response is characterized by the efficiency $\eta$ with which heat can be converted into usable power ${\gcal P}$ and the amount of power that can be generated. Applying the first law of thermodynamics to the device shown in Fig.~(\ref{fig:thermo_diagram}) gives
\begin{equation}
{\gcal P}= - {I}^{(1)}_1-{I}^{(1)}_2 = I_1^{(0)} (\mu_1 - \mu_2),
\label{eq:thermo_power}
\end{equation}
where the second equality shows that the power can be expressed equivalently in terms of heat or electrical currents. The efficiency $\eta$ is defined as the ratio of power output to input heat current:
\begin{equation}
\eta=\frac{{\gcal P}}{\left|I^{(1)}_1\right|}=
- \frac{I^{(1)}_1 + I^{(1)}_2}{\left|I^{(1)}_1\right|},
\label{eq:thermo_eff}
\end{equation}
where we have assumed that $T_1>T_2$.
With these expressions for the power and efficiency, we can completely quantify the performance of a quantum device, both near and far from equilibrium.
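A minimal numerical sketch of Eq.~(\ref{eq:Iq_ButtikerForm}) together with Eqs.~(\ref{eq:thermo_power}) and (\ref{eq:thermo_eff}) follows (in units where the $1/h$ prefactor is dropped and energies are in eV; the transmission function and grid parameters are illustrative, and $T_1>T_2$ is assumed as above):

```python
import numpy as np

kB = 8.617333262e-5  # Boltzmann constant in eV/K

def current(nu, trans, mu1, mu2, T1, T2, half_width=2.0, npts=8001):
    """Number (nu=0) or heat (nu=1) current into lead 1, in units of 1/h."""
    E = np.linspace(min(mu1, mu2) - half_width,
                    max(mu1, mu2) + half_width, npts)
    f = lambda mu, T: 1.0 / (np.exp(np.clip((E - mu) / (kB * T), -500, 500)) + 1.0)
    return np.trapz((E - mu1)**nu * trans(E) * (f(mu2, T2) - f(mu1, T1)), E)

def power_and_efficiency(trans, mu1, mu2, T1, T2):
    """Output power P = I1^(0)*(mu1 - mu2) and efficiency eta = P/|I1^(1)|."""
    P = current(0, trans, mu1, mu2, T1, T2) * (mu1 - mu2)
    eta = P / abs(current(1, trans, mu1, mu2, T1, T2))
    return P, eta
```

Sanity checks: at zero bias and zero temperature difference all currents vanish, heat flows out of the hot lead, and no net power can be extracted without a temperature gradient.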
\begin{figure*}[tb]
\centering
\subfloat[many-body theory]{\label{fig:many-body_benzene}
\put(135,162){\includegraphics[width=.65in]{benzene_meta_cartoon.eps}}
\includegraphics[width=.51\linewidth]{splot_fig2a.eps}}
\subfloat[H\"uckel theory]{\label{fig:Huckel_benzene}
\includegraphics[width=.51\linewidth]{splot_fig2b.eps}}
\caption[Full-width many-body vs. H\"uckel]{The transmission probability ${\gcal T}(E)$, figure-of-merit $ZT$, Carnot-normalized efficiency $\eta/\eta_{\rm C}$, and electrical power output ${\gcal P}$ of a two terminal 1,3-benzene SMJ, with lead temperatures $T_1$=300K and $T_2$=250K, calculated using (a) many-body and (b) H\"uckel theory, highlighting the discrepancies near resonances and the similarities near the node in the two theories. As a function of $\mu$, $\eta$ and $ZT$ are in excellent qualitative agreement while ${\gcal P}$ is only peaked near resonance, suggesting that $ZT$ is incomplete as a device performance metric. (a) Many-body calculations give ${\gcal P}_{\rm peak}$=33$\mu$W and $\eta_{\rm peak}/\eta_{\rm C}$=11.5\% near resonance. (b) H\"uckel calculations give ${\gcal P}_{\rm peak}$=21$\mu$W and $\eta_{\rm peak}/\eta_{\rm C}$=2.7\% near resonance. The mid-gap region is discussed in Fig.~(\ref{fig:many-body_huckel_closeup}). Note that the peak $ZT$=0.75 is on par with currently available commercial thermoelectrics.\cite{Snyder08,Bell08} Calculations were performed using the model and parameterization of benzene discussed in detail in Ref.7 with $\Gamma$=0.63eV.
\label{fig:benzene_figure3}
\vspace{-.5cm}
}
\end{figure*}
As a first example, we calculate the non-linear thermodynamic response of a meta-connected Au-benzene-Au SMJ using many-body\cite{Bergfield09} and H\"uckel theory, shown in Fig.~(\ref{fig:many-body_benzene}) and Fig.~(\ref{fig:Huckel_benzene}), respectively. Although the transmission spectrum of this junction does not possess a supernode, it does possess a quadratic node within $\pi$-electron theory,\cite{Bergfield09b, Cardamone06} and will allow us to ascertain the importance of interactions on the thermoelectric response of a SMJ.
In the top panel of each figure is a section of the transmission spectrum, showing the {\sc homo} and {\sc lumo} resonances and the quadratic node directly in between at $\mu$=$\mu_0$. Associated with this node is an enhancement in many linear-response metrics\cite{Bergfield09b} including $ZT$, which is shown in the second panel from the top. The bottom two portions of each figure show the calculated efficiency $\eta$ and power ${\gcal P}$ when a junction with $T_1$=300K and $T_2$=250K is further pushed out of equilibrium via the application of a bias voltage $\Delta V$. In all simulations presented here, the lead-molecule coupling is taken to be symmetric such that $\Gamma^\alpha_{nm}$=$\Gamma \delta_{na}\delta_{ma}$, where $n$, $m$, and $a$ are $\pi$-orbital labels and $a$ is coupled to lead $\alpha$. The efficiency is normalized with respect to the maximum allowed by the second law of thermodynamics, the Carnot efficiency $\eta_{\rm C}=\Delta T /T_1$, where $\Delta T$=$T_1$-$T_2$.
The nonequilibrium thermodynamic response of a 1,3-benzene SMJ calculated using many-body theory is shown in Fig.~(\ref{fig:many-body_benzene}). The $ZT$ and $\eta$ spectra, shown in the two middle panels of the same figure, exhibit peaks in the vicinity of both transmission nodes and resonances, whereas the power ${\gcal P}$, shown in the bottom panel, is only peaked near transmission resonances. Around either the {\sc homo} or {\sc lumo} resonance, the peak power ${\gcal P}_{\rm peak}$=33$\mu$W and peak efficiency $\eta_{\rm peak}/\eta_{\rm C}$=11.5\% are only realized when the junction operates out of equilibrium at a bias voltage $\Delta V$=3mV. With a chemical potential near the mid-gap node and $\Delta V$=3.6mV, $\eta_{\rm peak}/\eta_{\rm C}$=14.9\%, larger than near resonance but with a much lower peak power ${\gcal P}_{\rm peak}$=0.088nW.
In the vicinity of a resonance, there are both quantitative and qualitative differences in the linear and non-linear thermodynamic response predicted by the two theories.
By neglecting interactions, the H\"uckel theory fails to accurately predict both the degeneracy and position of electronic resonances. It also incorrectly determines the peak values of $ZT$, $\eta$ and ${\gcal P}$ in the vicinity of a resonance. As can be seen near either ({\sc homo} or {\sc lumo}) resonance in Fig.~(\ref{fig:benzene_figure3}), the H\"uckel theory predicts a Carnot-normalized peak efficiency of 2.7\% which is nearly five times less than the 11.5\% predicted by the many-body theory. The peak power near a resonance also varies considerably between the two theories, where the H\"uckel calculations give ${\gcal P}_{\rm peak}$=21$\mu$W while many-body theory predicts ${\gcal P}_{\rm peak}$=33$\mu$W. These results indicate that interactions are required to accurately predict the thermoelectric response of devices operating in the resonant-tunneling regime. It is interesting to note, however, that in both models the linear-response metric $ZT$ qualitatively captures the features of the non-linear metric $\eta$.
\begin{figure}[b]
\centering
\subfloat[$ZT$]{\label{fig:closeup_a} \includegraphics[width=.333\mycolumnwidth]{fig3a_zt.eps}}
\subfloat[$\eta/\eta_{\rm C}$]{\label{fig:closeup_b}\includegraphics[width=.333\mycolumnwidth]{fig3b_eff.eps}}
\subfloat[50$\times {\gcal P}$ ($\mu$W)]{\label{fig:closeup_c}\includegraphics[width=.333\mycolumnwidth]{fig3c_power.eps}}
\caption[Figure 4, Hueckel vs. many-body]{Calculations of $ZT$, $\eta$ and ${\gcal P}$ in the vicinity of the transmission node at $\mu$=$\mu_0$ of a meta-benzene SMJ using many-body (red line and panel i) and H\"uckel (black line and panel ii) theories. (a) and (b): $ZT$ and $\eta$ are found to be identical and independent of theory. (c) ${\gcal P}$ is strongly affected by interactions where, at peak efficiency ($\eta_{\rm peak}/\eta_{\rm C}$=14.91\%), many-body and H\"uckel calculations give ${\gcal P}_{\rm max}$=0.088nW and ${\gcal P}_{\rm max}$=1.87nW, respectively. The simulation parameters and colorscale are the same as in Fig.~(\ref{fig:benzene_figure3}).
}
\label{fig:many-body_huckel_closeup}
\end{figure}
In this article, we are interested in thermoelectric enhancement near nodes
far away from any resonances. Although interactions are {\em required} in order to ensure the invariance of transport quantities under a global voltage shift (i.e.~gauge-invariance), near the particle-hole symmetric point the effect of interactions on the thermoelectric response should be small. In panels a-b of Fig.~(\ref{fig:many-body_huckel_closeup}), a comparison of $ZT$ and $\eta$ using both many-body and H\"uckel theories is shown near $\mu_0$ for a 1,3-benzene SMJ. Near this point, $ZT$ and $\eta$ are independent of the theory employed.
In contrast, the power, shown in panel c of the same figure, exhibits an order of magnitude difference between the two theories. This observation can be understood by noticing that the calculated {\sc homo-lumo} gap is $\approx$10eV using many-body theory (panel c-i) whereas it is only $\approx$5.5eV when interactions are neglected in the H\"uckel theory (panel c-ii). Since the power is peaked near transmission resonances, whose widths are fixed by the lead-molecule coupling $\Gamma$, the larger gap found using many-body theory gives a correspondingly lower predicted power.
While the H\"uckel theory is not able to accurately characterize the thermoelectric response of a junction in the resonant-tunneling regime, it is sufficient for predicting $\eta$ and $ZT$ in the vicinity of the transmission node. Since we are interested in these quantities for mid-gap supernodes, we shall use H\"uckel theory to simulate the larger molecules presented below.
The transmission node in a meta-benzene junction can be understood in terms of destructive interference of electron waves
traversing the ring at the Fermi energy.\cite{Cardamone06}
According to Luttinger's theorem,\cite{Langreth66, Luttinger60} the Fermi volume is unaffected by the inclusion of electron-electron interactions.
Consequently, in an aromatic ring such as benzene
the Fermi wavevector $k_{\rm F}$=$\pi/2d$ is conserved and is therefore sufficient to characterize quantum interference both with and without interactions near $\mu_0$, since $\Delta\phi$=$ k_{\rm F}\Delta l$, where $\Delta\phi$ is the relative phase between transport paths with length difference $\Delta l$, and $d$ is the inter-site distance.
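As a toy check of this phase argument (assuming the two transport paths in a meta-connected ring are 2 and 4 bonds long, versus 3 and 3 bonds for a para connection; these path lengths are an illustrative assumption, not stated above):

```python
import math

d = 1.0                  # inter-site distance (arbitrary units)
k_F = math.pi / (2 * d)  # Fermi wavevector of the half-filled pi system

def phase_difference(l1, l2):
    """Relative phase between two transport paths: dphi = k_F * |l1 - l2|."""
    return k_F * abs(l1 - l2)

dphi_meta = phase_difference(2 * d, 4 * d)  # meta: 2-bond vs 4-bond path
dphi_para = phase_difference(3 * d, 3 * d)  # para: equal-length paths
```

The meta connection yields a relative phase of $\pi$ (complete destructive interference, hence the mid-gap node), while the para connection yields zero.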
\begin{figure}[b]
\centering
\begin{picture}(0,0)
\put(137,99){\includegraphics[width=.95in]{biphenyl_cartoon2.eps}}
\end{picture}
\includegraphics[width=\mycolumnwidth]{fig_biphenyl.eps}
\caption[Biphenyl $ZT$ and $\eta$]{A closeup of $ZT$ and $\eta$ near the quartic supernode of a 3,3'-biphenyl SMJ showing $ZT_{\rm peak}$=1.84 and $\eta_{\rm peak}/\eta_{\rm C}$=26.86\% at a predicted power of 0.75pW. The junction geometry is shown schematically in the inset of the upper panel. Simulations were performed using H\"uckel theory with $T_1$=300K, $T_2$=250K and $\Gamma$=0.5eV.}
\label{fig:Biphenyl}
\end{figure}
This is an important result, since the energy of resonant levels will generally depend strongly on whether or not interactions are included.
Since $k_{\rm F}$ is protected, however, the transmission node across a single phenyl group
is not so much a coincidence of energy levels as a {\em wave phenomenon}, meaning that interference in molecules composed of
multiple aromatic rings in series
can be understood in terms of the interference within each subunit
rather than the energy spectrum of the entire molecule.
We find that such polycyclic molecules
can exhibit higher-order {\em supernodes},
and that associated with a supernode is an order-dependent quantum enhancement of the junction's thermoelectric response. Additional transport channels (e.g. $\sigma$-orbitals) or incoherent processes may lift the supernode. The effect on the thermoelectric response is small provided the processes are weak, as discussed in Ref.~(\onlinecite{Bergfield09b}).
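As a rough numerical check of this order-dependent enhancement, the transmission near the node can be modeled by the pure power law of Eq.~(\ref{eq:T_supernode}) and the chemical potential scanned for the peak electronic $ZT$. In this toy sketch (which ignores the resonances and any phonon contribution; all powers of $k_{\rm B}T$ cancel, so the peak value is temperature independent), the peak $ZT$ comes out near 0.75 for a quadratic node, as for the benzene junction above, and near 1.84 for a quartic supernode, consistent with the biphenyl result below:

```python
import numpy as np

def peak_ZT(order, tmax=8.0):
    """Peak electronic ZT for T(E) ~ (E - mu_node)^(2*order), scanning the
    reduced chemical potential t = (mu - mu_node)/(kB*T)."""
    x = np.linspace(-60.0, 60.0, 6001)     # (E - mu) in units of kB*T
    w = 1.0 / (4.0 * np.cosh(x / 2.0)**2)  # kB*T*(-df0/dE), dimensionless
    best = 0.0
    for t in np.linspace(0.05, tmax, 400):
        trans = (x + t)**(2 * order)       # power-law transmission node
        L0 = np.trapz(trans * w, x)
        L1 = np.trapz(x * trans * w, x)
        L2 = np.trapz(x**2 * trans * w, x)
        best = max(best, L1**2 / (L0 * L2 - L1**2))
    return best
```

Higher orders can be explored in the same way.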
\begin{figure}[tb]
\centering
\begin{center}
\includegraphics[width=3.1in]{PPE_cartoon5.eps}
\hspace{-1cm}
\vspace{-.4cm}
\end{center}
\includegraphics[width=\mycolumnwidth]{fig_super_triplot.eps}
\caption[Supernode triplot]{Supernode enhancement of $ZT$, thermopower $S$ and Lorenz number $L$ for polyphenyl ether (PPE) SMJs with $n$ repeated phenyl groups, shown schematically above the top panel. As a function of $n$, $ZT_{\rm peak}$ scales super-linearly exhibiting a peak value of 6.86 for $n$=6. The thermopower and Lorenz number are also enhanced with $S_{\rm peak}$=957$\mu$V/K and $L_{\rm peak}$=55.33$L_{\rm WF}$ at the same value of $n$. Simulations were performed using H\"uckel theory at room temperature ($T$=300K) with $\Gamma$=0.5eV. Inter-phenyl electronic hopping was set an order of magnitude below the intra-phenyl value of 2.64eV.}
\label{fig:supernode_triplot}
\end{figure}
The 3,3'-biphenyl junction, drawn schematically in the top panel of Fig.~(\ref{fig:Biphenyl}), can be viewed as two meta-connected benzene rings in series. This junction
geometry is similar to that studied by Mayor et al.\cite{Mayor03} In agreement with the prediction that a biphenyl junction should possess a quartic supernode, the linear and non-linear response shown in Fig.~(\ref{fig:Biphenyl}) exhibits peak values of efficiency ($\eta/\eta_{\rm C}$=26.86\%) and $ZT$ (1.84) that are over twice those of benzene. With $ZT$$\approx$2, the biphenyl junction exhibits sufficient thermoelectric performance to be attractive for many commercial solid-state heating and cooling applications.\cite{Bell08, DiSalvo99, Snyder08} As we shall see, this is only the first in an entire class of supernode-possessing molecules which exhibit even larger values of $\eta$ and $ZT$.
In larger molecules composed of $n$ meta-connected phenyl groups in series, we expect that the transmission nodes should combine and give rise to a 2$n$$^{\rm th}$ order supernode. Polyphenyl ether (PPE), shown schematically at the top of Fig.~(\ref{fig:supernode_triplot}), consists of $n$ phenyl rings connected in series with ether linkages. Based on our previous discussion, we predict that a PPE-based junction should exhibit a 2$n$$^{\rm th}$ order supernode. The figure-of-merit $ZT$, thermopower $S$ and Lorenz number $L$=$\kappa/GT$ for PPE junctions are shown in the top, middle and bottom panels of Fig.~(\ref{fig:supernode_triplot}), respectively, where the Lorenz number is normalized with respect to the Wiedemann--Franz (WF) value $L_{\rm WF}$=$\left(\pi^2/3\right)(k_{\rm B}/e)^2$.
The bottom panel of Fig.~(\ref{fig:supernode_triplot}) shows an increasing peak Lorenz number $L_{\rm peak}$ with increasing $n$. In linear-response, $L$ and $S$ can be expressed in terms of Eq.~(\ref{eq:Lnu}) as:
\begin{equation}
\left. L\right|_{el} = \frac{1}{(eT)^2} \left(\frac{\myL{2}}{\myL{0}} - \left[\frac{\myL{1}}{\myL{0}} \right]^2 \right),
\label{eq:Lorenz}
\end{equation}
and $S$=$-\frac{1}{eT} \frac{\myL{1}}{\myL{0}}$, where $e$ is the magnitude of the electron's charge and $T$ is the temperature. Using Eq.~(\ref{eq:Lorenz}) and Eq.~(\ref{eq:Lnu}) with the transmission function of Eq.~(\ref{eq:T_supernode}) we find that:
\begin{equation}
\left. \frac{L_{\rm max}}{L_{\rm WF}} \right|_{el} = \left( \frac{3}{\pi^2} \right)\frac{\left. \left[ \partial_b^{2n+2} b\pi \csc(b\pi)\right] \right|_{b=0}}{\left. \left[ \partial_b^{2n} b\pi \csc(b\pi) \right] \right|_{b=0}}.
\label{eq:Lmax_analytic}
\end{equation}
Setting $n$=6 in Eq.~(\ref{eq:Lmax_analytic}) gives $L_{\rm max}$=55.33$L_{\rm WF}$, corresponding exactly to the result of the full simulation shown in the bottom panel of Fig.~(\ref{fig:supernode_triplot}). Similar agreement is found for the other values of $n$, confirming the presence of 2$n$$^{\rm th}$ order supernodes in these junctions.
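As an independent check of Eq.~(\ref{eq:Lmax_analytic}), the derivative ratio can be evaluated symbolically. The sketch below is our illustration only, not part of the original analysis; it assumes a Python environment with the sympy library, and the helper name is hypothetical.

```python
# Symbolic evaluation of Eq. (eq:Lmax_analytic): L_max/L_WF for a 2n-th
# order supernode, from derivatives of f(b) = b*pi*csc(b*pi) at b = 0.
import sympy as sp

b = sp.symbols('b')

def lorenz_ratio(n):
    """L_max / L_WF for a 2n-th order supernode (hypothetical helper)."""
    # Expand f to high enough order that the (2n+2)-th derivative at 0 is exact.
    f = sp.series(b * sp.pi / sp.sin(b * sp.pi), b, 0, 2 * n + 4).removeO()
    d_hi = sp.diff(f, b, 2 * n + 2).subs(b, 0)
    d_lo = sp.diff(f, b, 2 * n).subs(b, 0)
    return sp.simplify(sp.Rational(3) / sp.pi**2 * d_hi / d_lo)

print(float(lorenz_ratio(6)))  # ~55.33, matching the quoted n = 6 value
```

For $n$=1 the same expression gives $21/5 = 4.2$, consistent with the modest Lorenz-number enhancement of a single meta-connected ring.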
We find that higher-order supernodes in the transmission spectrum of a nanoscale junction give rise to an order-dependent quantum-enhancement of the linear and non-linear thermoelectric response. The full nonequilibrium spectrum of thermodynamic efficiency qualitatively resembles the figure-of-merit $ZT$ spectrum, suggesting that $ZT$ encapsulates the salient physics related to efficiency even at the nanoscale. Efficiency, however, is only part of a device's performance. Another important quantity is the usable power produced by a device, whose variations are poorly characterized by $ZT$ at the nanoscale.
Thermoelectric devices based on individual SMJs are ideally suited for local cooling in integrated nanoscale circuit architectures. Supernode-based
devices have a low transmission probability and thus a large electrical impedance capable of withstanding voltage surges.
Moreover, high-power macroscopic devices could be constructed by growing layers of densely packed
molecules. For example, a self-assembled
monolayer with a surface density\cite{Zangmeister04}
of 4$\times$10$^{15}$molecules/cm$^2$ would give 352kW/cm$^2$ at peak efficiency for a meta-benzene film.
The efficiency of PPE-based devices increases with ring number and is only limited by the electronic coherence length, suggesting that highly efficient molecular-based thermoelectric devices may soon be realized.
\section{Introduction}
\label{sec:Intro}
The effect of random disorder on otherwise well-understood statistical mechanical problems is an important topic, going back to the Ising model \cite{Watson1969}.
The study of the lattice polymer model on randomly diluted lattices goes back almost as far and has been closely related to the problem of percolation.
Fundamental scaling laws for the self-avoiding walk model of polymers persist on inhomogeneous lattices, provided the disorder is above the percolation limit $p_c$ \cite{Kremer1981, Duplantier1988}.
Change in the scaling behaviour only occurs at the percolation limit $p_c$ \cite{Blavatska2008}.
These results have been confirmed with numerical work \cite{Lee1988,Rintoul1994} and exact enumeration \cite{Lam1990,Ordemann2000,Nakanishi1991}.
The addition of disorder also introduces new considerations such as how the type of averaging over disorder affects SAWs \cite{Nakanishi1992,Birkner2010} and when scaling laws are well-defined \cite{Janssen2007}.
In particular, we are interested in polymer collapse in a disordered medium.
Without disorder polymer collapse is a critical transition between a high-temperature extended phase and a low-temperature random globule phase known as the $\theta$ point.
It is also possible to have a third low-temperature phase that is collapsed but more ordered than the globule phase and maximally dense \cite{Bastolla1997}.
The canonical model for polymer collapse, the interacting self-avoiding walk (ISAW), can be extended to include stiffness, and this model exhibits a third phase characterised by anisotropic crystalline configurations and critical transitions to the extended and globule phases \cite{Krawczyk2009,Krawczyk2010}.
We previously \cite{Bradly2021} looked at the semi-stiff ISAW model on an inhomogeneous square lattice and found that the introduction of lattice defects causes a slight swelling of configurations in the globule phase and disrupts the formation of globally crystalline configurations in the crystal phase.
At larger amounts of inhomogeneity the critical transition between the globule and crystal phases disappears.
In this work we look at another model for studying polymer collapse, using self-avoiding trails (SATs).
Whereas a SAW does not allow a lattice site to be visited more than once, a SAT relaxes this condition slightly, allowing sites, but not bonds between sites, to be visited more than once.
Trails still exhibit the excluded-volume effect that makes these objects suitable for representing polymers, but can have slightly different properties to SAWs.
In particular, it is a long-standing question whether collapse transitions in trail models are in the same universality class as those of walk models \cite{Owczarek1995, Prellberg1995, Owczarek2007}.
Polymer collapse with trail models works by assigning an interaction energy to sites with multiple visits.
By considering the trails on the triangular lattice we can assign different energies to doubly- or triply-visited sites, which induces another collapsed phase in two dimensions.
The homogeneous lattice case of this model has been studied previously \cite{Doukas2010}, showing that the collapse transition to the globule phase is $\theta$-like and the other collapsed phase is characterised by maximally dense configurations whose interior is dominated by triply-visited sites. The important difference to the third phase of the semi-stiff ISAW model is that this maximally dense phase is not ordered in a real crystalline sense and so may behave differently under the introduction of disorder. In another slightly different model \cite{Bedini2017} three collapsed phases were observed separately. To investigate the effect of disorder on this third type of collapsed phase we extend the model of Doukas {\it et al.\ }\cite{Doukas2010} to include lattice inhomogeneity.
\section{Model and simulation}
\label{sec:Model}
We consider single polymers in dilute solution modelled as self-avoiding trails (SATs) on the triangular lattice.
The extended interacting SAT (eISAT) model allows for both doubly- and triply-visited sites, with different interaction energies based on the number of visits.
The canonical partition function for such SATs of length $n$ is
\begin{equation}
Z_n(\omega_2, \omega_3) = \sum_{m_2,m_3} d_{n}(m_2,m_3) \, \omega_2^{m_2} \omega_3^{m_3},
\label{eq:CombinedPartition}
\end{equation}
where $m_i$ is the number of sites with $i$ visits, $\omega_i$ is the Boltzmann weight for sites with $i$ visits and $d_{n}(m_2,m_3)$ is the density of states, or the number of configurations of length $n$, with $m_2$ doubly-visited sites and $m_3$ triply visited sites.
Here we consider both weights independently but certain special cases can be constructed by relating $\omega_3$ to $\omega_2$ \cite{Doukas2010,Owczarek1995}.
We represent the lattice defects as in a site percolation model, where each lattice site has probability $p$ of being available.
This means a fraction $1-p$ of lattice sites is unavailable to the SAT and the partition function $Z_n(\omega_2, \omega_3; p)$ is now dependent on $p$.
We are interested in how the introduction of disorder affects the collapsed phases so we look at values of $1-p$ that are smaller than the percolation limit, which for site-percolation on the triangular lattice is $p_c = 1/2$ \cite{Stauffer1992}.
An example trail is shown in \fref{fig:InhomoTriISAT}.
Details of how the lattice configuration is chosen are given below when discussing the flatPERM algorithm.
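To make the counting of $m_2$ and $m_3$ concrete, a configuration can be classified by a short routine. The following sketch is our illustration only; the axial-coordinate step set and the helper name are assumed conventions, not taken from the simulation code.

```python
# Sketch: classify a trail on the triangular lattice. A valid eISAT traverses
# no bond twice, avoids impurity sites, and visits each site at most three
# times; m2 and m3 count the doubly- and triply-visited sites.
from collections import Counter

# Triangular-lattice steps in axial coordinates (an assumed convention).
STEPS = {(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)}

def classify_trail(steps, impurities=frozenset()):
    """Return (m2, m3), or None if the step sequence is not a valid trail."""
    pos = (0, 0)
    visits = Counter({pos: 1})
    bonds = set()
    for dx, dy in steps:
        if (dx, dy) not in STEPS:
            return None              # not a nearest-neighbour step
        new = (pos[0] + dx, pos[1] + dy)
        bond = frozenset((pos, new))  # undirected bond
        if bond in bonds or new in impurities:
            return None              # bond reused, or site is a defect
        bonds.add(bond)
        visits[new] += 1
        if visits[new] > 3:          # coordination 6 allows at most 3 visits
            return None
        pos = new
    m2 = sum(1 for v in visits.values() if v == 2)
    m3 = sum(1 for v in visits.values() if v == 3)
    return m2, m3
```

For example, a four-step loop returning to the origin is a valid trail with one doubly-visited site, whereas immediately retracing a step reuses a bond and is rejected.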
\begin{figure}[t!]
\centering
\includegraphics[width=0.35\columnwidth]{example_inhomo_tri_eISAT}
\caption{A self-avoiding trail on the triangular lattice with three doubly-visited sites (green circles) and one triply-visited site (red circle).
Impurities in the lattice are marked with black crosses and prevent adjacent sites being triply-visited.}
\label{fig:InhomoTriISAT}
\end{figure}
To characterise the phases of the system we calculate the average density of doubly- and triply-visited sites $\langle m_2 \rangle / n$ and $\langle m_3 \rangle / n$, respectively.
For the transitions between these phases we consider the variance of parameter $m_i$,
\begin{equation}
c_n^{(i)} = \frac{\text{var}(m_i)}{n} = \frac{\langle m_i^2 \rangle - \langle m_i \rangle^2}{n}.
\label{eq:VarM}
\end{equation}
In the thermodynamic limit this quantity becomes the specific heat which has singular behaviour $c_\infty \sim |T - T_c|^{-\alpha}$ governed by the universal scaling exponent $\alpha$.
If $\alpha < 1$ the transition is continuous, while if $\alpha = 1$ it is first-order and accompanied by a discontinuous jump in the densities.
For the finite-size system a crossover scaling ansatz is introduced and the singular part of the specific heat has the form
\begin{equation}
c_n \sim n^{\alpha\phi} \mathcal{F}\left[n^\phi(T-T_c)\right],
\label{eq:CnScalingModel}
\end{equation}
for some scaling function $\mathcal{F}$.
Near the critical point $T_c$ the scaling function is considered to be a positive constant and the exponent $\alpha$ can be found from the leading-order scaling of the peak of the variance
\begin{equation}
c_{n,\text{peak}}^{(i)} \sim n^{\alpha\phi}.
\label{eq:CnPeakScaling}
\end{equation}
In some cases it is useful to consider the third derivative of the free energy $t_n$, whose peaks scale with exponent $(1+\alpha)\phi$.
Along with the well-known relation $1/\phi = 2-\alpha$ \cite{Brak1993} the scaling of these quantities can be used to determine $\alpha$ and thus the nature of the transition.
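In practice the product $\alpha\phi$ is read off as the slope $s$ of $\log c_{n,\text{peak}}$ against $\log n$, and combining $s=\alpha\phi$ with $1/\phi = 2-\alpha$ gives $\alpha = 2s/(1+s)$. A minimal sketch of this extraction (synthetic data; the helper names are ours):

```python
# Sketch: read off alpha*phi from the peak scaling c_{n,peak} ~ n^{alpha*phi}
# and solve for alpha using the hyperscaling relation 1/phi = 2 - alpha.
import numpy as np

def peak_scaling_exponent(lengths, peaks):
    """Least-squares slope of log(peaks) against log(lengths)."""
    slope, _ = np.polyfit(np.log(lengths), np.log(peaks), 1)
    return slope

def alpha_from_peak_slope(s):
    """Solve alpha*phi = s with 1/phi = 2 - alpha: alpha = 2s/(1+s)."""
    return 2.0 * s / (1.0 + s)

# Synthetic example: peaks grown as n^{1/2} recover a slope of 1/2.
n = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
s = peak_scaling_exponent(n, 3.0 * n**0.5)
```

For instance, a $\theta$-like transition with $\alpha=-1/3$ corresponds to a slope $\alpha\phi=-1/7$, while a slope of $1$ recovers the first-order value $\alpha=1$.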
For the full model it is useful to generalise the specific heat or variance to include the covariance of both parameters via the Hessian matrix
\begin{equation}
H_n =
\begin{pmatrix}
\frac{\partial^2 f_n}{\partial \omega_2^2} & \frac{\partial^2 f_n}{\partial \omega_2 \partial \omega_3} \\
\frac{\partial^2 f_n}{\partial \omega_3 \partial \omega_2} & \frac{\partial^2 f_n}{\partial \omega_3^2}
\end{pmatrix}
,
\label{eq:Hessian}
\end{equation}
where $f_n = -\tfrac{1}{n}\log Z_n$ is the reduced free energy.
The largest eigenvalue of $H_n$, which we denote $c_n^{(\lambda)}$, reduces to $c_n^{(i)}$ in cases where the variance of one parameter $m_i$ is dominant.
In general, phase transitions are indicated by large $c_n^{(\lambda)}$.
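Numerically, if the derivatives are taken with respect to $\log\omega_i$ (which differs from \eref{eq:Hessian} only by smooth factors of $\omega_i$ and does not shift the variance peaks), $H_n$ becomes the covariance matrix of $(m_2,m_3)$ per monomer. A sketch of the evaluation, where $p$ is any normalised Boltzmann distribution over $(m_2,m_3)$ (our illustration; the array contents are hypothetical):

```python
# Sketch: largest eigenvalue c_n^(lambda) of the covariance form of H_n,
# given a normalised joint distribution p[m2, m3] at length n.
import numpy as np

def c_lambda(p, n):
    m2 = np.arange(p.shape[0], dtype=float)[:, None]
    m3 = np.arange(p.shape[1], dtype=float)[None, :]
    mu2, mu3 = (p * m2).sum(), (p * m3).sum()
    cov = np.array([
        [(p * m2**2).sum() - mu2**2,      (p * m2 * m3).sum() - mu2 * mu3],
        [(p * m2 * m3).sum() - mu2 * mu3, (p * m3**2).sum() - mu3**2],
    ])
    return np.linalg.eigvalsh(cov / n)[-1]   # eigvalsh sorts ascending
```

When the two parameters are uncorrelated the off-diagonal entries vanish and the result reduces to the larger of the two single-parameter variances per monomer.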
In addition to derivatives of the free energy we are interested in metric quantities, for example the mean-square end-to-end distance
\begin{equation}
\langle R_n^2 \rangle = \langle ({\mathbf r}_{n} - {\mathbf r}_0)^2 \rangle,
\label{eq:EndToEndRadius}
\end{equation}
where ${\mathbf r}_i$ is the position of the $i^\text{th}$ monomer in the chain.
The scaling of metric quantities is governed by the Flory exponent $\nu$, i.e.~$\langle R_n^2 \rangle \sim n^{2\nu}$.
The model is simulated using the flatPERM algorithm \cite{Prellberg2004}, an extension of the pruned and enriched Rosenbluth method (PERM) \cite{Grassberger1997}.
The simulation works by growing a trail up to some maximum length $N_\text{max}$ and counting the number of multiply-visited sites $m_2$ and $m_3$ at each step. Along the way the cumulative Rosenbluth \& Rosenbluth weight \cite{Rosenbluth1955} of the sample is recorded and used to update the sample weights $W_{n,m_2,m_3}$, which are an approximation to the athermal density of states $d_{n}(m_2,m_3)$ in \eref{eq:CombinedPartition}, for all $n\le N_\text{max}$.
FlatPERM prunes samples with low weight and enriches samples with high weight (relative to the current estimate of $W_{n,m_2,m_3}$) in order to maintain a flat histogram of samples over $n$, $m_2$, and $m_3$.
Flat histogram methods greatly enhance the sampling of low probability states, in this case those configurations with large values of $m_2$ and $m_3$.
The main output of the simulation are the weights $W_{n,m_2,m_3}$, from which thermodynamic quantities are calculated by specifying Boltzmann weights and using the weighted sum
\begin{equation}
\langle Q \rangle_n(\omega_2,\omega_3) = \frac{\sum_{m_2,m_3} Q_{m_2,m_3} \omega_2^{m_2} \omega_3^{m_3} W_{n,m_2,m_3}}{\sum_{m_2,m_3} \omega_2^{m_2} \omega_3^{m_3} W_{n,m_2,m_3}}.
\label{eq:FPQuantity}
\end{equation}
In certain cases it is advantageous to simulate a restricted model by fixing one of the Boltzmann weights $\omega_i$ at the beginning of the simulation.
The sum over the corresponding microcanonical parameter $m_i$ in \eref{eq:FPQuantity} is effectively performed within the simulation by altering the weight by a factor $\omega_i^{m_i}$.
The value of $m_i$ is only used locally at each step and the output weights array is two dimensional instead of three-dimensional for the full model.
The benefit is that the flatPERM algorithm is targeting a flat histogram in two parameters rather than three for the full model and so much larger lengths can be simulated in the same amount of time.
These restricted simulations correspond to a horizontal or vertical line in the $(\omega_2,\omega_3)$ parameter space which is useful for focusing on particular transitions in the phase diagram.
The inhomogeneous lattice is implemented by choosing a set of lattice sites to be inaccessible to the trail before it is grown.
The number of impurities is drawn from the appropriate binomial distribution with $p$ being the probability of any particular site being a valid site for the walk.
These impurities are distributed uniformly over the area of the lattice that would be accessible to a walk of length $n$.
The set of inaccessible sites is reseeded at the beginning of each flatPERM iteration (growing the walk from the origin).
The initial weight of each iteration is set to be the probability of the configuration of lattice impurities.
In this way the output weights $W_{n,m_2,m_3}$ contain the sum over disorder such that any $\langle Q \rangle$ in \eref{eq:FPQuantity} also represents a quenched-type average over disorder \cite{Nakanishi1992}.
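The disorder bookkeeping described above can be sketched as follows (our illustration, not the production code; drawing each site independently with probability $1-p$ of being an impurity is equivalent to drawing the impurity count from the binomial distribution):

```python
# Sketch: sample one impurity configuration per flatPERM iteration and return
# its probability. Seeding the initial tour weight with this probability means
# the accumulated weights carry the quenched-type average over disorder.
import random

def sample_defects(sites, p, rng=None):
    """Return (impurity_set, configuration_probability); helper name is ours."""
    rng = rng or random.Random(0)
    impurities = set()
    prob = 1.0
    for site in sites:
        if rng.random() < 1.0 - p:    # site becomes an impurity
            impurities.add(site)
            prob *= 1.0 - p
        else:                          # site remains available to the trail
            prob *= p
    return impurities, prob
```

The two limiting cases are a defect-free lattice ($p=1$) and a fully blocked one ($p=0$), each occurring with probability one.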
It was recently demonstrated in \cite{Campbell2020} that a parallel implementation of the flatPERM algorithm is possible, whereby each thread grows samples independently but contributes to a global histogram and weights array in shared memory.
This is in contrast to the usual method of running multiple independent instances and then combining the results.
The shared memory approach does not simulate samples at a higher rate but does have the advantage that the approach to equilibrium is much faster in the early stages of running, and thus the algorithm does not need to be run for as long to achieve similar results to the serial implementation.
In this work we employ a parallel implementation of flatPERM to simulate both the restricted and full eISAT models, which involve two and three microcanonical parameters, respectively.
We still run several independent simulations for the same model with each independent instance using multiple threads in parallel.
This provides a measure of statistical uncertainty as well as enough iterations to properly sample the lattice defect configurations.
We thus effectively employ 100s of CPU threads for each model enabling us to simulate $10^5$ iterations of the full model up to length $n = 600$ and $10^6$ iterations of the restricted model up to length $n = 1444$ in less than 100 hours of server walltime, compared to smaller lengths taking several weeks with a serial implementation.
These system sizes are significantly greater than earlier studies of the eISAT model \cite{Doukas2010} and the semi-stiff ISAW model on the inhomogeneous lattice \cite{Bradly2021}.
We are also aided by the fact that self-avoiding trails are sampled slightly faster than self-avoiding walks, since trails typically have more moves available at each step so less pruning is required.
We also remark that our implementation ignores race conditions in the shared memory, but this has little or no effect on efficiency for the system sizes considered.
This is similar to naive parallelisation that can be applied to the Wang-Landau algorithm \cite{Zhan2008}.
\begin{figure}[ht!]
\centering
\includegraphics[width=\textwidth]{full_phase_diagram_N600}
\caption{The behaviour of the model in the full phase space is elucidated by considering the average densities of doubly-visited sites $\langle m_2 \rangle / n$ (left) and triply-visited sites $\langle m_3 \rangle / n$ (middle) and the logarithm of the largest eigenvalue of the covariance matrix $H_n$ (right). In this way a phase diagram can be inferred.
Plots are for length $n = 600$ and $1-p = 0$ (top) and $1-p = 0.2$ (bottom).
Black points in (c) refer to typical configurations of \fref{fig:Configurations}.}%
\label{fig:FullPhase}%
\vspace{0.25cm}
\includegraphics[width=\textwidth]{best_configs_homo}
\includegraphics[width=\textwidth]{best_configs_inhomo}
\caption{Typical configurations at points in the phase space indicated on \fref{fig:FullPhase}(c), which corresponds to the swollen phase, globule phase and maximally dense phase, respectively. Top row (a-c) are for the homogeneous lattice with $1-p = 0$ and bottom row (d-f) are for the inhomogeneous lattice $1-p = 0.2$.}%
\label{fig:Configurations}%
\end{figure}
\section{Phase diagram}
\label{sec:Phase}
First we characterise the phases by looking at the densities and the expected configurations, with and without lattice impurities.
For this we simulated the full eISAT model up to maximum length $n = 600$ using parallel flatPERM.
In \fref{fig:FullPhase} we plot the average density of doubly-visited sites $\langle m_2 \rangle / n$ (left) and average density of triply-visited sites $\langle m_3 \rangle / n$ (middle).
The variance of the microcanonical parameters is also shown in the plots on the right, which plot the logarithm of the largest eigenvalue $\lambda$ of the covariance matrix $H_n$, \eref{eq:Hessian}.
The top row is for the homogeneous lattice, $1-p = 0$, and the bottom row is with lattice defects present, $1-p = 0.2$.
Further visualisation of the phases is given in \fref{fig:Configurations} which shows typical configurations at points in the $(\omega_2,\omega_3)$ phase diagram that are indicative of each phase.
These points are marked with black dots on \fref{fig:FullPhase}(c).
On the homogeneous lattice, $1-p = 0$, we infer that there are three phases, as previously conjectured \cite{Doukas2010}.
For small $\omega_2$ and $\omega_3 \lesssim 8$ the extended phase is characterised by both densities $\langle m_2 \rangle / n$ and $\langle m_3 \rangle / n$ being very small, though non-zero. In this phase the trails are in an extended or swollen configuration like in \fref{fig:Configurations}(a). We confirm below that the Flory exponent is the expected $\nu=3/4$.
For larger $\omega_2$ the system enters the globule phase characterised by collapsed configurations as in \fref{fig:Configurations}(b).
Here $\langle m_2 \rangle / n$ has a significantly larger value that smoothly increases as $\omega_2$ increases, trending to the maximum value $1/2$ at very large $\omega_2$ (very low temperature).
The density of triply-visited sites, $\langle m_3 \rangle / n$, is still small for $\omega_3 \lesssim 8$, but starts to increase as $\omega_3 $ increases, which we argue below is the approach to a maximally dense phase.
The transition to the globule phase from the extended phase is expected to be $\theta$-like and occurs at a critical value $\omega_2^\text{c}$ that depends on $\omega_3$ and decreases as $\omega_3$ increases.
However, it is a weak transition and it is difficult to make out even on the logarithmic scale of \fref{fig:FullPhase}(c).
Lastly, the maximally dense phase appears for large $\omega_3$ where $\langle m_2 \rangle / n$ again becomes small and would vanish as $\langle m_3 \rangle / n$ quickly approaches its maximum value of $1/3$. In fact, the phase is expected to be characterised by the thermodynamic limit $\lim_{n\rightarrow \infty} \langle m_3 \rangle / n = 1/3$ for any point $(\omega_2,\omega_3)$ in this phase.
\fref{fig:Configurations}(c) shows a typical configuration in this phase where the trail is dense in the interior with only a small fraction of the trail in singly- or doubly-visited sites, mainly on the boundary.
The transition to the maximally dense phase from the extended phase is first-order, shown by a line of high variance in \fref{fig:FullPhase}(c).
The transition from the globule phase to the maximally dense phase is continuous, but appears stronger than the $\theta$-like extended-globule transition.
It is expected that the phase boundaries meet at the multi-critical point $(\omega_2,\omega_3) = (5/3,25/3)$, where the eISAT model corresponds to unweighted pure kinetic growth of trails \cite{Doukas2010}.
In our finite-size data the point where the phase boundaries meet differs from the exact kinetic growth point by a small but noticeable amount, despite the much longer lengths we simulate here, suggesting that there are still sizeable finite-size corrections to consider.
In the case of the inhomogeneous lattice, $1-p = 0.2$, where a considerable fraction of the lattice is unavailable to the walks, the extended and globule phases are largely unchanged but there are several differences regarding the maximally dense phase.
The extended phase is still characterised by small values of $\langle m_2 \rangle / n$ and $\langle m_3 \rangle / n$.
In the globule phase $\langle m_3 \rangle / n$ is very close to zero, except near the transition to the maximally dense phase, and $\langle m_2 \rangle / n$ has a larger finite value increasing with $\omega_2$ though still small compared to its possible maximum of $1/2$.
The configurations, shown in \fref{fig:Configurations}(d,e), have the same character as on the homogeneous lattice.
The transition between the extended and globule phases is still too weak to be seen on this scale, even when the other transitions are weakened by the presence of lattice defects.
The largest change when lattice inhomogeneity is introduced is the disruption to the maximally dense phase.
Firstly, the densities have significantly different values compared to the homogeneous lattice case.
Comparing \fref{fig:FullPhase}(a) and (d), we see that the density of doubly-visited sites $\langle m_2 \rangle / n$ is now non-zero for $\omega_2 > 1$ and large $\omega_3$.
From \fref{fig:FullPhase}(b) and (e) we also see that the density of triply-visited sites $\langle m_3 \rangle / n$ is reduced but still substantial.
When looking at the variances in \fref{fig:FullPhase}(c) and (f) it appears that the sharp first-order transition boundary between the extended and maximally dense phases is gone.
There is evidence that a weaker transition remains in roughly the same place and, in fact, the same could be the case for the globule-maximally dense transition. However, if the maximally dense phase disappears and becomes simply a denser version of the globule phase there can be no thermodynamically sharp transition.
The finite size nature of this analysis urges caution and a conservative interpretation suggests that there is a smooth transition as $\omega_3$ is increased for large $\omega_2$.
In fact there are many artefacts in \fref{fig:FullPhase}(f), arising from the difficulty of obtaining good convergence at low temperatures, that make it difficult to ascertain the phase diagram clearly, and we will look more closely at some of these possible transitions below.
Lastly, we note that the kinetic growth model does not map to a critical point of the ISAT model on the inhomogeneous lattice, because the presence of defects allows the kinetic growth trails to become trapped. It is also worth noting that a mapping of kinetic growth to a static model induces an interaction with the defects.
From these plots of the densities, it appears at first sight that there is a difference between the eISAT model and the semi-stiff ISAW model \cite{Bradly2021} regarding the effect of the lattice inhomogeneity on the maximally dense and crystal phases.
In the latter case, lattice inhomogeneity clearly erased the distinction between the globule and crystal phases as the lattice defects prevented anisotropic configurations, and the phase diagram showed only an extended phase and a collapsed phase (\cite{Bradly2021} Fig.~2).
In the eISAT model there still seems to be a distinction, with respect to the densities, between the globule phase and the region of the phase diagram that contained the maximally dense phase in the homogeneous case, even if the difference is smaller.
Regarding the typical configurations, \fref{fig:Configurations}(f) shows that the lattice inhomogeneity breaks the trail into several sub-clusters, each exhibiting a maximally dense interior. However, the overall configuration is no longer maximally dense.
The separation into clusters (blobs) joined by strands of singly-visited sites, and thus an increase in the size of the surface relative to the bulk, accounts for the increase in $\langle m_2 \rangle / n$ and the decrease in $\langle m_3 \rangle / n$ compared to the homogeneous lattice case. So from this point of view the maximally dense phase is replaced by a denser version of the globule phase where the blobs become dense.
This is similar to the semi-stiff ISAW model where well separated sub-clusters form, each with internal anisotropy.
However, the subtle difference is that in that model the global anisotropy of the whole walk becomes drastically reduced when lattice inhomogeneity is introduced since the sub-clusters are not correlated. Overall, this reinforces our interpretation that the maximally dense phase is broken and no real transition between small and large $\omega_3$ occurs.
The prime issue is one of finite-size scaling and the effective lengths at which our simulations are performed. One way to understand this is via the scaling of metric quantities, for example the mean-square end-to-end distance $R_n^2 \sim n^{2\nu}$.
In two dimensions the exponent has well-known values $\nu = 3/4$ in the extended phase, and $\nu = 1/2$ in collapsed phases.
In \fref{fig:R2Scaling} we show log-log plots of $R_n^2$ at points in the phase diagram representing each of the three phases.
Although the specific values of the weights do not matter for this picture, the data for each phase is: extended, $(\omega_2,\omega_3) = (1,1)$; globule, $(\omega_2,\omega_3) = (5,1)$; and maximally dense, $(\omega_2,\omega_3) = (1,20)$.
On the homogeneous lattice (a) all phases show the expected scaling.
Note that the maximally dense phase is not well-formed for the smallest values of $n$ even at the large value of $\omega_3$ chosen as the representative point and so the data for this phase does not indicate any real scaling behaviour until larger $n$.
On the inhomogeneous lattice (c) with $1-p = 0.2$, the scaling in the collapsed phases clearly departs from $\nu = 1/2$ at all values of $n$ and $\langle R_n^2 \rangle$ appears to scale with an effective finite-size exponent between $\nu=1/2$ and $\nu = 3/4$. This indicates that the lengths of our simulations are too small to properly see the low-temperature behaviour in a finite-size scaling analysis. The alternate explanation is that the impurities not only disrupt the maximally dense phase but also destroy the globular phase. This was not seen for the ISAW model, but the lengths of those simulations were shorter than those we have conducted here. We shall return to this point in the conclusion. Important for this work is that, in the presence of impurities, the trails appear to behave in the same way in both collapsed regions of the phase space.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\columnwidth]{R2_loglog_scaling_N600}
\caption{The mean-squared end-to-end distance $\langle R_n^2 \rangle$ without and with lattice inhomogeneity at representative points of each phase.
Data is from the full model up to length $n = 600$.
Dashed reference lines indicate scaling corresponding to $\nu = 1/2, 3/4$.}%
\label{fig:R2Scaling}%
\end{figure}
\section{Phase transitions}
\label{sec:PhaseTransitions}
We now consider each of the homogeneous phase transitions more closely, and how they are affected by the introduction of a small amount of defects.
\subsection{Extended-globule transition}
\label{sec:ExtendedGlobuleTransition}
We first look at the critical transition between the extended and globule phases.
As we have seen in \fref{fig:FullPhase}, this transition is weaker than the others and on the homogeneous lattice it is expected to be a $\theta$-like transition.
In two dimensions the $\theta$ point transition is characterised by $\alpha = -1/3$, thus the peak value of the variance $c_n^{(2)}$ does not diverge and the scaling form of \eref{eq:CnPeakScaling} is not useful.
However, the peak of the third derivative of the free energy $t_n^{(2)}$ \emph{does} diverge, with exponent $2/7$, and we can visualise the peak values to determine the nature of the transition.
We consider moments of $m_2$ as the indicators of this transition, since $\langle m_3 \rangle / n$ changes only slowly near this transition.
In \fref{fig:CnPeaksExtGlob} we plot the peak values of (a) the variance $c_n^{(2)}$ and (b) the third derivative of the free energy $t_n^{(2)}$ using data from the full model but at a fixed value $\omega_3 = 5$, across the extended-globule transition.
For both the homogeneous lattice and the inhomogeneous lattice with a small amount of defects, $1-p = 0.05$, the peaks in $c_n^{(2)}$ are clear.
For larger amounts of inhomogeneity, the peaks are only clear for a smaller range in $n$; for larger lengths the peaks are indistinguishable from the numerical noise.
Where the peaks are well-defined, their magnitudes diverge slowly with increasing $n$ and corrections to scaling are significant, judging by the curvature of the data.
In (b) we show that the peaks of $t_n^{(2)}$ for the homogeneous lattice do diverge, consistent with the dashed line of slope $2/7$.
Thus, we see that the extended-globule transition on the homogeneous lattice has the expected $\theta$-like characteristics.
The inhomogeneous lattice cases are inconclusive on this point due to significant noise in the data.
The extended-globule transition persists on the inhomogeneous lattice, at least for small values of $1-p$, but we cannot be definitive about the nature of this transition,
although it is expected to remain a $\theta$-like transition \cite{Duplantier1988}.
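Peak positions and magnitudes such as those reported here can be refined beyond the sampling grid by fitting a parabola through the three points bracketing the discrete maximum. A sketch with placeholder values, not the simulation data:

```python
import numpy as np

# Illustrative samples of a variance curve c(omega) near its maximum.
w = np.array([2.0, 2.1, 2.2, 2.3, 2.4])
c = np.array([1.10, 1.42, 1.55, 1.38, 1.05])

i = int(np.argmax(c))
# Quadratic through the three samples bracketing the discrete maximum.
a2, b2, c2 = np.polyfit(w[i - 1:i + 2], c[i - 1:i + 2], 2)
w_peak = -b2 / (2 * a2)                    # vertex position
c_peak = np.polyval([a2, b2, c2], w_peak)  # refined peak magnitude
```

The refined magnitude is never below the largest sample, and the vertex stays between the two neighbouring grid points.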
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\columnwidth]{ex-glob_theta_transition}
\caption{The peak values of (a) variance of doubly-visited sites $c_n^{(2)}$ and (b) the third derivative of the free energy $t_n^{(2)}$ near the extended-globule transition for $\omega_3 = 5$.
Data is from the full model simulations up to length $n = 600$.}
\label{fig:CnPeaksExtGlob}%
\end{figure}
\subsection{Globule-maximally dense transition}
\label{sec:GlobuleDenseTransition}
Next, we consider the transition between the globule and maximally dense phases.
We ran additional simulations of the restricted model with fixed $\omega_2 = 3$ up to length $n = 1444$.
Since both phases are collapsed we look at the covariance $c_n^{(\lambda)}$ for a signature of a transition.
In \fref{fig:CnPeaksGlobMax} we show (a) a log-log plot of peaks of $c_n^{(\lambda)}$ and (b) a log-log plot of the peaks of $|t_n^{(\lambda)}|$.
In the homogeneous lattice case we expect a continuous transition with scaling exponent close to $\alpha = 1/2$.
Although we do not have enough data to estimate $\alpha$ or corrections to scaling accurately, the data appears consistent with this exponent, shown by the reference lines in the plots.
For the inhomogeneous lattice cases we only plot points for a limited range of $n$ where the peaks are distinct.
At larger $n$ there is not a clear peak indicating a transition, and this valid range shrinks as $1-p$ increases.
Within this valid range the magnitudes of the peaks of $c_n^{(\lambda)}$ overlap well with the homogeneous lattice case.
This suggests that for lengths that are not too disturbed by the lattice defects the transition exists and is unaltered.
This behaviour persists until some maximum length, dependent on $1-p$, after which the transition is not evident and the two collapsed phases merge.
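A simple way to make "the peaks are distinct" quantitative is to require the maximum to exceed the background by several times the noise level, estimated in a flat region of the curve. A hedged sketch with synthetic data; the curve shape, noise level and threshold $k$ are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
omega = np.linspace(1, 10, 200)
signal = 1.0 + 2.5 * np.exp(-((omega - 5.0) / 0.4) ** 2)  # clear peak at 5
noisy = signal + rng.normal(0.0, 0.05, omega.size)
flat = 1.0 + rng.normal(0.0, 0.05, omega.size)            # no transition

def peak_is_clear(c, k=5.0):
    # A peak counts as distinct if it rises k noise widths above the median.
    background = np.median(c)
    noise = np.std(c[:40])  # flat region far from any peak
    return bool(c.max() - background > k * noise)

found, false_alarm = peak_is_clear(noisy), peak_is_clear(flat)
```

Criteria of this kind make the "valid range" of lengths reproducible rather than judged by eye.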
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\columnwidth]{glob-max_peaks_loglog_N1444_fixed_w2eq3}
\caption{The peak values of (a) $c_n^{(\lambda)}$ and (b) $|t_n^{(\lambda)}|$ near the globule-maximally dense phase transition for several amounts of lattice inhomogeneity.
Data is from restricted model simulations at $\omega_2 = 3$ up to length $n = 1444$.
Reference lines show scaling for exponent $\alpha = 1/2$.}%
\label{fig:CnPeaksGlobMax}%
\end{figure}
\subsection{Extended-maximally dense transition}
\label{sec:ExtendedDenseTransition}
To look at the extended-maximally dense transition more closely we ran additional simulations of the restricted model with fixed $\omega_2 = 1.5$ up to length $n = 1444$.
In \fref{fig:CnPeaksExtMax} we plot the peaks of the variance of triply-visited sites $c_{n,\text{peak}}^{(3)}$ near the extended-maximally dense transition.
In the case of the homogeneous lattice $1-p = 0$, the first order nature of the transition is clear, since the peaks scale linearly with $n$ suggesting an exponent $\alpha = 1$.
In the presence of a small amount of inhomogeneity, $1-p = 0.05$, the linear scaling persists up to some maximum, and this maximum reduces as inhomogeneity increases to $1-p = 0.10$.
Similar to the globule-maximally dense transition, at large $n$ the variance has no identifiable peak to indicate a transition and these points are not shown on \fref{fig:CnPeaksExtMax}.
Unlike the globule-maximally dense transition, however, there is a small window where a peak can be identified but its magnitude scales sublinearly. We can therefore confidently conclude that the first-order transition disappears, but we are less certain about what replaces it.
If the addition of a small amount of lattice inhomogeneity allows a single collapsed phase to persist but without a distinction between globule and maximally dense phases then one expects that the extended-maximally dense transition must change to match the extended-globule transition, which we know to be at least continuous, possibly $\theta$-like.
The fact that there is a small window in the data where this may occur is tantalising, but we cannot be conclusive.
We do not have reliable enough data to probe this with certainty; for example, even where peaks in the variance can be identified, the simulations need further convergence to reliably estimate the third derivative $t_n$ and thus the continuous-transition scaling.
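The distinction drawn here between linear (first-order, $\alpha = 1$) and sublinear peak scaling can be checked with local slopes of $\log c_{n,\text{peak}}$ versus $\log n$. A sketch with synthetic peak series; the amplitudes and exponents are placeholders:

```python
import numpy as np

n = np.array([100.0, 200.0, 400.0, 800.0, 1444.0])
first_order = 0.01 * n          # c_peak ~ n, i.e. alpha = 1
sublinear = 0.05 * n ** 0.5     # e.g. a continuous transition

def local_slopes(n, c):
    # Discrete logarithmic derivative between consecutive lengths.
    return np.diff(np.log(c)) / np.diff(np.log(n))

s_first = local_slopes(n, first_order)
s_sub = local_slopes(n, sublinear)
```

A local slope drifting below 1 at large $n$ would signal the loss of first-order character.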
\begin{figure}[t!]
\centering
\includegraphics[width=0.35\linewidth]{ext-max_peaks_loglog_N1444_fixed_w2eq1-5}
\caption{The peak values of the variance of triply-visited sites $c_n^{(3)}$ near the extended-maximally dense phase transition for several amounts of lattice inhomogeneity.
The dashed reference line has a slope of 1.
Data is from restricted model simulations with fixed $\omega_2 = 1.5$ up to length $n = 1444$.}%
\label{fig:CnPeaksExtMax}%
\end{figure}
\section{Crossover to disordered system}
\label{sec:CrossoverToDisorder}
The extent of the disruption caused by increasing inhomogeneity is different for each transition and each phase.
However, a common feature is that as the inhomogeneity increases, there is a range in $n$ where expected behaviour persists, and above these lengths the transitions are altered to some degree.
The more inhomogeneity is present, the smaller this range is, but it is somewhat \emph{ad hoc} to determine this range from where the scaling behaviour of $c_{n,\text{peak}}^{(i)}$ changes.
Since we have a finite-size system the obvious way to characterise the amount of disorder is by the parameter $\chi = n^\nu\sqrt{1-p}$, which is the ratio of the leading order scaling of metric quantities (e.g.~end-to-end distance) to the mean separation of defects $1/\sqrt{1-p}$.
We are focused on the maximally dense phase so we will use the collapsed phase value for the exponent $\nu$, i.e.~$\nu = 1/2$ and see where this breaks down.
As a measure of the effect of the lattice defects we look at the densities in the inhomogeneous lattice cases relative to the homogeneous lattice case.
These quantities have smaller numerical uncertainty from simulations on an inhomogeneous lattice, compared to the variances considered in the previous section.
We define
\begin{equation}
\delta \langle m_i \rangle = \frac{\langle m_i \rangle_p - \langle m_i \rangle_0}{\langle m_i \rangle_0},
\label{eq:DeltaMi}
\end{equation}
where $\langle m_i \rangle_p$ and $\langle m_i \rangle_0$ are the densities calculated for the inhomogeneous and homogeneous lattice cases, respectively.
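In code, the crossover parameter and \eref{eq:DeltaMi} amount to a few lines; the density arrays below are illustrative placeholders rather than measured values:

```python
import numpy as np

nu = 0.5                    # collapsed-phase metric exponent
p = 0.95                    # occupation probability; 1-p is the defect density
n = np.array([100.0, 400.0, 900.0, 1444.0])
chi = n ** nu * np.sqrt(1 - p)          # crossover parameter

m3_hom = np.array([0.320, 0.327, 0.330, 0.331])  # <m_3>/n, homogeneous
m3_inh = np.array([0.322, 0.325, 0.318, 0.310])  # <m_3>/n, inhomogeneous
delta_m3 = (m3_inh - m3_hom) / m3_hom            # relative density shift
```

The placeholder numbers are chosen to mimic the qualitative trend of a small low-disorder enhancement crossing over to a high-disorder reduction.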
In \fref{fig:MiVsChi} we plot $\delta \langle m_i \rangle$ as a function of $\chi$ using data from the restricted model with fixed $\omega_2 = 3$ at a large value of $\omega_3 = 100$ to highlight the effect in the maximally dense phase.
It is worth remarking that at this point in the phase diagram $\langle m_2 \rangle_p$ is small and $\langle m_3 \rangle_p$ is close to $1/3$, regardless of $1-p$.
We identify low- and high-disorder regimes, delineated around $\chi \approx 6$.
This point is common to both densities and it also corresponds to the values of $n$ where the peaks of the variances change behaviour in \sref{sec:PhaseTransitions}.
In \fref{fig:MiVsChi}(a) $\delta \langle m_2 \rangle$ is largely independent of the inhomogeneity in the low-disorder regime, where lattice defects are present but are too few to disrupt very dense configurations.
There is a marked change in behaviour in the high-disorder regime where $\delta \langle m_2 \rangle$ increases with $\chi$; there is still some small dependence on $1-p$ but it is not clear from this data if this is significant.
There are two possible effects that contribute to this enhancement.
Firstly, a lattice defect prevents triply-visited sites in its immediate vicinity so more doubly-visited sites appear in the interior of a configuration.
Secondly, lattice defects inhibit a single dense globule in favour of several smaller sub-clusters, thus increasing the surface of the configuration (where doubly-visited sites appear) relative to the bulk (dominated by triply-visited sites).
Judging by the most probable configurations shown in \fref{fig:Configurations} it seems that the second effect is stronger.
The effect of inhomogeneity on the density of triply-visited sites is different, as shown in \fref{fig:MiVsChi}(b).
In the low-disorder regime $\delta \langle m_3 \rangle$ appears to be enhanced relative to the homogeneous lattice case, but this is actually a finite-size effect as the enhancement decreases as $\omega_3$ is increased.
We speculate that a small amount of inhomogeneity inhibits the average size of configurations which reduces the size of the surface (dominated by doubly-visited sites) relative to the bulk (dominated by triply-visited sites).
Recall that for larger $1-p$ small $\chi$ corresponds to smaller $n$, where this effect is more significant.
In the high-disorder regime $\delta \langle m_3 \rangle$ is reduced as $\chi$ increases and a residual dependence on $1-p$ is more prominent.
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\columnwidth]{mi_vs_chi_w2_3_w3_100}
\caption{The densities of the inhomogeneous lattice model relative to the homogeneous lattice model, versus the scaling parameter $\chi$, in the maximally dense phase, $(\omega_2,\omega_3) = (3,100).$}%
\label{fig:MiVsChi}%
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.7\columnwidth]{schematic_phases}
\caption{Schematic phase diagrams for (a) low disorder, including homogeneous lattice, and (b) near $\chi \approx 6$.
The solid blue line is a first-order transition, the dotted black line is a $\theta$-like phase transition and the dashed red line is a continuous phase transition.}%
\label{fig:SchematicPhases}%
\end{figure}
We summarise our findings in \fref{fig:SchematicPhases} with two schematic phase diagrams.
When the amount of disorder is zero or asymptotically small there is a scaling regime, $\chi \lesssim 6$, which includes the homogeneous lattice case, such that the system contains three phases, shown in (a).
In this phase diagram the behaviour of the transitions between the phases is known including that they meet at a multi-critical point.
Strictly, then, this phase diagram is thermodynamically valid only for the homogeneous lattice, but there is a scaling regime, characterised by $\chi$, in which it also applies.
At some point around $\chi \approx 6$ the maximally dense phase is disrupted and the transition to the globule phase disappears.
Further, the extended-maximally dense transition changes to a continuous one and in order to be consistent with what was the extended-globule transition, we expect that it becomes $\theta$-like.
We show in (b) a schematic phase diagram for the finite-impurity case with only two phases.
It is possible that the phase boundaries may have shifted relative to the small $\chi$ phase diagram, but we cannot quantify this shift.
However, we do expect that the phase boundary, if it does exist in this regime, does not include the kinetic growth point from the homogeneous lattice case.
Of course, the alternate hypothesis is that there are no longer any phase boundaries for fixed finite levels of impurities in the thermodynamic limit.
The resolution of this question requires further work with longer length simulations.
\section{Conclusion}
\label{sec:Conclusion}
We have simulated the extended ISAT model of lattice polymers on the homogeneous and inhomogeneous triangular lattices.
The presence of lattice defects disrupts the maximally dense phase and the transitions to the extended and globule phases in different ways.
This work complements a previous study of the semi-stiff ISAW model on the square lattice \cite{Bradly2021}.
In that model the low temperature analogue to the maximally dense phase is a crystal phase (also maximally dense but with added anisotropy) characterised by closely packed long straight segments.
It was intuitive that lattice defects would inhibit such crystalline configurations and this was most apparent in the average anisotropy of the configurations.
In particular, the value of the anisotropy in the crystal phase displayed crossover behaviour between low and high disorder regimes when parameterised by an appropriate scaling parameter.
Anisotropy is not useful in the eISAT model but by introducing the same scaling parameter $\chi$ we find a crossover scaling between homogeneous and inhomogeneous lattice regimes.
The crossover is apparent in the values of the densities $\langle m_2 \rangle / n$ and $\langle m_3 \rangle / n$ in the maximally dense phase and the scaling of peaks of $c_n$ near the transitions.
Although the maximally dense phase in the eISAT model is different to the crystalline phase in the ISAW model, the introduction of lattice defects disrupts these dense phases in similar ways, causing the formation of dense sub-clusters.
Our findings are consistent with the expectation that a critical transition between the extended and collapsed phases persists as the amount of lattice inhomogeneity increases, and that the transition between the globule and maximally dense phases becomes a thermodynamically smooth change. However, our simulation lengths are not large enough to verify exponents. One question that needs addressing with longer simulations is whether lattice impurities also disrupt the globule phase.
\begin{acknowledgements}
Financial support from the Australian Research Council via its Discovery Projects scheme (DP160103562) is gratefully acknowledged by the authors.
\end{acknowledgements}
\section{Introduction}
In a nearly critical system, a local defect that prefers the ordered phase can induce the
nucleation of a droplet of local order in the nonordered background. Such droplets arise,
e.g., in disordered systems due to the presence of rare strongly coupled spatial regions.
They can have surprisingly strong consequences for the properties of the phase transition.
In a classical magnet at nonzero temperature, a large finite-size droplet does not have a
static magnetization; instead it fluctuates very slowly because flipping the droplet
requires coherently changing the order parameter in a large volume. More than 30 years
ago, Griffiths \cite{Griffiths69} showed that rare regions, and the magnetic droplets
formed on them, lead to a singularity in the free energy in a whole temperature region
above the critical point, which is now known as the Griffiths region or the Griffiths
phase.\cite{RanderiaSethnaPalmer85} Later, it was shown that this singularity is only an
essential one \cite{Wortis74,Harris75,BrayHuifang89} and thus probably unobservable in
experiment (see also Ref.\ \onlinecite{Imry77}).
The effects of magnetic droplets are greatly enhanced if the underlying defects are
extended macroscopic objects (linear or planar defects). In these cases, the droplet
dynamics is even slower, which further enhances their effects. This was first found in the
McCoy-Wu model, \cite{McCoyWu68,McCoyWu68a} a 2D Ising model with linear defects. Later
it was studied in great detail in the context of the quantum phase transition of the
random transverse-field Ising model where the defects are extended in the imaginary time
dimension. \cite{Fisher92,Fisher95} In these systems, the Griffiths singularity in the
free energy actually takes a power-law form, and the susceptibility diverges inside the
Griffiths region.
Recently, it has been shown that even stronger effects than these power-law Griffiths
singularities can occur in Ising magnets with planar
defects.\cite{Vojta03b,SknepnekVojta04} Droplets that are extended in two dimensions can
undergo the magnetic phase transition (and develop a static order parameter)
independently from the bulk system. This leads to a destruction of the global sharp phase
transition by smearing. (Note that an unusual magnetization-temperature relation was
already found in the numerical mean-field analysis, Ref.\ \onlinecite{BBIP98}, but it was
interpreted as power-law critical behavior with a very large exponent $\beta$.) Similar
smeared phase transitions have also been found in a non-equilibrium system in the
presence of linear defects \cite{Vojta04}. A recent review of these and other rare region
effects can be found in Ref.\ \onlinecite{Vojta06}.
One particularly interesting class of problems concerns droplets in metallic quantum
magnets. In these systems, the dynamics of the magnetic modes is overdamped because they
couple to gapless fermionic excitations. In metallic (Ising) antiferromagnets, this
dissipative environment strongly suppresses tunneling, and sufficiently large droplets
completely freeze at low temperatures.
\cite{CastroNetoJones00,MillisMorrSchmalian01,MillisMorrSchmalian02} The global quantum
phase transition is thus smeared. \cite{Vojta03a}
In metallic ferromagnets, the situation is further complicated because the coupling
between the magnetic modes and the gapless fermionic degrees of freedom generates an
effective long-range spatial interaction between the magnetic fluctuations.
\cite{VBNK96,VBNK97,BelitzKirkpatrickVojta97} This interaction which takes the form
$r^{-(2d-1)}$ for clean electrons and $r^{-(2d-2)}$ for diffusive electrons, where $d\ge
2$ is the spatial dimensionality, can be viewed as a result of generic scale invariance
(for a recent review see Ref.\ \onlinecite{BelitzKirkpatrickVojta05}). Understanding
defects in nearly critical metallic quantum ferromagnets thus leads to the question of
whether the existence and the properties of magnetic droplets are influenced by the
long-range spatial interaction.
In this paper, we therefore develop the theory of a single defect coupling to the square
of the order parameter in a nearly critical classical or quantum magnet with power-law
spatial interactions of the form $r^{-(d+\sigma)}$ with $\sigma > 0$ to ensure a proper
thermodynamic limit. A crucial effect of the long-range interactions is that the tail of
the magnetic droplet decays into the bulk region like a power-law of the distance as
mandated by Griffiths theorem. \cite{Griffiths67} Such a strong tail extending into the
region that prefers the disordered phase can be expected to be energetically unfavorable.
To find out to what extent this hinders the formation of the magnetic droplet, we study
the droplet free energy within the saddle-point approach. In the quantum case, we also
consider the tunneling dynamics of the droplet for three cases: undamped dynamics,
overdamped dynamics due to Ohmic dissipation, and a conserved overdamped dynamics as in
the itinerant ferromagnet.
Our paper is organized as follows: We introduce our model, a classical or quantum
$\phi^4$-theory with long-range spatial interactions, in Sec.~\ref{sec:The-Model}. In
Sec.~\ref{sec:Droplet-static-profile}, we analyse the free energy of a droplet within
saddle-point approximation, and we discuss fluctuations. The droplet dynamics in the
quantum case is considered in Sec.~\ref{sec:The-dynamics}. The concluding
Sec.~\ref{sec:conclusions} is devoted to a summary as well as a discussion of the order
parameter symmetry and the consequences of our results for quantum Griffiths effects.
\section{The Model\label{sec:The-Model}}
In this section we introduce our model, a $d$-dimensional Landau-Ginzburg-Wilson field
theory with long-range power-law interactions for a scalar order parameter field
$\varphi$. We first formulate the model for the case of a zero-temperature quantum phase
transition, and we later discuss the necessary changes for a classical thermal phase
transition. The action of our quantum $\phi^4$-theory reads
\begin{equation}
S=S_{\rm stat}+S_{\rm dyn},\label{eq:total_action}
\end{equation}
with the static part given by
\begin{eqnarray}
S_{\rm stat} &=& \int d\tau \int d\mathbf{x}
d\mathbf{y}\varphi\left(\mathbf{x},\tau\right)
\Gamma\left(\mathbf{x},\mathbf{y}\right)\varphi\left(\mathbf{y},\tau\right)
\nonumber\\
&+&\frac{u}{2}\int d\tau d\mathbf{x}\varphi^{4}\left(\mathbf{x},\tau\right)~.
\label{eq:S_stat}
\end{eqnarray}
Here, $\mathbf{x}$ and $\mathbf{y}$ are position vectors and $\tau$ is imaginary time.
The bare two-point vertex, $\Gamma\left(\mathbf{x},\mathbf{y}\right)= \Gamma_{\rm
NI}(\mathbf{x})\delta\left(\mathbf{x}-\mathbf{y}\right) +\Gamma_{\rm
I}\left(\mathbf{x},\mathbf{y}\right)$, contains a non-interacting part and the attractive
long-range interaction. The latter is given by
\begin{equation}
\Gamma_{\rm I}\left(\mathbf{x},\mathbf{y}\right)=
-\gamma\left[\xi_{0}^{2}+\left|\mathbf{x}-\mathbf{y}\right|^{2}\right]^{-\left(\frac{d+\sigma}{2}\right)}.
\label{eq:I_kernel}
\end{equation}
Here, $\gamma$ is the interaction strength, $\xi_0$ is a microscopic cutoff length scale
of the order of the lattice constant, and $\sigma$ controls the range of the interaction.
To ensure a proper thermodynamic limit (an extensive free energy), $\sigma$ must be
positive. Note that an additional short-range interaction of the usual form
$|\nabla\varphi|^2$ can be added, if desired. As will be shown in Sec.\
\ref{subsec:SP-equation}, its contribution is subleading. The noninteracting part of the
vertex reads
\begin{equation}
\Gamma_{\rm NI}\left(\mathbf{x}\right)=t_{0}+\delta t\left(\mathbf{x}\right)+\Gamma_{0},
\label{eq:NI_kernel}
\end{equation}
where $t_{0}$ is the bulk distance from criticality,\footnote{In principle, one must
distinguish between the bare and the renormalized distance from the critical point. We
will suppress this difference unless otherwise noted, because it is of no importance for
our considerations.}
and the constant $\Gamma_{0}$ is chosen
such that it cancels the $(\mathbf{q}=0)$ Fourier component of the interaction (thus ensuring
that the bulk critical point is indeed at $t_{0}=0$). It takes the value
$\Gamma_0=\Omega_d\gamma\xi_0^{-\sigma}\,B(\sigma/2,d/2)/2$.
Here $\Omega_d$ is the surface of a $d$-dimensional unit sphere, and $B(x,y)$ is Euler's beta function.
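As a numerical consistency check, $\Gamma_0$ can be evaluated both from this closed form and by direct quadrature of the $\mathbf{q}=0$ component of the interaction, $\Gamma_0 = \int d\mathbf{y}\, \gamma\left[\xi_0^2 + |\mathbf{y}|^2\right]^{-(d+\sigma)/2}$. A Python sketch with illustrative parameter values:

```python
import math
import numpy as np

d, sigma, gamma_int, xi0 = 3, 1.0, 0.5, 1.0   # illustrative parameters

def beta(x, y):
    # Euler beta function via gamma functions.
    return math.gamma(x) * math.gamma(y) / math.gamma(x + y)

omega_d = 2 * math.pi ** (d / 2) / math.gamma(d / 2)  # unit-sphere surface
gamma0 = omega_d * gamma_int * xi0 ** (-sigma) * beta(sigma / 2, d / 2) / 2

# Direct radial quadrature of the q = 0 interaction component.
r = np.linspace(0.0, 2000.0, 400001)
f = r ** (d - 1) / (xi0**2 + r**2) ** ((d + sigma) / 2)
gamma0_num = omega_d * gamma_int * np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(r))
```

For $d=3$, $\sigma=1$ the closed form reduces to $\Gamma_0 = \pi^2\gamma/\xi_0$, which the quadrature reproduces up to the truncation of the slowly decaying tail.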
$\delta t\left(\mathbf{x}\right)$
is the defect potential. For definiteness we consider a single spherically symmetric defect at
the origin,
\begin{equation}
\delta t (\mathbf{x}) = \left\{\begin{array}{rr} -V & \quad (|\mathbf{x}|<a) \\ 0 & (|\mathbf{x}|>a) \end{array}
\right.~.
\label{eq:defect}
\end{equation}
We are interested in the case $V>0$, i.e., in defects that favor the ordered phase.
When discussing the quantum tunneling dynamics of the magnetic droplets in
Sec.~\ref{sec:The-dynamics}, we will compare three different dynamical actions.
(i) In the undamped case, the dynamical action is given by
\begin{equation}
S_{\rm dyn}^{(1)}= T\sum_{\omega_{n}}\int d\mathbf{q}~\frac {\omega_{n}^2} {c^2}
\left|\tilde{\varphi}\left(\mathbf{q},\omega_{n}\right)\right|^{2} \label{eq:S_dyn_z1}
\end{equation}
where $\tilde{\varphi}(\mathbf{q},\omega_{n})$ is the Fourier transform of the order
parameter field in terms of wave number $\mathbf{q}$ and Matsubara frequency $\omega_n$,
$T$ is the temperature, and $c$ plays the role of a velocity of the undamped modes.
(ii) If the magnetic modes are coupled to an ohmic bath, the leading term in the dynamic action
takes the form
\begin{equation}
S_{\rm dyn}^{(2)}=\tilde\alpha T\sum_{\omega_{n}}\int d\mathbf{q}~\left|\omega_{n}\right|
\left|\tilde{\varphi}\left(\mathbf{q},\omega_{n}\right)\right|^{2}, \label{eq:S_dyn_z2}
\end{equation}
where $\tilde\alpha$ measures the strength of the dissipation, and there is a microscopic
frequency cutoff $|\omega_n| < \omega_{\rm mic}$.
(iii) Finally, we also consider the case of overdamped dynamics with order parameter conservation
analogous to the itinerant ferromagnet. The leading term in the dynamic action is given by
\begin{equation}
S_{\rm dyn}^{(3)}= \tilde\alpha_c T\sum_{\omega_{n}}\int
d\mathbf{q}\frac{\left|\omega_{n}\right|}{q}\left|
\tilde{\varphi}\left(\mathbf{q},\omega_{n}\right)\right|^{2}.
\label{eq:S_dyn_z3}
\end{equation}
The action defined in Eqs.\ (\ref{eq:total_action}) to (\ref{eq:S_dyn_z3}) describes a
system close to a \emph{quantum} phase transition. In order to investigate a droplet in a
system close to a \emph{classical thermal} phase transition, we simply drop the dynamical
piece of the action and eliminate the imaginary time-dependence of the order parameter
field.
\section{Existence of magnetic droplets\label{sec:Droplet-static-profile}}
In this section we investigate to what extent the existence of droplets is influenced by
the long-range spatial interaction. The basic idea is as follows: If the local potential
$t_0-V$ on the defect is negative, magnetic order is preferred on the defect even though
the bulk system may be nonmagnetic, $t_0>0$. Figure \ref{fig:profile} shows a schematic
of the local order parameter profile in this situation, comparing short-range and
long-range interactions.
\begin{figure}
\includegraphics[width=6cm]{profile.eps}
\caption{(Color online) Schematic local order parameter profiles for defect induced
droplets for short-range (a) and long-range (b) interactions. The dashed line
depicts the defect potential.}
\label{fig:profile}
\end{figure}
In the short-range case, the tail of the droplet profile falls off exponentially outside
the defect.\cite{NVBK99b,MillisMorrSchmalian01,MillisMorrSchmalian02} Thus the tail
provides only a subleading surface term to the droplet free energy. In contrast, for the
long-range interaction (\ref{eq:I_kernel}), the tail must take a power-law form because
Griffiths theorem\cite{Griffiths67} dictates that the magnetic correlations cannot decay
faster than the interaction. The tail thus extends far into the bulk region where the
local potential is positive, and therefore leads to a large positive contribution to the
droplet free energy. In this section we study whether this mechanism hinders the
formation of magnetic droplets for long-range interactions.
\subsection{Saddle-point equation\label{subsec:SP-equation}}
We start by analyzing the action (\ref{eq:total_action}) within saddle-point
approximation, focusing on the case $t_0>0$ (noncritical bulk system) because it is
relevant for Griffiths phenomena. We can restrict ourselves to time-independent solutions
because they have the lowest saddle-point actions (any time dependence produces an extra,
strictly positive contribution from $S_{\rm dyn}$). Setting
$\varphi(\mathbf{x},\tau)=\phi(\mathbf{x})$ and minimizing the total action with respect
to this field leads to the saddle-point equation
\begin{equation}
\left(t_0 +\delta t\left(\mathbf{x}\right)+\Gamma_{0}\right)\phi\left(\mathbf{x}\right)+u\phi^{3}
\left(\mathbf{x}\right) = \! \int \! \frac{\gamma\phi\left(\mathbf{y}\right)d\mathbf{y}}
{\left[\xi_{0}^{2}+\left|\mathbf{x}-\mathbf{y}\right|^{2}\right]^{\frac{d+\sigma}{2}}}.
\label{eq:Saddle-point_stat}
\end{equation}
Note that the classical action discussed at the end of section \ref{sec:The-Model} leads
to the same saddle-point equation. Therefore, the remainder of this section applies to
both the classical and quantum cases.
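To complement the asymptotic analysis that follows, the saddle-point equation can also be attacked numerically. The sketch below minimizes the discretized static action by gradient descent in $d=1$ on a finite grid; all parameter values, the grid, and the initial guess are assumptions chosen for a quick illustration, not values fixed by the text.

```python
import numpy as np

# d = 1 toy discretization of the saddle-point problem; illustrative values.
t0, V, a = 0.3, 2.0, 3.0
gamma, sigma, xi0, u = 0.2, 1.0, 1.0, 1.0

L, dx = 60.0, 0.2
x = np.arange(-L, L + dx / 2, dx)
K = 1.0 / (xi0**2 + (x[:, None] - x[None, :])**2) ** ((1 + sigma) / 2)
gamma0 = gamma * dx * K[x.size // 2].sum()   # discrete q = 0 component
dt = np.where(np.abs(x) < a, -V, 0.0)        # defect potential

phi = 1.3 * np.exp(-x**2 / (2 * a**2))       # droplet-shaped initial guess
for _ in range(2500):
    # Half the functional gradient of the discretized static action.
    grad = (t0 + dt + gamma0) * phi + u * phi**3 - gamma * dx * (K @ phi)
    phi -= 0.1 * grad

phi_center = phi[x.size // 2]
residual = float(np.max(np.abs(grad)))
```

For these parameters the converged core value lands near $\sqrt{(V-t_0)/u}$ and the profile develops a slowly decaying positive tail, in line with the ansatz used below.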
We have not managed to solve the nonlinear integral equation (\ref{eq:Saddle-point_stat})
in closed form. We therefore first present analytical results for the behavior of
$\phi(\mathbf{x})$ far away from the defect, and then we complement them by a numerical
solution. For sufficiently large $V$ (such that $t_0-V$ is sufficiently negative), we
expect the order parameter in the droplet core, $|\mathbf{x}|<a$, to be roughly constant.
Griffiths' theorem\cite{Griffiths67} mandates that the droplet tail cannot decay faster
than $|\mathbf{x}|^{-(d+\sigma)}$; we therefore try the spherically symmetric ansatz
\begin{equation}
\phi\left(\mathbf{x}\right)= \left\{ \begin{array}{lr} \phi_{0} & \quad
(|\mathbf{x}|<a)\\ C/|\mathbf{x}|^{d+\sigma} & (|\mathbf{x}|> a)\end{array}\right. ~,
\label{eq:ansatz_phi}
\end{equation}
with parameters $\phi_0$ and $C$. Note that in general, the ansatz (\ref{eq:ansatz_phi})
is not continuous at $|\mathbf{x}|= a$. To cure this unphysical behavior, there must be an
intermediate region $a<|\mathbf{x}|<a+\xi_m$ which connects the core with the asymptotic region
in (\ref{eq:ansatz_phi}). We will come back to this point later in this
section.
We now insert the ansatz (\ref{eq:ansatz_phi}) into the saddle-point equation
(\ref{eq:Saddle-point_stat}) and analyze it in the limit of large defects, $a \gg \xi_0$,
and large distance, $|\mathbf{x}| \gg a$ where (\ref{eq:Saddle-point_stat}) can be
linearized in $\phi$. We find that the ansatz indeed solves the linearized saddle-point
equation with the amplitude $C$ given by (to leading order in $a$)
\begin{equation}
C=\frac{\Omega_d\phi_{0}\gamma}{dt_{0}} a^{d}~.
\label{eq:C-scaling}
\end{equation}
Note that $C$ diverges when the bulk system approaches criticality ($t_0 \to 0$)
indicating that the ansatz (\ref{eq:ansatz_phi}) is not valid for a defect in a
\emph{critical} bulk. We will come back to this point in the next subsection.
To determine $\phi_0$, we now calculate the saddle-point action by inserting the solution
(\ref{eq:ansatz_phi}) with (\ref{eq:C-scaling}) into the action (\ref{eq:total_action}).
The result is the sum of a droplet core term, a tail term, and a core-tail interaction
term. The core term takes the form $(\Omega_d/d)a^d\phi_0^2 (t_0-V +u\phi_0^2/2)$. The
contribution of the long-range interaction is exactly cancelled by the $\Gamma_0$-term,
as must be the case for a constant order parameter. Interestingly, the tail term and the
core-tail interaction term are subleading in the limit of large defects, $a \gg \xi_0$.
Their leading $a$-dependencies are $a^{d-2\sigma}$ and $a^{d-\sigma-1}$ (up to possible
logarithmic corrections), respectively. Finally, we have to consider the intermediate
region $a<|\mathbf{x}|<a+\xi_m$ in which the droplet core smoothly connects to the asymptotic
tail. From the numerical solution of the saddle-point equation (discussed in the next
subsection) we found that the width of the intermediate region is of the order of the
microscopic scale, $\xi_m \sim \xi_0$ (at least as long as the bulk system is not too
close to criticality; see next subsection for details). Importantly, $\xi_m$ does not
depend on the defect size $a$. Therefore, the intermediate region can at most make a
surface-type contribution to the droplet free energy, i.e., it can at most scale like
$a^{d-1}$.
Collecting all the terms, we find that the saddle-point action takes the form
\begin{equation}
S_{\rm SP}=\frac{\Omega_d}{d}\phi_{0}^{2} a^{d}\left(t_{0}-V+\frac{u}{2}\phi_{0}^{2}\right)+{\cal O}\left(a^{d-1},a^{d-2\sigma}\right)
\label{eq:SP-action}
\end{equation}
in the limit of a large defect ($a\to \infty$). Minimizing $S_{\rm SP}$ with respect to $\phi_0$ gives the
optimal value
\begin{equation}
\phi_{0}=\sqrt{\frac{V-t_{0}}{u}}~.
\label{eq:phi_0-scaling}
\end{equation}
This means, in the limit of a large defect, a droplet of local order starts to form as
soon as the local potential $t_0-V$ on the defect becomes negative. For finite $a$, the
subleading terms in (\ref{eq:SP-action}) lead to a shift in the onset of local order that
can be described by finite-size scaling in the usual way.
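The minimization of Eq.~(\ref{eq:SP-action}) over $\phi_0$ is elementary and easy to verify numerically; a sketch with illustrative parameters:

```python
import numpy as np

# Leading-order droplet action S(phi0) ~ phi0^2 * (t0 - V + u*phi0^2/2),
# up to a positive prefactor that does not affect the minimizer.
t0, V, u = 0.2, 1.0, 0.5     # illustrative values with V > t0

phi = np.linspace(0.0, 3.0, 30001)
action = phi**2 * (t0 - V + 0.5 * u * phi**2)
phi0_num = phi[np.argmin(action)]
phi0_exact = np.sqrt((V - t0) / u)
```

The grid minimum agrees with the analytic value $\phi_0 = \sqrt{(V-t_0)/u}$ to within the grid spacing.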
The results (\ref{eq:SP-action}) and (\ref{eq:phi_0-scaling}) are identical to the case
of short-range interactions.\cite{NVBK99b,MillisMorrSchmalian01,MillisMorrSchmalian02} We
thus arrive at the somewhat surprising conclusion that even though the long-range
interactions do induce a power-law tail of the droplet, they do not change the leading
behavior of its free energy (in the limit of large defects), and thus do not hinder the
existence of large droplets.
We also note that an additional short-range interaction of the form
$\left|\nabla\phi\right|^{2}$ in the static action (\ref{eq:S_stat}) will not modify our
results. Clearly, in the core region of the droplet it plays no role, and far away from
the core $\left(\mathbf{x} \gg a\right)$, it only produces a subleading power-law. Its
contribution in the intermediate region can at most be of order $a^{d-1}$.
\subsection{Fluctuations}
\label{subsec:Fluctuations}
So far, we have analyzed the magnetic droplets within saddle-point approximation. In this
subsection we discuss to what extent fluctuations modify the above saddle-point analysis.
It is useful to divide the fluctuations into two classes, small fluctuations about the
saddle-point solution and collective reorientations of the entire droplet in (imaginary)
time. These two classes are well separated if the local order on the defect is properly
developed, i.e., $V-t_0\gtrsim u$. The collective reorientations determine the long-time
quantum dynamics of the droplet. They will be considered in more detail in Sec.\
\ref{sec:The-dynamics}.
In contrast, small long-wavelength fluctuations could potentially modify the droplet
profile (\ref{eq:ansatz_phi}), in particular the form of the magnetization tail. To study
the relevance of these fluctuations, we expand the action (\ref{eq:total_action}) about
the saddle-point solution and perform a tree-level (power-counting) renormalization group
analysis. The results depend qualitatively on whether or not the bulk system is critical.
As long as the bulk system is in its disordered phase, $t_0>0$, the asymptotic
long-distance decay of the droplet magnetization is controlled by the \emph {stable}
large-$t_0$ fixed point of the bulk rather than its critical fixed point. Since this
stable fixed point does not have anomalous dimensions, the saddle-point analysis is
qualitatively correct and the decay exponent in (\ref{eq:ansatz_phi}) remains unchanged.
Thus, the fluctuations only renormalize nonuniversal prefactors. Analogous results were
found in Ref.\ \onlinecite{MillisMorrSchmalian01} for the case of short-range
interactions. Note that critical fluctuations \emph{on the defect} can change the exponent
in the relation (\ref{eq:phi_0-scaling}) close to the onset of local order at $t_0-V=0$,
provided the system is below its upper critical dimension. However, this has no bearing
on the form of the tail.
In contrast, if the bulk system is right at the transition, $t_0=0$, the long-distance
magnetization decay is controlled by the exponent $\eta$ of the \emph{critical} fixed
point via $\phi(\mathbf{x}) \sim |\mathbf{x}|^{-d+2-\eta}$ (because far from the defect,
$\phi(\mathbf{x})$ falls off as the bulk correlation function). For a classical magnet
with long-range interactions this fixed point was studied in the seminal work of Fisher,
Ma, and Nickel.\cite{FisherMaNickel72} They found that the critical behavior is
mean-field-like for $\sigma<d/2$ with $\eta=2-\sigma$. For $\sigma>2-\eta_{\rm SR}$
(where $\eta_{\rm SR}$ is the exponent of the corresponding short-range model), the
critical behavior is identical to that of the short-range
model.\cite{Sak73,LuijtenBlote02,Cardy_book96} In between, the exponents are nonclassical
and interpolate between mean-field and short-range behavior. Let us also point out that
interesting crossover phenomena occur when the bulk system is close but not exactly at
the critical point. In this case the critical fixed point controls the magnetization
decay at intermediate distances (of the order of the bulk correlation length) from the
defect while the asymptotic behavior is again given by the saddle-point result
(\ref{eq:ansatz_phi}).
\subsection{Numerical solutions of the saddle-point equation}
\label{subsec:Numerics}
In this subsection we confirm and complement the asymptotic analysis of the saddle-point
equation (\ref{eq:Saddle-point_stat}) by a numerically exact solution.
We study both one and three space dimensions. In the three-dimensional case, for a
spherical defect and droplet, the angular integration on the r.h.s. of the saddle-point
equation (\ref{eq:Saddle-point_stat}) can be carried out analytically leading to a
one-dimensional integral equation in the radial direction. We now discretize space in units
of the microscopic length $\xi_{0}$ and fix the energy scale by setting $u=1$. The
resulting set of nonlinear equations is solved by the following procedure: We start from
an ansatz for $\phi$ (e.g., the ansatz given in (\ref{eq:ansatz_phi})) and numerically
perform the integral in the long-range term of (\ref{eq:Saddle-point_stat}). We then
determine an improved value for $\phi$ by solving the remaining cubic equation at each
point by standard methods. These steps are repeated iteratively until the solution
converges.
In this way, we have analyzed one-dimensional systems with $2\times 10^{4}$ to $2\times
10^{5}$ points and three-dimensional systems with $10^{4}$ to $10^{5}$ points in the radial
direction. We have studied the cases $\sigma=1,2,3$, large defects $a \gg 1$ and various
values of $t_0$, $V$ and $\gamma$. For weak long-range interactions and away from bulk
criticality, our procedure converges rapidly. With increasing $\gamma$ and decreasing
$t_0$, the convergence becomes slower. However, in all cases, our self-consistency cycle
eventually converges, giving us a numerically exact solution of the saddle-point
equation.
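The iterative scheme described above can be sketched compactly. The block below is an illustrative one-dimensional reimplementation, not the code used for the figures: it assumes a schematic saddle-point equation of the form $[t_0 - V\,\theta(a-|x|)]\,\phi(x) + u\,\phi^3(x) = \gamma \sum_{x'\neq x} \phi(x')/|x-x'|^{1+\sigma}$ (the actual equation is Eq. (\ref{eq:Saddle-point_stat})), and all parameter values are illustrative.

```python
import numpy as np

def solve_droplet(L=400, a=40, t0=0.5, V=2.0, u=1.0, gamma=0.05, sigma=1.0,
                  n_iter=60, mix=0.5):
    """Iteratively solve a schematic 1d saddle-point equation on an L-site grid."""
    x = np.arange(L) - L // 2
    t_local = np.where(np.abs(x) <= a, t0 - V, t0)       # local distance from criticality
    phi = np.where(np.abs(x) <= a, np.sqrt(max(V - t0, 0.0) / u), 0.0)  # initial ansatz

    # long-range kernel 1/|x-x'|^(1+sigma), in units of the microscopic length xi_0 = 1
    dist = np.abs(x[:, None] - x[None, :]).astype(float)
    np.fill_diagonal(dist, np.inf)                       # exclude self-interaction
    kernel = dist ** (-(1.0 + sigma))

    for _ in range(n_iter):
        rhs = gamma * kernel @ phi                       # long-range term with current phi
        new = np.empty_like(phi)
        for i in range(L):                               # cubic t*phi + u*phi^3 = rhs
            roots = np.roots([u, 0.0, t_local[i], -rhs[i]])
            real = roots[np.abs(roots.imag) < 1e-7].real
            if real.size == 0:                           # numerical safeguard
                real = np.array([roots[np.argmin(np.abs(roots.imag))].real])
            new[i] = real[np.argmin(np.abs(real - phi[i]))]  # root closest to current value
        phi = (1 - mix) * phi + mix * new                # mixing for stability
    return x, phi

x, phi = solve_droplet(L=400, a=40, t0=0.5, V=2.0, gamma=0.05, n_iter=60)
```

With these moderate system sizes the cycle converges in a few seconds; the production runs quoted above used up to $2\times 10^5$ points.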
We now present and discuss a few characteristic results from these calculations.
In Fig.\ \ref{fig:droplet-profile-3d}, we show saddle-point solutions for $d=3$,
$\sigma=1$ and different values of the distance $t_0$ from bulk criticality.
\begin{figure}
\begin{center}\includegraphics[width=7.2cm]{fig2_hoyos_new.eps}\end{center}
\caption{(Color online) Local order parameter $\phi$ of a three-dimensional droplet
as a function of distance $x$ from the defect center for different distances
$t_0=0.1$ to 51.2 from bulk criticality (from top to bottom).}
\label{fig:droplet-profile-3d}
\end{figure}
In agreement with the analytical predictions of the last subsection, the order parameter
is essentially constant on the defect. For large $|\mathbf{x}|$, the droplet tail falls
off with the predicted power-law $\phi \sim |\mathbf{x}|^{-(d+\sigma)}= |\mathbf{x}|^{-4}$
for all values of $t_0$. The amplitude $C$ of this power-law decay is analyzed in Fig.\
\ref{fig:prefactor-3d}.
\begin{figure}
\begin{center}\includegraphics[width=6.8cm]{fig3_hoyos_new.eps}\end{center}
\caption{(Color online) Amplitude $C$ of the asymptotic power-law decay of the droplet tail
for the system shown in Fig.\ \ref{fig:droplet-profile-3d}. The solid line is the theoretical
prediction, Eqs.\ (\ref{eq:C-scaling}) and (\ref{eq:phi_0-scaling}). }
\label{fig:prefactor-3d}
\end{figure}
As predicted in the last subsection, for small $t_0$, $C$ behaves like $1/t_0$
(the small deviations at the lowest $t_0$ stem from the fact that in these cases,
$10^5$ sites are not sufficient to reach the asymptotic regime).
Figure \ref{fig:droplet-profile-1d} shows the dependence of the droplet profile on the
size $a$ of the defect for a system with $d=\sigma=1$.
\begin{figure}
\begin{center}\includegraphics[width=7.2cm]{fig4_hoyos_new.eps}\end{center}
\caption{(Color online) Local order parameter $\phi$ of a one-dimensional droplet
as a function of distance $x$ from the defect center for different defect
sizes $a=50$ to 1600 (from left to right).}
\label{fig:droplet-profile-1d}
\end{figure}
For all $a$, the asymptotic decay of the droplet tail takes the predicted power-law form,
$\phi \sim |\mathbf{x}|^{-(d+\sigma)}= |\mathbf{x}|^{-2}$. This figure also shows that
the width $\xi_m$ of the intermediate $\mathbf{x}$-region which connects the droplet core
with the power-law tail does not change with $a$ as discussed in the last subsection.
(This becomes even more obvious when a linear rather than the logarithmic $x$-scale is
used.) Moreover, in agreement with (\ref{eq:phi_0-scaling}), $\phi_{0}$ does not depend
on $a$. The amplitude $C$ of this power-law decay is analyzed in Fig.\
\ref{fig:prefactor-1d}.
\begin{figure}
\begin{center}\includegraphics[width=6.8cm]{fig5_hoyos_new.eps}\end{center}
\caption{(Color online) Amplitude $C$ of the asymptotic power-law decay of the droplet tail
for the system shown in Fig.\ \ref{fig:droplet-profile-1d}. The solid line is the theoretical
prediction, Eqs.\ (\ref{eq:C-scaling}) and (\ref{eq:phi_0-scaling}). }
\label{fig:prefactor-1d}
\end{figure}
In agreement with the theoretical prediction (\ref{eq:C-scaling}), the amplitude grows
linearly with the defect size $a$.
We have performed analogous calculations for other parameter sets, including varying
$t_0$ for $a=300,~600, ~1600$ as well as varying $a$ for $V=30$. In all cases, we have found
excellent agreement with the predictions of the asymptotic analysis of Sec.\
\ref{sec:Droplet-static-profile}.
\section{Tunneling dynamics\label{sec:The-dynamics}}
In this section, we study the tunneling dynamics of a single droplet at a
zero-temperature quantum phase transition. Our approach starts from the pioneering work
of Callan and Coleman\cite{CallanColeman77} and Leggett and
coworkers\cite{CaldeiraLeggett83,LCDFGZ87} (in the case of dissipative dynamics). In the
following subsections we separately discuss the droplet dynamics for the three dynamical
actions given in Eqs.\ (\ref{eq:S_dyn_z1}) to (\ref{eq:S_dyn_z3}), starting with the
undamped case.
\subsection{Undamped magnet}
\label{subsec:undamped}
Following Callan and Coleman\cite{CallanColeman77}, the tunneling rate between the
\emph{up} and \emph{down} states of the droplet (i.e., the tunnel splitting of the ground
state) can be estimated from the action of instanton-like saddle-point solutions
$\varphi(\mathbf{x},\tau)$ fulfilling the boundary conditions $\varphi(\mathbf{x},\tau)
\to \pm \phi(\mathbf{x})$ for $\tau \to \pm \infty$. In principle, several processes
contribute to the overall tunneling rate. In the simplest one, the droplet retains its
shape while collapsing and reforming.\cite{BeanLivingston59,ChudnovskyGunther88} A
competing process consists of the nucleation of a domain wall that then sweeps the
droplet.\cite{Stauffer76}
We start by considering the collapse-and-reformation process which can be described by an
ansatz
\begin{equation}
\varphi\left(\mathbf{x},\tau\right)=\phi\left(\mathbf{x}\right)\eta\left(\tau\right)
\label{eq:phi(x)eta(t)}
\end{equation}
with $\phi(\mathbf{x})$ being the static saddle-point solution of section
\ref{sec:Droplet-static-profile} and $\eta(\tau) \to \pm 1$ for $\tau \to \pm \infty$.
Inserting this ansatz into the action (\ref{eq:total_action}) and integrating over the
spatial variables yields the following excess effective action (above the
time-independent solution $\eta\equiv 1$)
\begin{equation}
\Delta S^{(1)}=\frac{\Omega_d}{d}\phi_{0}^{2}a^{d}\int d\tau\left[ \frac{\phi_0^2
u}{2}\left(1-\eta^{2}\right)^2+
\frac{1}{c^{2}}\left(\frac{d\eta}{d\tau}\right)^{2}\right]
\label{eq:S_eff_z1}
\end{equation}
to leading order in the defect size $a$. The saddle-point instanton solution of this action
can be found exactly. It takes the form $\eta\left(\tau\right)=\tanh\left(\tau/\tau_0\right)$, with
$\tau_0^{-2}=c^2\phi_0^2u/2$. The resulting instanton action reads
\begin{equation}
\Delta S_{\rm inst}^{(1)}=\frac {4\Omega_d}{3d} u\phi_{0}^{4}a^{d}\tau_0
\label{eq:Sinst_1}
\end{equation}
giving a tunnel splitting
\begin{equation}
\omega^{(1)}\approx \omega_0 e^{-\Delta S_{\rm inst}^{(1)}}.
\label{eq:rate_1}
\end{equation}
The ``attempt frequency'' $\omega_0$ can be determined by standard quantum tunneling
considerations\cite{LCDFGZ87} from the fluctuations about the instanton solution.
Importantly, the tunneling rate decays exponentially with the volume of the droplet.
Equations (\ref{eq:S_eff_z1}) to (\ref{eq:rate_1}) are in complete agreement with the
corresponding results for the case of short-range
interactions,\cite{MillisMorrSchmalian01,MillisMorrSchmalian02,HoyosVojta_unpublished}
reflecting the fact that the leading terms of the instanton action stem from the droplet
core rather than the tail.
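The algebra leading from (\ref{eq:S_eff_z1}) to (\ref{eq:Sinst_1}) is easily checked by numerically integrating the effective action along the $\tanh$ instanton. In the sketch below the parameter values are arbitrary illustrations, and $P$ stands for the prefactor $(\Omega_d/d)\phi_0^2 a^d$:

```python
import numpy as np

# Numerical check of Eq. (Sinst_1): integrate the effective action (S_eff_z1)
# along eta(tau) = tanh(tau/tau0). P denotes the prefactor (Omega_d/d) phi0^2 a^d.
phi0, u, c, P = 0.7, 1.3, 0.9, 1.0
tau0 = np.sqrt(2.0 / (c**2 * phi0**2 * u))          # instanton width

tau = np.linspace(-40 * tau0, 40 * tau0, 400001)
eta = np.tanh(tau / tau0)
deta = (1.0 - eta**2) / tau0                        # derivative of tanh(tau/tau0)
density = 0.5 * phi0**2 * u * (1 - eta**2) ** 2 + deta**2 / c**2
S_num = P * np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(tau))  # trapezoid rule

S_exact = (4.0 / 3.0) * P * u * phi0**2 * tau0      # Eq. (Sinst_1), phi0^2 a^d folded into P
print(S_num / S_exact)                              # -> 1.0 to high accuracy
```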
To discuss the contribution of the moving domain wall processes to the tunneling rate,
we use the ansatz
\begin{equation}
\varphi(\mathbf{x},\tau)=\phi(\mathbf{x})\eta(\tau-x/v)
\label{eq:phi(x)eta(t-xv)}
\end{equation}
which describes a domain wall that sweeps the droplet in the $x$-direction with
velocity $v$. $\eta$ describes the domain wall shape and fulfills the boundary conditions
$\eta(z) \to \pm 1$ for $z \to \pm \infty$ as before. Inserting this into the
action (\ref{eq:total_action}) gives the same effective action (\ref{eq:S_eff_z1}) plus
one additional positive term from the spatial dependence of $\eta$ (this term corresponds
to the domain wall energy). Therefore, the minimal action for this process is bounded by
(\ref{eq:Sinst_1}), and to exponential accuracy the corresponding tunneling rate cannot
be larger than (\ref{eq:rate_1}). This is in qualitative agreement with earlier results
for short-range interactions. Stauffer\cite{Stauffer76} estimated the tunneling rate of a
domain wall within quasiclassical WKB approximation and found that it depends exponentially
on the droplet volume. Senthil and Sachdev\cite{SenthilSachdev96} estimated the tunnel
splitting of a locally ordered island in a transverse-field Ising model using perturbation
arguments. Again, the result (which should contain all possible processes) is
exponentially small in the droplet volume.
\subsection{Overdamped dynamics}
\label{subsec:overdamped}
We now consider overdamped dynamics with the action $S_{\rm dyn}^{(2)}$ as given in
(\ref{eq:S_dyn_z2}). Inserting the ansatz $\varphi\left(\mathbf{x},\tau\right)=
\phi\left(\mathbf{x}\right)\eta\left(\tau\right)$
into $S_{\rm dyn}^{(2)}$ and integrating over the spatial variables gives the following
contribution to the effective action
\begin{equation}
\Delta S^{(2)}= \frac {\Omega_d}{d} a^d \phi_0^2
\int d\tau d\tau^{\prime} \frac{\tilde\alpha}{2\pi}
\frac{(\eta(\tau)-\eta(\tau^{\prime}))^2}{\left(\tau-\tau^{\prime}\right)^{2}}
\label{eq:S_eff_z2}
\end{equation}
to leading order in the defect size $a$. The other terms are as in Eq.\
(\ref{eq:S_eff_z1}). A straightforward saddle-point instanton analysis of the
effective action analogous to the last subsection fails because the interaction
of the trajectory $\eta(\tau)$ at large positive times with the trajectory at
large negative times causes a logarithmic divergence. Following Refs.\
\onlinecite{LCDFGZ87} and \onlinecite{DorseyFisherWartak86}, the calculation
therefore proceeds in two stages.
In the first stage, we introduce a \emph{low-frequency} cutoff $\omega_c$ in the dynamic
action (\ref{eq:S_dyn_z2}). This changes the interaction kernel in (\ref{eq:S_eff_z2}),
\begin{equation}
\frac{1}{\left(\tau-\tau^{\prime}\right)^{2}} \to
\frac{1+2\omega_c|\tau-\tau^{\prime}|}{\left(\tau-\tau^{\prime}\right)^{2}(1+\omega_c|\tau-\tau^{\prime}|)^2}~,
\end{equation}
and removes the divergence. We have not been able to solve analytically for the instanton
but we have used the ansatz $\eta(\tau)=\tanh(\tau/\tau_0)$ with variational parameter
$\tau_0$. Minimizing the effective action $\Delta S^{(1)}+ \Delta S^{(2)}$ with respect
to $\tau_0$ gives
\begin{equation}
\tau_0 = \frac {3 \tilde \alpha}{\pi \phi_0^2 u} \left(1+ \sqrt{1+\frac{2 \pi^2 \phi_0^2
u}{9 c^2 \tilde\alpha^2}} \right)~.
\end{equation}
In the limit of weak dissipation, $\tilde\alpha \to 0$, we recover the result for
undamped dynamics while strong dissipation, $\tilde\alpha\to \infty$, leads to $\tau_0 =
6\tilde\alpha /(\pi\phi_0^2 u)$. The resulting instanton action can be expressed in terms
of the dimensionless dissipation strength parameter \cite{LCDFGZ87,DorseyFisherWartak86}
\begin{equation}
\alpha=\frac {4\Omega_d}{\pi d} \phi_{0}^{2}a^{d} \tilde\alpha~.
\label{eq:alpha}
\end{equation}
We note that $\alpha$ is proportional to the defect volume $a^d$. (Analogous results have
been obtained for dissipative random quantum Ising
models.\cite{SchehrRieger06,HoyosVojta06}) The dissipative part of the instanton action
reads
\begin{equation}
\Delta S_{\rm inst}^{(2)}= -\alpha \ln(\omega_c) +f(\alpha) ~, \label{eq:Sinst_2}
\end{equation}
where the function $f(\alpha)$ is given by $f(\alpha) = c\, \alpha +O(\alpha^2)$ for
weak dissipation and $f(\alpha) = -\alpha \ln \alpha + c^{\prime}\, \alpha
+O(\ln(\alpha))$ for strong dissipation. $c$ and $c^{\prime}$ are constants of order one.
For comparison we have also studied a piecewise linear ansatz for $\eta(\tau)$. The
resulting instanton action is identical to (\ref{eq:Sinst_2}) except for different
numerical values of the constants $c,c^\prime$. At the end of the first stage of the
calculation we thus obtain the bare tunnel splitting
\begin{equation}
\omega_{\rm bare}^{(2)}\approx \omega_0 e^{-\Delta S_{\rm inst}^{(2)}}~.
\label{eq:bare_rate_2}
\end{equation}
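The two limiting behaviors of the variational $\tau_0$ quoted above can be verified directly; a quick numerical check with illustrative parameter values:

```python
import numpy as np

# Weak- and strong-dissipation limits of the variational tau0 for the tanh ansatz.
def tau0_var(alpha_t, phi0, u, c):
    return (3 * alpha_t / (np.pi * phi0**2 * u)) * (
        1 + np.sqrt(1 + 2 * np.pi**2 * phi0**2 * u / (9 * c**2 * alpha_t**2)))

phi0, u, c = 0.8, 1.0, 1.2
# weak dissipation: the undamped instanton width tau0 = sqrt(2/(c^2 phi0^2 u)) is recovered
assert abs(tau0_var(1e-9, phi0, u, c) - np.sqrt(2 / (c**2 * phi0**2 * u))) < 1e-6
# strong dissipation: tau0 -> 6 alpha_t / (pi phi0^2 u), independent of c
assert abs(tau0_var(1e9, phi0, u, c) / (6e9 / (np.pi * phi0**2 * u)) - 1) < 1e-9
```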
In the second stage of the calculation the resulting dissipative two-level system
is treated using renormalization group methods.\cite{LCDFGZ87} It is well known
that instanton-instanton interactions renormalize the tunnel splitting, yielding
\begin{equation}
\omega^{(2)} \sim \omega_{\rm bare}^{(2)}\left[\frac{\omega_{\rm bare}^{(2)}}{\omega_c}
\right]^{\alpha/(1-\alpha)}.
\label{eq:rate_2}
\end{equation}
This also eliminates the unphysical dependence of the tunnel splitting on the arbitrary
cutoff parameter $\omega_c$. We thus find that the smaller defects with $\alpha<1$
continue to tunnel, albeit with a strongly reduced rate. The larger defects with
$\alpha>1$ cease to tunnel, i.e., they are on the localized side of the
Kosterlitz-Thouless phase transition of the dissipative two-level system. These results
are in qualitative agreement with the case of short-range
interactions.\cite{MillisMorrSchmalian01,MillisMorrSchmalian02}
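A minimal numerical illustration of the suppression described by (\ref{eq:rate_2}) for $\alpha<1$ (the ratio $\omega_{\rm bare}^{(2)}/\omega_c=10^{-3}$ is an arbitrary illustrative choice):

```python
# Renormalized tunnel splitting, Eq. (rate_2), relative to the bare splitting.
ratio = 1e-3                                      # omega_bare / omega_c, illustrative
for alpha in (0.1, 0.5, 0.9):
    suppression = ratio ** (alpha / (1 - alpha))  # omega^(2) / omega_bare
    print(f"alpha = {alpha}: omega^(2)/omega_bare ~ {suppression:.3e}")
# prints ~ 4.642e-01, 1.000e-03, 1.000e-27: the splitting vanishes as alpha -> 1^-,
# and for alpha > 1 the droplet is on the localized side and does not tunnel at all.
```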
\subsection{Conserved overdamped dynamics}
\label{subsec:ferro}
Finally, we consider the case of overdamped dynamics with a conserved order parameter
as given by the dynamic action (\ref{eq:S_dyn_z3}). Such an action arises,
e.g., in the case of an itinerant quantum ferromagnet.
Order parameter conservation requires some care in discussing the dynamics of our locally
ordered droplet. In particular, the homogeneous magnetization $\int d\mathbf{x}
\varphi(\mathbf{x},\tau)$ must not be time dependent. Therefore, the product form
$\varphi\left(\mathbf{x},\tau\right)= \phi\left(\mathbf{x}\right)\eta\left(\tau\right)$
with $\phi(\mathbf{x})$ the static solution of section \ref{sec:Droplet-static-profile}
is not a suitable ansatz in this case. This can be fixed (in a crude way) by subtracting
a constant from the droplet profile, $\phi^\prime(\mathbf{x})= \phi(\mathbf{x}) -{\rm
const}$ such that the $\mathbf{q}=0$ Fourier component is cancelled. The ansatz
$\varphi\left(\mathbf{x},\tau\right)=
\phi^\prime\left(\mathbf{x}\right)\eta\left(\tau\right)$ then provides a variational
upper bound for the instanton action.
Inserting this ansatz into (\ref{eq:S_dyn_z3}) and carrying out the integral over the
spatial variables leads to a dissipative term in the effective $\eta(\tau)$ action with
the same functional form as (\ref{eq:S_eff_z2}). The prefactor and the resulting
dimensionless dissipation strength $\alpha$, however, are different. To leading order in
the defect size $a$, we find
\begin{equation}
\alpha = \left\{\begin{array}{cc} 8\phi_0^2 a^4 \tilde\alpha_c/\pi & \quad (d=3)
\\ 32 \phi_0^2 a^3 \tilde\alpha_c /(3\pi)& \quad (d=2)
\end{array} \right.~.
\end{equation}
In general dimension $d\ge 2$, the dimensionless dissipation strength is now proportional
to $a^{d+1}$ instead of $a^d$. The extra factor $a$ compared to the non-conserved case in
Sec.\ \ref{subsec:overdamped} can be understood as follows. To invert the magnetization
of a droplet of linear size $a$, magnetization must be transported over a distance that
is at least of order $a$ (because the order parameter conservation prevents simply
flipping the sign of the magnetization on the defect). This involves modes with wave
vectors of the order of $q\sim 1/a$. Since the dissipation strength in
(\ref{eq:S_dyn_z3}) is inversely proportional to $q$, we expect an additional factor $a$
in the effective action. This argument strongly suggests that this extra factor is
\emph{not} an artefact of our simple ansatz for $\varphi\left(\mathbf{x},\tau\right)$ but
correctly reflects the physics of the conserved order parameter case.
In all other respects, the calculation proceeds as in the non-conserved case in
Sec.\ \ref{subsec:overdamped}. The resulting dynamic behavior of the droplets depends
on the value of the dimensionless dissipation strength parameter $\alpha$.
Small droplets ($\alpha<1$) still tunnel while the larger ones ($\alpha>1$) freeze.
Because $\alpha$ is now proportional to $a^{d+1}$ the tunneling of large droplets is
even more strongly suppressed than in the non-conserved case.
\section{Discussion and conclusions\label{sec:conclusions}}
To summarize, we have studied the physics of a single defect coupling to the square of
the order parameter in a nearly critical system with long-range spatial interactions of
the form $r^{-(d+\sigma)}$ with $\sigma>0$. Such a defect can induce the nucleation of a
magnetic droplet while the bulk system is still in the nonmagnetic phase. Due to the
long-range interactions, the droplet magnetization develops a long power-law tail, i.e.,
at large distances $r$ from the defect, it decays like $r^{-(d+\sigma)}$ in agreement
with Griffiths' theorem.\cite{Griffiths67} Nonetheless, the droplet free energy is
dominated by the core (on-defect) contribution while the tail contribution is subleading
in the limit of large defects. Therefore, droplets will nucleate on large defects as soon
as the local potential (the local distance from criticality) becomes negative, in
complete agreement with the case of short-range interactions. Our explicit calculations
of the droplet magnetization profile have been performed within saddle-point
approximation, but as long as the bulk system is noncritical, fluctuations do not change
the functional form of the droplet. They only renormalize nonuniversal parameters.
In addition to the existence of the magnetic droplets, we have also investigated their
dynamics. As is well known,\cite{Vojta06} in the case of a classical (thermal) phase
transition, the droplet cannot order statically. Instead, it fluctuates between `up' and
`down' due to thermal fluctuations. For a zero-temperature quantum phase transition, the
behavior is potentially different, depending on the form of the dynamic action. We have
studied three cases. In the absence of dissipation, even very large droplets can always
tunnel, but with a rate that decreases exponentially with the droplet volume. This
changes in the presence of (Ohmic) dissipation. The qualitative behavior now depends on
the dimensionless dissipation strength $\alpha$. For $\alpha<1$, the droplet still
tunnels albeit with a further reduced rate while for $\alpha>1$, tunneling ceases and the
droplet magnetization becomes static. For overdamped dynamics without order parameter
conservation, $\alpha$ is proportional to the volume of the droplet core. Thus,
sufficiently large droplets always freeze in agreement with Refs.\
\onlinecite{CastroNetoJones00,MillisMorrSchmalian01,MillisMorrSchmalian02}. In the case
of overdamped dynamics with order parameter conservation as in the itinerant quantum
ferromagnet, the dissipation effects are further enhanced because the
dimensionless dissipation strength $\alpha$ for a droplet of linear core size $a$ is
proportional to $a^{d+1}$ rather than $a^d$.
Let us comment on the order parameter symmetry. Our explicit results have been for the
case of a scalar (Ising) order parameter. However, the analysis of the droplet existence
in Sec.\ \ref{sec:Droplet-static-profile} relied on saddle-point arguments and thus
applies equally to continuous $O(N)$ order parameters with $N>1$. In contrast, to
generalize the discussion of the dynamics in Sec.\ \ref{sec:The-dynamics} to such order
parameters, other types of fluctuations (rotational ones) must be considered.
We also emphasize that we have discussed the case of an isotropic attractive long-range
interaction. Droplet \emph{formation} dominated by oscillating and/or anisotropic
interactions such as the dipolar or the RKKY interactions is likely of different type and
not considered here.
Finally, we briefly discuss the consequences of our results for the (quantum) Griffiths
effects in systems with long-range spatial interactions. Because the power-law
magnetization tail only makes a subleading contribution to the free energy of a magnetic
droplet, such droplets can form on rare (strongly coupled) spatial regions of the
disordered system essentially in the same way as in the case of short-range interactions.
Therefore, as long as droplet-droplet coupling can be neglected, the Griffiths effects
should be identical to those in short-range interacting systems. However, it is clear
that the droplet-droplet coupling is more important for long-range interactions than for
short-range ones. This means that it must be considered for lower droplet density and, in the
quantum case, for higher temperatures. The complicated physics caused by the coupling of
several droplets is beyond the scope of this paper. Recently, it has been argued
\cite{DobrosavljevicMiranda05} that this coupling can qualitatively change the Griffiths
effects at least in some cases. A complete understanding of this phenomenon remains a
task for the future.
\begin{acknowledgments}
We gratefully acknowledge discussions with J.\ Schmalian and M.\ Vojta. This work has
been supported by the NSF under grant no. DMR-0339147, and by Research Corporation. We
also thank the Aspen Center for Physics, where part of this work has been performed.
\end{acknowledgments}
\bibliographystyle{apsrev}
Symplectic maps represent the discrete-time analogues of Hamiltonian dynamical systems~\cite{Meiss1992RMP}, which have a wide range of applications, for instance in the study of magnetic fields~\cite{Morrison2000PhysPlas} and fluid dynamics~\cite{Aref1984JFM}. Such systems often display rich dynamical phenomena including chaotic trajectories, periodic orbits, and invariant circles, making them a popular subject of interdisciplinary research for mathematicians, physicists, and other scientists~\cite{Aubry1990PhysD}. A particularly well-studied example is the two-dimensional area-preserving map on the cylinder $\mathbb{T}\times\mathbb{R}$ given by
\begin{equation}\label{eq:SM}
f_{0} : \left\{ \begin{array}{l}
x' = x + y' \mod 1,\\
y' = y + \epsilon g(x).
\end{array} \right.
\end{equation}
Here $x$ and $y$ represent angle and action variables, respectively, $x'$ and $y'$ denote their values at the next time iterate, $\epsilon$ is a nonlinearity parameter, and the \emph{force} $g$ is assumed to be smooth and periodic (mod 1). In particular, the choice
\begin{align}
g(x) = \frac{1}{2\pi} \sin(2\pi x) \label{eq:g}
\end{align}
yields Chirikov's standard map~\cite{Chirikov1979PhysRep}, which exhibits a rich ensemble of dynamical phenomena despite its simple form and has therefore been the subject of significant research.
When $\epsilon=0$ the phase space of the standard map, Eqs.~(\ref{eq:SM}) and (\ref{eq:g}), is foliated by invariant circles with constant action on which the dynamics is a simple rotation with rotation number $\omega = y_0$. KAM Theory guarantees the persistence of invariant circles in the standard map under small perturbation when $\omega$ is ``sufficiently irrational'' \cite{delaLlave01}. For moderate values of $\epsilon$ the dynamics of the standard map can be characterized as either \emph{resonant} or \emph{non resonant}. In the resonant regions, i.e., those near rotation numbers satisfying $p \cdot \omega = q$ with $(p,q)\in\mathbb{Z}^2$, contractible circles, often referred to as \emph{secondary} circles or \emph{islands}, arise alongside chaotic orbits \cite{Dua08}. These effects are most pronounced near the low-order resonances, i.e., those with small $(p,q)$. The dynamics of the non resonant regions is similar to the dynamics when $\epsilon =0$. Invariant \emph{rotational} circles, homotopic to the circles in the $\epsilon=0$ map, permeate this space; however, thin secondary circles and chaotic orbits do exist between them.
The invariant circles of Eqs.~(\ref{eq:SM})--(\ref{eq:g}), as well as those of many other systems, organize the phase space and determine the extent of possible transport~\cite{MacKay1984PhysD,Szezech2009Chaos}. Each invariant circle acts as a barrier for motion -- insulating dynamics on either side from one another~\cite{Greene1986PhysD}. Thus, in the noiseless case transport is limited to the movement of trajectories throughout a single invariant set. More widespread transport can only occur once one or more circles acting as barriers are destroyed as a result of an increase in the nonlinearity parameter $\epsilon$~\cite{Bensimon1984PhysD}. Once destroyed, a circle typically gives rise to a {\it cantorus} -- a fractional-dimensional Cantor set -- that allows a slow ``leaky'' transport~\cite{Baesens1993PhysD,Li1986PRL,MacKay1992NonlinB,Percival1980}. The breakup of these invariant circles in the standard map and other area- and volume-preserving maps has been an active area of research~\cite{Greene1979JMP,Ketoja1989PhysD,MacKay1992NonlinA,MacKay1985CMP,FM14}.
However, all real-world systems -- either natural or man-made -- display some degree of noise or stochasticity, usually in the form of temporal fluctuations. In many classes of physical systems it has been shown that even very small amounts of noise can fundamentally alter the dynamics of the system~\cite{Bulsara1996PhysTod,Zhou2002PRL}. Investigations into the effect of stochasticity in the context of Eqs.~(\ref{eq:SM}) and other well-studied area- and volume-preserving maps have to date been limited~\cite{Froeschle1975ASS,Karney1982PhysD}. In this paper we study the effects of an added stochastic term in Chirikov's standard map. Specifically, we assume the sinusoidal form of the force $g$, and, after adding a stochastic term $\xi_{\sigma}$, we obtain the \emph{stochastic standard map}
\begin{equation}\label{eq:SSM}
f_{\sigma} : \left\{ \begin{array}{l}
x' = x + y' \mod 1,\\
y' = y + \displaystyle\frac{\epsilon}{2\pi}\sin(2\pi x) + \xi_{\sigma}.
\end{array} \right.
\end{equation}
Here, $\xi_{\sigma}$ is an iid random variable generated at each time iterate. For simplicity, we assume here that $\xi_{\sigma}$ is drawn from the zero-mean normal distribution with variance $\sigma^2$, a tunable parameter that quantifies the noise intensity. We note that the particular shape of the noise distribution does not alter the results we present below, provided that the variance of the noise is $\sigma^2$, but for simplicity we consider Gaussian white noise, i.e., $\xi_\sigma\sim \mathcal{N}(0,\sigma^2)$.
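The map (\ref{eq:SSM}) is straightforward to iterate numerically; a minimal sketch (the function and parameter names are ours, not from a library):

```python
import numpy as np

def stochastic_standard_map(x0, y0, eps, sigma, n_steps, rng=None):
    """Iterate the stochastic standard map for n_steps iterates."""
    rng = np.random.default_rng() if rng is None else rng
    xs = np.empty(n_steps + 1)
    ys = np.empty(n_steps + 1)
    xs[0], ys[0] = x0, y0
    for n in range(n_steps):
        xi = rng.normal(0.0, sigma)                   # xi_sigma ~ N(0, sigma^2), iid per iterate
        ys[n + 1] = ys[n] + (eps / (2 * np.pi)) * np.sin(2 * np.pi * xs[n]) + xi
        xs[n + 1] = (xs[n] + ys[n + 1]) % 1.0         # angle variable, taken mod 1
    return xs, ys

# sigma = 0 recovers the deterministic standard map, Eqs. (eq:SM)-(eq:g)
xs, ys = stochastic_standard_map(0.3, 0.2, eps=0.5, sigma=0.0, n_steps=800)
```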
The addition of the stochastic forcing term dramatically alters the dynamics. Most notably, the invariant sets that are so pivotal -- fixed points, periodic orbits, and circles -- are broken. An immediate consequence is that the action, i.e., the $y$ variable, of all orbits will be unbounded. It is our goal to understand how transport occurs in this new paradigm, and therefore we will study the dynamics in the action (or $y$) direction.
In this paper we employ appropriately defined hitting times to quantify the behavior of the stochastic standard map (\ref{eq:SSM}) and find that the transport is a novel combination of the linear noise and the dynamical nonlinearity. In particular, when the nonlinearity (i.e., the parameter $\epsilon$) is small, we show that the transport is dominated by the noise term and can be well-captured by simple Brownian motion properties. However, for larger nonlinearity we observe that the noise combines with the dynamics to give rise to regions of rapid nonlinear transport. Specifically, there is a significant speed-up in transport in the resonant regions of phase space relative to the non resonant regions.
The remainder of this paper is organized as follows. In Sec.~\ref{Sec:2} we present a brief survey that demonstrates the effect of temporal noise on the dynamics and define the hitting times we use for quantifying the transport dynamics. In Sec.~\ref{Sec:3} we begin our analysis, first presenting numerical results for hitting times, then presenting an approximation using a simple Brownian motion. In Sec.~\ref{Sec:4} we illustrate the scaling properties of the transport and highlight the nonlinear transport effects in the system. In Sec.~\ref{Sec:5} we present an additional demonstration of the nonlinear transport effects. Finally, in Sec.~\ref{Sec:6} we conclude with a discussion of our results.
\section{Survey and Definitions}\label{Sec:2}
We begin by demonstrating the effect that added stochasticity has on the dynamics of (\ref{eq:SSM}). In particular, we consider three levels of noise: $\sigma=0$ (i.e., no noise), $10^{-4}$, and $10^{-3}$. Setting the nonlinearity parameter to $\epsilon = 0.5$ we simulate (\ref{eq:SSM}), using several different initial conditions for each value of $\sigma$ and plot the results in Fig.~\ref{fig1}. Results for $\sigma=0$, $10^{-4}$, and $10^{-3}$ are shown in panels (a)--(c), respectively, and different colors indicate different orbits obtained using different initial conditions. Each orbit plotted consists of $800$ time iterations. The $\sigma=0$ case [panel (a)] corresponds to the classical standard map, i.e., Eqs.~(\ref{eq:SM})-(\ref{eq:g}) where each orbit shown is bounded away from every other. When $\sigma$ is increased to $10^{-4}$ [panel (b)] we observe a similar structure to the phase space, save for a slight thickening of the orbits. While mixing will eventually occur, the short-term dynamics shown in panel (b) resemble to a remarkable degree those in panel (a). When $\sigma$ is further increased to $10^{-3}$ the thickening of each orbit is made even more pronounced. While the orbits are not as easy to distinguish as in panels (a) or (b), the overall structure of the dynamics is maintained to a certain degree.
\begin{figure*}[ht]
\centering
\epsfig{file =fig1a, clip =,width=0.32\linewidth }
\epsfig{file =fig1b, clip =,width=0.32\linewidth }
\epsfig{file =fig1c, clip =,width=0.32\linewidth }
\caption{Effect of noise. Phase space of the stochastic standard map (\ref{eq:SSM}) for fixed $\epsilon = 0.5$ and noise intensities $\sigma = 0$ (a), $10^{-4}$ (b), and $10^{-3}$ (c).} \label{fig1}
\end{figure*}
This survey clearly illustrates the fundamental change in dynamical behavior that occurs in the standard map with the addition of stochasticity. In particular, the invariance that is key to the dynamics in the noise-less standard map is destroyed. Most importantly, this allows for transport not just within an invariant set, as in the noise-less case, but across the entire phase space. Evidence for this more robust transport can be first observed in panel (b) ($\sigma=10^{-4}$) with the slight thickening of the orbits. However, transport is more clearly seen in panel (c) ($\sigma=10^{-3}$) as orbits are already overlapping after only $800$ iterations.
To investigate the effect of stochasticity on the new dynamics of (\ref{eq:SSM}) and especially to quantify the transport throughout the phase space, we will use an appropriately defined hitting time. In particular, we examine how long it takes an orbit of the stochastic standard map to reach a distance $a$ from an invariant set of the noiseless standard map. Our exploration of these dynamics will be split into two parts. In Secs.~\ref{Sec:3} and \ref{Sec:4} we examine the behavior near a rotational circle of the noiseless system and in Sec.~\ref{Sec:5} we study transport near a periodic orbit of the noiseless system.
More precisely, let
\begin{align}
\Phi_0(x_0,y_0)=\{f_0^t(x_0,y_0)|t\in\mathbb{N}\}\label{eq:flow02}
\end{align}
denote an orbit of the noise-less map $f_0$ [see Eqs.~(\ref{eq:SM})]. We examine the distance between $\Phi_0(x_0,y_0)$ and orbits of the stochastically forced standard map (\ref{eq:SSM}) with noise level $\sigma$ beginning at the same initial condition $(x_0,y_0)$. Specifically, we calculate for a given distance $a$ the hitting time $\tau_a$, which is defined as the first time $t$ that the orbit of $f_\sigma$ [see Eqs.~(\ref{eq:SSM})] beginning at $(x_0,y_0)$ equals or exceeds a distance $a$ from the set $\Phi_0(x_0,y_0)$:
\begin{align}
\tau_a(x_0,y_0;\sigma)=\inf\{t\in\mathbb{N} \; | \; d[f_\sigma^t(x_0,y_0),\Phi_0(x_0,y_0)]\ge a\}.\label{eq:hittingtime}
\end{align}
Thus, given an appropriate distance metric $d(\cdot,\cdot)$ on the space $\mathbb{T}\times\mathbb{R}$, the hitting time $\tau_a$ represents the time it takes for a noisy trajectory to diverge from the noiseless trajectory by a distance $a$, and here we are interested in the expected hitting time.
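As a sketch of how the expected hitting time in Eq.~(\ref{eq:hittingtime}) can be estimated numerically (illustrative code, not the authors' implementation; the functions \texttt{step} and \texttt{dist} are assumed to implement one iterate of $f_\sigma$ and the distance to $\Phi_0$, respectively):

```python
import numpy as np

def mean_hitting_time(step, dist, x0, y0, a, n_runs=1000, t_max=10**6, seed=0):
    """Monte Carlo estimate of the expected hitting time tau_a: average,
    over n_runs noise realizations, of the first time t at which
    dist(x_t, y_t) >= a, starting from (x0, y0)."""
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(n_runs):
        x, y = x0, y0
        for t in range(1, t_max + 1):
            x, y = step(x, y, rng)
            if dist(x, y) >= a:
                times.append(t)
                break
    return float(np.mean(times))
```

For instance, with the pure-noise step $(x,y)\mapsto(x,\,y+\xi_\sigma)$ and $d=|y-y_0|$, this estimator recovers the Brownian benchmark $\tau_a\approx a^2/\sigma^2$ discussed in Sec.~\ref{Sec:3}.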
As an illustrative example to begin with, in Sec.~\ref{Sec:3} we draw initial conditions from the {\it golden-mean circle}, the rotational circle with $\omega = \frac{-1+\sqrt{5}}{2}$, which we illustrate in Figs.~\ref{fig2}(a), (b), and (c) for nonlinearity parameters $\epsilon=0.01$, $0.2$, and $0.5$, respectively. The golden-mean circle is denoted as a thick blue curve, and other orbits are denoted in red. We choose the golden-mean circle in particular because of its robustness, i.e., the golden-mean circle is believed to be the last circle to survive and is only destroyed when the nonlinearity parameter is increased to $\epsilon=\epsilon_c\approx 0.971635$~\cite{Mac93,Olvera2008SIAM}. Points on this circle are computed using the quasi-Newton method developed by de la Llave et al.~\cite{HdlLS14,FM14}.
\begin{figure*}[ht]
\centering
\epsfig{file =fig2a, clip =,width=0.32\linewidth }
\epsfig{file =fig2b, clip =,width=0.32\linewidth }
\epsfig{file =fig2c, clip =,width=0.32\linewidth }
\caption{Golden-mean circle. For the noise-less standard map, the golden-mean invariant torus for $\epsilon = 0.01$ (a), $0.2$ (b), and $0.5$ (c) as the thick blue curve, compared to other trajectories in red.} \label{fig2}
\end{figure*}
Given the topology of the cylinder and the fact that we are most interested in exploring the noise-induced transport in the action variable, we consider the simple distance metric defined solely by the displacement in the $y$-direction. In practice we begin with an initial condition on the circle and iterate $f_0$ 1000 times to generate a sample of points on $\Phi_0$. To measure the distance from a point $(x,y)$ to $\Phi_0$ we use a linear interpolation to approximate the point on the circle $(x_c,y_c)$ with the same $x$ coordinate, $x_c=x$. The distance is then given by $d=|y_c-y|$. This is well-defined since every rotational invariant circle of the standard map is a graph \cite{Meiss1992RMP}. This choice for $d$ is further motivated by a simplification that it will allow in the analysis we present below.
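The distance computation just described can be sketched as follows (illustrative; \texttt{circle\_x} and \texttt{circle\_y} denote the sampled points of $\Phi_0$, and periodic linear interpolation supplies the point $(x_c,y_c)$ with $x_c=x$):

```python
import numpy as np

def y_distance_to_circle(x, y, circle_x, circle_y):
    """d = |y_c - y|, where (x_c, y_c) is the point of the sampled
    invariant circle with x_c = x, obtained by periodic linear
    interpolation (rotational circles are graphs over x)."""
    y_c = np.interp(x, circle_x, circle_y, period=1.0)
    return abs(y_c - y)
```

In the integrable limit $\epsilon\to0$ every rotational circle is a line $y=\mathrm{const}$, so the distance reduces to $|y-y_0|$.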
\section{Hitting Times and Brownian Motion}\label{Sec:3}
Using the hitting times defined above we now study the effect of different noise levels on the dynamics of (\ref{eq:SSM}). Starting on the golden mean circle described above, we calculate the hitting times $\tau_a$ for various distances $a$, nonlinearity parameters $\epsilon$, and noise levels $\sigma$. Since the process is stochastic, we will be interested in the expected hitting times, and therefore we will calculate for each set of parameters the mean $\tau_a$ from $10^4$ realizations. In Fig.~\ref{fig3} we plot the hitting times $\tau_a$ vs the distance $a$ for $\epsilon=0.01$, $0.2$, and $0.5$ in panels (a)--(c), respectively, and for noise levels $\sigma=10^{-4}$, $10^{-3.5}$, and $10^{-3}$, plotted in blue, red, and green circles, respectively. For each value of $\epsilon$, as expected, the expected hitting time $\tau_a$ increases both as the distance $a$ increases and as the noise level decreases. Note, however, that for smaller $\epsilon$, e.g., panel (a), the rate at which $\tau_a$ increases with $a$ is quite regular, increasing very much like a power-law, while for larger $\epsilon$, e.g., panel (c), the rate of increase is much less regular, especially at larger distances $a$.
\begin{figure*}[ht]
\centering
\epsfig{file =fig3a, clip =,width=0.32\linewidth }
\epsfig{file =fig3b, clip =,width=0.32\linewidth }
\epsfig{file =fig3c, clip =,width=0.32\linewidth }
\caption{Hitting times. For nonlinearity parameters $\epsilon=0.01$ (a), $0.2$ (b), and $0.5$ (c), we plot the mean hitting time $\tau_a$ vs distance $a$ for noise intensities $\sigma = 10^{-4}$, $10^{-3.5}$, and $10^{-3}$, which are plotted in blue, red, and green circles, respectively. Each data point represents the mean over $10^4$ realizations. The Brownian motion approximations $\tau_a=a^2/\sigma^2$ are plotted as appropriately colored dashed curves.} \label{fig3}
\end{figure*}
To gain a better understanding of the behavior of $\tau_a$ we consider the behavior of the process in the limit of small nonlinearity, i.e., $\epsilon\to0$. In particular, we note that in this limit the vertical motion depends only on the noise terms, i.e., $y'-y=\xi_{\sigma}$. Thus, given our notion of distance defined by displacement in the action variable, we ignore motion in the angle variable, obtaining effectively a one-dimensional system. Next, since the noise $\xi_{\sigma}$ is assumed to have variance $\sigma^2$, a simple application of the Central Limit Theorem implies that, at a large enough time $t$, the distribution for the displacement $y_t-y_0$ is Gaussian with variance $\sigma^2t$. It follows that at large enough times the discrete process can thus be approximated as finite-time slices of a one-dimensional Brownian motion with variance $\sigma^2$, for which the hitting time is well known to be given by
\begin{align}
\tau_a=\frac{a^2}{\sigma^2}.\label{eq:Brownian}
\end{align}
Eq.~(\ref{eq:Brownian}) thus provides an approximation for the hitting times for small $\epsilon$, provided that the expected hitting time $\tau_a$ is not too small (i.e., that $\sigma$ is not too large relative to $a$).
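For completeness, Eq.~(\ref{eq:Brownian}) is the standard mean exit time of a Brownian motion from a symmetric interval (a textbook computation, recalled here for the reader's convenience). If $Y_t$ is a Brownian motion with $\operatorname{Var}(Y_t)=\sigma^2 t$, the expected time $u(y)$ to exit $(-a,a)$ starting from $y$ solves
\begin{align*}
\frac{\sigma^2}{2}\,u''(y)=-1, \qquad u(\pm a)=0,
\end{align*}
whose solution is $u(y)=(a^2-y^2)/\sigma^2$; starting on the reference orbit ($y=0$) gives $u(0)=a^2/\sigma^2$, i.e., Eq.~(\ref{eq:Brownian}).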
To compare this approximation with our results, we plot in Figs.~\ref{fig3}(a)--(c) the Brownian motion approximation $\tau_a=a^2/\sigma^2$ as dashed curves. For the case of $\epsilon=0.01$ [panel (a)], the approximation captures the observed behavior extremely well as the scaling $\tau_a\propto a^2$ is almost exactly obtained. For $\epsilon=0.2$ the approximation remains remarkably accurate for $a$ not too large, until it begins to break down near $a\approx0.4$. Finally, for $\epsilon=0.5$ the approximation breaks down earlier still, near $a\approx0.05$, but provides a good benchmark for smaller $a$. We note that in our simulations there is a small discrepancy for $\sigma=10^{-3}$ and small $a$, as the green circles towards the beginning of each plot slightly overshoot the approximation. We find that this is a result of modeling a discrete process with a continuous one - an effect that arises when the expected hitting time is not large enough. These results beg the question of why the approximation fails at certain distances $a$ for larger nonlinearity parameters $\epsilon$ - a point we address next.
\section{Rescaling and Nonlinear Transport}\label{Sec:4}
While the approximation via Brownian motion presented above represents a useful benchmark for understanding the dynamics of the noisy system, two interesting points remain. First, as the Brownian motion approximation fails, we observe a speed-up in the transport of the system. In other words, when the approximation $\tau_a\approx a^2/\sigma^2$ loses accuracy, the observed mean hitting time is always smaller, indicating that transport always occurs quicker - and never slower - than predicted by the Brownian motion approximation. Second, the speed-up in transport of the system seems to scale with the noise level. In particular, for a given value of $\epsilon$, the deviation from the approximation appears the same up to a rescaling for different values of $\sigma$. [See in particular the right-hand-side of Fig.~\ref{fig3}(a), where the blue, red, and green circles appear to undershoot their respective approximations by the same amount in each case.]
We begin by noting that the dependence of the hitting time on noise level can be scaled out of the Brownian motion approximation in Eq.~(\ref{eq:Brownian}) by considering the rescaling $\tau_a\mapsto\sigma^2\tau_a$. Therefore, we plot in Fig.~\ref{fig4} the scaled quantities $\sigma^2\tau_a$ vs the distance $a$ using the same results presented in Fig.~\ref{fig3}. Results using $\epsilon=0.01$, $0.2$, and $0.5$ are plotted in panels (a)--(c), respectively, and results using $\sigma=10^{-4}$, $10^{-3.5}$, and $10^{-3}$ are plotted in blue, red, and green circles, respectively. As expected, for small $\epsilon$, e.g., panel (a), the results collapse onto the curve $\sigma^2\tau_a=a^2$. More surprisingly, however, we observe that the results for larger $\epsilon$, i.e., panels (b) and (c), also collapse on one another, even in the regime where the Brownian motion approximation fails.
\begin{figure*}[ht]
\centering
\epsfig{file =fig4a, clip =,width=0.32\linewidth }
\epsfig{file =fig4b, clip =,width=0.32\linewidth }
\epsfig{file =fig4c, clip =,width=0.32\linewidth }
\caption{Scaled hitting times. For nonlinearity parameters $\epsilon=0.01$ (a), $0.2$ (b), and $0.5$ (c), we plot the scaled mean hitting time $\sigma^2 \tau_a$ vs distance $a$ for noise intensities $\sigma = 10^{-4}$, $10^{-3.5}$, and $10^{-3}$, which are plotted in blue, red, and green circles, respectively. Each data point represents the mean over $10^4$ realizations. The Brownian motion approximation $\sigma^2\tau_a=a^2$ is plotted as a black dashed curve.} \label{fig4}
\end{figure*}
The fact that even the deviations from the Brownian motion approximation scale is remarkable and suggests two important points. First, the observed speed-up in transport is not solely a result of the added stochasticity, but also from the underlying nonlinear dynamics which are deterministic. Second, the transport we observe falls into two regimes. We refer to these two regimes as a {\it linear} transport regime and a {\it nonlinear} transport regime. The linear transport regime corresponds to dynamics that adhere well to the Brownian motion approximation, indicating that the dynamics are driven primarily by the linear stochastic term and subsequently the expected hitting time is well-described by the power-law $\tau_a\propto a^2$. The nonlinear transport regime corresponds to dynamics that fail to be described well by the Brownian motion approximation and where the underlying nonlinear dynamics of the standard map contribute to the significant speed-up in transport.
Finally, these results raise the question of what underlies the transition from linear to nonlinear transport. While this effect depends significantly on the structure of the dynamics, and thus on both the nonlinearity parameter and the initial conditions, we find that transport is linear in regions of phase space far from low-order resonances, which are dominated by rotational circles. Conversely, transport is nonlinear in resonant regions of the phase space, which are dominated by secondary circles. This can be understood as follows. In regions of phase space primarily populated by rotational circles the dynamics push the trajectories primarily in the angle direction, so that the majority of the motion in the action direction is driven by the stochasticity and is captured well by the Brownian motion approximation. On the other hand, in regions of phase space primarily populated by secondary circles, the underlying dynamics can induce a significant displacement in the action variable in just a few iterations. Thus, in these regions transport is facilitated by the nonlinear dynamics, which can transport much more quickly than the stochasticity can.
As an example, consider the case of $\epsilon=0.5$, the hitting times and scaled hitting times for which are presented in Figs.~\ref{fig3}(c) and \ref{fig4}(c). We observe that the deviation from the Brownian motion approximation, i.e., the kink in the results, occurs roughly at $a\approx0.05$. In Fig.~\ref{fig2} we can see that this is approximately the distance from the golden mean circle to the resonance centered around $y=0.5$. Another sizable kink occurs at $a\approx0.25$, which corresponds to the distance from the golden mean circle to the resonance centered around $y=1$.
\section{Nonlinear Transport: Another Example}\label{Sec:5}
To further demonstrate the nonlinear transport effects in the dynamics of Eqs.~(\ref{eq:SSM}) we consider a modified scenario where, rather than starting at the golden-mean circle or another rotational circle, we begin at the period-two orbit $\Phi_0=\left \{ (0,\frac{1}{2}),(\frac{1}{2},\frac{1}{2}) \right \}$. This orbit is plotted as blue dots in Fig.~\ref{fig5}(a), with other trajectories plotted in red, for $\epsilon=0.2$. Conveniently, this orbit is fixed for every value of $\epsilon$, making it an illustrative choice of initial conditions for the following experiment.
\begin{figure*}[ht]
\centering
\epsfig{file =fig5a, clip =,width=0.32\linewidth }
\epsfig{file =fig5b, clip =,width=0.32\linewidth }
\epsfig{file =fig5c, clip =,width=0.32\linewidth }
\caption{Example two. (a) The orbit $\Phi_0=\{(0,\frac{1}{2}),(\frac{1}{2},\frac{1}{2})\}$ plotted in blue dots, compared to other trajectories in red for $\epsilon=0.2$. (b) Scaled hitting times $\sigma^2\tau_a$ vs distance $a$ for noise intensities $\sigma=10^{-4}$, $10^{-3.5}$, and $10^{-3}$, plotted in blue, red, and green circles, respectively. Each data point represents the mean over $10^4$ realizations. The Brownian motion approximation $\sigma^2\tau_a=a^2$ is plotted as a dashed black curve. (c) A zoomed-in view on the flat region in panel (b) with finer resolution.} \label{fig5}
\end{figure*}
We proceed by simulating the dynamics of Eqs.~(\ref{eq:SSM}), again calculating the mean hitting times $\tau_a$ as a function of the distance $a$. We note that, since the orbit is not a graph over the angle variable, we calculate distance simply as the $y$-displacement from the value $y_0=0.5$. In Fig.~\ref{fig5}(b) we plot the mean hitting times scaled by the noise squared, $\sigma^2\tau_a$, vs the distance $a$ for $\sigma=10^{-4}$ (blue circles), $10^{-3.5}$ (red circles), and $10^{-3}$ (green circles), again averaged over $10^4$ realizations. The Brownian motion approximation $\sigma^2\tau_a=a^2$ is denoted by the dashed black curve. Note that the results for different values of $\sigma$ collapse nicely when scaled appropriately.
In Fig.~\ref{fig5}(b) we observe two nonlinear transport effects, manifesting as significant deviations from the linear transport along the Brownian motion approximation. First, and most subtle, we observe that for small distances (i.e., $a\lesssim10^{-1}$) there is a speed-up in the hitting times. Upon further investigation, we find that this is due to the curvature of the trajectories around the period-two orbit depicted in Fig.~\ref{fig5}(a). In particular, the mildly sinuous shape of the orbits - an effect of the nonlinear dynamics of the system - causes a small speed-up in transport with respect to the action variable. Second, we see a more pronounced nonlinear transport effect at larger distances (i.e., the second- and third-to-last data points) where the hitting times flatten out. Since the logarithmic scaling of the plot may diminish the apparent size of this region, we plot in Fig.~\ref{fig5}(c) a zoomed-in view with a ten-fold finer discretization. Here we see more clearly the dramatic, flat section of the plot in the range $0.4\lesssim a\lesssim0.6$ where transport occurs very quickly. Looking back to the foliation of the phase-space in Fig.~\ref{fig5}(a), we note that this flat region corresponds almost precisely with the distances from the initial value $y_0=\frac{1}{2}$ to and extending through the large resonances centered at $(\frac{1}{2},0)$ and $(\frac{1}{2},1)$. Once past these large resonant regions the transport returns to the linear regime, as shown by the final point in Fig.~\ref{fig5}(b).
\section{Discussion}\label{Sec:6}
In this paper we have studied the dynamics of Chirikov's standard map with an added stochasticity term [see Eqs.~(\ref{eq:SSM})]. The added noise term in the stochastic standard map destroys the invariant manifolds present in the noiseless case that organize the phase space and bound transport. The destruction of these invariant objects facilitates widespread, unbounded transport in the action direction not present in the noiseless case. Using appropriately defined hitting times, we have quantified the transport that occurs in the stochastic standard map and found that transport falls into two broad categories: \emph{linear} transport and \emph{nonlinear} transport. In the case of linear transport, movement in the action direction is dominated by the stochastic term and is well-described by a simple Brownian motion such that hitting times scale with the square of the distance. In the case of nonlinear transport, the stochasticity combines with the underlying nonlinear dynamics of the map to facilitate a significant speed-up in the mean hitting times. Importantly, we find that linear transport prevails in regions of phase-space dominated by non-resonant dynamics, while nonlinear transport prevails in regions of phase-space dominated by resonant dynamics - which become more pronounced as the nonlinearity parameter of the dynamics is increased.
The effect of added stochasticity in the standard map or other conservative systems has been studied in a handful of other works, e.g., see Refs.~\cite{Froeschle1975ASS,Karney1982PhysD}; however, very few recent results exist. To our knowledge, this is the first study concerned with the effect that added stochasticity has on the transport that occurs as a result of breaking the invariant objects in phase space. Subsequently, our results open new questions for further investigation. First, the examples presented in this paper were chosen using parameter values such that the noise-less phase space was primarily foliated by circles and (periodic) fixed points. One interesting avenue for investigation will be to study how transport occurs as larger chaotic seas emerge (i.e., at larger values of the nonlinearity parameter). We hypothesize that the emergence of significant chaotic seas will cause even more significant speed-up in transport than do resonant circles. Second, we have used as a primary example in this paper Chirikov's standard map due to its simplicity and widespread popularity. However, the effect that noise has on other, possibly more complicated conservative systems - discrete or continuous - will be an interesting question to ask. Finally, we find it striking that, although the invariant manifolds organizing the phase space in the noiseless case are broken with the addition of noise, the dynamics remain remarkably robust. An investigation into the limits of this robustness, e.g., in terms of the noise intensity and/or the nonlinearity parameter, would be worthwhile.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:introduction}
Many cases require the decision-maker to rank alternatives according to multiple decision criteria. When this decision requires dealing with a significant amount of data, methods of multiple criteria decision analysis (MCDA) arise as an interesting tool~\cite{roy1985methodologie, e2012readings, mardani2015multiple}. These methods are often used to rank a set of alternatives $A = \{a_1, \ldots, a_m\}$ based on a set of criteria $C = \{c_1, \ldots, c_n\}$~\cite{keeney1993decisions}. MCDA methods are applied in several fields, including the public sector~\cite{dotoli2020multi}, sustainable development~\cite{frini2019mupom}, economics and finance~\cite{masri2018financial}, medicine and health care~\cite{belacel2000multicriteria, oliveira2019multi}, energy storage systems~\cite{baumann2019review}, and many others~\cite{zopounidis2002multicriteria}.
Most MCDA techniques consider as input data the \textit{decision matrix} \textbf{P} $\in \mathbb{R}^{m \times n}$. The matrix rows represent alternatives to be ranked and the columns represent criteria. The performance of the alternatives is measured in terms of the values they assume on the criteria. Each element $p_{ij}$ of matrix \textbf{P} corresponds to the evaluation of alternative $i$ in criterion $j$. A core issue in MCDA, known as \textit{matrix aggregation}, consists of applying a technique that transforms the decision matrix into a scoring vector \textbf{g}; i.e., each row $i$ of the matrix \textbf{P} (the alternatives) is mapped into a score $g_i$, used to rank the alternatives.
MCDA approaches usually consider a static value ($p_{ij}$) to evaluate alternative $i$ concerning criterion $j$, without dealing with the evolution of the criteria over time. The value $p_{ij}$ is often the one observed at the time the decision is taken (\textit{current data}). Although this approach is widely used, many decision-making problems require analyzing time-series features beyond the current data alone. For instance, it can be relevant to consider the variance, the tendency, the seasonality, among other time-series features of the criteria.
To illustrate the interest behind the analysis of time-series information in MCDA, let us suppose that we aim to rank two athletes based on their speed test and anaerobic capacity, as shown in Figure~\ref{fig:graf}. If we consider only the current data ($t_T$), Athlete 1 is chosen due to its superiority in the speed test. Likewise, Athlete 1 is chosen if we consider all the values one by one over the time-series. However, it is interesting to note that Athlete 2 is improving in both criteria, whereas Athlete 1 is worsening in the speed test and is less regular in anaerobic capacity. Hence, the decision can be made from a new perspective in which athletes' improvement and regularity are considered. In this case, Athlete 2 should be chosen due to the high slope coefficient and low variance in both criteria. This example suggests that different rankings (i.e., different solutions) can be achieved by taking into account the time-series features, as we shall discuss later.
\begin{figure}[h]
\centering
\includegraphics[width=10cm]{grafico}
\caption{Analysis of the features of the criteria.}
\label{fig:graf}
\end{figure}
Few studies in MCDA regard the time-series of the criteria. For example,~\cite{Mingl2011} investigated the decision information at different periods applied in emergency management with real-world time-series. The authors proposed the Ordered Weighted Averaging method for aggregating the time-series.~\cite{frini2015topsis} and~\cite{frini2019mupom} considered the criteria time-series in a sustainable development context. The authors applied an aggregating method called Multi-criteria multi-Period Outranking Method. Other authors, such as~\cite{banamar2018extension} and \cite{campello2020adaptive}, also analyzed time-series in the MCDA approach. These studies differ from our proposal since they apply a method to aggregate the time-series; instead, we aim to explore many time-series' features before the aggregation.
Therefore, in this study we analyze the MCDA problem dynamically by representing a given criterion as a time-series (signal). This approach leads us to structure the data involved in the decision as a \textit{tensor}~\cite{sidiropoulos2017tensor, da2018tensor} (a function of three or more indexes), as shown in Figure~\ref{fig:agregacao_tensor}. Our first contribution consists of obtaining features of the signals that may be relevant for the decision. In other words, we mapped the time-space into a feature-space by taking measures that describe the time-series evolution. From this mapping, the third tensor dimension becomes the time-series feature. Finally, we apply a method to aggregate the tensor for ranking the alternatives.
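The time-to-feature mapping can be sketched as follows (illustrative code; the three features used here -- current value, variance, and least-squares slope -- are examples, and any other descriptors of the time-series could be substituted):

```python
import numpy as np

def feature_tensor(P_time):
    """Map a time tensor (m alternatives x n criteria x T instants)
    to a feature tensor (m x n x 3), with three example features per
    criterion time-series: current value, variance, and trend (slope)."""
    m, n, T = P_time.shape
    t = np.arange(T)
    F = np.empty((m, n, 3))
    F[:, :, 0] = P_time[:, :, -1]        # current data (last instant)
    F[:, :, 1] = P_time.var(axis=2)      # variance (regularity)
    slopes = np.polyfit(t, P_time.reshape(m * n, T).T, 1)[0]
    F[:, :, 2] = slopes.reshape(m, n)    # least-squares trend
    return F
```

After this mapping, the third tensor dimension indexes features instead of time, and the aggregation method operates on $\mathcal{F}\in\mathbb{R}^{m\times n\times F}$.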
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{agregacao_tensor}
\caption{Tensorial representation, space mapping, and tensor aggregation.}
\label{fig:agregacao_tensor}
\end{figure}
Since most MCDA methods deal with a matricial structure, we propose an MCDA method extension to provide a tensor aggregation. Thus, our second contribution is to extend the procedure called Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)~\cite{hwang1981methods} to aggregate the tensor of features. The classical TOPSIS method considers that the best alternative should have the shortest distance of the \textit{ideal positive point} and the greatest distance of the \textit{ideal negative point}~\cite{behzadian2012state}. The ideal positive point is the best value the criteria assume for all alternatives, and the ideal negative point, the worst one. This method is one of the most popular MCDA techniques, as it presents a good performance and is suitable for different application areas~\cite{behzadian2012state}.
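A minimal sketch of the classical (matrix) TOPSIS aggregation described above, using vector normalization (the tensor extension proposed in this paper is not reproduced here):

```python
import numpy as np

def topsis(P, w, benefit):
    """Classical TOPSIS: score each alternative (row of P) by its
    relative closeness to the ideal positive point and remoteness
    from the ideal negative point. w: criteria weights; benefit[j]:
    True if criterion j is to be maximized."""
    R = P / np.linalg.norm(P, axis=0)                  # normalize columns
    V = R * w                                          # weighted matrix
    ideal_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))
    ideal_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal_pos, axis=1)
    d_neg = np.linalg.norm(V - ideal_neg, axis=1)
    return d_neg / (d_pos + d_neg)                     # closeness in [0, 1]
```

Higher scores rank higher; an alternative coinciding with the ideal positive (negative) point receives score 1 (0).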
Many extensions and applications of the TOPSIS have been proposed, such as found in~\cite{jahanshahloo2006algorithmic},~\cite{chen2015inclusion},~\cite{frini2015topsis},~\cite{dash2019integrated},~\cite{palczewski2019fuzzy}, and~\cite{shukla2017applications}. An aspect of the TOPSIS method and its extensions is that it incorporates relative weights that model each criterion's importance~\cite{olson2004comparison}. Our proposal introduces relative weights for modeling the features' importance since they can have different relevance depending on the decision's objective. For instance, in the example of the athletes' ranking, a positive trend in athlete performance may be more relevant than their performance variance.
Finally, a sensitivity analysis is given in the computational tests to analyze our proposal's performance further. We study changes in the alternatives' ranking when the feature weights are modified. For this purpose, we use the stochastic multicriteria acceptability analysis (SMAA) technique~\cite{lahdelma1998smaa}. The SMAA is widely used to support decision-makers in exploring the weight space. It can be used in contexts where the weights are uncertain, or where the decision-makers do not express their weight preferences. The central idea is to obtain the ranking for each possible value of the parameter, e.g., to calculate the ranking for each value of the weight vector. The method provides the probability of each alternative being in each position for all possible parameter values. Thus, from the SMAA output, it is possible to observe whether a specific ranking is more likely, or whether a given alternative is more likely to be the preferred one, among other analyses.
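The SMAA exploration of the weight space can be sketched as follows (illustrative; a weighted-sum score is used for brevity, and the aggregation could be replaced by TOPSIS or any other MCDA method):

```python
import numpy as np

def smaa_rank_acceptability(P, n_samples=5000, seed=0):
    """Rank-acceptability indices b[i, r]: the fraction of weight
    vectors, sampled uniformly from the simplex (Dirichlet(1,...,1)),
    for which alternative i obtains rank r under weighted-sum scoring."""
    rng = np.random.default_rng(seed)
    m, n = P.shape
    counts = np.zeros((m, m))
    for _ in range(n_samples):
        w = rng.dirichlet(np.ones(n))      # random admissible weights
        order = np.argsort(-(P @ w))       # best alternative first
        for rank, alt in enumerate(order):
            counts[alt, rank] += 1
    return counts / n_samples
```

Row $i$ of the output estimates the probability of alternative $i$ occupying each rank over all admissible weight vectors.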
The paper is organized as follows. Subsection~\ref{sec:tensors} summarizes the tensor notation used in this paper. Section~\ref{sec:flex} discusses the motivation of considering the feature-space. Section~\ref{sec:methodology} describes the methodology of this study, and it is divided into two subsections: Subsection~\ref{sec:TOPSIS} describes the TOPSIS method extension and Subsection~\ref{sec:smaa}, the SMAA method. In Section~\ref{sec:resultados}, the results and their discussion are given. Finally, Section~\ref{sec:conclusoes} concludes this study.
\subsection{Mathematical notation}
\label{sec:tensors}
We briefly present the tensor notations used in this paper. For further details, we refer the interested reader to~\cite{cichocki2015tensor, da2018tensor, kanatsoulis2019regular}. A real-valued tensor is denoted by $\mathcal{P} \in \mathbb{R}^{K_1 \times K_2 \times \cdots \times K_o}$, where $o$ is its order (number of dimensions). The elements of the tensor are represented by $p_{k_1 \ldots k_o}$. Subtensors are obtained by fixing a subset of tensor indices. The matrix subtensors, or \textit{slices}, are defined by fixing all but two indices. A third order tensor $\mathcal{P} \in \mathbb{R}^{m \times n \times T}$ has three slices: vertical $ \textbf{P}(i, :, :) $, horizontal $ \textbf{P}(:, j, :) $ and frontal $ \textbf{P}(:, :, t)$, as shown in Figure~\ref{fig:cortes_tensor}.
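In NumPy terms, the three slice types correspond to fixing one index of the array (an illustrative snippet, following the paper's own naming of the slices):

```python
import numpy as np

# Third-order tensor P in R^{m x n x T}
m, n, T = 4, 3, 10
P = np.arange(m * n * T, dtype=float).reshape(m, n, T)

vertical   = P[0, :, :]   # P(i, :, :): fix alternative i -> shape (n, T)
horizontal = P[:, 1, :]   # P(:, j, :): fix criterion j   -> shape (m, T)
frontal    = P[:, :, 2]   # P(:, :, t): fix time t        -> shape (m, n)
```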
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{cortes_tensor}
\caption{Slice types for a third-order tensor and its tabular representation.}
\label{fig:cortes_tensor}
\end{figure}
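As a small illustration with hypothetical values, the three slice types of a $2 \times 2 \times 2$ tensor stored as nested lists (indexed $P[i][j][t]$) can be extracted as follows:

```python
# Hypothetical 2 x 2 x 2 tensor P, indexed P[i][j][t]
# (i: alternative, j: criterion, t: time period).
P = [[[1, 2],
      [3, 4]],
     [[5, 6],
      [7, 8]]]

def vertical(P, i):    # P(i, :, :) -- fixes the alternative
    return P[i]

def horizontal(P, j):  # P(:, j, :) -- fixes the criterion
    return [P[i][j] for i in range(len(P))]

def frontal(P, t):     # P(:, :, t) -- fixes the time period
    return [[P[i][j][t] for j in range(len(P[i]))] for i in range(len(P))]

print(vertical(P, 0))    # [[1, 2], [3, 4]]
print(horizontal(P, 0))  # [[1, 2], [5, 6]]
print(frontal(P, 0))     # [[1, 3], [5, 7]]
```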
\section{The motivation of considering feature-space}
\label{sec:flex}
As discussed in Section~\ref{sec:introduction}, most MCDA studies consider only the current value of the criteria, and few studies deal with the criteria's time-series~\cite{frini2017making}. That is, both approaches rank the alternatives taking into account the criteria values in the time domain. However, the time domain is not always the most favorable one to analyze relevant aspects of the decision. In some decisions, it can be more appropriate to rank the alternatives considering the criteria in a feature domain. This domain transformation may lead to different solutions.
The latter statement becomes clearer in the example used in Section~\ref{sec:introduction} to rank athletes. We observed that Athlete 1 should be preferred when only the criteria's current data are considered. Athlete 1 is also chosen when all values of the time-series are considered one by one. Note that both approaches are in the time domain. Instead, by considering the improvement in the athletes' performance (the slope coefficient of the criteria's time-series), Athlete 2 should be chosen. That is, by mapping the time-space into a feature-space, a different solution is achieved. The solution with the feature-space approach may be interesting, for instance, to hire an athlete: since Athlete 2's performance is improving, it is likely to surpass Athlete 1's performance in the medium term.
Before further discussing the change of ranking when a feature domain is considered, let us define concepts relevant to MCDA~\cite{grabisch2009aggregation}:
\begin{definition}
A function $f: \mathbb{R}^{n} \rightarrow \mathbb{R}$ is monotonically nondecreasing in each argument if, for any vector $\textbf{p}_1$, $\textbf{p}_2$ $\in \mathbb{R}^n$, $\textbf{p}_1 \geq \textbf{p}_2 \implies f\{\textbf{p}_1\} \geq f\{\textbf{p}_2\}$.
\end{definition}
\begin{definition}\label{def:agg_fun}
An aggregation function in $\mathbb{R}^{m \times n}$ is a function $f: \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^n$, for which a natural requirement is nondecreasing monotonicity in each argument.
\end{definition}
\begin{definition}
The decision criteria can be either benefit (maximum) when the desired value is as high as possible, or cost (minimum) when the desired value is as low as possible.
\end{definition}
Following the example of ranking the athletes, suppose that the decision-maker should choose one among two alternatives ($m = 2$), according to two criteria ($n = 2$) of benefit, where each criterion is measured over two periods $T = 2$. Let us represent the decision data as a tensor $\mathcal{P} \in \mathbb{R}^{2 \times 2 \times 2}$, in which each alternative $i$ is represented by vertical slices of $\mathcal{P}$, $ \textbf{P}(i, :, :) \in \mathbb{R}^{2 \times 2} $:
\begin{equation}
\label{eq:matr_ex}
\textbf{P}(i, :, :)=\begin{array}{cc}
\left[\begin{array}{cc}
p_{i11} & p_{i12} \\
p_{i21} & p_{i22}
\end{array}
\right].
\end{array}
\end{equation}
\noindent Each vector $ \textbf{p}(i,j,:) $ represents the time-series of alternative $i$ in criterion $j$. Figure~\ref{fig:decision_data} shows the data and graphics.
\begin{figure}[h]
\centering
\includegraphics[height=5cm,keepaspectratio]{grafico3_pt}
\caption{Data involved in the decision-making.}
\label{fig:decision_data}
\end{figure}
Figure~\ref{fig:decision_data} shows that the current data of Alternative 1 are greater than or equal to the current data of Alternative 2, i.e., $p_{1jT} \geq p_{2jT}$ $\forall j$. Also, all the values of Alternative 1 in the time-series are greater than or equal to the values of Alternative 2, $p_{1jt} \geq p_{2jt}$ $\forall j, t$. According to \textbf{Definition 1} and \textbf{Definition 2}, applying any aggregation function $f\{.\}$ to $\textbf{P}(1, :, :)$ and $\textbf{P}(2, :, :)$, since $p_{1jt} \geq p_{2jt}$ $\forall j, t$, implies that $f\{\textbf{P}(1, :, :)\} \geq f\{\textbf{P}(2, :, :)\}$. That means Alternative 1 is preferable to Alternative 2 independently of the MCDA aggregation method. Thus, Alternative 1 dominates Alternative 2 in the time domain. Suppose $f\{.\}$ is an additive method:
\begin{eqnarray}\label{eq:frobenius}
f\{\textbf{P}(i, :, :)\} = \sum_{j = 1}^{n} \left(\sum_{t = 1}^{T}w_{tj} p_{ijt} \right), \ i = 1,\cdots, m,
\end{eqnarray}
\noindent where the elements $w_{tj}$ of a matrix \textbf{W} model the relative importance of criterion $j$ in period $t$, with $w_{tj} \geq 0$ and $\sum_{t=1}^{T} \sum_{j=1}^{n} w_{tj} = 1$. Assuming:
\begin{equation}
\label{eq:matr_pesos}
\textbf{W}=\begin{array}{cc}
\left[\begin{array}{cc}
0.25 & 0.25 \\
0.25 & 0.25
\end{array}
\right],
\end{array}
\end{equation}
\noindent by Equation~(\ref{eq:frobenius}), $f\{\textbf{P}(1, :, :)\} = 4$ and $f\{\textbf{P}(2, :, :)\} = 2.5$. As expected, Alternative 1 is preferable to Alternative 2:
\begin{equation}\label{eq:ranking}
\textbf{P}(1, :, :) \succ \textbf{P}(2, :, :).
\end{equation}
\noindent The same solution is obtained if we use only the current data. Notice that it is not possible to find a \textbf{W} able to change this ranking: since $p_{1jt} \geq p_{2jt}$ $\forall j, t$ and the weights are non-negative, each term satisfies $w_{tj} p_{1jt} \geq w_{tj} p_{2jt}$, and summing over all terms yields $\sum_{j=1}^{n} \sum_{t=1}^{T}w_{tj} p_{1jt} \geq \sum_{j=1}^{n} \sum_{t=1}^{T} w_{tj}p_{2jt}$.
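The computation above can be reproduced with a short script. The entries of $\mathcal{P}$ below are hypothetical stand-ins for the values of Figure~\ref{fig:decision_data}, chosen so that Alternative 1 dominates Alternative 2 elementwise and the aggregated values match those reported in the text:

```python
# Hypothetical stand-ins for the data of Figure 2 (P[i][j][t]):
# Alternative 1 dominates Alternative 2 elementwise in the time domain.
P = [[[5.0, 4.0], [3.5, 3.5]],   # Alternative 1
     [[2.0, 3.0], [2.0, 3.0]]]   # Alternative 2
W = [[0.25, 0.25], [0.25, 0.25]] # W[t][j], non-negative, summing to 1

def f(Pi, W):
    """Additive aggregation of Equation (eq:frobenius): sum_j sum_t w_tj p_ijt."""
    return sum(W[t][j] * Pi[j][t]
               for j in range(len(Pi)) for t in range(len(Pi[0])))

def G(Pi):
    """Map each criterion's time series to its slope coefficient; for T = 2
    the least-squares slope is simply the difference of the two samples."""
    return [[serie[-1] - serie[0]] for serie in Pi]

print(f(P[0], W), f(P[1], W))        # 4.0 2.5  -> Alternative 1 preferred
# After mapping to the feature-space, the same weights are reused for the
# single feature value per criterion:
print(f(G(P[0]), W), f(G(P[1]), W))  # -0.25 0.5 -> Alternative 2 preferred
```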
Suppose now that it is relevant for the decision to consider the tendency of the criteria. Therefore, a change of domain is made by mapping the time-space into a feature-space. This mapping is represented by $G\{.\}$: $G\{\textbf{P}(1, :, :)\}$ and $G\{\textbf{P}(2, :, :)\}$. Let us assume $G\{.\}$ computes the slope coefficient. We apply the mapping $G\{.\}$ and then the additive method $f\{.\}$ defined in Equation~(\ref{eq:frobenius}), obtaining $f\{G\{\textbf{P}(1, :, :)\}\} = -0.25$ and $f\{G\{\textbf{P}(2, :, :)\}\}= 0.5$. Hence, Alternative 2 becomes preferable to Alternative 1:
\begin{equation}\label{eq:ranking2}
\textbf{P}(2, :, :) \succ \textbf{P}(1, :, :).
\end{equation}
The preference in~(\ref{eq:ranking2}) is different from that in~(\ref{eq:ranking}). In the time domain, Alternative 1 dominates Alternative 2; in the feature space, Alternative 2 becomes the preferred one. The rankings are different, but both are suitable to support the decision-maker. Thus, the example shows that considering only the time domain can disregard a solution that may be interesting in the feature domain. In other words, ignoring the features leads to the loss of crucial information, and the resulting solution is not necessarily satisfactory. Therefore, our proposal considers more elements besides the current data, such as tendency, mean, variance, etc. The next section presents the methodology proposed to achieve this objective.
\section{Methodology}
\label{sec:methodology}
\subsection{Extended TOPSIS method for aggregating the tensor}
\label{sec:TOPSIS}
Figure~\ref{fig:agregacao_tensor} of Section~\ref{sec:introduction} shows the tensorial representation used to structure the data in the feature-space. This novel data representation requires adapting the MCDA methods in order to aggregate the tensor and rank the alternatives. This section introduces the proposed extension of the TOPSIS method~\cite{hwang1981methods} to a tensorial approach. The classical TOPSIS consists of measuring the distance to the positive and negative ideal points. The positive ideal point of a specific criterion is the best value of this criterion over all alternatives, and the negative ideal point is the worst one. The best value is the highest value if the criterion is of benefit and the lowest if it is of cost. Therefore, the score of each alternative is determined according to the distance of its criteria values to the positive and negative ideal points. The input data of this method are the decision matrix and the vector of weights \textbf{w}.
In the TOPSIS extension we propose, the algorithm input is a third-order tensor $\mathcal{P} \in \mathbb{R}^{m \times n \times T}$, where $m$ is the number of alternatives, $n$ is the number of criteria, and $T$ is the number of samples in the time-series. In this extension, we first map the time-space into a feature-space, $\mathcal{P} \in \mathbb{R}^{m \times n \times T} \Rightarrow \mathcal{S} \in \mathbb{R}^{m \times n \times h}$, where $h$ is the number of features. The criteria weights are $w_j, \ j = 1,\ldots, n$, with $w_j \geq 0$ and $\sum_{j=1}^{n} w_j = 1$. We also introduce weights for the features, represented by $\alpha_k, \ k = 1,\ldots, h$, with $ \alpha_k \geq 0$ and $\sum_{k=1}^{h} \alpha_k = 1$.
Finally, we verify whether feature $k$ is of benefit or cost to determine the positive and negative ideal points. This step is necessary since some features are desirable to be as high (or low) as possible, independently of the criteria. For instance, if a low variance is important for the decision, independently of the criteria, the positive ideal point will be the lowest value. The tendency, however, is neither benefit nor cost, since it depends on whether the criterion is of benefit or cost: if the criterion is of benefit, we desire an increasing tendency; if the criterion is of cost, we desire a decreasing tendency.
In the following, we detail the steps of the proposed extension of the TOPSIS.
\begin{enumerate}
\item Map the time-space into a feature-space.
\begin{eqnarray}
\mathcal{P} \in \mathbb{R}^{m \times n \times T} \Rightarrow \mathcal{S} \in \mathbb{R}^{m \times n \times h},
\end{eqnarray}
where the $s_{ijk}$ are the elements of $\mathcal{S}$ and $h$ the number of features. The chosen features should be those that are relevant to the purpose of the decision.
\item Normalize the elements of the tensor $\mathcal{S}$, which results in the tensor represented by $\mathcal{N}$.
\begin{eqnarray}
n_{ijk}= \frac{s_{ijk}}{\sqrt{\sum_{i=1}^{m}(s_{ijk})^2}}, \ \ i = 1,\ldots, m, \ j = 1,\ldots, n, \ k = 1,\ldots, h.
\end{eqnarray}
This step is necessary since the order of magnitude and the unit of measurement of the data may influence the results.
\item Weight the tensor $\mathcal{N}$, from where we obtain the tensor represented by $\mathcal{V}$.
\begin{eqnarray}
v_{ijk}= \alpha_k n_{ijk}w_{j}, \ \ i = 1,\ldots, m, \ j = 1,\ldots, n, \ k = 1,\ldots, h.
\end{eqnarray}
\item Determine the positive and negative ideal points, represented by $\mathcal{A^+}$ and $\mathcal{A^-}$ $\in \mathbb{R}^{1 \times n \times h}$. The positive and negative ideal points depend on whether feature $k$ is in the set of benefit $I$ or cost $J$ (Equation~(\ref{eq:beneforcost}), where `$\vee$' represents the logical operator \textit{or}), or whether feature $k$ is neither benefit nor cost (Equation~(\ref{eq:benefecost}), where `$\wedge$' represents the logical operator \textit{and}). If it is of benefit, we identify the highest value that the alternatives assume for each feature and each criterion. If it is of cost, we determine the lowest value that the alternatives assume for each feature and each criterion. If the feature is neither benefit nor cost, we verify whether criterion $j$ is of benefit ($j \in I$) or cost ($j \in J$). Thus, for each $k = 1,\ldots, h$:
\begin{flalign}
k \in I \vee k \in J \rightarrow \label{eq:beneforcost} \\
A_k^+ = \{v_{1k}^+, \ldots, v_{nk}^+ \} = \bigg\{\bigg(\underset{i}{\max} \ v_{ijk} \mid k \in I\bigg), \bigg(\underset{i}{\min} \ v_{ijk} \mid k \in J\bigg)\bigg\} \\
A_k^- = \{v_{1k}^-, \ldots, v_{nk}^- \} = \bigg\{\bigg(\underset{i}{\min} \ v_{ijk} \mid k \in I\bigg), \bigg(\underset{i}{\max} \ v_{ijk} \mid k \in J\bigg)\bigg\} \\
k \not\in I \wedge \ k \not\in J \rightarrow \label{eq:benefecost} \\
A_k^+ = \{v_{1k}^+, \ldots, v_{nk}^+ \} = \bigg\{\bigg(\underset{i}{\max} \ v_{ijk} \mid j \in I\bigg), \bigg(\underset{i}{\min} \ v_{ijk} \mid j \in J\bigg)\bigg\} \ \\
A_k^- = \{v_{1k}^-, \ldots, v_{nk}^- \} = \bigg\{\bigg(\underset{i}{\min} \ v_{ijk} \mid j \in I\bigg), \bigg(\underset{i}{\max} \ v_{ijk} \mid j \in J\bigg)\bigg\};
\end{flalign}
\item Compute the Euclidean distance of each alternative to the positive and negative ideal points.
\begin{eqnarray}
d_i^+ = \Bigg\{\sum_{k=1}^{h}\sum_{j=1}^{n}(v_{ijk}-v^+_{jk})^2\Bigg\}^{\frac{1}{2}}, \ i = 1,\ldots, m\\
d_i^- = \Bigg\{\sum_{k=1}^{h}\sum_{j=1}^{n}(v_{ijk}-v^-_{jk})^2\Bigg\}^{\frac{1}{2}}, \ i = 1,\ldots, m;
\end{eqnarray}
\item Compute the relative closeness of each alternative to the positive ideal point, obtaining the vector $\textbf{g} = [g_1, \ldots, g_m] $.
\begin{eqnarray}
g_i = \frac{d_i^-}{d_i^+ + d_i^-}, \ i = 1,\ldots, m.
\end{eqnarray}
By sorting the values $g_i$ in \textbf{g} in descending order, we obtain the ranking of the alternatives. Each $g_i$ assumes values between zero and one. When $g_i$ tends to 1, the alternative tends to be close to the positive ideal point and far from the negative one.
\end{enumerate}
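The steps above can be sketched as follows. This is a minimal implementation with hypothetical data and weights, assuming the features have already been extracted (Step 1) and that the normalization of Step 2 is taken over the alternatives, as in the classical TOPSIS:

```python
import math

def topsis_tensor(S, w, alpha, feat_benefit, crit_benefit):
    """Steps 2-6 of the extended TOPSIS.
    S: m x n x h nested list of feature values (Step 1 already applied);
    w: criteria weights; alpha: feature weights;
    feat_benefit[k]: True (benefit), False (cost) or None (neither, in which
    case the feature inherits the direction crit_benefit[j] of each criterion)."""
    m, n, h = len(S), len(S[0]), len(S[0][0])
    # Steps 2 and 3: normalize (over the alternatives) and weight.
    V = [[[0.0] * h for _ in range(n)] for _ in range(m)]
    for j in range(n):
        for k in range(h):
            norm = math.sqrt(sum(S[i][j][k] ** 2 for i in range(m)))
            for i in range(m):
                nijk = S[i][j][k] / norm if norm else 0.0
                V[i][j][k] = alpha[k] * w[j] * nijk
    # Step 4: positive/negative ideal points per (criterion, feature).
    Ap = [[0.0] * h for _ in range(n)]
    An = [[0.0] * h for _ in range(n)]
    for j in range(n):
        for k in range(h):
            col = [V[i][j][k] for i in range(m)]
            benefit = feat_benefit[k] if feat_benefit[k] is not None else crit_benefit[j]
            Ap[j][k] = max(col) if benefit else min(col)
            An[j][k] = min(col) if benefit else max(col)
    # Steps 5 and 6: Euclidean distances and relative closeness.
    g = []
    for i in range(m):
        dp = math.sqrt(sum((V[i][j][k] - Ap[j][k]) ** 2
                           for j in range(n) for k in range(h)))
        dn = math.sqrt(sum((V[i][j][k] - An[j][k]) ** 2
                           for j in range(n) for k in range(h)))
        g.append(dn / (dp + dn) if dp + dn else 0.0)
    return g

# Hypothetical example: 2 alternatives, 2 benefit criteria; feature 0
# inherits the criterion direction, feature 1 is a cost (e.g., the CV).
S = [[[3.0, 0.1], [4.0, 0.2]],
     [[1.0, 0.5], [2.0, 0.6]]]
g = topsis_tensor(S, w=[0.5, 0.5], alpha=[0.5, 0.5],
                  feat_benefit=[None, False], crit_benefit=[True, True])
print(g)  # alternative 0 attains every ideal point: g = [1.0, 0.0]
```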
\subsection{Stochastic multicriteria acceptability analysis (SMAA)}
\label{sec:smaa}
In Section~\ref{sec:flex}, we showed a change of ranking depending on whether the criteria values are considered in the time domain or through the slope coefficient. In the computational tests, we deepen this analysis by adding more features besides the slope coefficient. Because the TOPSIS extension takes feature weights into account, we provide a sensitivity analysis of these weights on the ranking of the alternatives; i.e., we use the SMAA method to verify changes in the ranking by varying the weights of the features.
The SMAA numerical calculation is quite complicated, but it can be replaced by a Monte Carlo simulation, which generates good approximations~\cite{tervonen2007implementing}. In this study, the SMAA input is the tensor $\mathcal{P} \in \mathbb{R}^{m \times n \times T}$. We set the value of the criteria weights $w_j, \ j = 1,\ldots, n$. The feature weights are represented by random variables, $\alpha_k \sim U[a, b]$, for $k = 1,\ldots, h$, where $U$ is a continuous uniform distribution, $0 \leq a < b \leq 1$, and $\sum_{k=1}^{h} \alpha_k = 1$. The so-called \textit{percentage matrix of rankings}, initialized as $ \textbf{M} = \textbf{0}_{m \times m} $, records the percentage of times each alternative occupied a given position. The elements of this matrix are represented by $m_{i\theta}$. The simulation consists of repeating the steps below $L$ times, where $L$ is a large number (in the order of thousands):
\begin{enumerate}
\item Sample the random variables $\alpha_k$, for $k = 1,\ldots, h$, in order to compose the deterministic vector $\boldsymbol{\alpha}$;
\item Then, apply the extension of the TOPSIS method presented in Section~\ref{sec:TOPSIS} with the inputs $\mathcal{P}$, \textbf{w}, and the weights $\boldsymbol{\alpha}$ obtained in Step 1. The output of the TOPSIS is a ranking \textbf{g};
\item Compute the position of each alternative in matrix \textbf{M}. For this, do $m_{i\theta} = m_{i\theta} + 1$ if alternative $i$ is in position $\theta$ in the ranking \textbf{g}.
\end{enumerate}
\noindent At the end of the simulation, we convert the counts into percentages: $\textbf{M} = 100 \cdot \textbf{M}/L$.
Each $m_{i\theta}$ represents the percentage of times that alternative $i$ was at position $\theta$. If alternative $i$ has the highest percentage in the $\theta$-th position, for all $i$ and $\theta$ of \textbf{M}, then it is possible to identify the most likely ranking.
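A minimal Monte Carlo sketch of the procedure above follows, with hypothetical data. A simple additive score stands in for the TOPSIS extension, and the sampled weights are renormalized as one common way of honouring the sum-to-one constraint:

```python
import random

def smaa(S, w, a, b, L, score=None, seed=0):
    """Monte Carlo SMAA over the feature weights alpha_k ~ U[a, b].
    Returns the m x m percentage matrix of rankings M, where M[i][theta] is
    the percentage of runs in which alternative i occupied position theta.
    `score` is the aggregation applied at each run; the paper uses the
    extended TOPSIS, and a simple additive score stands in for it here."""
    rng = random.Random(seed)
    m, n, h = len(S), len(S[0]), len(S[0][0])
    if score is None:
        score = lambda S, w, alpha: [
            sum(w[j] * alpha[k] * S[i][j][k]
                for j in range(n) for k in range(h))
            for i in range(m)]
    M = [[0.0] * m for _ in range(m)]
    for _ in range(L):
        # Step 1: sample the feature weights; renormalizing so that they
        # sum to 1 is one common way of honouring the constraint.
        alpha = [rng.uniform(a, b) for _ in range(h)]
        total = sum(alpha)
        alpha = [x / total for x in alpha]
        # Step 2: rank the alternatives for this weight vector.
        g = score(S, w, alpha)
        order = sorted(range(m), key=lambda i: -g[i])
        # Step 3: record the position of each alternative.
        for theta, i in enumerate(order):
            M[i][theta] += 1
    return [[100.0 * c / L for c in row] for row in M]

# Hypothetical data (2 alternatives, 1 criterion, 2 features): alternative 0
# dominates in every feature, so it should rank first in every run.
S = [[[3.0, 2.0]], [[1.0, 1.0]]]
M = smaa(S, w=[1.0], a=0.1, b=0.2, L=1000)
print(M[0][0], M[1][1])  # 100.0 100.0
```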
\section{Experiments on actual data and discussion}
\label{sec:resultados}
In this section, the Human Development Index (HDI) is calculated using the feature-space proposal. The HDI can be computed by aggregating three criteria: life expectancy at birth ($c_1$), education ($c_2$), and gross national income per capita ($c_3$). Usually, the criteria take the values they assume in the year the index is calculated. In~\cite{banamar2018extension}, the authors proposed to calculate the HDI for ranking ten emerging economies: Brazil (BR), China (CN), India (IN), Indonesia (ID), Malaysia (MY), Mexico (MX), Philippines (PH), Russia (RU), South Africa (ZA), and Turkey (TR), according to the criteria $c_1$, $c_2$, and $c_3$. They considered the evolution of the criteria over the years (the criteria time-series). To test our proposal, we consider the same data, shown in Table~\ref{tab:dados} in the horizontal representation $\textbf{P}(:,j,:)$. The criteria weights are equal, i.e., $w_j = 0.333$ for all $j$.
\begin{table}[htbp]
\centering
\caption{Evaluations Table for 10 emerging countries~\cite{banamar2018extension}.}
\scalebox{0.5}{\begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|r|}
\hline
Weights \textbf{w} & \multicolumn{ 6}{c|}{0.333} & \multicolumn{ 6}{c|}{0.333} & \multicolumn{ 6}{c|}{0.333} \\ \hline
Max/Min & \multicolumn{ 6}{c|}{Max} & \multicolumn{ 6}{c|}{Max} & \multicolumn{ 6}{c|}{Max} \\ \hline
Criteria & \multicolumn{ 6}{c|}{Life expectancy at birth ($c_1$)} & \multicolumn{ 6}{c|}{Education ($c_2$)} & \multicolumn{ 6}{c|}{Gross national income per capita ($c_3$)} \\ \hline
\rowcolor{Gray}
Years & 1990 & 1995 & 2000 & 2005 & 2010 & 2015 & 1990 & 1995 & 2000 & 2005 & 2010 & 2015 & 1990 & 1995 & 2000 & 2005 & 2010 & 2015 \\ \hline
BR & 65.3 & 67.6 & 70.1 & 71.9 & 73.3 & 74.8 & 8.00 & 8.95 & 9.95 & 10.15 & 11.05 & 11.60 & 10065 & 10959 & 11161 & 12032 & 14420 & 15062 \\ \hline
CN & 69.0 & 69.9 & 71.7 & 73.7 & 75.0 & 76.0 & 6.80 & 7.25 & 7.85 & 8.75 & 9.85 & 10.30 & 1520 & 2508 & 3632 & 5632 & 9387 & 13347 \\ \hline
IN & 57.9 & 60.4 & 62.6 & 64.5 & 66.5 & 68.4 & 5.35 & 5.90 & 6.45 & 7.35 & 8.25 & 8.55 & 1754 & 2046 & 2522 & 3239 & 4499 & 5814 \\ \hline
ID & 63.3 & 65.0 & 66.3 & 67.2 & 68.1 & 69.1 & 6.75 & 7.20 & 8.70 & 9.30 & 9.95 & 10.30 & 4337 & 5930 & 5308 & 6547 & 8267 & 10130 \\ \hline
MY & 70.7 & 71.8 & 72.8 & 73.6 & 74.1 & 74.8 & 8.10 & 8.90 & 10.25 & 10.15 & 11.35 & 11.35 & 9772 & 13439 & 14500 & 17157 & 19725 & 23712 \\ \hline
MX & 70.8 & 72.8 & 74.4 & 75.3 & 76.1 & 77.0 & 8.05 & 8.55 & 9.15 & 9.85 & 10.50 & 10.80 & 12074 & 12028 & 14388 & 14693 & 15395 & 16249 \\ \hline
PH & 65.3 & 66.1 & 66.7 & 67.2 & 67.7 & 68.3 & 8.70 & 8.95 & 9.50 & 9.75 & 9.75 & 10.20 & 3962 & 4111 & 4994 & 6058 & 7478 & 8232 \\ \hline
RU & 68.0 & 66.0 & 65.1 & 65.8 & 68.6 & 70.3 & 10.95 & 10.85 & 11.85 & 12.60 & 13.10 & 13.35 & 19461 & 12011 & 12933 & 17797 & 21075 & 22094 \\ \hline
ZA & 62.1 & 61.4 & 55.9 & 51.6 & 54.5 & 57.9 & 8.95 & 10.65 & 11.00 & 11.15 & 11.55 & 11.75 & 9987 & 9566 & 9719 & 10935 & 11833 & 12110 \\ \hline
TR & 64.3 & 67.0 & 70.0 & 72.5 & 74.2 & 75.6 & 6.70 & 7.20 & 8.30 & 8.95 & 10.55 & 11.05 & 10494 & 11317 & 12807 & 14987 & 16506 & 18976 \\ \hline
\end{tabular}}
\label{tab:dados}
\end{table}
The feature-space is composed of four features ($h = 4$): current data (2015 values), average, coefficient of variation (CV), and slope coefficient (SC). Table~\ref{tab:atributos} shows the tensor $ \mathcal{S} \in \mathbb{R}^{10 \times 3 \times 4}$ in the frontal representation $ \textbf{S}(:,:,k) $, obtained by computing these features. The CV is a cost feature (CV $\in J$), while the features SC, current data, and average belong to neither the benefit nor the cost set.
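The features in Table~\ref{tab:atributos} can be reproduced from the time-series of Table~\ref{tab:dados}. The sketch below does so for Brazil's life expectancy at birth ($c_1$); the slope is computed against the period index $1,\dots,T$ (i.e., per 5-year step) and the CV uses the population standard deviation, which are assumptions consistent with the reported values:

```python
def features(y):
    """Current data, average, coefficient of variation (CV) and slope
    coefficient (SC) of a time series y; the slope is computed against
    the period index 1..T, i.e., per 5-year step for this data set."""
    T = len(y)
    current = y[-1]
    mean = sum(y) / T
    var = sum((v - mean) ** 2 for v in y) / T          # population variance
    cv = (var ** 0.5) / mean
    x = list(range(1, T + 1))
    xm = sum(x) / T
    slope = (sum((xi - xm) * (yi - mean) for xi, yi in zip(x, y))
             / sum((xi - xm) ** 2 for xi in x))
    return current, mean, cv, slope

# Brazil, life expectancy at birth (c1), 1990-2015, from the evaluations table
br_c1 = [65.3, 67.6, 70.1, 71.9, 73.3, 74.8]
cur, avg, cv, sc = features(br_c1)
print(cur, round(avg, 1), round(cv, 2), round(sc, 1))  # 74.8 70.5 0.05 1.9
```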
The results are presented using five strategies. Each of the first four strategies deals with only one feature. From these four strategies, we can better analyze the countries' position in the ranking given a specific feature. In the fifth strategy, all the features are considered, which is effectively our proposal. In this latter strategy, we use the SMAA method to support the choice of the feature weights and to provide a sensitivity analysis concerning these features. The five strategies were implemented as follows:
\begin{description}
\item[Strategy 1:] In this strategy $\boldsymbol{\alpha}^{S1}$ = [1, 0, 0, 0], which means only the current data is taken into account, equivalent to considering only the slice $ \textbf{S}(:,:,1)$. Notice that this strategy corresponds to the usual practice in the literature.
\item[Strategy 2, 3 and 4:] In these three strategies, the $ \boldsymbol{\alpha}$ values are $\boldsymbol{\alpha}^{S2}$ = [0, 1, 0, 0], $\boldsymbol{\alpha}^{S3}$ = [0, 0, 1, 0], and $\boldsymbol{\alpha}^{S4}$ = [0, 0, 0, 1], equivalent to considering only the slices $ \textbf{S}(:,:,2) $, $ \textbf{S}(:,:,3) $, and $ \textbf{S}(:,:,4)$, respectively;
\item[Strategy 5:] In this strategy, we consider that $\boldsymbol{\alpha}^{S5} = [\alpha_1, \alpha_{2}, \alpha_{3}, \alpha_{4}]$ takes on the values: $\alpha_{1}$ = 1 - ($\alpha_{2}$ + $\alpha_{3}$ + $\alpha_{4}$), and $\alpha_{2}, \alpha_{3}, \alpha_{4} \sim U[0.1, 0.2]$, where $U$ is a continuous uniform distribution. We chose to give more importance to the current data and equal weight range for the other features.
\end{description}
\begin{table}[htbp]
\centering
\caption{Alternative evaluations in the feature-space.}
\scalebox{0.8}{
\begin{tabular}{l|rrr|rrr|rrr|rrr}
\toprule
Feature & \multicolumn{ 3}{c}{Current data -- $ \textbf{S}(:,:,1)$}
& \multicolumn{ 3}{c}{Average -- $ \textbf{S}(:,:,2)$}
& \multicolumn{ 3}{c}{CV -- $ \textbf{S}(:,:,3)$}
& \multicolumn{ 3}{c}{SC -- $ \textbf{S}(:,:,4)$}\\ \hline
Criteria & $ c_1 $ & $ c_2 $ & $ c_3 $ & $ c_1 $ & $ c_2 $ & $ c_3 $ & $ c_1 $ & $ c_2 $ & $ c_3 $& $ c_1 $ & $ c_2 $ & $ c_3 $ \\
\midrule
BR &74.8&11.60&15062 & 70.5 & 9.9 & 12283
&0.05 & 0.12 & 0.15
& 1.9 & 0.70 & 1035
\\
CN &76.0&10.30&13347 & 72.5 & 8.5 & 6004
&0.03 & 0.15 & 0.69
& 1.5 & 0.75 & 2336
\\
IN &68.4&8.55&5814 & 63.4 & 6.9 & 3312
&0.06 & 0.17 & 0.43
& 2.1 & 0.68 & 810
\\
ID &69.1&10.30&10130 & 66.5 & 8.7 & 6753
&0.03 & 0.15 & 0.29
& 1.1 & 0.76 & 1063
\\
\rowcolor{yellow}
MY &74.8&11.35&23712 & 72.9 & 10.0 & 16384
&0.02 & 0.12 & 0.27
& 0.8 & 0.67 & 2606
\\
MX &77.0&10.80&16249 & 74.4 & 9.5 & 14137
&0.03 & 0.10 & 0.11
& 1.2 & 0.58 & 893
\\
PH &68.3&10.20&8232 & 66.9 & 9.5 & 5805
&0.01 & 0.05 & 0.28
& 0.6 & 0.29 & 929
\\
\rowcolor{b}
RU &70.3&13.35&22094 & 67.3 & 12.1 & 17561
&0.03 & 0.08 & 0.22
& 0.6 & 0.56 & 1292
\\
ZA &57.9&11.75&12110 & 57.2 & 10.9 & 10691
&0.06 & 0.08 & 0.09
& -1.3 & 0.48 & 532
\\
\rowcolor{RawSienna}
TR &75.6&11.05&18976 & 70.6 & 8.8 & 14181
&0.06 & 0.18 & 0.21
& 2.3 & 0.93 & 1718
\\
\bottomrule
\end{tabular}}
\label{tab:atributos}
\end{table}
The first row of Table~\ref{tab:ranking}, which we call R1, shows the ranking obtained in~\cite{banamar2018extension}, included to allow comparison with our results. We highlight that this latter study considered the time domain for ranking the countries. The other rows of Table~\ref{tab:ranking} show the rankings achieved by applying our five strategies. To facilitate the analysis, we shall focus on the position of the three countries highlighted in the table: Russia, Malaysia, and Turkey. A first remark on Table~\ref{tab:ranking} is that the R1 ranking is similar to those of $\boldsymbol{\alpha}^{S1}$ (current data) and $\boldsymbol{\alpha}^{S2}$ (average). For instance, in R1, $\boldsymbol{\alpha}^{S1}$, and $\boldsymbol{\alpha}^{S2}$, Russia and Malaysia rank first and second. The leading position of these two countries changes when we consider other features of the time-series, as in Strategies 3 and 4.
Different rankings were obtained using $\boldsymbol{\alpha}^{S1}$, $\boldsymbol{\alpha}^{S2}$, $\boldsymbol{\alpha}^{S3}$, and $\boldsymbol{\alpha}^{S4}$, as shown in Table~\ref{tab:ranking}. We point out that each of these four strategies gives all the importance to a single feature. The $\boldsymbol{\alpha}^{S1}$ (current data) ranking is very close to that of $\boldsymbol{\alpha}^{S2}$ (average). It is possible to see that Russia and Malaysia are in first and second place in both strategies; Turkey, however, changes position with Mexico. As can be seen in Table~\ref{tab:atributos}, this change of position occurs because Turkey presents a better performance in current data, so it is better placed in $\boldsymbol{\alpha}^{S1}$; in terms of average, however, Turkey's performance is worse, which leads Turkey to rank below Mexico in $\boldsymbol{\alpha}^{S2}$. The rankings in Strategies 3 and 4 are very different from those in Strategies 1 and 2. Indeed, in $\boldsymbol{\alpha}^{S4}$, Russia ranks eighth, rather than first, and Turkey and Malaysia outperform it. This is easier to understand from Table~\ref{tab:atributos}, where we observe that, in terms of the slope coefficient, Russia's performance is worse than that of the other two countries.
Given this first analysis, we can illustrate, with a practical example, the discussion presented in Section~\ref{sec:flex}. Here, we note that Russia ranks first in the strategies that consider only the time-space. Instead, when dealing with the SC, Russia drops close to the last place. That is, when we consider the feature-space, a new solution is achieved, and it may be more useful if these elements are relevant for the decision.
\begin{table}[htbp]
\centering
\caption{Ranking obtained according to the strategy used.}
\centering
\scalebox{0.8}{
\begin{tabular}{lllllllllll}
\toprule
{} & \nth{1} & \nth{2} & \nth{3} & \nth{4} & \nth{5} & \nth{6} & \nth{7} & \nth{8} & \nth{9} & \nth{10} \\
\midrule
R1 & \colorbox{b}{RU} & \colorbox{yellow}{MY} & MX & BR & \colorbox{RawSienna}{TR} & CN & ZA & PH & ID & IN \\
$\boldsymbol{\alpha}^{S1}$ & \colorbox{b}{RU} & \colorbox{yellow}{MY} & \colorbox{RawSienna}{TR} & MX & BR & CN & ZA & ID & PH & IN \\
$\boldsymbol{\alpha}^{S2}$& \colorbox{b}{RU} & \colorbox{yellow}{MY} & MX & \colorbox{RawSienna}{TR} & BR & ZA & ID & PH & CN & IN \\
$\boldsymbol{\alpha}^{S3}$&MX & ZA&BR &\colorbox{b}{RU} & \colorbox{RawSienna}{TR} &PH & \colorbox{yellow}{MY} &ID & IN & CN \\
$\boldsymbol{\alpha}^{S4}$& \colorbox{RawSienna}{TR} & CN & BR & \colorbox{yellow}{MY} & IN & ID & MX & \colorbox{b}{RU} & PH & ZA \\
$\boldsymbol{\alpha}^{S5}$&\colorbox{yellow}{MY} & \colorbox{b}{RU} & \colorbox{RawSienna}{TR} & BR & MX & CN & ID & ZA & PH & IN \\
\bottomrule
\end{tabular}}
\label{tab:ranking}
\end{table}
For the sensitivity analysis, we show in Table~\ref{tab:smaa} the SMAA ranking percentages obtained using $\boldsymbol{\alpha}^{S5}$. As can be seen in Table~\ref{tab:smaa}, Malaysia ranked first 92.07\% of the time. Russia and Turkey compete for the second and third places. Russia ranked second 50.19\% of the time and third 49.81\% of the time, while Turkey ranked second 41.88\% and third 50.19\% of the time. By analyzing the features in Table~\ref{tab:atributos}, it is possible to see that, in terms of average and current data, Russia performs better than Turkey in criteria $c_2$ and $c_3$. Also, Russia's coefficient of variation is lower than Turkey's in criteria $c_1$ and $c_2$. Finally, Russia performs worse than Turkey in the slope coefficient for all criteria. Thus, we can infer that the slope coefficient strongly influences the dispute between Russia and Turkey for the second and third places. Indeed, in $\boldsymbol{\alpha}^{S4}$ (in which all the importance goes to the slope coefficient feature), Turkey is in the first position and Russia in the eighth. Conversely, in $\boldsymbol{\alpha}^{S1}$, $\boldsymbol{\alpha}^{S2}$, and $\boldsymbol{\alpha}^{S3}$, Russia ranks better than Turkey.
In other pairwise comparisons, such as between Mexico and Brazil, the slope coefficient also strongly impacts the dispute for a better placement. As can be seen in Table~\ref{tab:smaa}, Mexico and Brazil compete for the fourth and fifth places. Brazil ranked fourth 55.95\% of the time and fifth 44.05\% of the time, while Mexico ranked fourth 44.05\% and fifth 55.95\% of the time. By investigating the rankings in Table~\ref{tab:ranking}, Mexico ranks better than Brazil in $\boldsymbol{\alpha}^{S1}$, $\boldsymbol{\alpha}^{S2}$, and $\boldsymbol{\alpha}^{S3}$, but it has a worse $\boldsymbol{\alpha}^{S4}$ performance. Thus, Mexico ranks better than Brazil in three of the four features. However, when all features are considered (i.e., in $\boldsymbol{\alpha}^{S5}$), Mexico is not better placed than Brazil; instead, they dispute the fourth and fifth places.
\begin{table}[htbp]
\caption{Percentage matrix of rankings.}
\centering
\scalebox{0.8}{
\begin{tabular}{lrrrrrrrrrr}
\toprule
{} & \nth{1} & \nth{2} & \nth{3} & \nth{4} & \nth{5} & \nth{6} & \nth{7} & \nth{8} & \nth{9} & \nth{10} \\
\midrule
BR & 0 & 0 & 0 & 55.95 & 44.05 & 0 & 0 & 0 & 0 & 0 \\
CN & 0 & 0 & 0 & 0 & 0 & 90.07 & 9.75 & 0.18 & 0 & 0 \\
IN & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 24.22 & 37.09 & 38.69 \\
ID & 0 & 0 & 0 & 0 & 0 & 0 & 56.09 & 43.91 & 0 & 0 \\
MY&92.07 & 7.93 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
MX& 0 & 0 & 0 & 44.05 & 55.95& 0 & 0 & 0 & 0 & 0 \\
PH& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 38.75 & 61.25 \\
RU& 0 & 50.19 & 49.81 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
ZA& 0 & 0 & 0 & 0 & 0 & 9.93& 34.16& 31.69 & 24.16& 0.06 \\
TR& 7.93 & 41.88 & 50.19 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\bottomrule
\end{tabular}}
\label{tab:smaa}
\end{table}
It is worth noticing that the good performance of some countries in terms of current data, average, and coefficient of variation was offset by a worse performance in the slope coefficient. In this sense, the ranking proved to be very sensitive to the value of $\boldsymbol{\alpha}$. Therefore, it is crucial to analyze how important the tendency is for the decision-making.
\section{Conclusion}\label{sec:conclusoes}
This paper structured the MCDA data in a tensorial approach to fully exploit relevant features for decision-making. We proposed an extension of the TOPSIS method to aggregate the tensor and rank the alternatives. We also provided a sensitivity analysis using the SMAA method to verify the impact of the feature weights on the ranking. For the computational tests, we applied the approach to real-world time-series of ten countries to rank them according to three criteria.
The main conclusion of the analysis is that considering some features from the time-series may lead to a different perspective in the decision-making compared to only aggregating the time-series or the current data. This approach allows us to find new solutions (rankings) that better describe the decision problem.
From the experiments, we observe changes in the ranking when considering new elements beyond the current data. It was possible to see that the tendency of the time-series strongly impacts the final ranking. This is especially relevant, for example, when the consequences of the decision unfold in the medium or long term. Thus, we conclude that the feature-space approach can be meaningful for the decision-maker.
\bibliographystyle{elsarticle-num}
The formalism of choice for this paper is binary aggregation \cite{grandi13lifting}.
A binary aggregation structure (\emph{BA structure}) is a tuple $\S = \tuple{N,{\bf P},\gamma}$ where:
\begin{itemize}
\item $N = \set{1,\dots,n}$ is a finite set of individuals s.t. $|N|= n \in \mathbb{N}$;
\item ${\bf P} = \set{p_1,\dots,p_m}$ is a finite set of issues ($|{\bf P}|= m \in \mathbb{N}$), each represented by a propositional atom;
\item $\gamma \in \L$ is an (integrity) constraint, where $\L$ is the propositional language constructed by closing ${\bf P}$ under a functionally complete set of Boolean connectives (e.g., $\set{\neg, \wedge}$).
\end{itemize}
An {\em opinion} $O: {\bf P} \to \set{{\bf 0},{\bf 1}}$ is an assignment of truth values to the set of issues ${\bf P}$, and the set of all opinions is denoted by $\mathcal O$. The opinion of an agent $i$ is said to be ``consistent'' whenever $O_i \models \gamma$, that is, whenever $i$'s opinion satisfies the integrity constraint. The set of all consistent opinions is denoted $\mathcal O_c = \set{O \in \mathcal O \mid O \models \gamma}$.
Thus, $O(p)={\bf 0}$ (respectively, \mbox{$O(p)={\bf 1}$}) indicates that opinion $O$ rejects (respectively, accepts) the issue $p$. Syntactically, the two opinions correspond to the truth of the literals $p$ or $\neg p$. For $p \in {\bf P}$ we write $\pm p$ to denote one element from $\set{p, \neg p}$. An \emph{opinion profile} $\O=(O_1,\dots,O_{n})$ records the opinion, on the given set of issues, of every individual in $N$. Given a profile $\O$, the $i^{\mathit{th}}$ projection of $\O$ is denoted $O_i$ (i.e., the opinion of agent $i$ in profile $\O$).
We also denote by $\O(p)= \set{i \in N \mid O_{i}(p)= {\bf 1}}$ the set of agents accepting issue $p$ in profile $\O$, and by $\O(p^-)= \set{i \in N \mid O_{i}(p)= {\bf 0}}$ the set of agents rejecting it.
Given a BA structure $\S$, an aggregation rule (or {\em aggregator}) for $\S$ is a function $F:(\mathcal O_{c})^N \to \mathcal O$, mapping every profile of consistent opinions to one collective opinion in $\mathcal O$.
$F(\O)(p)$ denotes the outcome of the aggregation on issue $p$. A benchmark aggregator is \emph{issue-by-issue strict majority rule} ($\mathsf{maj}$), which accepts an issue if and only if the majority of the population accepts it:
\begin{align}\label{eq:maj}
\mathsf{maj}(\O)(p)= {\bf 1} \Longleftrightarrow |\O(p)| \geq \frac{|N|+1}{2}.
\end{align}
It is well-known that aggregation by majority does not preserve consistency. The standard example is provided by the discursive dilemma, represented by the BA structure $\tuple{\set{1,2,3},\set{p,q,r},r \leftrightarrow p \land q}$. The profile consisting of $O_1 \models p \land q \land r$, $O_2 \models p \land \neg q \land \neg r$, $O_3 \models \neg p \land q \land \neg r$, returns an inconsistent majority opinion $\mathsf{maj}(\O) \models p \land q \land \neg r$.
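To make the dilemma concrete, the following minimal sketch (the Python encoding and all names are ours, not part of the formal framework) computes the issue-by-issue majority of the profile above and checks the outcome against the constraint $r \leftrightarrow p \land q$:

```python
# The discursive-dilemma profile over N = {1,2,3}, P = {p,q,r},
# with constraint r <-> (p and q).
profile = {                        # O_i encoded as {issue: 0/1}
    1: {"p": 1, "q": 1, "r": 1},   # O_1 |= p & q & r
    2: {"p": 1, "q": 0, "r": 0},   # O_2 |= p & ~q & ~r
    3: {"p": 0, "q": 1, "r": 0},   # O_3 |= ~p & q & ~r
}

def maj(profile, issue):
    """maj(O)(p) = 1 iff |O(p)| >= (|N|+1)/2 (strict majority)."""
    yes = sum(o[issue] for o in profile.values())
    return 1 if yes >= (len(profile) + 1) / 2 else 0

outcome = {issue: maj(profile, issue) for issue in ("p", "q", "r")}
consistent = (outcome["r"] == (outcome["p"] and outcome["q"]))
print(outcome, consistent)   # {'p': 1, 'q': 1, 'r': 0} False
```

Each individual opinion satisfies the constraint, yet the majority outcome accepts $p$ and $q$ while rejecting $r$, violating it.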
\section{Relevant terminology from graph theory} \label{appendix:graph}
Let $G = \tuple{N, R}$ be a graph and $R^{*}$ be the transitive and symmetric closure of $R$.
A \emph{path} is a sequence of nodes $\tuple{i_1,\dots, i_k}$ such that, for all $l\in\{1,\dots,k-1\}$, $i_l R i_{l+1}$.
The \emph{distance} between two nodes $i,j$ is the length of the shortest path $\tuple{i,\dots, j}$ between them.
The \emph{diameter} of a graph is the maximal distance between any two nodes related by a path.
A \emph{cycle} is a path of length $k$ such that $i_1=i_k$.
A set of nodes $S\subseteq N$ is said to be:
\begin{itemize}
\item[]\emph{a cycle in $G$} if all elements in $S$ are in one cycle of length $|S|$,
\item[]\emph{connected} if for any $i , j \in S$: $i R^{*}j$,
\item[] \emph{strongly connected} if for any $i,j \in S$: there is a path $\tuple{i,\dots, j}$,
\item[]\emph{closed} if for any $i\in S$, $j \notin S$, it is not the case that $iRj$,
\item[] a \emph{connected component} if for any $i\in S$ and $j \in N$: $iR^{*}j$ if and only if $j\in S$,
\item[] \emph{aperiodic} if the greatest common divisor of the lengths of its cycles is $1$.
\end{itemize}
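As an illustration, the closedness and aperiodicity conditions above can be checked mechanically. The sketch below (our own encoding of a graph as a successor relation; all function names are ours) does so by brute force, which is adequate for small graphs:

```python
from math import gcd
from functools import reduce

def is_closed(S, R):
    """S is closed iff no edge leaves S; R maps node -> set of successors."""
    return all(R.get(i, set()) <= S for i in S)

def cycle_lengths(S, R):
    """Lengths of the simple cycles lying inside S (brute force)."""
    lengths = set()
    def walk(start, node, seen):
        for nxt in R.get(node, set()) & S:
            if nxt == start:
                lengths.add(len(seen))
            elif nxt not in seen:
                walk(start, nxt, seen | {nxt})
    for s in S:
        walk(s, s, {s})
    return lengths

def is_aperiodic(S, R):
    """The gcd of the lengths of S's cycles is 1."""
    ls = cycle_lengths(S, R)
    return bool(ls) and reduce(gcd, ls) == 1

R = {1: {1, 2}, 2: {3}, 3: {1}}                 # a 3-cycle plus a self-loop at 1
print(is_closed({1, 2, 3}, R))                  # True
print(is_aperiodic({1, 2, 3}, R))               # True: gcd(1, 3) = 1
print(is_aperiodic({1, 2}, {1: {2}, 2: {1}}))   # False: only a 2-cycle
```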
\section{Conclusions} \label{sec:conclusions}
The paper has taken the first steps towards the development of theoretical foundations for the voting system of liquid democracy based on delegable proxy.
We have pursued two lines of research linked to two interpretations commonly associated to the proxy character of liquid democracy: the delegation of voting right to trustees, vs. the copying of the votes of influencers. The first interpretation has led us to develop a simple model of liquid democracy based on the theory of binary and judgment aggregation. This has allowed us to study liquid democracy as a form of binary aggregation with abstentions. The second interpretation has led us to study liquid democracy through extremely simple models of opinion diffusion corresponding to the Boolean special case of the stochastic processes of opinion diffusion known as DeGroot processes. We have argued that studying aggregation in liquid democracy through this lens offers important advantages with respect to the handling of delegation cycles and the preservation of individual rationality. Through this second perspective we have also shown how off-the-shelf logical techniques can be used to analyze properties (such as convergence) of the diffusion process underpinning liquid democracy.
\section{Liquid Democracy as Binary Opinion Diffusion} \label{sec:diffusion}
Proxy voting can also be studied from a different perspective. Imagine a group where, for each issue $p$, each agent copies the ${\bf 0},{\bf 1}$ opinion of a unique personal ``guru''. Imagine that this group does so repeatedly until all agents (possibly) reach a stable opinion. These new stable opinions can then be aggregated as the `true' opinions of the individuals in the group. The collective opinion of a group of agents who either express a ${\bf 0},{\bf 1}$ opinion or delegate to another agent is (for one man---one vote proxy aggregators) the same as the output obtained from a vote where each individual has to express a ${\bf 0},{\bf 1}$ opinion but gets there by copying the opinion of some unique ``guru'' (possibly themselves).
In this perspective, a proxy voting aggregation can be assimilated to a (converging) process of opinion formation.
The above interpretation of liquid democracy is explicitly put forth in \cite{liquid_feedback}:
\begin{quote}
``While one way to describe delegations is the transfer of voting weight to another person, you can alternatively think of delegations as automated copying of the ballot of a trustee.
While at assemblies with voting by a show of hands it is naturally possible to copy the vote of other people, in Liquid Democracy this becomes an intended principle'' \cite[p. 22]{liquid_feedback}.
\end{quote}
The current section develops an analysis of this interpretation, and highlights some of its advantages over the delegation-based one studied earlier.
\subsection{Binary aggregation and binary influence}
The section develops a very simple model of binary influence based on the standard framework of binary aggregation (see Appendix \ref{appendix:binary} for a concise presentation). For simplicity, in this section we assume that agents are not allowed to abstain, although this is not a crucial assumption for the development of our analysis.
\subsubsection{DeGroot Processes and Opinion Diffusion}
In \cite{Degroot_1974}, DeGroot proposes a simple model of step-by-step opinion change under social influence. The model combines two types of matrices. Assuming a group of $n$ agents, a first $n\times n$ matrix represents the weighted influence network (who influences whom and how much), and a second $n \times m$ matrix represents the probability assigned by each agent to each of the $m$ different alternatives. Both the agents' opinion and the influence weights are taken within $[0,1]$ and are (row) stochastic (each row sums up to $1$). Given an opinion and an influence matrix, the opinion of each agent in the next time step is obtained through linear averaging.
Here we focus on a specific class of opinion diffusion processes in which opinions are binary, and agents are influenced by exactly one influencer, possibly themselves, of which they copy the opinion.
The model captures a class of processes which lies at the interface of two classes of diffusion models that have remained so far unrelated: the stochastic opinion diffusion model known as DeGroot's \cite{Degroot_1974}, and the more recent propositional opinion diffusion model due to \cite{Grandi:2015:POD:2772879.2773278}. The diffusion processes underpinning liquid democracy---which we call here Boolean DeGroot processes (BDPs)---are the $\set{0,1}$ special case of the DeGroot stochastic processes and, at the same time, the special case of propositional opinion diffusion processes where each agent has access to the opinion of exactly one neighbor (cf. Figure \ref{figure:intersection}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\columnwidth]{BDPs1}
\caption{{{BDPs lie in the intersection of DeGroot processes and propositional opinion diffusion processes.\label{figure:intersection}%
}%
}}
\end{center}
\end{figure}
\subsubsection{Boolean DeGroot processes}
Here we focus on the Boolean special case of a DeGroot process, showing its relevance for the analysis of liquid democracy. Opinions are defined over a BA structure, and hence are taken to be binary. Similarly, we take influence to be modeled by the binary case of an influence matrix. Influence is of an ``all-or-nothing'' type, and each agent is therefore taken to be influenced by exactly one agent, possibly itself. The graph induced by such a binary influence matrix (called \emph{influence graph}) is therefore a structure $G = \tuple{N, R}$ where $R \subseteq N^2$ is a binary relation, with $i R j$ taken here to denote that ``$i$ is influenced by $j$''. Such a relation is serial ($\forall i\in N, \exists j \in N: i R j$) and functional ($\forall i,j,k \in N$ if $i R j$ and $i R k$ then $j = k$). So each agent $i$ has exactly one successor (the influencer), possibly itself, which we denote $R(i)$. It should be clear that influence graphs are the same sort of structures we studied earlier in Section \ref{sec:proxy} under the label `delegation graph'.
An \emph{influence profile} ${\bf G}=(G_1,\dots,G_m)$ records how each agent is influenced by each other agent, with respect to each issue $p \in {\bf P}$. Given a profile ${\bf G}$, the $i^{\mathit{th}}$ projection $G_i$ denotes the influence graph for issue $p_i$, also written $G_p$.
\medskip
So let us define the type of opinion dynamics driving BDPs:
\begin{definition}
[BDP] \label{def:BDP}
Fix an opinion profile $\O$ and an influence profile ${\bf G}$. Consider the stream $\O^0, \O^1, \ldots, \O^n, \ldots$ of opinion profiles recursively defined as follows:
\begin{itemize}
\item Base: $\O^0 := \O$
\item Step: for all $i \in N$, $p\in {\bf P}$, $O_i^{n+1}(p) := O^{n}_{R_p(i)}(p)$.
\end{itemize}
where $G_p = \tuple{N, R_p}$.
We call processes defined by the above dynamics \emph{Boolean DeGroot processes} (BDPs).
\end{definition}
It should be clear that the above dynamics is the extreme case of linear averaging applied on binary opinions and binary influence.
As noted above, BDPs are also the special case of processes that have recently been proposed in the multi-agent systems literature as \emph{propositional opinion diffusion} processes \cite{Grandi:2015:POD:2772879.2773278}, i.e., cases where 1) the aggregation rule is the unanimity rule (an agent adopts an opinion if and only if all her influencers agree on it), and 2) each agent has exactly one influencer. We will come back to the link with propositional opinion diffusion in some more detail later in Section \ref{sec:coloring}.
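A BDP step is just opinion copying along the unique influence edge. The following sketch (our encoding; all names are ours) simulates the stream of Definition \ref{def:BDP} for a single issue and detects convergence or a loop:

```python
def bdp_step(O, R):
    """O: dict agent -> 0/1 opinion; R: dict agent -> unique influencer R(i)."""
    return {i: O[R[i]] for i in O}

def bdp_stream(O, R, max_steps=50):
    """Iterate until the profile repeats: a fixpoint (convergence) or a loop."""
    seen = [dict(O)]
    for _ in range(max_steps):
        O = bdp_step(O, R)
        if O == seen[-1]:
            return seen + [O], True      # converged
        if O in seen:
            return seen + [O], False     # entered a loop
        seen.append(O)
    return seen, False

# Chain 3 -> 2 -> 1 with a self-loop at 1: everyone ends up copying agent 1.
R = {1: 1, 2: 1, 3: 2}
stream, converged = bdp_stream({1: 1, 2: 0, 3: 0}, R)
print(converged, stream[-1])   # True {1: 1, 2: 1, 3: 1}
```

On a $2$-cycle with disagreeing opinions, the same simulation reports a loop instead of convergence, anticipating the characterization results below.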
\subsection{Convergence of BDPs} \label{sec:convergence}
When do the opinions of a group of individuals influencing each other stabilize? Conditions have been given, in the literature, for the general paradigms of which BDPs are limit cases. This section introduces the necessary graph-theoretic notions and briefly recalls those results before giving a characterization of convergence for BDPs.
\subsubsection{Preliminaries}
We start with some terminology.
We say that the stream of opinion profiles $\O^0, \O^1, \ldots, \O^n, \ldots$ {\em converges} if
there exists $n \in \mathbb{N}$ such that for all $m\in \mathbb{N}$, if $m\geq n$, then $\O^m = \O^n$.
We will also say that a stream of opinion profiles converges {\em for issue} $p$ if
there exists $n \in \mathbb{N}$ such that, for all $m\in \mathbb{N}$, if $m\geq n$, then $\O^m (p) = \O^n(p)$.
Given a stream of opinion profiles starting at $\O$ we say that agent $i \in N$ stabilizes in that stream for issue $p$ if there exists $n \in \mathbb{N}$ such that $O^n_i(p) = O^{m}_i(p)$ for any $m > n$. So a BDP on influence graph ${\bf G}$ starting with the opinion profile $\O$ is said to converge if the stream $\O^0, \O^1, \ldots, \O^n, \ldots$ generated according to Definition \ref{def:BDP} where $\O = \O^0$ converges. Similarly, a BDP is said to converge for issue $p$ if its stream converges for $p$, and an agent $i$ in the BDP is said to stabilize for $p$ if it stabilizes for $p$ in the stream generated by the BDP.
\medskip
Notice first of all that influence graphs have a special shape:\footnote{Please consult Appendix \ref{appendix:graph} for the relevant terminology from graph theory.}
\begin{fact}
\label{fact:uniquecycle}
Let $G$ be an influence graph and $C$ be a connected component of $G$.
Then $C$ contains exactly one cycle, and the set of nodes in the cycle is closed.
\end{fact}
\begin{proof}
Assume that $C$ does not contain any cycle. Since $N$ is finite and since no path can repeat any node, any path in $C$ is finite too. Let $i$ be the last element of (one of) the longest path(s) in $C$. Then $i$ does not have any successor, which contradicts seriality. So $C$ contains at least one cycle.
Let $S$ be the set of nodes of a cycle in $C$. Assume that $S$ is not closed: for some $i\in S$ and $j\notin S$, $iRj$. Since $S$ is a cycle, there is also some $k\in S$ such that $iRk$, which contradicts functionality. Therefore, the nodes of any cycle in $C$ form a closed set.
Now assume that $C$ contains more than one cycle. Since the nodes of each cycle form a closed set, there is no path connecting any node inside a cycle to any node in any other cycle, which contradicts connectedness. So $C$ contains a unique cycle, whose nodes form a closed set.
\end{proof}
Intuitively, influence graphs of BDPs thus look like sets of confluent chains flowing into common cycles.
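This shape can also be exploited algorithmically: following the unique successor from any node must eventually revisit a node, and the revisited segment is precisely the component's unique cycle. A minimal sketch (our encoding; names are ours):

```python
def find_cycle(R, start):
    """R: dict node -> unique successor. Returns the unique cycle
    reachable from start in a serial, functional graph."""
    seen = {}                      # node -> step at which it was first visited
    i, step = start, 0
    while i not in seen:
        seen[i] = step
        i, step = R[i], step + 1
    # i is the first repeated node; the cycle is everything visited
    # from i's first occurrence onwards.
    return [n for n, s in seen.items() if s >= seen[i]]

R = {1: 2, 2: 3, 3: 2, 4: 1}       # one component whose cycle is (2 3)
print(find_cycle(R, 4))            # [2, 3]
```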
\subsubsection{Context: convergence in DeGroot processes}
For the general case of DeGroot processes, an influence structure guarantees that any distribution of opinions will converge if and only if ``every set of nodes that is strongly connected and closed is aperiodic" \cite[p.233]{jackson08social}.
In the propositional opinion diffusion setting, sufficient conditions for stabilization have been given by \cite[Th. 2]{Grandi:2015:POD:2772879.2773278}: on influence structures containing cycles of size at most one (i.e., only self-loops), for agents using an aggregation function satisfying (ballot-)monotonicity and unanimity\footnote{Notice that the rule underpinning BDP, that is the `guru-copying' rule on serial and functional graphs, trivially satisfies those constraints.}, opinions will always converge in at most $k+1$ steps, where $k$ is the diameter of the graph.\footnote{A second sufficient condition for convergence is given by \cite{Grandi:2015:POD:2772879.2773278}: when agents use the unanimity aggregation rule, on irreflexive graphs with only vertex-disjoint cycles, such that for each cycle there exists an agent who has at least two influencers, opinions converge after at most $N$ steps. Note that no BDP satisfies this second condition.} The results below show how BDPs are an interesting limit case of both DeGroot and propositional opinion diffusion processes.
\subsubsection{Two results}
It must be intuitively clear that non-convergence in a BDP is linked to the existence of cycles in the influence graphs. However, from the above observation
(Fact~\ref{fact:uniquecycle}), we know that nodes in a cycle cannot have any influencers outside this cycle, and hence that cycles (including self-loops) can only occur at the ``tail'' of the influence graph.
Hence, if the opinions in the (unique) cycle do not converge, which can only happen in a cycle of length $\geq 2$, the opinions of the whole population in the same connected component will not converge. The above implies that for any influence graphs with a cycle of length $\geq 2$, there exists a distribution of opinions which loops.
This brings us back to the convergence result for general (not necessarily Boolean) DeGroot processes. Indeed, for functional and serial influence graphs, a closed connected component is aperiodic if and only if its cycle is of length $1$.
\begin{fact}
\label{fact:influence}
Let ${\bf G}$ be an influence profile. Then the following are equivalent:
\begin{enumerate}
\item The BDP converges for any opinion profile $\O$ on ${\bf G}$.
\item For all $p\in {\bf P}$, $G_{p}$ contains no cycle of length $\geq 2$.
\item For all $p\in {\bf P} $, all closed connected components of $G_{p}$ are aperiodic.
\end{enumerate}
\end{fact}
\begin{proof}
\fbox{$2) \Rightarrow 1)$}
Let $p\in{\bf P}$ and assume that $G_p$ contains no cycle of length $\geq 2$ and has diameter $k$. Let $C_p$ be a connected component of $G_p$. By Fact \ref{fact:uniquecycle}, $C_p$ contains a unique cycle, which, by assumption, is of length $1$. Hence, $C_p$ is aperiodic. Let $i$ be the node in the cycle. The opinion of $i$ will spread to all nodes in $C_p$ after at most $k$ steps. Therefore, all BDPs on ${\bf G}$ will converge after at most $l$ steps, where $l$ is the maximum within the set of diameters of $G_p$ for all $p\in{\bf P}$.
\fbox{$1) \Rightarrow 3)$} We proceed by contraposition.
Assume that for some $p\in{\bf P}$, a connected component $C_p$ of $G_p$ contains a cycle of length $k\geq 2$. By Fact \ref{fact:uniquecycle}, this cycle is unique, and therefore the greatest common divisor of the cycle lengths of $C_p$ is $k$, so $C_p$ is not aperiodic.
Let $S$ be the set of nodes in the cycle.
Let $\O$ be such that for some $i,j\in S$ with distance $d$ from $i$ to $j$, $O_i(p)\neq O_j(p)$. Then $O_i(p)$ will not converge, but enter a loop of size $k$: for all $x\in\mathbb{N}$, $O^{x\times k}_i(p) \neq O^{(x\times k)+d}_i(p)$. Hence, $\O$ does not converge.
\fbox{$3) \Rightarrow 2)$} Trivial.
\end{proof}
It is worth noticing that one direction (namely from $3$ to $1$) of the above result is actually a corollary of both the convergence result for DeGroot processes stated at the beginning of this section (cf. \cite{jackson08social}), and of a known convergence result for propositional opinion diffusion \cite[Th. 2]{Grandi:2015:POD:2772879.2773278}, also stated earlier.
\medskip
The above gives a characterization of the class of influence profiles on which \emph{all} opinion streams converge. But we can aim at a more general result, characterizing the class of pairs of opinion and influence profiles which lead to convergence:
\begin{theorem}
\label{theorem:opinion}
Let ${\bf G}$ be an influence profile and $\O$ be an opinion profile. Then the following statements are equivalent:
\begin{enumerate}
\item The BDP converges for $\O$ on ${\bf G}$.
\item For all $p\in {\bf P}$, there is no set of agents $S\subseteq N$ such that: $S$ is a cycle in $G_{p}$ and there are two agents $i,j\in S$ such that $O_i(p)\neq O_j(p)$.
\end{enumerate}
\end{theorem}
\begin{proof}
\fbox{$1) \Rightarrow 2)$} We proceed by contraposition.
Let $p\in{\bf P}$, $S\subseteq N$ be a cycle in $G_p$, $i,j\in S$, and $O_i(p)\neq O_j(p)$. Let $k$ be the length of the cycle and $d$ be the distance from $i$ to $j$. Then $O_i(p)$ will enter a loop of size $k$: for all $x\in\mathbb{N}$, $O^{x\times k}_i(p)\neq O^{(x\times k)+d}_i(p)$.
\fbox{$2) \Rightarrow 1)$}
Let $S\subseteq N$ be a cycle in $G_p$ such that for all $i,j\in S$, $O_i(p)=O_j(p)$. Then, for any $i\in S$: for all $j\in S$ and all $x\in\mathbb{N}$, $O^{x}_j(p)=O_i(p)$; moreover, for all $f\in N\setminus S$ with distance $d$ from $f$ to $i$ and all $x\in\mathbb{N}$ such that $x\geq d$, $O^{x}_f(p)=O_i(p)$. Hence the stream converges.
\end{proof}
This trivially implies that the class of opinion profiles which guarantees convergence for {\em any} influence profile is the one where everybody agrees on everything already. Note that the only stable distributions of opinions are the ones where, in each connected component in $G$, all members have the same opinion, i.e., in BDPs, converging and reaching a consensus (within each connected component) are equivalent, unlike in the stochastic case. Moreover, for an influence profile where influence graphs have at most diameter $d$ and the smallest cycle in components with diameter $d$ is of length $c$, it is easy to see that if a consensus is reached, it will be reached in at most $d-c$ steps, which is at most $n-1$.
\medskip
Finally observe that Theorem \ref{theorem:opinion} subsumes Fact \ref{fact:influence}. If $G_p$ contains only cycles of length $1$ (second statement in Fact \ref{fact:influence}) then, trivially, no two agents in a cycle can disagree (second statement in Theorem \ref{theorem:opinion}).
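The condition of Theorem \ref{theorem:opinion} is directly checkable: for each issue it suffices to verify that every influence cycle is unanimous. A sketch for a single issue (our encoding; all names are ours):

```python
def find_cycle(R, start):
    """The unique cycle reachable from start in a serial, functional graph."""
    seen = {}
    i, step = start, 0
    while i not in seen:
        seen[i] = step
        i, step = R[i], step + 1
    return {n for n, s in seen.items() if s >= seen[i]}

def bdp_converges(O, R):
    """True iff every cycle of R is unanimous in O (Theorem 2, one issue)."""
    return all(len({O[j] for j in find_cycle(R, i)}) == 1 for i in R)

R = {1: 2, 2: 1, 3: 1}                          # a 2-cycle (1 2) plus agent 3
print(bdp_converges({1: 1, 2: 1, 3: 0}, R))     # True: the cycle is unanimous
print(bdp_converges({1: 1, 2: 0, 3: 0}, R))     # False: disagreement in the cycle
```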
\subsubsection{Liquid Democracy as a BDP}
We have seen (Section \ref{sec:proxy}) that each proxy profile $\O$ induces what we called a delegation graph $G^\O = \tuple{N, R_p}$ for each issue $p$. Delegation graphs are the same sort of structures we referred to in the current section as influence graphs. So each proxy profile $\O$ can be associated to a BDP by simply assigning random ${\bf 0}$ or ${\bf 1}$ opinions to each voter delegating her vote in $\O$. It is then easy to show that for each connected component $C$ of $G^\O$, if $C$ has a guru with opinion $x$, then that component stabilizes in the BDP on opinion $x$ for each assignment of opinions to the delegating agents in $\O$. Vice versa, if $C$ stabilizes on value $x$ in the BDP for each assignment of opinions to the delegating agents in $\O$, then $C$ has a guru whose opinion is $x$. This establishes a direct correspondence between voting with delegable proxy and Boolean DeGroot processes. However, BDPs offer an interesting and novel angle on the issue of cyclical delegations, to which we turn now.
\subsubsection{Cycles}
As discussed earlier (Section \ref{sec:proxyabs}), cycles are a much discussed issue in liquid democracy. Its proponents tend to dismiss delegation cycles as a non-issue: since the agents forming a cycle delegate their votes, none of them is casting a ballot, and the cycles get resolved essentially by not counting the opinions of the agents involved in the cycle \cite{liquid_feedback}. We argued that this solution is problematic under the `vote-delegation' interpretation of liquid democracy, as it has the potential to discard large numbers of opinions. The elimination of cycles not only hides from aggregation the opinions of the agents involved in cycles, but also the opinions of agents that may be linked to any of those agents by a delegation path. In other words, information about entire connected components in the delegation graph may be lost.
We argue that the `vote-copying' interpretation of the system---formalized through BDPs---offers novel insights into possible approaches to cycles in delegable proxy. Theorems \ref{theorem:opinion} and \ref{theorem:mu} offer an alternative solution by showing that not all cycles are necessarily bad news for convergence: cycles in which all agents agree still support convergence of opinions, and therefore a feasible aggregation of opinions by proxy. This suggests that alternative proxy voting mechanisms could be designed based on opinion convergence behavior rather than on weighted voting.
\subsection{Excursus: unanimity and 2-colorability}\label{sec:coloring}
In the above, we have worked at the intersection of two models of opinion diffusion, the DeGroot model, and the propositional opinion diffusion model. However, there is more to say about how the two frameworks relate.
Let us take a brief detour towards a generalisation of BDPs corresponding to the case of propositional opinion diffusion with the unanimity rule, where agents can have several influencers and change their opinions only if all their influencers disagree with them. This means that we relax the functionality constraint on influence graphs. We will show how the two frameworks meet again: some non-stabilizing opinion cases under the unanimity rule correspond to a special class among the `semi-Boolean' cases of DeGroot processes where opinions are still binary but influence does not need to be.
We define the dynamics of opinions under the unanimity rule in the obvious way:
\begin{definition}
[UP]
Fix an opinion profile $\O$ and a (serial but non-necessarily functional) influence profile ${\bf G}$. Consider the stream $\O^0, \O^1, \ldots, \O^n, \ldots$ of opinion profiles recursively defined as follows:
\begin{itemize}
\item Base: $\O^0 := \O$
\item Step: for all $i \in N$ and all $p \in {\bf P}$:
\begin{align}
O_i^{n+1}(p) & := \left\{
\begin{array}{ll}
O_i^{n}(p) & \mbox{if for some $j,k\in R_p(i)$, $O_j^{n}(p)\neq O_k^{n}(p)$} \\
O_j^{n}(p) & \mbox{otherwise, where $j \in R_p(i)$}
\end{array}
\end{array}
\right.
\end{align}
\end{itemize}
where $G_p = \tuple{N, R_p}$.
We call processes defined by the above dynamics \emph{Unanimity Processes} (UPs).
\end{definition}
We give a sufficient condition for non-convergence of UPs:
\begin{lemma}
\label{lemma:suff.UP}
Let ${\bf G}$ be a (serial and non-necessarily functional) influence profile and $\O$ be an opinion profile such that, for some $p\in {\bf P}$ and some connected component $C$ of $G_p$, for all $i,j\in C$: if $i\in R_p(j)$, then $O_i(p)\neq O_j(p)$.
Then $\O$ does not converge in UP.
\end{lemma}
\begin{proof}
Let ${\bf G}$ be a (serial and non-necessarily functional) influence profile and $\O$ be an opinion profile such that, for some $p\in {\bf P}$ and some connected component $C$ of $G_p$, for all $i,j\in C$: if $i\in R_p(j)$, then $O_i(p)\neq O_j(p)$. Then, by definition of UPs, for all $i\in C$, $O^1_i(p)\neq O_i(p)$, and by repeating the same argument, for all $n\in\mathbb{N}$, $O^{n+1}_i(p)\neq O^n_i(p)$.
\end{proof}
Intuitively, the above condition for non-convergence corresponds to a situation of global maximal disagreement: \emph{all} agents (of a connected component) disagree with \emph{all} their influencers.
Recall that a graph is properly $k$-colored if each node is assigned exactly one among $k$ colors and no node has a successor of the same color, and consider the two possible opinions on issue $p$ as colors. The above result can be reformulated in terms of proper $2$ colorings, as follows: if for some $p\in{\bf P}$, $\O$ properly colors $G_p$, then $\O$ does not converge. In such a case, all agents will change their opinion on $p$ at every step, entering an oscillation of size $2$. So the maximal state of disagreement is the maximally unstable case of the dynamics. Note that this limit case of opinion distribution is yet another special case of DeGroot processes, another example within the intersection between the two frameworks of propositional opinion diffusion and DeGroot.
The possibility of such a distribution of opinions on $p$ relies on the influence graph $G_p$ being $2$-colorable, which is again a requirement about the lengths of its cycles: it is $2$-colorable if and only if it contains no cycle of odd length. However, non $2$-colorability is not a sufficient condition for convergence of UPs in general: a simple cycle of three agents, for instance, is not $2$-colorable but does not guarantee convergence either (as illustrated above with the convergence conditions for BDPs).
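The oscillation is easy to reproduce: on a symmetric pair of mutually influencing agents who disagree (a proper $2$-coloring of their influence graph), the UP dynamics flips both opinions at every step. A minimal self-contained sketch (our encoding):

```python
def up_step(O, R):
    """UP update: change opinion only if all influencers are unanimous."""
    new = {}
    for i in O:
        vals = {O[j] for j in R[i]}
        new[i] = O[i] if len(vals) > 1 else vals.pop()
    return new

O = {1: 0, 2: 1}                 # maximal disagreement: a proper 2-coloring
R = {1: {2}, 2: {1}}             # symmetric influence
O1 = up_step(O, R)
O2 = up_step(O1, R)
print(O1, O2)                    # {1: 1, 2: 0} {1: 0, 2: 1}  (period-2 loop)
```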
Nevertheless, there is a class of influence profiles for which being $2$-colorable is a necessary condition of non-convergence of UPs, the \emph{symmetric} ones:
\begin{lemma}
\label{lemma:symm.opinionUP}
Let ${\bf G}$ be a symmetric (serial and non-necessarily functional) influence profile and $\O$ be an opinion profile. The following statements are equivalent:
\begin{enumerate}
\item $\O$ converges in UP on ${\bf G}$;
\item For all $p\in{\bf P}$ and every connected component $C$ of $G_p$, there are $i,j\in C$ such that $i\in R_p(j)$ and $O_i(p)= O_j(p)$, where $G_p = \tuple{N, R_p}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\fbox{$2) \Rightarrow 1)$} Assume that for any $p\in{\bf P}$ and any connected component $C$ of $G_p$, there exist $i,j\in C$ such that $i\in R_p(j)$ and $O_i(p)= O_j(p)$. By definition of UP, this implies that $O_i(p)$ is stable, and that all agents at distance at most $k$ from $i$ will be stable after at most $k$ steps. \fbox{$1) \Rightarrow 2)$} This follows from Lemma~\ref{lemma:suff.UP}.
\end{proof}
This means that opinions on a given $p$ will converge if and only if two agents influencing each other on $p$ already agree on it. We can therefore, as we did for BDPs, characterize the class of influence profiles for which all (symmetric) opinion profiles converge in UPs:
\begin{theorem}
\label{thm:symm.influenceUP}
Let ${\bf G}$ be a symmetric (serial and non-necessarily functional) influence profile. The following statements are equivalent:
\begin{enumerate}
\item All opinion profiles $\O$, converge in UPs on ${\bf G}$.
\item For all $p\in{\bf P}$ and all connected components $C$ of $G_p$, $C$ is not $2$-colorable (contains cycle(s) of odd length), where $G_p = \tuple{N, R_p}$.
\end{enumerate}
\end{theorem}
\begin{proof}
\fbox{$2) \Rightarrow 1)$} Let $p\in{\bf P}$ and $C$ be a connected component of $G_p$ with diameter $k$. Let $C$ contain a cycle of length $c$, with $c$ odd. Let $\O$ be an arbitrary opinion profile. Since $c$ is odd, there exist two adjacent agents $i,j$ in the cycle such that $j\in R_p(i)$ and $O_i(p)=O_j(p)$. By definition of UP, this implies that $O_i(p)$ is stable, and that all agents at distance at most $k$ from $i$ will be stable after at most $k$ steps. Hence, $\O$ converges. \fbox{$1) \Rightarrow 2)$} This follows from Lemma~\ref{lemma:symm.opinionUP}.
\end{proof}
Note that, while the basic modal language cannot capture graph $2$-colorability, it can capture non $2$-colorability, and can therefore capture the class of symmetric (serial and non-necessarily functional) influence profiles which guarantee convergence of UPs. We leave the details out for space reasons.
We have shown that, for UPs in general, convergence (in a connected component) is not guaranteed if it contains no odd cycles, and that symmetric UPs guarantee convergence as soon as they contain some odd cycle. However, containing an odd cycle is a very ``easy'' requirement for a real-life influence network to meet (it corresponds to a non-zero clustering coefficient). By contrast, recall that BDPs guarantee convergence (on {\em any} opinion profile) only when they contain only cycles of size $1$, which is a rather implausible requirement to be satisfied on real influence networks.
\subsection{BDPs on logically interdependent issues}
So far we have assumed the aggregation to happen on a set of issues without constraint (or rather with $\gamma = \top$). In this subsection we study what happens in the presence of a constraint $\gamma \neq \top$. BDPs on aggregation structures with constraints may lead individuals to update with logically inconsistent opinions. But the diffusion perspective whereby agents copy the opinion of trustees rather than delegating their voting right better lends itself to an assumption of individual rationality.
The following processes are simple adaptations of BDPs where agents update their opinions only if the opinions of their influencers, on the respective issues, are consistent with the constraint.\footnote{Other update policies are of course possible. A recent systematic investigation of opinion diffusion on interconnected issues is \cite{Botan16}.}
\begin{definition}
Fix an opinion profile $\O$, an influence profile ${\bf G}$, and a constraint $\gamma$. Consider the stream $\O^0, \O^1, \ldots, \O^n, \ldots$ of opinion profiles recursively defined as follows:
\begin{itemize}
\item Base: $\O^0 := \O$
\item Step: for all $i \in N$, $p\in {\bf P}$,
\begin{align*}
O_i^{n+1}(p) :=
\left\{
\begin{array}{ll}
O^{n}_{R_p(i)}(p) & \mbox{if } \bigwedge_{p \in {\bf P}} O^{n}_{R_p(i)}(p) \wedge \gamma \mbox{ is consistent} \\
O_i^{n}(p) & \mbox{otherwise}
\end{array}
\right.
\end{align*}
\end{itemize}
where $G_p = \tuple{N, R_p}$.
We call processes defined by the above dynamics \emph{individually rational} BDPs.
\end{definition}
Individually rational BDPs converge in some cases in which BDPs do not. There are cases in which there is disagreement in the cycles but the process still converges, because of the safeguard towards individual rationality built into the dynamics.
\begin{example}
Consider the following example. Let $N = \set{1,2}$, ${\bf P} = \set{p,q}$ and $\gamma = p\leftrightarrow \neg q$. Let then ${\bf G} = \tuple{N, \set{R_p}_{p \in {\bf P}}}$ be as follows: $1R_q1$, $2R_q2$, $1R_p2$ and $2R_p1$. Finally let $\O$ be such that $O_1(p) = O_2(q) = {\bf 1}$, $O_2(p) = O_1(q) = {\bf 0}$. Voters $1$ and $2$ form a non-unanimous cycle, but $\O$ is a stable opinion profile.
\end{example}
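As a sanity check, the update rule of the definition above and the two-agent example can be simulated in a few lines of Python. The encoding of opinions as dictionaries and of $\gamma$ as a Boolean predicate on complete assignments is our own illustrative choice, not part of the formal framework:

```python
def ir_bdp_step(profile, influencers, gamma):
    """One step of an individually rational BDP.

    profile:     dict agent -> dict issue -> 0/1
    influencers: dict issue -> dict agent -> that agent's influencer
    gamma:       predicate on complete assignments (the constraint)
    """
    new = {}
    for i in profile:
        # the opinions i would copy, issue by issue, from its influencers
        copied = {p: profile[influencers[p][i]][p] for p in profile[i]}
        # update only if the copied opinions are jointly consistent with gamma
        new[i] = copied if gamma(copied) else dict(profile[i])
    return new

# Example from the text: N = {1,2}, issues p,q, constraint p <-> not q
gamma = lambda o: o['p'] == 1 - o['q']
influencers = {'p': {1: 2, 2: 1}, 'q': {1: 1, 2: 2}}
profile = {1: {'p': 1, 'q': 0}, 2: {'p': 0, 'q': 1}}
# the profile is stable despite disagreement in the p-cycle
assert ir_bdp_step(profile, influencers, gamma) == profile
```

Here both agents would copy a $\gamma$-inconsistent joint opinion, so neither updates, which is exactly why the profile is stable.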
The example shows that direction $1)\Rightarrow 2)$ of Theorem \ref{theorem:opinion} does not hold for individually rational BDPs: some individually rational BDPs may stabilize even in the presence of disagreement within a cycle. Intuitively, individually rational BDPs that stabilize despite disagreement within cycles do so because their cycles are not ``synchronized''. In the above example, given the constraint $p\leftrightarrow\neg q$, the only way to obtain stabilization starting from a situation respecting the constraint is to have a cycle of influence for $q$ which goes `in the opposite direction' from the one for $p$; all other cases would amount to violating the constraint.
\medskip
Beyond this simple example, we want to find out what happens with more complex constraints, and under which conditions individually rational BDPs converge. Let us first show that direction $2)\Rightarrow 1)$ of Theorem \ref{theorem:opinion} still holds, that is, individually rational BDPs without disagreement in their cycles always converge:
\begin{theorem}
\label{theorem:resistantsufficient}
Let ${\bf G}$ be an influence profile, $\O$ be an opinion profile, and $\gamma$ a constraint. Then the following holds: {\em if} for all $p\in {\bf P}$, for all $S\subseteq N$ such that $S$ is a cycle in $G_{p}$, and all $i,j\in S$: $O_i(p)=O_j(p)$, {\em then} the individually rational BDP for $\O$, ${\bf G}$ and $\gamma$ converges in at most $k$ steps, where $k\leq \max\{\mathit{diam}(G_p)\mid p\in {\bf P}\}$.
\end{theorem}
\begin{proof}
Assume that for all $p\in {\bf P}$, for all $S\subseteq N$ such that $S$ is a cycle in $G_{p}$, for all $i,j\in S$: $O_i(p)=O_j(p)$.
Consider an arbitrary $i\in N$.
Let $k_i(p)$ be the distance from $i$ to the closest agent in a cycle of $G_p$, and let $k_i$ denote $\max\{k_i(p)\mid p\in {\bf P}\}$. We show, by induction on $k_i$, that $O^{k_i}_i$ is stable.
\begin{itemize}
\item If $k_i=0$: for every $p$, $i$ belongs to a cycle of $G_p$, so by assumption all of $i$'s influencers share $i$'s opinions on the respective issues; therefore $O^0_{i}$ is stable.
\item If $k_i=n+1$: Assume that for all agents $j$ such that $k_j\leq n$, $O^{k_j}_j$ is stable. This implies that all influencers of $i$ are stable from stage $n$ onwards. We need to consider the following cases:
\begin{enumerate}
\item If $\bigwedge_{q \in {\bf P}} O^{n}_{R_q(i)}(q) \wedge \gamma$ is not consistent, then, since the influencers of $i$ are stable, it never will be: $O^{n}_i$ is stable, and a fortiori so is $O^{n+1}_i$.
\item If $\bigwedge_{q \in {\bf P}} O^{n}_{R_q(i)}(q) \wedge \gamma$ is consistent, then $i$ copies its influencers' opinions at stage $n+1$ and these no longer change: $O^{n+1}_i$ is stable.
\end{enumerate}
\end{itemize}
This completes the proof.
\end{proof}
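The sufficient condition of the theorem can also be checked mechanically. The sketch below is our own encoding: influence graphs are given as successor maps (exploiting functionality), `cycle_of` finds the unique cycle an agent leads to, and `cycles_unanimous` tests the theorem's antecedent:

```python
def cycle_of(succ, i):
    # In a functional graph every agent reaches a unique cycle:
    # follow the successor map until a node repeats.
    seen, j = [], i
    while j not in seen:
        seen.append(j)
        j = succ[j]
    return seen[seen.index(j):]

def cycles_unanimous(profile, influencers):
    # Antecedent of the theorem: on every issue p, all agents lying
    # on a cycle of G_p share the same opinion on p.
    for p, succ in influencers.items():
        for i in succ:
            if len({profile[j][p] for j in cycle_of(succ, i)}) > 1:
                return False
    return True
```

On the two-agent example above, `cycles_unanimous` returns `False` even though the individually rational BDP is stable, illustrating that the condition is sufficient but not necessary.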
\subsection{Section Summary}
In this section we studied a very simple class of opinion diffusion processes on networks (Boolean DeGroot processes, BDPs), which precisely capture the vote-copying behavior suggested by a standard interpretation of the liquid democracy system. Interestingly, these processes lie at the interface of two so-far unconnected network diffusion models: the well-known DeGroot processes---of which BDPs constitute the binary special case---and propositional opinion diffusion processes---of which BDPs constitute the special case where the set of neighbors is a singleton.
We established necessary and sufficient conditions for convergence, which can be captured in modal fixpoint logics as we will show in the next section. We argued that these results provide a novel angle on the issue of delegation cycles in liquid democracy.
There are a number of further questions concerning, especially, individually rational BDPs that we leave for future investigations: What are the necessary conditions for their stabilization? What opinions are reachable? And, in particular, when is a consensus reached? Finally, one could consider other types of influence policies than the one used in individually rational BDPs. For instance, agents may be allowed to `pass through' an inconsistent state at some point, in which case one can wonder under which conditions the process can still converge to a consistent state. Indeterministic policies would also make sense, where an agent confronted with inconsistent opinions from her influencers adopts one of the closest consistent opinion sets, rather than not being influenced at all (cf. \cite{Botan16}).
\section{Introduction}
Liquid democracy \cite{liquid_feedback} is a form of democratic decision-making considered to stand between direct and representative democracy. It has been used, advocated and popularized by local and even national parties (e.g., Demoex\footnote{\url{demoex.se/en/}} in Sweden, and Piratenpartei\footnote{\url{www.piratenpartei.de}} in Germany) to coordinate the behavior of party representatives in assemblies, as well as by campaigns (e.g., Make Your Laws\footnote{\url{www.makeyourlaws.org}} in the US). At its heart is voting via a delegable proxy, sometimes also called transitive proxy. For each issue submitted to vote, each agent can either cast its own vote, or delegate its vote to another agent---a proxy---who can delegate in turn to yet another agent, and so on. This differentiates liquid democracy from standard proxy voting \cite{Miller_1969,Tullock_1992}, where proxies cannot delegate their vote further. Finally, the agents that decided not to delegate their votes cast their ballots (e.g., under majority rule, or adaptations thereof), but their votes now carry a weight consisting of the number of all agents that, directly or indirectly, entrusted them with their vote.
\paragraph{Scientific context and contribution}
Analyses of standard (non-delegable) proxy voting from a social choice-theoretic perspective---specifically through the theory of spatial voting---have been put forth in \cite{Alger_2006} and \cite{Green_Armytage_2014}. Delegable proxy has not, to the best of our knowledge, been object of study so far, with the notable exception of \cite{Boldi_2011} which focuses specifically on algorithmic aspects of a variant of liquid democracy (which the authors refer to as {\em viscous democracy}) with applications to recommender systems.
The objective of the paper is to provide a first analysis, via formal methods, of the liquid democracy voting system based on delegable proxy. This, we hope, should point to a number of future lines of research and stimulate further investigations into this and related systems.
\paragraph{Outline}
The paper starts in Section \ref{sec:preliminaries} by introducing some preliminaries on the theory of binary aggregation, which is the framework of reference for this study; this preliminary section also presents novel results on binary aggregation with abstentions. The paper is then structured in two parts. The first part (Section \ref{sec:proxy}) studies voting in liquid democracy from the point of view of the delegation of voting power: we study delegable proxy aggregators using the machinery of binary and judgment aggregation. This allows us to shed novel light on some issues involved in the liquid democracy system, in particular: the issue of circular delegation, and the issue of individual irrationality when voting on logically interdependent issues. The second part (Sections \ref{sec:diffusion} and \ref{sec:logic}) studies voting in liquid democracy as a very specific type of opinion diffusion on networks, whereby delegation is rather interpreted as the willingness to copy the vote of a trustee. We show that this perspective provides some interesting insights on how to address the above-mentioned issues of circular delegation and individual irrationality. Section \ref{sec:conclusions} concludes the paper and outlines some ongoing lines of research.
\section{Fixpoint Logics for BDPs} \label{sec:logic}
In this section we show how a well-established logic for formal verification can be readily used to specify and reason about properties of BDPs, and in particular their convergence. The logic is the so-called $\mu$-calculus. This points to a so-far unexplored interface between fixpoint logics and models of opinion dynamics---like the DeGroot model and propositional opinion diffusion. The section takes some first steps in that direction along the lines of another recent work \cite{JvBoscillations}, where the $\mu$-calculus, and extensions thereof, have been applied to the study of dynamical systems.
\subsection{Influence graphs as Kripke models}
We treat influence graphs as Kripke (multi-relational) models \cite{Seligmanetal:synthese,Christoff_2015}.
\begin{definition}
We call an {\em influence model} a tuple $\mathcal{M} = \tuple{N, {\bf G}, \O}$ where ${\bf G}=(G_{p_1},\dots,G_{p_m})$ is an influence profile, and $\O: {\bf P} \longrightarrow 2^N$ is an opinion profile over ${\bf P}$, that is, a valuation function.
\end{definition}
One can therefore easily interpret a modal language over influence models, where modalities are interpreted on the accessibility relations in ${\bf G}$. That is, to each graph $G_p$ we associate modalities $\square{p}$ and $\lozenge{p}$. We will give the details below, but let us immediately note that the class of (possibly infinite) influence graphs would then be characterized by the following properties, for any $p \in {\bf P}$:
\begin{align}
\square{p} \phi \rightarrow \lozenge{p} \phi & & \mbox{(seriality)} \\
\lozenge{p} \phi \rightarrow \square{p} \phi & & \mbox{(functionality)}
\end{align}
More precisely, for any influence profile ${\bf G}=(G_{p_1},\dots,G_{p_m})$, formula $\square{p_i} \phi \rightarrow \lozenge{p_i} \phi$ (respectively, $\lozenge{p_i} \phi \rightarrow \square{p_i} \phi$) is valid on such a graph---that is, true in any pointed influence model built on it---if and only if each $G_{p_i}$ consists of a serial (respectively, functional) relation.\footnote{These are known results from modal correspondence theory (cf. \cite{Blackburn_2001}).} Put otherwise, on serial and functional graphs the modal box and diamond are equivalent.
\subsection{Modal $\mu$-calculus}
The language of the $\mu$-calculus expands the basic modal language with a least fixpoint operator $\mu$. Here is the BNF of the language:
\[
\L^\mu: \phi ::= p \mid \bot \mid \neg \phi \mid \phi \land \phi \mid \lozenge{p} \phi \mid \mu p. \phi(p)
\]
where $p$ ranges over ${\bf P}$ and $\phi(p)$ indicates that $p$ occurs free in $\phi$ (i.e., it is not bound by fixpoint operators) and under an even number of negations.\footnote{This syntactic restriction guarantees that every formula $\phi(p)$ defines a set transformation which preserves $\subseteq$, which in turn guarantees the existence of least and greatest fixpoints by the Knaster-Tarski fixpoint theorem (cf. \cite{Stirling_2001}).} In general, the notation $\phi(\psi)$ indicates that $\psi$ occurs in $\phi$. The usual definitions for Boolean and modal operators apply. Intuitively, $\mu p. \phi(p)$ denotes the smallest formula $p$ such that $p \leftrightarrow \phi(p)$. The greatest fixpoint operator $\nu$ can be defined from $\mu$ as follows: $\nu p. \phi(p) := \neg \mu p. \neg \phi( \neg p)$.
We interpret $\L^\mu$ on influence models as follows:
\begin{definition}
Let $\phi \in \L^\mu$. The satisfaction of $\phi$ by a pointed influence model $(\mathcal{M}, i)$ is inductively defined as follows:
\begin{align*}
\mathcal{M}, i \not\models \bot & \\
\mathcal{M}, i \models p \ & \Longleftrightarrow i \in \O(p), \mbox{ for } p \in {\bf P} \\
\mathcal{M}, i \models \neg \phi & \Longleftrightarrow i \not\in \true{\phi}_\mathcal{M} \\
\mathcal{M}, i \models \phi_1 \wedge \phi_2 & \Longleftrightarrow i \in \true{\phi_1}_\mathcal{M} \cap \true{\phi_2}_\mathcal{M} \\
\mathcal{M}, i \models \lozenge{p} \phi & \Longleftrightarrow i \in \{ j \mid \exists k: j G_p k \ \& \ k \in \true{\phi}_\mathcal{M} \} \\
\mathcal{M}, i \models \mu p. \phi(p) & \Longleftrightarrow i \in \bigcap \{ X \in 2^N \mid \true{\phi}_{\mathcal{M}[p:=X]} \subseteq X \}
\end{align*}
where $\true{\phi}_{\mathcal{M}[p:=X]}$ denotes the truth-set of $\phi$ once $\O(p)$ is set to be $X$. As usual, we say that: $\phi$ is valid in a model $\mathcal{M}$ iff it is satisfied in all points of $\mathcal{M}$, i.e., $\mathcal{M} \models \phi$; $\phi$ is valid in a class of models iff it is valid in all the models in the class.
\end{definition}
We list some relevant known results about $\mathsf{K}^{\mu}$. The logic has a sound and (weakly) complete axiom system \cite{Walukiewicz_2000}. The satisfiability problem of $\mathsf{K}^{\mu}$ is decidable \cite{Streett_1984}. The complexity of the model-checking problem for $\mathsf{K}^{\mu}$ is known to be in NP $\cap$ co-NP \cite{Gr_del_1999}. It is known that the model-checking problem for a formula of size $m$ and alternation depth $d$ on a system of size $n$ can be solved by the natural fixpoint-approximation algorithm with (time) complexity of $O((m \cdot n)^{d+1})$ \cite{Emerson96}, where the alternation depth of a formula of $\L^\mu$ is the maximum number of $\mu/\nu$ alternations in a chain of nested fixpoint subformulas.\footnote{The reader is referred to, e.g. \cite{Emerson_2001}, for the precise definition.} Finally, the $\mu$-calculus is known to be invariant for bisimulation (cf. \cite{Blackburn_2001}). It is actually known to correspond to the bisimulation-invariant fragment of monadic second-order logic \cite{Janin_1996}.
\subsection{On the logic of convergence in BDPs}
Each stream of opinion profiles $\O^0, \O^1, \ldots, \O^n, \ldots$ corresponds to a stream of influence models $\mathcal{M}^0, \mathcal{M}^1, \ldots, \mathcal{M}^n, \ldots$.
From the point of view of an influence model $\mathcal{M} = \tuple{N, {\bf G}, \O}$ the BDP dynamics of Definition \ref{def:BDP} can therefore be recast in terms of updates of the valuation function $\O$ as follows:
\begin{itemize}
\item Base: $\O^0 := \O$
\item Step: $\O^{n+1}(p) := \true{\square{p}p}_{\mathcal{M}^n}$.
\end{itemize}
That is, the interpretation of $p$ at step $n+1$ is the interpretation of $\square{p}p$ at step $n$. Equivalently, the interpretation of $\neg p$ at step $n+1$ is the interpretation of $\square{p}\neg p$ at step $n$.
\begin{lemma}
\label{lemma:stable}
Let $\mathcal{M} = \tuple{N, {\bf G}, \O}$ be an influence model. The two following statements are equivalent:
\begin{enumerate}
\item $i \in N$ is stable for $p$;
\item The pointed model $(\mathcal{M}, i)$ satisfies:\footnote{Notice that $\pm p$ is used as a variable ranging over $\set{p, \neg p}$. Technically the above formula is to be read as a scheme for $\nu x. p \land \square{p} x$ and $\nu x. \neg p \land \square{p} x$.}
\begin{align}
\mathsf{stb}(p) := \nu x. \pm p \land \square{p} x
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
First of all observe that, by the semantics of the $\mu$-calculus, formula $\mathsf{stb}(p)$ denotes the largest fixpoint of the function $\pm p \land \square{p}(\cdot)$, that is, it is equivalent to the formula $\square{p^*} \pm p$, where $\square{p^*}$ is the modal box interpreted over the reflexive and transitive closure of $G_p$.
\fbox{$1) \Rightarrow 2)$} Assume that $i$ is stable for $p$ and suppose towards a contradiction that $\mathcal{M}, i \not\models \mathsf{stb}(p)$. By the above observation, it follows that there exists an agent $j$, reachable from $i$ via a finite $G_p$-path, such that $\O_i (p) \neq \O_j(p)$. By the functionality of influence models and the dynamics of Definition \ref{def:BDP}, at some stage $n$ in the stream of opinion profiles it should then hold that $\O^n_i(p) = \O_j(p)$, against the assumption that $i$ is stable for $p$.
\fbox{$2) \Rightarrow 1)$} Assume $\mathcal{M}, i \models \mathsf{stb}(p)$. By the above observation, this implies that there exists no agent $j$, reachable from $i$ via a finite $G_p$-path, such that $\O_i (p) \neq \O_j(p)$. It follows that in the stream generated by the BDP dynamics $i$ cannot change its opinion, and hence it is stable.
\end{proof}
\begin{theorem}
\label{theorem:mu}
Let $\mathcal{M} = \tuple{N, {\bf G}, \O}$ be an influence model. The two following statements are equivalent:
\begin{enumerate}
\item $i \in N$ stabilizes for issue $p \in {\bf P}$;
\item The pointed model $(\mathcal{M}, i)$ satisfies:
\begin{align}
\mu x. \mathsf{stb}(p) \vee \square{p} x
\end{align}
\end{enumerate}
\end{theorem}
\begin{proof}
First of all observe that, by the semantics of the $\mu$-calculus, $\mu x. \mathsf{stb}(p) \vee \square{p} x$ denotes the smallest fixpoint of the equation $x \leftrightarrow \mathsf{stb}(p) \vee \square{p} x$. By the Knaster-Tarski theorem and the fact that influence models are finite, we can compute this fixpoint as $\bigcup_{0 \leq n < \omega} \true{\mathsf{stb}(p)^n}$ where $\true{\mathsf{stb}(p)^0} = \true{\mathsf{stb}(p) \vee \square{p} \bot}$ (notice that $\square{p} \bot \leftrightarrow \bot$ on influence models) and $\true{\mathsf{stb}(p)^{n+1}} = \true{\mathsf{stb}(p) \vee \square{p}\mathsf{stb}(p)^n}$. So, by Lemma \ref{lemma:stable}, $i$ belongs to $\true{\mu x. \mathsf{stb}(p) \vee \square{p} x}$ if and only if either $i$ is stable for issue $p$ or $i$ has access in a finite number of steps to an agent who is stable for $p$.
\fbox{$1) \Rightarrow 2)$} Assume that $i$ stabilizes for issue $p \in {\bf P}$. So there exists a stage $n$ in the stream of profiles generated through Definition \ref{def:BDP} at which $\O_i^n(p) = \O_i^{m}(p)$ for all $m > n$. By Lemma \ref{lemma:stable}, $\tuple{N, {\bf G}, \O^n}, i \models \mathsf{stb}(p)$. It follows that $i$ is connected through a finite $G_p$-path to an agent $j$ such that $\mathcal{M}, j \models \mathsf{stb}(p)$. By what established above we thus have that $\mathcal{M}, i \models \mu x. \mathsf{stb}(p) \vee \square{p} x$.
\fbox{$2) \Rightarrow 1)$} Assume $\mathcal{M}, i \models \mu x. \mathsf{stb}(p) \vee \square{p} x$. It follows that $i$ is connected through a finite $G_p$-path to an agent $j$ such that $\mathcal{M}, j \models \mathsf{stb}(p)$. By Lemma \ref{lemma:stable}, $j$ is stable, and hence $i$ will stabilize for $p$.
\end{proof}
So the formula that expresses the stabilization of the agents' opinions on one issue is $\mu x. \left(\nu y. \pm p \land \square{p} y \right) \vee \square{p} x$.
Informally, the theorem states that in a BDP an agent reaches a stable opinion if and only if it has an indirect influencer (linked by an influence path) all of whose direct and indirect influencers have the same opinion. Notice that this formula has alternation depth $0$. So an off-the-shelf model-checking algorithm for the $\mu$-calculus can check stabilization in time $O(m \cdot n)$, with $n$ being the size of the model and $m$ the size of the formula.
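The fixpoint approximations used in the proofs above translate directly into a naive model-checking procedure. The following sketch is our own encoding: it relies on the functionality of influence graphs (given as successor maps), computes the truth set of $\mathsf{stb}(p)$ by downward iteration, and that of $\mu x. \mathsf{stb}(p) \vee \square{p} x$ by upward iteration:

```python
def box(succ, X):
    # [p]X on a functional graph: agents whose unique successor is in X
    return {i for i in succ if succ[i] in X}

def stb(profile, succ, p):
    # truth set of stb(p): greatest fixpoint of  x := (+/-)p /\ [p]x ,
    # computed separately for each opinion value and then united
    result = set()
    for val in (0, 1):
        X = {i for i in succ if profile[i][p] == val}
        while True:
            Y = X & box(succ, X)
            if Y == X:
                break
            X = Y
        result |= X
    return result

def stabilizers(profile, succ, p):
    # truth set of  mu x. stb(p) \/ [p]x : least fixpoint, upward iteration
    X = stb(profile, succ, p)
    while True:
        Y = X | box(succ, X)
        if Y == X:
            return X
        X = Y
```

For instance, on the influence graph $3 \to 1 \to 2$ with $2$ pointing at itself, agent $2$ is stable and every agent stabilizes on $p$, as the theorem predicts.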
Since the convergence of the BDP is equivalent to the stabilization of all agents on all issues $p$ (either on $p$ or on $\neg p$), Theorem \ref{theorem:mu} yields the following corollary:
\begin{corollary}
The BDP for an opinion profile $\O$ based on influence profile ${\bf G}$ converges if and only if
\begin{align}
\tuple{N, {\bf G}, \O}, i \models U \left(\bigwedge_{p \in {\bf P}} \mu x. \mathsf{stb}(p) \vee \square{p} x \right)
\end{align}
for any agent $i \in N$, where $U$ denotes the universal modality (cf. \cite{Blackburn_2001}).
\end{corollary}
So the above formula characterizes the property of convergence for a BDP. Since the process of voting in a liquid democracy system can be modeled by a BDP, the formula also characterizes precisely when voting by delegable proxy results in a ${\bf 1}$ or ${\bf 0}$ opinion on a given issue.
\section{Binary Aggregation with Abstention} \label{sec:preliminaries}
The formalism of choice for this paper is binary aggregation \cite{grandi13lifting} with abstention.\footnote{The standard framework of binary aggregation without abstention is sketched in the appendix for ease of reference.} This preliminary section is devoted to its introduction.
\subsection{Opinions and Opinion Profiles}
A binary aggregation structure (\emph{BA structure}) is a tuple $\S = \tuple{N,{\bf P},\gamma}$ where:
\begin{itemize}
\item $N = \set{1,\dots,n}$ is a non-empty finite set of individuals s.t. $|N|= n \in \mathbb{N}$;
\item ${\bf P} = \set{p_1,\dots,p_m}$ is a non-empty finite set of issues ($|{\bf P}|= m \in \mathbb{N}$), each represented by a propositional atom;
\item $\gamma \in \L$ is an (integrity) constraint, where $\L$ is the propositional language constructed by closing ${\bf P}$ under a functionally complete set of Boolean connectives (e.g., $\set{\neg, \wedge}$).
\end{itemize}
An {\em opinion} function $O$ is an assignment of acceptance/rejection values (or, truth values) to the set of issues ${\bf P}$. Thus, $O(p)={\bf 0}$ (respectively, \mbox{$O(p)={\bf 1}$}) indicates that opinion $O$ rejects (respectively, accepts) the issue $p$. Syntactically, the two opinions correspond to the truth of the literals $p$ or $\neg p$. For $p \in {\bf P}$ we write $\pm p$ to denote one element from $\set{p, \neg p}$, and $\pm {\bf P}$ to denote $\bigcup_{p\in{\bf P}} \set{p, \neg p}$, which we will refer to as the {\em agenda} of $\S$. Allowing abstention in the framework of binary aggregation amounts to considering incomplete opinions: an {\em incomplete opinion} is a partial function from ${\bf P}$ to $\set{{\bf 0},{\bf 1}}$. We will study it as a function $O: {\bf P} \rightarrow \set{{\bf 0},{\bf 1}, \ast}$ thereby explicitly denoting the undetermined value corresponding to abstention.
We say that the incomplete opinion of an agent $i$ is \emph{consistent} if the set of formulas $\set{p \mid O_i(p) = {\bf 1}} \cup \set{\neg p \mid O_i(p) = {\bf 0}} \cup \set{\gamma}$ can be extended to a model of $\gamma$ (in other words, if the set is satisfiable). Intuitively, the consistency of an incomplete opinion means that the integrity constraint is consistent with $i$'s opinion on the issues on which she does not abstain. We also say that an incomplete opinion is {\em closed} whenever the following is the case: {\em if} the set of propositional formulas $\set{p \mid O_i(p) = {\bf 1}} \cup \set{\neg p \mid O_i(p) = {\bf 0}} \cup \set{\gamma}$ logically implies $p$ (respectively, $\neg p$), {\em then} $O_i(p) = {\bf 1}$ (respectively, $O_i(p) = {\bf 0}$). That is, individual opinions are closed under logical consequence or, in other words, agents cannot abstain on issues whose acceptance or rejection is dictated by their expressed opinions on other issues. The set of incomplete opinions is denoted $\mathcal{O}^\ast$ and the set of consistent and closed incomplete opinions $\mathcal{O}^\ast_c$. As the latter are the opinions we are interested in, we will often refer to them simply as individual opinions.
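For small agendas, consistency and closure of an incomplete opinion can be checked by brute force over complete extensions. A minimal sketch, in our own encoding, with abstention represented by `None` and $\gamma$ as a predicate on complete assignments:

```python
from itertools import product

def extensions(opinion, issues):
    # all complete assignments extending a partial opinion (None = abstain)
    free = [p for p in issues if opinion[p] is None]
    for bits in product((0, 1), repeat=len(free)):
        ext = dict(opinion)
        ext.update(zip(free, bits))
        yield ext

def is_consistent(opinion, issues, gamma):
    # consistent iff some complete extension satisfies gamma
    return any(gamma(e) for e in extensions(opinion, issues))

def is_closed(opinion, issues, gamma):
    # an abstention on p is licit only if neither p nor its negation is
    # implied, i.e. both values for p occur among gamma-consistent extensions
    for p in (q for q in issues if opinion[q] is None):
        if len({e[p] for e in extensions(opinion, issues) if gamma(e)}) < 2:
            return False
    return True
```

With $\gamma = r \leftrightarrow (p \wedge q)$, the opinion accepting $p$ and $q$ while abstaining on $r$ is consistent but not closed, since $r$ is implied.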
An \emph{opinion profile} $\O = (O_1,\dots,O_{n})$ records the opinion, on the given set of issues, of every individual in $N$. Given a profile $\O$, the $i^{\mathit{th}}$ projection of $\O$ is denoted $O_i$ (i.e., the opinion of agent $i$ in profile $\O$). Let us introduce some more notation.
We also denote by $\O(p)= \set{i \in N \mid O_{i}(p)= {\bf 1}}$ the set of agents accepting issue $p$ in profile $\O$, by $\O(\neg p)= \set{i \in N \mid O_{i}(p)= {\bf 0}}$ and by $\O(\pm p) = \O(p) \cup \O(\neg p)$ the set of non-abstaining agents. We write $\O =_{-i} \O'$ to denote that the two profiles $\O$ and $\O'$ are identical, except for possibly the opinion of voter $i$.
\subsection{Aggregators}
Given a BA structure $\S$, an \emph{aggregator} (for $\S$) is a function $F:(\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$, mapping every profile of individual opinions to one collective (possibly incomplete) opinion.\footnote{It is therefore worth stressing that, in this paper, we study aggregators that are resolute (that is, output exactly one value), even though they allow for collective abstention.} $F(\O)(p)$ denotes the outcome of the aggregation on issue $p$. The benchmark aggregator is the \emph{issue-by-issue strict majority rule} ($\mathsf{maj}$), which accepts an issue if and only if the majority of the non-abstaining voters accept that issue:
\begin{align}\label{eq:majast}
\mathsf{maj}(\O)(p)=
\begin{cases}
{\bf 1} & \mbox{ if } |\O(p)| > |\O(\neg p)|\\
{\bf 0} & \mbox{ if } |\O(\neg p)| > |\O(p)| \\
\ast & \mbox{ otherwise } \\
\end{cases}
\end{align}
We will refer to this rule simply as `majority'.
Majority can be thought of as a
quota rule. In general, quota rules in binary aggregation with abstention are of the form: accept when the proportion of \emph{non-abstaining} individuals accepting is above the acceptance quota, reject when the proportion of \emph{non-abstaining} individuals rejecting is above the rejection quota, and abstain otherwise:\footnote{There are several ways to think of quota rules in the presence of abstentions. Instead of a quota being a proportion of non-abstaining agents, one could for instance define rules with absolute quotas instead: accept when at least $n$ agents accept, independently of how many agents do not abstain. In practice, voting rules with abstention are often a combination of those two ideas: accept an issue if a big enough proportion of the population does not abstain, and if a big enough proportion of those accept it.}
\begin{definition}[Quota rules] \label{def:quota}
Let $\S$ be an aggregation structure.
A {\em quota rule} (for $\S$) is defined as follows, for any issue $p\in{\bf P}$, and any opinion profile $\O\in\mathcal{O}^\ast$:\footnote{The definition uses the ceiling function $\lceil x \rceil$ denoting the smallest integer larger than or equal to $x$.}
\begin{align}\label{eq:quotarules}
F(\O)(p)=
\begin{cases}
{\bf 1} & \mbox{ if } |\O(p)| \geq \left\lceil q_{\bf 1}(p) \cdot |\O(\pm p)| \right\rceil\\
{\bf 0} & \mbox{ if } |\O(\neg p)| \geq \left\lceil q_{\bf 0}(p) \cdot |\O(\pm p)| \right\rceil \\
\ast & \mbox{ otherwise } \\
\end{cases}
\end{align}
where for $x \in \set{{\bf 0}, {\bf 1}}$, $q_x$ is a function $q_x: {\bf P} \to (0,1] \subset \mathbb{Q}$ assigning a positive rational number smaller than or equal to $1$ to each issue, and such that, for each $p \in {\bf P}$:
\begin{align}
q_x(p) > 1 - q_{({\bf 1} - x)}(p). \label{eq:constraint}
\end{align}
A quota rule is called: {\em uniform} if, for all $p_i,p_j \in {\bf P}$, $q_x(p_i) = q_x(p_j)$;
it is called {\em symmetric} if, for all $p \in {\bf P}$, $q_{\bf 1}(p) = q_{\bf 0}(p)$.
\end{definition}
Notice that the definition excludes trivial quotas.\footnote{Those are quotas with value $0$ (always met) or $>1$ (never met). Restricting to non-trivial quotas is not essential but simplifies our exposition.}
It should also be clear that, by \eqref{eq:constraint}, the above defines an aggregator of type $(\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ as desired.\footnote{What needs to be avoided here is that both the acceptance and rejection quotas are set so low as to make the rule output both the acceptance and the rejection of a given issue.} Notice also that if the rule is symmetric, then \eqref{eq:constraint} forces $q_x > \frac{1}{2}$.
\begin{example} \label{example:maj}
The majority rule \eqref{eq:majast} is a uniform and symmetric quota rule where $q_{\bf 1}$ and $q_{\bf 0}$ are set to meet the equation $\lceil q_{\bf 1}(p) \cdot |\O(\pm p)| \rceil = \lceil q_{\bf 0}(p) \cdot |\O(\pm p)| \rceil = \left\lceil \frac{|\O(\pm p)| + 1}{2}\right\rceil$, for any issue $p$ and profile $\O$. This is achieved by setting the quotas as $\frac{1}{2}<q_{\bf 1},q_{\bf 0} \leq \frac{|N|+1}{2|N|}$. More precisely, one should therefore consider $\mathsf{maj}$ as a class of quota rules yielding the same collective opinions.
\end{example}
\begin{example}
The uniform and symmetric unanimity rule is defined by setting $q_{\bf 1} = q_{\bf 0} = 1$. A natural uniform but asymmetric variant of unanimity can be obtained by setting $q_{\bf 1} = 1$ and $q_{\bf 0} = \frac{1}{|N|}$.
\end{example}
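Definition \ref{def:quota} can be sketched as follows. This is our own encoding: abstention is `None`, quotas are exact rationals to avoid floating-point rounding, and when everybody abstains we abstain collectively (an edge case the quota inequalities leave open). The quota $\frac{|N|+1}{2|N|}$ is one admissible choice realizing issue-by-issue strict majority:

```python
from fractions import Fraction
from math import ceil

def quota_rule(profile, q1, q0):
    # profile: list of opinions, each a dict issue -> 0/1/None
    # q1, q0:  acceptance/rejection quotas, dicts issue -> Fraction in (0,1]
    out = {}
    for p in profile[0]:
        acc = sum(1 for O in profile if O[p] == 1)
        rej = sum(1 for O in profile if O[p] == 0)
        non_abst = acc + rej
        if non_abst and acc >= ceil(q1[p] * non_abst):
            out[p] = 1
        elif non_abst and rej >= ceil(q0[p] * non_abst):
            out[p] = 0
        else:
            out[p] = None
    return out

def majority(profile):
    # symmetric quota (n+1)/(2n): strict majority of non-abstainers
    n = len(profile)
    q = {p: Fraction(n + 1, 2 * n) for p in profile[0]}
    return quota_rule(profile, q, q)
```

On the discursive-dilemma profile discussed later in this section, `majority` accepts $p$ and $q$ and rejects $r$, and it abstains on an even split.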
Let us finally note an important difference between quota rules in binary aggregation with vs. without abstentions. In a framework without abstentions quota rules are normally defined by a unique acceptance quota $q_{\bf 1}$, the rejection quota being uniquely determined as $q_{\bf 0} = 1 - q_{\bf 1}$. As a consequence, the majority rule, when $|N|$ is odd, is the only unbiased quota rule in the standard framework. This is no longer the case when abstentions are considered. A novel characterization of the majority rule will be given in Section \ref{subsec:char.quota}.
\subsection{Agenda conditions}
\begin{definition}[simple/evenly negatable agenda]
An agenda $\pm {\bf P}$ is said to be {\em simple} if there exists no set $X \subseteq \pm {\bf P}$ such that: $|X|\geq 3$, and $X$ is minimally $\gamma$-inconsistent, that is:
\begin{itemize}
\item $X$ is inconsistent with $\gamma$
\item For all $Y\subset X$, $Y$ is consistent with $\gamma$ (or, $\gamma$-consistent).
\end{itemize}
An agenda is said to be {\em evenly negatable} if there exists a minimal $\gamma$-inconsistent set $X \subseteq \pm {\bf P}$ such that, for some set $Y \subseteq X$ of even size, $(X\backslash Y) \cup \set{\neg p \mid p \in Y}$ is $\gamma$-consistent. It is said to be {\em path-connected} if there exist $p_1, \ldots, p_n \in \pm {\bf P}$ such that $p_1 \models^c p_2, \ldots, p_{n-1} \models^c p_n$, where $p_i \models^c p_{i+1}$ (conditional entailment) denotes that there exists $X \subseteq \pm{\bf P}$, which is $\gamma$-consistent with both $p_i$ and $\neg p_{i+1}$, and such that $\set{p_i} \cup X \cup \set{\gamma}$ logically implies $p_{i +1}$.
\end{definition}
We refer the reader to \cite[Ch. 2]{Grossi_2014} for a detailed exposition of the above conditions. We provide just a simple illustrative example.
\begin{example}
Let ${\bf P} = \set{p, q, r}$ and let $\gamma = (p \wedge q ) \rightarrow r$. $\pm {\bf P}$ is not simple: the set $\set{p, q, \lnot r} \subseteq \pm{\bf P}$ is inconsistent with $\gamma$, but none of its proper subsets is.
\end{example}
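Agenda simplicity can likewise be tested by brute force on small agendas. The sketch below is our own encoding, with $\gamma$ a predicate on complete assignments; it searches for a minimally $\gamma$-inconsistent set of literals of size at least $3$. It suffices to re-check the maximal proper subsets, since subsets of $\gamma$-consistent sets are $\gamma$-consistent:

```python
from itertools import combinations, product

def satisfiable(lits, issues, gamma):
    # lits: set of (issue, value) literals; true iff jointly gamma-consistent
    for bits in product((0, 1), repeat=len(issues)):
        a = dict(zip(issues, bits))
        if gamma(a) and all(a[p] == v for p, v in lits):
            return True
    return False

def is_simple(issues, gamma):
    literals = [(p, v) for p in issues for v in (0, 1)]
    for r in range(3, len(literals) + 1):
        for X in combinations(literals, r):
            X = set(X)
            if not satisfiable(X, issues, gamma) and \
               all(satisfiable(X - {l}, issues, gamma) for l in X):
                return False  # minimally gamma-inconsistent set of size >= 3
    return True
```

On the example above, with $\gamma = (p \wedge q) \rightarrow r$, the procedure finds the set $\set{p, q, \lnot r}$ and reports that the agenda is not simple.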
\subsection{Properties of aggregators}\label{subsec:axiomatic}
We start by recalling some well-known properties of aggregators from the judgment aggregation literature, adapted to the setting with abstention:
\begin{definition} \label{def:properties}
Let $\S$ be an aggregation structure. An aggregator $F: (\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ is said to be:
\begin{description}
\item[unanimous] iff for all $p\in {\bf P}$, for all profiles $\O$ and all $x\in\set{{\bf 0},{\bf 1},\ast}$: if for all $i\in N, O_i(p) = x$, then $F(\O)(p)= x $. I.e., if everybody agrees on a value, that value is the collective value.
\item[anonymous] iff for any bijection $\mu: N\rightarrow N$, $F(\O)=F(\O^\mu)$, where $\O^\mu = \tuple{O_{\mu(1)}, \ldots, O_{\mu(n)}}$. I.e., permuting opinions among individuals does not affect the output of the aggregator.
\item[$p$-dictatorial] iff there exists $i \in N$ (the {\em $p$-dictator}) s.t. for any profile $\O$, and all $x \in \set{{\bf 0},{\bf 1}}$, $O_i(p) = x$ iff $F(\O)(p) = x$.
I.e., there exists an agent whose definite opinion determines the group's definite opinion on $p$. If $F$ is $p$-dictatorial, with the same dictator on all issues $p \in {\bf P}$, then it is called {\bf dictatorial}.
\item[$p$-oligarchic] iff there exists $C \subseteq N$ (the {\em $p$-oligarchs}) s.t. $C\neq\emptyset$ and for any profile $\O$, and any value $x \in \set{{\bf 0},{\bf 1}}$, $F(\O)(p) = x$ iff $O_i(p) = x$ for all $i\in C$.
I.e., there exists a group of agents whose definite opinions always determine the group's definite opinion on $p$. If $F$ is $p$-oligarchic, with the same oligarchs on all issues $p \in {\bf P}$, then it is called {\bf oligarchic}.
\item[monotonic] iff, for all $p\in {\bf P}$ and all $i\in N$:
for any profiles $\O, \O'$, if $\O =_{-i} \O'$: (i) if $O_i(p)\neq {\bf 1}$ and $O'_i(p)\in\{{\bf 1},\ast\}$, then: if $F(\O)(p)={\bf 1}$, then $F(\O')(p)={\bf 1}$; and (ii) if $O_i(p)\neq {\bf 0}$ and $O'_i(p)\in\{{\bf 0},\ast\}$, then: if $F(\O)(p)={\bf 0}$, then $F(\O')(p)={\bf 0}$. I.e., increasing support for a definite collective opinion does not change that collective opinion.
\item[independent] iff, for all $p\in {\bf P}$, for any profiles $\O, \O'$: if for all $i\in N, O_i(p) = O'_i(p)$, then $F(\O)(p)=F(\O')(p)$. I.e., the collective opinion on each issue is determined only by the individual opinions on that issue.
\item[neutral] iff, for all $p,q \in {\bf P}$, for any profile $\O$: if for all $i\in N$, $O_i(p)=O_i(q)$, then $F(\O)(p)=F(\O)(q)$. I.e., all issues are aggregated in the same manner.
\item[systematic] iff it is neutral and independent. I.e., the collective opinion on issue $p$ depends only on the individual opinions on this issue.
\item[responsive] iff for all $p\in{\bf P}$, there exist profiles $\O, \O'$ such that $F(\O)(p)={\bf 1}$ and $F(\O')(p)={\bf 0}$. I.e., the rule allows for an issue to be accepted for some profile, and rejected for some other.
\item[unbiased] iff for all $p \in {\bf P}$, for any profiles $\O, \O'$: if for all $i\in N$, $O_i(p)= {\bf 1}$ iff $O'_i(p)={\bf 0}$ (we say that $\O'$ is the ``reversed'' profile of $\O$), then $F(\O)(p)={\bf 1}$ iff $F(\O')(p)={\bf 0}$. I.e., reversing all and only individual opinions on an issue $p$ (from acceptance to rejection and from rejection to acceptance) results in reversing the collective opinion on $p$.
\item[rational] iff for any profile $\O$, $F(\O)$ is consistent and closed. I.e., the aggregator preserves the constraints on individual opinions.
\end{description}
\end{definition}
\begin{example}
It is well-known that majority is not rational in general. The standard example is provided by the so-called discursive dilemma, represented by the BA structure $\tuple{\set{1,2,3},\set{p,q,r},r \leftrightarrow (p \land q)}$. The profile consisting of $O_1 \models p \land q \land r$, $O_2 \models p \land \neg q \land \neg r$, $O_3 \models \neg p \land q \land \neg r$, returns an inconsistent majority opinion $\mathsf{maj}(\O) \models p \land q \land \neg r$ (cf. \cite{Grossi_2014}).
\end{example}
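For illustration, the dilemma can be reproduced in a few lines of Python. This is a sketch under our own encoding (opinions as dictionaries over the agenda; \texttt{maj} implements issuewise majority with abstention, up to representation):

```python
# A sketch of the discursive dilemma (encoding ours): opinions are dicts
# over the agenda {p, q, r}, the constraint is r <-> (p and q), and maj
# is issuewise majority (None stands for abstention).

def maj(profile):
    """Issuewise majority: 1 (resp. 0) if strictly more expressed
    opinions accept (resp. reject) the issue, None on ties."""
    out = {}
    for issue in profile[0]:
        votes = [o[issue] for o in profile if o[issue] is not None]
        yes, no = votes.count(1), votes.count(0)
        out[issue] = 1 if yes > no else 0 if no > yes else None
    return out

def constraint(o):
    """The integrity constraint r <-> (p and q), on 0/1 opinions."""
    return o["r"] == (o["p"] and o["q"])

profile = [
    {"p": 1, "q": 1, "r": 1},  # O_1 |= p & q & r
    {"p": 1, "q": 0, "r": 0},  # O_2 |= p & ~q & ~r
    {"p": 0, "q": 1, "r": 0},  # O_3 |= ~p & q & ~r
]

collective = maj(profile)  # {'p': 1, 'q': 1, 'r': 0}
```

Each individual opinion satisfies the constraint, yet the collective opinion accepts $p$ and $q$ while rejecting $r$.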
Finally, let us also define the following property. The {\bf undecisiveness} of an aggregator $F$ on issue $p$ for a given aggregation structure is defined as the number of profiles which result in collective abstention on $p$:
\begin{align}
u(F)(p) & = |\set{\O \in (\mathcal{O}^\ast_c)^N \mid F(\O)(p) = \ast}|.
\end{align}
\subsection{Characterizing quota rules}\label{subsec:char.quota}
As a typical example, consider the aggregator $\mathsf{maj}$: it is unanimous, anonymous, monotonic, systematic, responsive and unbiased, but, as mentioned above, it is not rational in general.
However, it can be shown (cf. \cite[3.1.1]{Grossi_2014}) that aggregation by majority is collectively rational under specific assumptions on the constraint:
\begin{fact} \label{fact:rat}
Let $\S$ be a BA structure with a simple agenda. Then $\mathsf{maj}$ is rational.
\end{fact}
\begin{proof}
If the agenda $\pm{\bf P}$ is simple, then all minimally inconsistent sets have cardinality $2$, that is, are of the form $\set{\phi,\psi}$ with $\phi \models \neg \psi$, for $\phi,\psi \in \pm{\bf P}$. W.l.o.g. assume $\phi = p_i$ and $\psi = p_j$. Suppose towards a contradiction that there exists a profile $\O$ such that $\mathsf{maj}(\O)$ is inconsistent, that is, $\mathsf{maj}(\O)(p_i) = \mathsf{maj}(\O)(p_j) = {\bf 1}$ while $p_i \models \neg p_j$. By the definition of $\mathsf{maj}$ \eqref{eq:majast} it follows that $|\O(p_i)| > |\O(\neg p_i)|$ and $|\O(p_j)| > |\O(\neg p_j)|$. Since $p_i \models \neg p_j$ by assumption, and since individual opinions are consistent and closed, $|\O(\neg p_j)| \geq |\O(p_i)|$ and $|\O(\neg p_i)| \geq |\O(p_j)|$. From the fact that $|\O(p_i)| > |\O(\neg p_i)|$ we can thus conclude that $|\O(\neg p_j)| \geq |\O(p_i)| > |\O(\neg p_i)| \geq |\O(p_j)|$, contradicting $|\O(p_j)| > |\O(\neg p_j)|$.
\end{proof}
May's theorem \cite{May_1952} famously shows that for preference aggregation, the majority rule is in fact the \emph{only} aggregator satisfying a specific bundle of desirable properties. A corresponding characterization of the majority rule is given in judgment aggregation \emph{without abstention}: when the agenda is simple, the majority rule is the only aggregator which is rational, anonymous, monotonic and unbiased \cite[Th. 3.2]{Grossi_2014}. We give below a novel characterization theorem, which takes into account the possibility of abstention (both at the individual and at the collective level). To the best of our knowledge, this is the first result of this kind in the literature on judgment and binary aggregation with abstention.
\smallskip
We first prove the following lemma:
\begin{lemma} \label{lemma:min}
Let $F$ be a uniform and symmetric quota rule for a given $\S$. The following holds:
$\frac{1}{2}< q_{\bf 1} = q_{\bf 0} \leq \frac{|N|+1}{2|N|}$ if and only if $F = \arg\min_G u(G)(p)$, for all $p \in {\bf P}$.
\end{lemma}
\begin{proof}
The claim is proven by the following series of equivalent statements.
(a) A uniform and symmetric quota rule $F$ has quota such that $\frac{1}{2}< q_{\bf 1} = q_{\bf 0} \leq \frac{|N|+1}{2|N|}$.
(b) A uniform and symmetric quota rule $F$ has quota such that $\lceil q_{\bf 1}(p)|\O(\pm p)| \rceil = \lceil q_{\bf 0}(p) |\O(\pm p)| \rceil = \left\lceil \frac{|\O(\pm p)| + 1}{2}\right\rceil$ for any profile $\O$ and issue $p$.
(c) For any $\O \in (\mathcal{O}^\ast_c)^N$ and $p \in {\bf P}$, $F(\O)(p) = \ast$ if and only if $|\O(p)| = |\O(\neg p)|$, that is, the non-abstaining voters (necessarily an even number) are split in half.
(d) $F = \arg\min_G u(G)(p)$, for all $p \in {\bf P}$.
\end{proof}
That is, the quota rules corresponding to the majority rule (Example \ref{example:maj}) are precisely the rules that minimize undecisiveness.
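The lemma can be checked by brute force on small electorates. The following Python sketch (ours, assuming quotas $q > \frac{1}{2}$) enumerates all single-issue profiles for $n$ voters and counts collective abstentions under a uniform symmetric quota rule with quota $q$:

```python
# Brute-force check (ours) of undecisiveness for uniform symmetric quota
# rules on a single issue; assumes 1/2 < q <= 1 so that acceptance and
# rejection thresholds cannot both be met.
from itertools import product
from math import ceil

def quota_rule(q, votes):
    """Accept iff at least ceil(q*m) of the m non-abstainers accept,
    reject symmetrically, abstain (None) otherwise."""
    yes, no = votes.count(1), votes.count(0)
    m = yes + no
    if m == 0:
        return None
    t = ceil(q * m)
    if yes >= t:
        return 1
    if no >= t:
        return 0
    return None

def undecisiveness(q, n):
    """u(F)(p): number of n-voter profiles (each opinion in {1, 0,
    abstain}) on which the quota-q rule is undecided."""
    return sum(quota_rule(q, list(prof)) is None
               for prof in product([1, 0, None], repeat=n))
```

For $n=3$, any quota in $(\frac{1}{2}, \frac{2}{3}]$ leaves exactly $7$ of the $27$ profiles undecided (ties and the all-abstain profile), while stricter quotas leave more.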
We can now state and prove the characterization result:
\begin{theorem}\label{thm:quotarules}
Let $F: (\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ be an aggregator for a given $\S$. The following holds:
\begin{enumerate}
\item $F$ is a quota rule if and only if it is anonymous, independent, monotonic, and responsive;
\item $F$ is a uniform quota rule if and only if it is a neutral quota rule;
\item $F$ is a symmetric quota rule if and only if it is an unbiased quota rule;
\item $F$ is the majority rule $\mathsf{maj}$ if and only if it is a uniform symmetric quota rule which minimizes undecisiveness.
\end{enumerate}
\end{theorem}
\begin{proof}
\smallskip
\fbox{Claim 1}
Left-to-right: Easily checked.
Right-to-left: Let $F$ be an anonymous, independent, monotonic, and responsive aggregator. By \textit{anonymity} and \textit{independence}, for any $p\in{\bf P}$ and any $\O\in(\mathcal{O}^\ast_c)^N$, the only information determining the value of $F(\O)(p)$ is the pair of integers $|\O(p)|$ and $|\O(\neg p)|$.
By \textit{responsiveness}, there exists a non-empty set of profiles $S^{\bf 1}=\{\O\in(\mathcal{O}^\ast_c)^N \mid F(\O)(p)={\bf 1}\}$. Pick $\O$ to be any profile in $S^{\bf 1}$ with a minimal value of $\frac{|\O(p)|}{|\O(\pm p)|}$ and call this value $q_{\bf 1}$. Now let $\O'$ be any profile such that $\O'=_{-i}\O$ and $\frac{|\O'(p)|}{|\O'(\pm p)|}>q_{\bf 1}$. This implies that either $O_i(p)\neq {\bf 1}$ and $O'_i(p)={\bf 1}$, or $O_i(p)={\bf 0}$ and $O'_i(p)\neq {\bf 0}$. In either case, by \textit{monotonicity}, it follows that $F(\O')(p)= {\bf 1}$.
By iterating this argument a finite number of times we conclude that whenever $\frac{|\O(p)|}{|\O(\pm p)|} \geq q_{\bf 1}$, we have that $F(\O)(p)={\bf 1}$.
Given that $q_{\bf 1}$ was defined as a minimal value, we conclude also that if $F(\O)(p)={\bf 1}$, then $\frac{|\O(p)|}{|\O(\pm p)|}\geq q_{\bf 1}$. The argument for $q_{\bf 0}$ is identical.
\fbox{Claims 2 \& 3} follow straightforwardly from the definitions of uniform quota rules (Definition \ref{def:quota}) and of neutrality (Definition \ref{def:properties}), and from the definitions of symmetric quota rules (Definition \ref{def:quota}) and of unbiasedness (Definition \ref{def:properties}), respectively.
\fbox{Claim 4}
Left-to-right. Recall that $\mathsf{maj}$ is defined by quota $\frac{1}{2}< q_{\bf 1} =q_{\bf 0} \leq \frac{|N|+1}{2|N|}$ (Example \ref{example:maj}). It is clear that $\mathsf{maj}$ is uniform and symmetric. The claim then follows by Lemma \ref{lemma:min}.
Right-to-left. By Lemma \ref{lemma:min}, if an aggregator minimizes undecisiveness then its quota are set as $\frac{1}{2}< q_{\bf 1} =q_{\bf 0} \leq \frac{|N|+1}{2|N|}$. These quota define $\mathsf{maj}$ (Example \ref{example:maj}).
\end{proof}
By the above theorem and Fact \ref{fact:rat}, it follows that, on simple agendas, majority is the only rational aggregator which is also responsive, anonymous, systematic, monotonic, unbiased and minimizes undecisiveness.
\subsection{Impossibility in Binary Aggregation with Abstentions}
The following is a well-known impossibility result concerning binary aggregation with abstentions:
\begin{theorem}[\cite{Dokow_2010,Dietrich_2007}]\label{thm:imp.agg.abst}
Let $\S$ be a BA structure whose agenda is path connected and evenly negatable. If an aggregator $F: (\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ is independent, unanimous and collectively rational, then it is oligarchic.
\end{theorem}
We will use this result to illustrate how impossibility results from binary aggregation with abstentions apply to delegable proxy voting on binary issues.
\section{Liquid Democracy as Binary Aggregation} \label{sec:proxy}
In this section we provide an analysis of liquid democracy by embedding it in the theory of binary aggregation with abstentions presented in the previous section. To the best of our knowledge, this is the first attempt at providing an analysis of delegable proxy voting using social-choice theoretic tools, with the possible exception of \cite{greenarmytage_delegable}.
In what follows we will often refer to delegable proxy voting/aggregation simply as proxy voting/aggregation.
\subsection{Binary Aggregation via Delegable Proxy}
In binary aggregation with proxy, agents either express an acceptance/rejection opinion or \emph{delegate} such opinion to a different agent.
\subsubsection{Proxy Opinions and Profiles}
Let a BA structure $\S$ be given and assume for now that $\gamma = \top$, that is, all issues are logically independent. An opinion $O: {\bf P} \to \set{{\bf 0},{\bf 1}} \cup N$ is an assignment of either a truth value or another agent to each issue in ${\bf P}$, such that $O_i(p) \neq i$ (that is, self-delegation is not an expressible opinion).
We will later also require proxy opinion to be individually rational, in a precise sense (Section \ref{sec:indirat}).
For simplicity we are assuming that abstention is not a feasible opinion in proxy voting, but that is an assumption that can be easily lifted in what follows.
We call functions of the above kind {\em proxy opinions} to distinguish them from standard (binary) opinions, and we denote by $\mathcal{P}$ the set of all proxy opinions, by $\mathcal{P}_c$ the set of all consistent proxy opinions, and by $\mathcal{P}^N$ the set of all proxy profiles.
\subsubsection{Delegation Graphs}
Each profile $\O$ of proxy opinions ({\em proxy profile} in short) induces a delegation graph $G^\O = \langle N, \set{R_p}_{p \in {\bf P}}\rangle$ where for $i, j \in N$:
\begin{align}
iR_p j & \Longleftrightarrow
\left\{
\begin{array}{ll}
O_i(p) = j & \mbox{if $i \neq j$} \\
O_i(p) \in \set{{\bf 0}, {\bf 1}} & \mbox{otherwise}
\end{array}
\right.
\end{align}
The expression $iR_pj$ stands for ``$i$ delegates her vote to $j$ on issue $p$''.
Each $R_p$ is a so-called functional relation. It corresponds to the graph of an endomap on $N$. So we will sometimes refer to the endomap $r_p: N \to N$ of which $R_p$ is the graph. Relations $R_p$ have a very specific structure and can be thought of as a set of trees whose roots all belong to cycles (possibly loops).
The weight of an agent $i$ w.r.t. $p$ in a delegation graph $G^\O$ is given by its indegree with respect to $R^*_p$ (i.e., the reflexive and transitive closure of $R_p$):\footnote{
We recall that the reflexive transitive closure $R^*$ of a binary relation $R \subseteq N^2$ is the smallest reflexive and transitive relation that contains $R$.
}
$w^\O_i(p) = |\set{j \in N \mid j R^*_p i}|$. This definition of weight makes sure that each individual carries the same weight, independently of the structure of the delegation graph. Alternative definitions of weight are of course possible and we will come back to this issue later.\footnote{See also footnote \ref{footnote:weight} below.}
For all $p \in {\bf P}$, we also define the function $g_p: N \rightarrow \wp(N)$ such that $g_p(i) = \set{j \in N \mid j R^*_p i \mbox{ \textsc{and} } \nexists k: jR_p k}$. The function associates to each agent $i$ (for a given issue $p$), the (singleton consisting of the) last agent reachable from $i$ via a path of delegation on issue $p$, when it exists (and $\emptyset$ otherwise). Slightly abusing notation we will use $g_p(i)$ to denote an agent, that is, the {\em guru} of $i$ over $p$ when $g_p(i) \neq \emptyset$. If $g_p(i) = \set{i}$ we call $i$ a {\em guru} for $p$. Notice that $g_p(i) = \set{i}$ iff $r_p(i) = i$, that is, $i$ is a guru of $p$ iff it is a fixpoint of the endomap $r_p$.
If the delegation graph $G^\O$ of a proxy profile $\O$ is such that, for some $R_p$, there exists no $i \in N$ such that $i$ is a guru of $p$, we say that graph $G^\O$ (and profile $\O$) is {\em void} on $p$. Intuitively, a void profile on $p$ is a profile where no voter expresses an opinion on $p$, because every voter delegates her vote to somebody else.
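The notions just introduced can be sketched computationally. The following Python fragment (our own illustration, with dictionary-based proxy opinions on a single issue; the names \texttt{guru}, \texttt{chain} and \texttt{weight} are not part of the paper's formalism) computes $g_p(i)$ and $w^\O_i(p)$:

```python
# A computational sketch (representation ours) of delegation graphs on a
# single issue p: a proxy opinion is either a vote (0/1) or the name of
# another agent, and gurus are the fixpoints of the induced endomap.

def guru(opinions, i):
    """g_p(i): follow delegations from i to the agent holding a 0/1
    opinion; return None if the chain from i ends in a cycle."""
    seen = set()
    while opinions[i] not in (0, 1):
        if i in seen:          # delegation cycle: no guru for i
            return None
        seen.add(i)
        i = opinions[i]
    return i

def chain(opinions, j):
    """Agents reachable from j by (reflexive-transitive) delegation."""
    path, seen = [j], {j}
    while opinions[path[-1]] not in (0, 1):
        nxt = opinions[path[-1]]
        if nxt in seen:        # stop when the chain re-enters itself
            break
        path.append(nxt)
        seen.add(nxt)
    return path

def weight(opinions, i):
    """w_i(p) = |{j : j R*_p i}|, one unit per delegating agent."""
    return sum(i in chain(opinions, j) for j in opinions)
```

For instance, with opinions \texttt{\{"a": 1, "b": "a", "c": "b", "d": 0\}}, agent \texttt{a} is the guru of \texttt{b} and \texttt{c} and carries weight $3$, while \texttt{d} carries weight $1$.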
Given a BA structure $\S$, a proxy aggregation rule (or proxy aggregator) for $\S$ is a function $\mathsf{pv}:\mathcal{P}^N\to\mathcal{O}^\ast$ that maps every proxy profile to one collective incomplete opinion. As above, $\mathsf{pv}(\O)(p)$ denotes the outcome of the aggregation on issue $p$.
\subsubsection{Proxy Aggregators}
The most natural form of voting via delegable proxy is a proxy version of the majority rule we discussed in Section \ref{sec:preliminaries}:\footnote{On the importance of majority decisions in the current implementation of liquid democracy by liquid feedback cf. \cite[p.106]{liquid_feedback}.}
\begin{align}\label{eq:proxymajast}
\mathsf{pv}_{\mathsf{maj}}(\O)(p) =
\begin{cases}
{\bf 1} & \mbox{ if } \sum_{i \in \O(p)} w^\O_i(p) > \sum_{i \in \O(\neg p)}w^\O_i(p) \\
{\bf 0} & \mbox{ if } \sum_{i \in \O(\neg p)} w^\O_i(p) > \sum_{i \in \O(p)}w^\O_i(p) \\
\ast & \mbox{ otherwise }
\end{cases}
\end{align}
That is, an issue is accepted by proxy majority in profile $\O$ if the sum of the weights of the agents who accept $p$ in $\O$ exceeds the sum of the weights of those who reject it, it is rejected in the symmetric case, and it is undecided otherwise. It should be clear that $\sum_{i \in \O(p)} w^\O_i(p) = |\set{i \in N \mid O_{g_p(i)}(p)= {\bf 1}}|$ (and similarly for $\neg p$), that is, the sum of the weights of the gurus accepting (rejecting) $p$ is precisely the cardinality of the set of agents whose gurus accept (reject) $p$.
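A minimal sketch of how $\mathsf{pv}_{\mathsf{maj}}$ can be computed on a single issue, under our own dictionary representation of proxy opinions, follows each agent's delegation chain to her guru (dropping agents caught in cycles) and tallies the gurus' opinions:

```python
# A hedged sketch of proxy majority pv_maj (representation ours): each
# agent contributes one unit of weight via her guru's 0/1 opinion;
# agents whose delegation chain runs into a cycle have no guru and
# count towards neither side.

def pv_maj(opinions):
    """opinions: {agent: 0 | 1 | name of another agent}.
    Returns 1, 0, or None (collective abstention)."""
    yes = no = 0
    for i in opinions:
        j, seen = i, set()
        while opinions[j] not in (0, 1):  # follow delegations to a guru
            if j in seen:                 # cycle: i has no guru
                j = None
                break
            seen.add(j)
            j = opinions[j]
        if j is None:
            continue
        if opinions[j] == 1:
            yes += 1
        else:
            no += 1
    return 1 if yes > no else 0 if no > yes else None
```

Note that a profile where every agent delegates (a pure cycle) yields collective abstention, anticipating the discussion of cycles below.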
In general, it should be clear that for any quota rule $F: \mathcal{O}^\ast_c \to \mathcal{O}^\ast$ a proxy variant $\mathsf{pv}_F$ of $F$ can be defined via an obvious adaptation of \eqref{eq:proxymajast}.
\medskip
To fix intuitions further about proxy voting it is worth discussing another example of aggregator, {\em proxy dictatorship}. It is defined as follows, for a given $d \in N$ (the dictator) any proxy profile $\O$ and issue $p$:
\begin{align}\label{eq:proxyd}
\mathsf{pv}_{d}(\O)(p) =
\begin{cases}
O_{g_p(d)}(p) & \mbox{ if } g_p(d) \neq \emptyset \\
\ast & \mbox{ otherwise }
\end{cases}
\end{align}
That is, in a proxy dictatorship, the collective opinion is the opinion of the guru of the dictator, when it exists, and it is undefined otherwise.
\subsection{Two Issues of Delegable Proxy}
\subsubsection{Cycles and Abstentions} \label{sec:proxyabs}
It should be clear from the definition of proxy aggregators like $\mathsf{pv}_\mathsf{maj}$, that such aggregators rely on the existence of gurus in the underlying delegation graphs. If the delegation graph $R_p$ on issue $p$ contains no guru, then the aggregator has access to no information in terms of who accepts and who rejects issue $p$. To avoid bias in favor of acceptance or rejection, such situations should therefore result in an undecided collective opinion. That is for instance the case of $\mathsf{pv}_\mathsf{maj}$. However, such situations may well be considered problematic, and the natural question arises therefore of how likely they are, at least in principle.
\begin{fact} \label{fact:cycles}
Let ${\bf A}$ be a BA structure where $\gamma = \top$ (i.e., issues are independent).
If each proxy profile is equally probable (impartial culture assumption), then the probability that, for each issue $p$, the delegation graph $R_p$ has no gurus tends to $\frac{1}{e^2}$ as $n$ tends to infinity.
\end{fact}
\begin{proof}
The claim amounts to computing the probability that a random proxy profile $\O$ induces a delegation graph $R_p$ that does not contain gurus (or equivalently, whose endomap $r_p: N \to N$ has no fixpoints) as $n$ tends to infinity.
Now, for each agent $i$, the number of possible opinions on a given issue $p$ (that is, functions $O: \set{p} \to \set{{\bf 0},{\bf 1}} \cup N$) is $|(N \backslash \set{i}) \cup \set{{\bf 0},{\bf 1}}| = n + 1$ (recall $i$ cannot express ``$i$'' as an opinion). The number of opinions in which $i$ is delegating her vote is $n - 1$. So, the probability that a random opinion of $i$ about $p$ is an opinion delegating $i$'s vote is $\frac{n-1}{n+1}$. Hence the probability that a random profile consists only of delegated votes (no gurus) is $(\frac{n-1}{n+1})^n$.
The claimed value is then established through this series of equations:
\begin{align*}
\lim_{n \to \infty} \left(\frac{n-1}{n+1}\right)^n & = \lim_{n \to \infty} \left(\frac{n}{n+2}\right)^n \\
& = \lim_{n \to \infty} \left(\frac{1}{\frac{n+2}{n}}\right)^n \\
& = \lim_{n \to \infty} \left(\frac{1}{1 + \frac{2}{n}}\right)^n \\
& = \lim_{n \to \infty} \left(\frac{1}{(1 + \frac{2}{n})^n}\right) \\
& = \frac{1}{\lim_{n \to \infty}(1 + \frac{2}{n})^n}\\
& = \frac{1}{e^2}
\end{align*}
This completes the proof.
\end{proof}
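As a quick numeric sanity check of the limit (ours, not part of the proof), one can evaluate $\left(\frac{n-1}{n+1}\right)^n$ for growing $n$:

```python
# Numeric sanity check (ours): the probability ((n-1)/(n+1))**n that a
# random proxy profile has no guru on an issue approaches
# 1/e^2 ~ 0.1353 from below as n grows.
import math

def p_no_guru(n):
    """Probability, under the impartial culture assumption, that a
    random proxy profile on one issue has no guru among n agents."""
    return ((n - 1) / (n + 1)) ** n
```

Already for $n = 1000$ the value is within $10^{-5}$ of $e^{-2}$.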
Now contrast the above simple fact with the probability that all agents abstain on an issue when each voter either expresses a ${\bf 1}$ or ${\bf 0}$ opinion or abstains (that is, the binary aggregation with abstentions setting studied earlier).
In that case the probability that everybody abstains tends to $0$ as $n$ tends to infinity.
Fact \ref{fact:cycles} should obviously not be taken as a realistic estimate of the effect of cycles on collective abstention, as the impartial culture assumption is a highly idealized assumption.
Election data should ideally be used to assess whether delegation cycles ever lead large parts of the electorate to `lose their vote', possibly together with refinements of the above argument that take into consideration realistic distributions on proxy profiles, and therefore realistic delegation structures.
Nonetheless, Fact \ref{fact:cycles} does flag a potential problem of cyclical delegations as sources of abstention which has, to the best of our knowledge, never been discussed. The mainstream position on cyclical delegations \cite[Section 2.4.1]{liquid_feedback} is:\footnote{Cf. also \cite{Behrens15}.}
\begin{quote}
``The by far most discussed issue is the so-called circular delegation
problem. What happens if the transitive delegations lead to
a cycle, e.g. Alice delegates to Bob, Bob delegates to Chris, and
Chris delegates to Alice? Would this lead to an infinite voting
weight? Do we need to take special measures to prohibit such a
situation? In fact, this is a nonexistent problem: A cycle only exists as long as there is no activity in the cycle in which case the cycle has no effect. As already explained [\ldots], as soon as somebody casts a vote, their (outgoing) delegation will be suspended. Therefore, the cycle naturally disappears before it is used. In our example: If Alice and Chris decide to vote, then Alice will no longer delegate to Bob, and Chris will no longer delegate to Alice [\ldots]. If only Alice decides to vote, then only Alice's delegation to Bob is suspended and Alice would use a voting weight of 3. In either case the cycle is automatically resolved and the total voting weight used is 3.''
\end{quote}
We will discuss later (Section \ref{sec:diffusion}) a possible approach to mitigate this issue by suggesting a different interpretation of liquid democracy in terms of influence rather than delegation.
\subsubsection{Individual \& Collective Rationality} \label{sec:indirat}
In our discussion so far we have glossed over the issue of logically interdependent issues and collective rationality. The reason is that under the delegative interpretation of liquid democracy developed in this section individual rationality itself appears to be a more debatable requirement than it normally is in classical aggregation.
A proxy opinion $O_i$ is {\em individually rational} if the set of formulas
\begin{align}
\set{\gamma} \cup \set{p \in {\bf P} \mid O_{g_p(i)}(p) = {\bf 1}} \cup \set{\neg p \in {\bf P} \mid O_{g_p(i)}(p) = {\bf 0}} \label{eq:ir}
\end{align}
is satisfiable (consistency), and if whenever \eqref{eq:ir} entails $\pm p$, then $\pm p$ belongs to it (closedness).\footnote{Cf. the definition of individual opinions in Section \ref{sec:preliminaries}.} That is, the integrity constraint $\gamma$ is consistent with $i$'s opinion on the issues she does not delegate on, and the opinions of her gurus (if they exist), and those opinions, taken together, are closed under logical consequence (w.r.t. the available issues).
The constraint in \eqref{eq:ir} captures, one might say, an idealized way of how delegation works: voters are assumed to be able to check or monitor how their gurus are voting, and always modify their delegations if an inconsistency arises. The constraint remains, however, rather counterintuitive under a delegative interpretation of proxy voting. Aggregation via delegable proxy has at least the potential to represent individual opinions as irrational (inconsistent and/or not logically closed).
\medskip
Like in the case of delegation cycles we will claim that the interpretation of liquid democracy in terms of influence to be developed in Section \ref{sec:diffusion}, rather than in terms of delegation, makes individual rationality at least as defensible as in the classical case.
\subsection{Embedding in Binary Aggregation with Abstentions}
\subsubsection{One man---One vote}
Aggregation in liquid democracy as conceived in \cite{liquid_feedback} should satisfy the principle that the opinion of every voter, whether expressed directly or through proxy, should be given the same weight:
\begin{quote}
``[\ldots] in fact every eligible voter has still exactly one vote [\ldots] unrestricted transitive delegations are an integral part of Liquid Democracy. [\ldots] Unrestricted transitive delegations are treating delegating voters and direct voters equally, which is most democratic and empowers those who could not organize themselves otherwise'' \cite[p.34-36]{liquid_feedback}
\end{quote}
In other words, this principle suggests that aggregation via delegable proxy should actually be `blind' to the specific type of delegation graph. Making this more formal, we can think of the above principle as suggesting that the only relevant content of a proxy profile is its translation into a standard opinion profile (with abstentions) via a function $t: \mathcal{P} \to \mathcal{O}^\ast$ defined as follows: for any $i \in N$ and $p \in {\bf P}$: $t(O_i(p)) = O_{g_p(i)}(p)$ if $ g_p(i) \neq \emptyset$ (i.e., if $i$ has a guru for $p$), and $t(O_i(p)) = \ast$ otherwise. Clearly, if we assume proxy profiles to be individually rational, the translation will map proxy opinions into individually rational (consistent and closed) incomplete opinions. By extension, we will denote by $t(\O)$ the incomplete opinion profile resulting from translating the individual opinions of a proxy profile $\O$.
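On a single issue, the translation $t$ can be sketched as follows (a minimal Python illustration under our own dictionary encoding of proxy opinions):

```python
# A sketch (encoding ours) of the translation t on a single issue: each
# proxy opinion is replaced by the guru's 0/1 opinion when the guru
# exists, and by abstention (None) otherwise.

def translate(opinions):
    """opinions: {agent: 0 | 1 | name of another agent}."""
    out = {}
    for i in opinions:
        j, seen = i, set()
        while opinions[j] not in (0, 1):  # follow delegations
            if j in seen:                 # cycle: i has no guru
                j = None
                break
            seen.add(j)
            j = opinions[j]
        out[i] = None if j is None else opinions[j]
    return out
```

Agents caught in a delegation cycle are mapped to abstention, in line with the treatment of void profiles above.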
\medskip
The above discussion suggests the definition of the following property of proxy aggregators: a proxy aggregator $\mathsf{pv}$ has the {\bf one man---one vote property} (or is a one man---one vote aggregator) if and only if $\mathsf{pv} = t \circ F$ for some aggregator $F: (\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ (assuming the individual rationality of proxy profiles).\footnote{It should be clear that not every proxy aggregator satisfies this property. By means of example, consider an aggregator that uses the following notion of weight accrued by gurus in a delegation graph. The weight $w(i)$ of $i$ is $\sum_{j \in R^*(i)} \frac{1}{\ell(i,j)}$ where $\ell(i,j)$ denotes the length of the delegation path linking $j$ to $i$. This definition of weight is such that the contribution of voters decreases as their distance from the guru increases. Aggregators of this type are studied in \cite{Boldi_2011}. \label{footnote:weight}}
The class of one man---one vote aggregators can therefore be studied simply as the concatenation $t \circ F$ where $F$ is an aggregator for binary voting with abstentions,
as depicted in the following diagram:
\begin{center}
\begin{tikzpicture}
\node(P) at (0,0) {$\O$};
\node(O) at (2,0) {$t(\O)$};
\node(F) at (2, -2) {$F(t(\O))$};
\draw[->] (P) -- node[below]{$t$} (O);
\draw[->] (P) -- node[left]{$\mathsf{pv}_F$} (F);
\draw[->] (O) -- node[right]{$F$} (F);
\end{tikzpicture}
\end{center}
which gives us a handle to study a large class of proxy voting rules.
\begin{example}
Proxy majority $\mathsf{pv}_\mathsf{maj}$ \eqref{eq:proxymajast} is clearly a one man---one vote aggregator. It is easy to check that, for any proxy profile $\O$: $\mathsf{pv}_{\mathsf{maj}}(\O) = \mathsf{maj}(t(\O))$. The same holds for proxy dictatorship \eqref{eq:proxyd}: it is easy to see that $\mathsf{pv}_d$ is such that for any proxy profile $\O$: $\mathsf{pv}_{d}(\O) = d(t(\O))$, where $d$ denotes (with slight abuse of notation) the standard dictatorship of $d \in N$.
\end{example}
It follows that for every proxy aggregator $\mathsf{pv}_F = t \circ F$ the axiomatic machinery developed for standard aggregators can be directly tapped into.
Characterization results then extend effortlessly to proxy voting, again providing a strong rationale for the use of majority in proxy aggregation:
\begin{fact}[Characterization of proxy majority]\label{thm:proxy.quotarules}
A one man---one vote proxy aggregator $\mathsf{pv} = t \circ F$ for a given $\S$ is proxy majority $\mathsf{pv}_\mathsf{maj}$ iff $F$ is anonymous, independent, monotonic, responsive, neutral, unbiased and minimizes undecisiveness.
\end{fact}
\begin{proof}
This follows from the definition of $t$ and Theorem \ref{thm:quotarules}.
\end{proof}
It follows that on simple agendas and assuming the individual rationality of proxy profiles, proxy majority is the only rational aggregator which is anonymous, independent, monotonic, responsive, neutral, unbiased and minimizes undecisiveness.
\subsubsection{Impossibility}
Similarly, there are many ways in which to pursue the opposite embedding, from standard aggregation into proxy voting. For example, we can define a function $s: \mathcal{O}^\ast_c \to \mathcal{P}_c$ from opinion profiles to individually rational proxy profiles as follows. For a given opinion profile $\O$ and issue $p \in {\bf P}$, consider the set $\set{i\in N \mid O_i(p) = \ast}$ of individuals that abstain in $\O$ and take an enumeration $1, \ldots, m$ of its elements, where $m = |\set{i\in N \mid O_i(p) = \ast}|$. The function is defined as follows, for any $i \in N$ and $p \in {\bf P}$: $s(O_i(p)) = O_i(p)$ if $O_i(p) \in \set{{\bf 0},{\bf 1}}$; otherwise, if $i$ is the $k$-th abstainer in the enumeration, $s(O_i(p))$ is the $((k \bmod m) + 1)$-th abstainer.\footnote{Notice that since self-delegation (that is, $O_i(p) = i$) is not feasible in proxy opinions, this definition of $s$ works for profiles where, on each issue, either nobody abstains or at least two individuals abstain. A dummy voter can be introduced for that purpose.}
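On a single issue, $s$ can be sketched as follows (our own Python encoding, with voters' opinions as $0$, $1$, or \texttt{None} for abstention):

```python
# A sketch (encoding ours) of the converse translation s on a single
# issue: 0/1 opinions are kept, while the m abstainers delegate in a
# cycle, each to the next abstainer in a fixed enumeration.

def s(opinion_profile):
    """opinion_profile: {agent: 0 | 1 | None}. Requires zero or at
    least two abstainers (self-delegation is not expressible)."""
    abstainers = [i for i, v in opinion_profile.items() if v is None]
    assert len(abstainers) != 1, "a lone abstainer would self-delegate"
    nxt = {a: abstainers[(k + 1) % len(abstainers)]
           for k, a in enumerate(abstainers)}
    return {i: (nxt[i] if v is None else v)
            for i, v in opinion_profile.items()}
```

The abstainers thus form a delegation cycle, so translating the resulting proxy profile back via $t$ returns the original abstentions.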
A translation of this type allows to think of standard aggregators $F: \mathcal{O}^\ast_c \to \mathcal{O}^\ast$ as the concatenation $s \circ \mathsf{pv}$, for some proxy aggregator $\mathsf{pv}$:
\begin{center}
\begin{tikzpicture}
\node(O) at (2,0) {$\O$};
\node(Q) at (4,0) {$s(\O)$};
\node(F) at (2, -2) {$\mathsf{pv}(s(\O))$};
\draw[->] (Q) -- node[right]{$\mathsf{pv}$} (F);
\draw[->] (O) -- node[right]{$F_\mathsf{pv}$} (F);
\draw[->] (O) -- node[below]{$s$} (Q);
\end{tikzpicture}
\end{center}
\begin{fact} \label{thm:imp.proxy}
Let $\S$ be such that its agenda is path connected and evenly negatable. For any proxy aggregator $\mathsf{pv}$, if $s \circ \mathsf{pv}$ is independent, unanimous and collectively rational, then it is oligarchic.
\end{fact}
\begin{proof}
It follows directly from the definition of $s$ and Theorem \ref{thm:imp.agg.abst}.
\end{proof}
\subsection{Section Summary}
The section has provided a very simple model of delegable proxy voting within the framework of binary aggregation. This has allowed us to put liquid democracy in perspective with an established body of results in the social choice theory tradition, and highlight two of its problematic aspects, which have so far gone unnoticed: the effect of cycles on collective indecisiveness, and the issue of preservation of individual rationality under delegable proxies.
An independent, purpose-built axiomatic analysis for liquid democracy focused on its more characteristic features (like the one man---one vote property) is a natural line of research, which we do not pursue here.
\subsection{old version of thm 1 and corollaries}
\begin{theorem}\label{thm:quotarules}
Let $\S$ be a BA structure.
An aggregator $F: (\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ is a quota rule if and only if it is anonymous, independent, monotonic, and responsive.
\end{theorem}
\begin{proof}
\fbox{Left-to-right} Easily checked.
\fbox{Right-to-left} Let $\S$ be a BA structure. Let $F$ be an anonymous, independent, monotonic, and responsive aggregator.
By \textit{anonymity} and \textit{independence}, for any $p\in{\bf P}$, and any $\O\in\mathcal{O}^\ast$, the only information determining $F(O)(p)$ is $|\O(p)|$ and $|\O(p^-)|$.
By \textit{responsiveness}, there exists a non-empty set of profiles $S^{\bf 1}=\{\O\in\mathcal{O}^\ast|F(\O)(p)={\bf 1}\}$. Pick $\O$ to be any profile in $S^{\bf 1}$ with the minimal value of $\frac{|\O(p)|}{|\O(p^\pm)|}$ and call this value $q_{\bf 1}$. Now let $\O'$ be any profile such that $\O'=_{-i}\O$ and $\frac{|\O'(p)|}{|\O'(p^\pm)|}>q_{\bf 1}$. This implies that
either $O_i(p)\neq {\bf 1}$ and $O'_i(p)={\bf 1}$, or $O_i(p)={\bf 0}$ and $O'_i(p)\neq {\bf 0}$. In both cases, by \textit{monotonicity}, $F(\O')(p)= {\bf 1}$.
By iterating this reasoning on monotonicity and by anonimity, we can generate similarly any profile $\O$ such that $\frac{|\O(p)|}{|\O(p^\pm)|}>q_{\bf 1}$, and show that $F(\O)(p)={\bf 1}$. Given that $q_{\bf 1}$ was defined as a minimum, we conclude that for all $\O\in\mathcal{O}^\ast$, $F(\O)(p)=1$ iff $\frac{\O'(p)}{\O'(p^\pm)}\geq q_{\bf 1}$.
By \textit{responsiveness} again, there also exists a non-empty set of profiles $S^{\bf 0}$ such that $S^{\bf 0}=\{\O\in\mathcal{O}^\ast|F(\O)(p)={\bf 0}\}$. Take $\O$ to be one of the profiles in $S^{\bf 0}$ with minimal value of $\frac{|\O(p^-)|}{|\O(p^\pm)|}$ and let us call this value $q_{\bf 0}$. By \textit{monotonicity} and the same reasoning steps as above, we obtain that for any profile $\O$ in $\mathcal{O}^\ast$: $F(\O)(p)={\bf 0}$ iff $\frac{|\O(p^-)|}{|\O(p^\pm|)}\geq q_{\bf 0}$.
\begin{comment}
By responsiveness, there exists a set of profiles $S$ such that $S=\{\O\in\mathcal{O}^\ast|F(\O)(p)={\bf 0}\}$. Take $\O$ to be a profile in $S$ with the minimal value of $q$ with $q=\frac{|\O(p^-)|}{|\O(p^\pm)|}$.
Let $\O'\in\mathcal{O}^\ast$ be such that $\O'=_{-i}\O$ and $\frac{|\O'(p^-)|}{|\O'(p^\pm)|}>q$. This implies that
either $O_i(p)\neq {\bf 0}$ and $O'_i(p)={\bf 0}$, or $O_i(p)={\bf 1}$ and $O'_i(p)\neq {\bf 1}$. In both cases, by monotonicity, $F(\O')(p)= {\bf 0}$.
By iterating this reasoning on monotonicity and by anonimity, we can treat in a similar way all profiles such that $\frac{|\O(p^-)|}{|\O(p^\pm)|}>q$, and show that $F\O={\bf 0}$. Given that $q$ was chosen to be minimal, we conclude that for all $\O\in\mathcal{O}^\ast$, $F(\O)(p)={\bf 0}$ iff $\frac{\O'(p^-)}{\O'(p^\pm)}\geq q$.
Fix $q_-$ be the value such that for any aggregation structure $\S$ and any profiles $\O$ in $\mathcal{O}^\ast$: $\frac{|\O(p^-)|-1}{|\O(p^\pm)|}< q_- < q \in $: $F(\O)(p)={\bf 0}$ iff $\frac{|\O'(p^-)|}{|\O'(p^\pm|)}> q_-$.
\end{comment}
\end{proof}
\begin{fact}\label{fact:symm}
Let $\S$ be a BA structure.
Let $F$ be a quota rule. $F$ is symmetric if and only if it is unbiased.
\end{fact}
\begin{proof}
This follows from the definitions of symmetric quotas and of unbiasedness. Let $F$ be a quota rule. The following statements are all equivalent:
\begin{enumerate}
\item $F$ is unbiased (assumption);
\item For all $\O,\O'$ in $\mathcal{O}^\ast$ such that, for all $i\in N$, $O_i(p)={\bf 1}$ iff $O'_i(p)={\bf 0}$: $|\O(p)|\geq \lceil q_{\bf 1}|\O(\pm p)|\rceil$ iff $|\O'(\neg p)|\geq \lceil q_{\bf 0}|\O'(\pm p)|\rceil$ (definitions of unbiasedness and quota rules)
\item $q_{\bf 1}=q_{\bf 0}$ (definition of quota rule)
\item $F$ is symmetric (definition of symmetric quota rule).
\end{enumerate}
This completes the proof.
\end{proof}
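The chain of equivalences above can also be checked by brute force. The sketch below (our own encoding, not the paper's: a profile on a single issue is a tuple over $\{1,0,\ast\}$, and symmetry is stated at the level of effective thresholds, as in the refined argument later in this section) enumerates all three-agent profiles with abstention and confirms that a quota rule is unbiased exactly when its two quotas induce the same acceptance thresholds:

```python
from fractions import Fraction
from itertools import product
from math import ceil

N = 3
VALUES = (1, 0, '*')          # accept, reject, abstain

def quota_rule(profile, q1, q0):
    """F(O)(p) for a quota rule: 1 if enough non-abstainers accept,
    0 if enough reject, * otherwise (and * when everyone abstains)."""
    n1 = sum(1 for o in profile if o == 1)
    n0 = sum(1 for o in profile if o == 0)
    m = n1 + n0               # |O(+/- p)|, the non-abstainers
    if m == 0:
        return '*'
    if n1 >= q1 * m:
        return 1
    if n0 >= q0 * m:
        return 0
    return '*'

def mirror(profile):
    """Swap every individual acceptance with a rejection."""
    return tuple({1: 0, 0: 1, '*': '*'}[o] for o in profile)

def unbiased(q1, q0):
    return all((quota_rule(p, q1, q0) == 1) ==
               (quota_rule(mirror(p), q1, q0) == 0)
               for p in product(VALUES, repeat=N))

def symmetric(q1, q0):
    """Same effective acceptance threshold for every electorate size m."""
    return all(ceil(q1 * m) == ceil(q0 * m) for m in range(1, N + 1))

# admissible quota pairs: 0 < q <= 1 and q1 + q0 > 1 (the paper's constraints)
PAIRS = [(Fraction(a, 6), Fraction(b, 6))
         for a in range(1, 7) for b in range(1, 7)
         if Fraction(a, 6) + Fraction(b, 6) > 1]
```

Running `all(unbiased(q1, q0) == symmetric(q1, q0) for q1, q0 in PAIRS)` returns `True` on this grid; exact `Fraction` quotas avoid any floating-point artefact in the ceiling comparison.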
\begin{corollary}\label{thm:MayAdaptedNEW}
Let $\S$ be a BA structure. An aggregator $F$ is a symmetric quota rule if and only if it is independent, anonymous, monotonic, responsive, and unbiased.
\end{corollary}
\begin{proof}
This follows directly from Theorem \ref{thm:quotarules} and Fact \ref{fact:symm}.
\end{proof}
\begin{corollary}
An aggregator $F$ is the majority aggregator $\mathsf{maj}$ if and only if it is an anonymous, independent, monotonic, responsive, and unbiased aggregator which minimizes abstention.
\end{corollary}
\begin{proof}
\ldots
\end{proof}
\subsection{The proxy majority without abstention}
The standard form of proxy voting can be modeled by the following proxy aggregator based on issuewise majority:
\begin{align}
\mathsf{pv}_{\mathsf{maj}}(\O)(p) = 1 & \Longleftrightarrow \sum_{i \in \O(p)} w^\O_i(p) \geq \frac{|N|+1}{2} \label{eq:proxymaj}
\end{align}
That is, an issue is accepted by proxy majority in profile $\O$ if and only if the sum of the weights of the agents who accept $p$ in $\O$ meets the majority quota. Just like majority, proxy majority is not collectively consistent. In general, given an (anonymous and independent) aggregator $F$, its proxy variant can be defined as in \eqref{eq:proxymajast} for $\mathsf{maj}$.
Note that the notation introduced above allows us to define the proxy majority rule $\mathsf{pv}_{\mathsf{maj}}$ in a simpler way:
\begin{align}
\mathsf{pv}_{\mathsf{maj}}(\O)(p) = 1 & \Longleftrightarrow |\{i \in N | O_{g_i}(p)={\bf 1} \}| \geq \frac{|N|+1}{2} \label{eq:proxymajsimple}
\end{align}
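The computation behind \eqref{eq:proxymajsimple} can be sketched as follows. This is a minimal illustration with our own encoding, not the paper's notation: per issue, each agent either holds an opinion or names a delegate, and agents caught in a delegation cycle have no guru and are discounted.

```python
def guru(i, delegate):
    """Follow delegations from agent i to a terminal agent (i's guru);
    return None when i sits on, or points into, a delegation cycle."""
    seen = set()
    while delegate.get(i) is not None:
        if i in seen:          # cycle detected: no guru
            return None
        seen.add(i)
        i = delegate[i]
    return i

def proxy_majority(agents, delegate, opinion):
    """pv_maj: accept iff the gurus of at least (|N|+1)/2 agents accept."""
    support = sum(1 for i in agents
                  if (g := guru(i, delegate)) is not None and opinion.get(g) == 1)
    return 1 if support >= (len(agents) + 1) / 2 else 0
```

For instance, with five agents where agents 2 and 3 delegate to agent 1, agent 1's single acceptance carries weight 3 and wins against two direct rejections.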
\subsection{Our initial axiomatic properties for proxy aggregators}
We start by defining proxy variants of well-known properties of aggregators from the judgment aggregation literature. Let us first, however, introduce some auxiliary notation and terminology.
We start by defining a notion of opinion permutation for proxy profiles. Indeed, the permutation of a proxy profile cannot simply be defined by `reshuffling' the individual proxy opinions in the profile.
We therefore introduce a notion of issue-by-issue opinion permutation which is relative to the connected components of the delegation graphs and preserves the rationality of the individual opinions, i.e., an agent $i$ may switch opinions on two different issues $p$ and $q$ with two different agents $j$ and $k$ respectively, as long as: 1) there is a path between $i$ and $j$ in the delegation graph $G_p$ and a path between $i$ and $k$ in the delegation graph $G_q$, and 2) the resulting profile satisfies the individual rationality constraint.
\begin{definition}[issuewise connectedness preserving permutation]
Given an issue $p\in P$, a $p$-connectedness preserving permutation is a bijection $\mu_p: N\rightarrow N$, such that, for all $i\in N$, $iR^*_{p}\mu_p(i)$ or $\mu_p(i)R^*_pi$.
\end{definition}
\begin{definition}[rational proxy permutation]
Given a proxy profile $\O$, a rational proxy opinion permutation $\mu:\mathcal{P}^N\to\mathcal{P}^N$ is a function where $\mu(\O) = \tuple{O^\mu_1, \ldots, O^\mu_n}$ is such that, for all $i\in N$, and all $p\in{\bf P}$:
\begin{description}
\item $O^\mu_i(p) = O_{\mu_p(i)}(p)$,
\item $\mu_p$ is a $p$-connectedness preserving permutation, and
\item $O^\mu_i$ satisfies the formulas in \eqref{eq:ir}.
\end{description}
\end{definition}
\begin{definition}
A proxy aggregator $\mathsf{pv}:\mathcal{P}^N \to \mathcal{O}^\ast$ is said to be:
\begin{description}
\item[unanimous] iff for all $p\in {\bf P}$, all proxy profiles $\O$, and all $x \in \{{\bf 0},{\bf 1}\}$: if $O_{g_p(i)}(p) = x$ for all $i\in N$, then $\mathsf{pv}(\O)(p)=x$; and if $g_p(i)=\emptyset$ for all $i\in N$, then $\mathsf{pv}(\O)(p)=\ast$. I.e., when all gurus agree, their opinion on an issue becomes the group's opinion on it, and when there are no gurus for this issue, the group abstains on it.
\item[anonymous] iff for any rational proxy permutation $\mu: \mathcal{P}^N\rightarrow\mathcal{P}^N$, $\mathsf{pv}(\O)=\mathsf{pv}(\O^\mu)$. I.e., rationally permuting proxy opinions among individuals does not affect the output of the aggregator.
\item[$p$-dictatorial] iff there exists $i \in N$ (the {\em $p$-dictator}) s.t. for any proxy profile $\O$, and all $x \in \set{{\bf 0},{\bf 1}}$, $\mathsf{pv}(\O)(p)= x$ iff $g_p(i) \neq \emptyset$ and $O_{g_p(i)}(p) = x$.
I.e., there exists an agent whose guru's opinion always determines the group's opinion on $p$. If the aggregator is $p$-dictatorial, with the same dictator on all issues $p \in {\bf P}$, then it is called {\bf dictatorial}.
\item[$p$-oligarchic] iff there exists $C \subseteq N$ (the {\em $p$-oligarchs}) s.t. $C\neq\emptyset$ and for any proxy profile $\O$ and any value $x \in \set{{\bf 0},{\bf 1}}$: $\mathsf{pv}(\O)(p) = x $ iff for all $i\in C$, $g_p(i)\neq\emptyset$ and $O_{g_p(i)}(p)=x$. I.e., there exists a group of agents whose gurus' opinion always determines the group's opinion on $p$. If the aggregator is $p$-oligarchic, with the same oligarchs on all issues $p \in {\bf P}$, then it is called {\bf oligarchic}.
\item[monotonic] iff, for all $p\in {\bf P}$ and all $i\in N$: for any proxy profiles $\O, \O'$, if $\O =_{-i} \O'$ and $O_i(p)\neq {\bf 1}$ and $O'_i(p)={\bf 1}$, then: if $\mathsf{pv}(\O)(p)={\bf 1}$, then $\mathsf{pv}(\O')(p)={\bf 1}$.
\item[independent] iff, for all $p\in {\bf P}$, for any proxy profiles $\O, \O'$: if for all $i\in N, O_{g_p(i)}(p) = O'_{g'_p(i)}(p)$, then $\mathsf{pv}(\O)(p)=\mathsf{pv}(\O')(p)$.
\item[neutral] iff, for all $p,q \in {\bf P}$, for any proxy profile $\O$: if for all $i\in N$, $O_{g_p(i)}(p)=O_{g_q(i)}(q)$, then $\mathsf{pv}(\O)(p)=\mathsf{pv}(\O)(q)$.
\item[systematic] iff it is neutral and independent.
\item[unbiased] iff for all $p \in {\bf P}$, for any proxy profiles $\O, \O'$: if for all $i\in N$, $O_{g_p(i)}(p)= {\bf 1}$ iff $O'_{g_p(i)}(p)={\bf 0}$, then $\mathsf{pv}(\O)(p)={\bf 1}$ iff $\mathsf{pv}(\O')(p)={\bf 0}$.
\item[collectively rational] iff for any proxy profile $\O$, $\mathsf{pv}(\O) \models \gamma$. I.e., if the aggregator preserves the constraint $\gamma$.
\end{description}
\end{definition}
It is worth briefly commenting on the above definitions. As in the classic case, anonymity implies non-dictatorship. A weaker form of dictatorship would require the opinion of the dictator to become the collective opinion only in those cases in which the dictator herself is a guru. This would be the case when one agent is a guru for more than half of the population.
\section{To-do-list}
\subsection{Properties on Network-Invariant Profiles}
Let us assume that we restrict the set of possible proxy profiles to the ones with the same delegation structures (with respect to each issue). In that case, proxy majority becomes $p$-oligarchic---the oligarchs being the set of gurus such that more than half of the voters are related to any of them via $R^*_p$. Proxy majority would be oligarchic only if there existed a unique group of agents such that, for each opinion profile, \emph{for any issue $p$}, they were ``$p$-oligarchs'', which is not the case.
However, if we further restrict our set of possible opinion profiles to the ones with a unique fixed network (i.e., the same network for all issues), then proxy majority becomes oligarchic.
\ldots
\subsubsection{Oligarchy}
An aggregator is said to be \emph{oligarchic} if there exists a non-empty set of agents (the oligarchs) such that, for any opinion profile, an issue is collectively accepted if and only if it is accepted by all the oligarchs. Majority proxy voting is not oligarchic in general.
\subsubsection{Isomorphic networks}
Going in the opposite direction, if we restrict our set of possible opinion profiles to the ones with isomorphic network structures (OR TO STRUCTURES WHICH ARE SIMILAR IN THE SENSE OF THE ABOVE AD HOC CONDITION), then we obtain a \emph{random issuewise oligarchy}. To make majority proxy voting properly oligarchic would require adding a stronger constraint to the set of opinion profiles: there is a unique set of agents such that for all opinion profiles, those agents are followed by more than half the population with respect to \emph{all} issues.
\subsubsection{Standard impossibility results}
The above defined opinion profiles can be mapped into a (non-proxy) opinion profile with abstention, as follows: For any proxy opinion profile, any agent and any issue $p$, if the agent is related via $R^*_p$ to an agent with opinion $0$ or $1$, then this agent is assigned opinion $0$ (respectively $1$) on issue $p$, otherwise this agent is abstaining on $p$.
Therefore, standard impossibility results for aggregation with abstention apply: every aggregation function that is independent and unanimous must be weakly oligarchic.
\subsection{Truth-tracking Behavior}
Condorcet's Jury Theorem shows that, for individuals whose probability of choosing the right alternative (among two) is above 0.5, the probability of a correct collective decision under the majority rule is higher than the individual one and approaches 1 as the size of the group increases.
When considering proxy voting, if we assume that all agents have the same probability of choosing correctly, then they would be better off without any delegation, as delegating reduces the number of impactful decision makers.
To make delegation truth-tracking, one thus needs to assume that when agents choose to delegate, they do so rather wisely too: they are more likely than not to delegate to someone who chooses more accurately than themselves (i.e., there is a probability above 0.5 that their delegate has a higher probability of making the right choice than they do).
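This argument can be made quantitative with exact binomial calculations. In the sketch below the numbers (101 voters, competence 0.6, 50 delegators, guru competence 0.7) are our illustrative choices, and the uniform-delegation scenario approximates the weighted vote by a simple majority among the 51 gurus:

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a simple majority of n independent voters,
    each correct with probability p, selects the right alternative
    (n is taken odd, so ties cannot occur)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.6
acc_direct  = majority_accuracy(101, p)   # everyone votes directly
acc_uniform = majority_accuracy(51, p)    # 50 voters delegate evenly to the
                                          # other 51: fewer impactful deciders,
                                          # approximated by a majority of gurus
acc_wise    = majority_accuracy(51, 0.7)  # same delegation, but gurus are more
                                          # competent than the average voter
```

With these values, uniform delegation among equally competent agents lowers the group's accuracy below the direct vote, while delegation to more competent gurus raises it above, which is exactly the trade-off described in the text.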
\begin{theorem}[Conjecture]
An independent aggregator $F: (\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ is anonymous if and only if the corresponding proxy aggregator $\mathsf{pv}_F:\mathcal{P}^N \to \mathcal{O}^\ast$ is.
\end{theorem}
\begin{proof}
\begin{comment}
$\Leftarrow$:
Let $\mathsf{pv}_F:\mathcal{P}^N \to \mathcal{O}^\ast$ be an anonymous proxy aggregator.
Then, for any rational proxy permutation $\nu: \mathcal{P}^N\rightarrow\mathcal{P}^N$, $\mathsf{pv}(\O)=\mathsf{pv}(\O^\mu)$, i.e., any function where $\nu(\O) = \tuple{O^\nu_1, \ldots, O^\nu_n}$ is such that, for all $i\inN$, and all $p\in{\bf P}$: i) $O^\nu_i(p) = O_{\nu_p(i)}(p)$, ii) $\nu_p$ is a $p$-connectedness preserving permutation (a bijection $\nu_p: N\rightarrowN$, such that, for all $i\inN$, $iR^*_{p}\nu_p(i)$ or $\nu_p(i)R^*_pi$.) ; and iii) $O^\nu_i$ satisfies the formulas in \eqref{eq:ir}.
$\Rightarrow$:
Let $F: (\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ be an independent and anonymous aggregator. Then, for any bijection $\mu: N\rightarrowN$, $F(\O)=F(\O^\mu)$, where $\O^\mu = \tuple{O_{\mu(1)}, \ldots, O_{\mu(n)}}$.
Proxy aggregator $\mathsf{pv}_F:\mathcal{P}^N \to \mathcal{O}^\ast$ is anonymous iff for any rational proxy permutation $\nu: \mathcal{P}^N\rightarrow\mathcal{P}^N$, $\mathsf{pv}(\O)=\mathsf{pv}(\O^\mu)$, i.e., any function where $\nu(\O) = \tuple{O^\nu_1, \ldots, O^\nu_n}$ is such that, for all $i\inN$, and all $p\in{\bf P}$:
\begin{description}
\item $O^\nu_i(p) = O_{\nu_p(i)}(p)$,
\item $\nu_p$ is a $p$-connectedness preserving permutation (a bijection $\nu_p: N\rightarrowN$, such that, for all $i\inN$, $iR^*_{p}\nu_p(i)$ or $\nu_p(i)R^*_pi$.)
\item $O^\nu_i$ satisfies the formulas in \eqref{eq:ir}.
\end{description}
\end{comment}
\ldots
\end{proof}
\begin{theorem}[The case of the majority rule]
Let ${\O}$ be a proxy profile, and $t$ be the above-defined translation function from a proxy profile to an opinion profile.
Then, $\mathsf{pv}_{\mathsf{maj}_\ast}(\O)=\mathsf{maj}_\ast(t(\O))$.
\end{theorem}
\begin{proof}
Let $p\in {\bf P}$ be arbitrary, and $x\in\{0,1\}$. Then:
\begin{align*}
& \mathsf{pv}_{\mathsf{maj}_\ast}(\O)(p)=x & \mbox{ iff } \\
& |\{i \in N | O_{g_p(i)}(p)=x\}| > |\{i \in N | O_{g_p(i)}(p)=(1-x)\}|& \mbox{ iff } \\
& |\{i \in N | t(O_i(p))= x \}| > |\{i \in N | t(O_i(p))= (1-x)\}|& \mbox{ iff } \\
& \mathsf{maj}_\ast(t(\O))(p)=x. &
\end{align*}
And therefore:
\begin{align*}
\mathsf{pv}_{\mathsf{maj}_\ast}(\O)(p)=\ast \mbox { iff } \mathsf{pv}_{\mathsf{maj}_\ast}(\O)(p)\notin \{0,1\} \mbox { iff } \\
\mathsf{maj}_\ast(t(\O))(p) \notin \{0,1\} \mbox { iff } \mathsf{maj}_\ast(t(\O))(p) = \ast.\\
\end{align*}
\end{proof}
The same proof goes through when the majority threshold is replaced by any other quota, showing that the correspondence holds for any similarly defined proxy quota rule.
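The commutation $\mathsf{pv}_{\mathsf{maj}_\ast} = \mathsf{maj}_\ast \circ t$ can be verified exhaustively on small instances. The sketch below uses our own encoding (opinions are the strings `'1'`, `'0'`, `'*'`, and a delegation is the integer id of the delegate) and enumerates all $5^3 = 125$ single-issue proxy profiles for three agents:

```python
from itertools import product

AGENTS = (0, 1, 2)

def guru(i, choice):
    """Terminal delegate of agent i, or None on a delegation cycle."""
    seen = set()
    while isinstance(choice[i], int):      # an int entry means "delegate to"
        if i in seen:
            return None
        seen.add(i)
        i = choice[i]
    return i

def translate(choice):
    """The translation t: proxy profile -> ordinary profile with abstention."""
    return {i: ('*' if (g := guru(i, choice)) is None else choice[g])
            for i in choice}

def maj_star(profile):
    """Issuewise majority with abstention on an ordinary profile."""
    n1 = sum(1 for o in profile.values() if o == '1')
    n0 = sum(1 for o in profile.values() if o == '0')
    return '1' if n1 > n0 else '0' if n0 > n1 else '*'

def pv_maj_star(choice):
    """Proxy majority with abstention, counting gurus' opinions directly."""
    counts = {'1': 0, '0': 0}
    for i in choice:
        g = guru(i, choice)
        if g is not None and choice[g] in counts:
            counts[choice[g]] += 1
    return ('1' if counts['1'] > counts['0']
            else '0' if counts['0'] > counts['1'] else '*')

def all_proxy_profiles():
    opts = [('1', '0', '*') + tuple(j for j in AGENTS if j != i) for i in AGENTS]
    for combo in product(*opts):
        yield dict(zip(AGENTS, combo))
```

Iterating `pv_maj_star(c) == maj_star(translate(c))` over all profiles returns `True` throughout, as the theorem requires; delegation cycles are translated to abstentions on both sides.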
\begin{theorem}[The case of dictatorships]
Let ${\O}$ be a proxy profile, and $t$ be the above-defined standard translation from a proxy profile to an opinion profile. Let $dic_\ast$ be a dictatorial aggregator and $\mathsf{pv}_{dic_\ast}$ be a dictatorial proxy aggregator.
Then, $\mathsf{pv}_{dic_\ast}({\O})=dic_\ast(t({\O}))$.
\end{theorem}
\begin{proof}
Let $p\in {\bf P}$ be arbitrary, agent $i$ be a proxy dictator, and $x\in\{{\bf 0},{\bf 1}\}$.
\begin{align*}
\mathsf{pv}_{dic_\ast}({\O})(p)=x & \mbox{ iff} \\
O_{g_p(i)}(p)=x & \mbox{ iff} \\
t(O_i(p))= x & \mbox{ iff} \\
dic_\ast(t({\O}))(p)=x &
\end{align*}
And therefore:
\begin{align*}
\mathsf{pv}_{dic_\ast}(\O)(p)=\ast \mbox { iff } \mathsf{pv}_{dic_\ast}(\O)(p)\notin \{0,1\} \mbox { iff } \\
dic_\ast(t(\O))(p) \notin \{0,1\} \mbox { iff } dic_\ast(t(\O))(p) = \ast.\\
\end{align*}
\end{proof}
\subsection{Proxy Majority}
Our variant of May's characterization theorem \cite{May_1952} of majority voting (Theorem \ref{thm:MayAdapted}) now applies to proxy majority:
\begin{theorem}[Corollary]
Let $\S$ be a BA structure where $\gamma$ is simple.
A proxy aggregator $\mathsf{pv}$ is the proxy majority $\mathsf{pv}_{\mathsf{maj}_\ast}$ if and only if it is rational, monotonic, anonymous, responsive, and unbiased. TO BE CORRECTED.
\end{theorem}
\begin{proof}
This follows directly from Theorem \ref{thm:MayAdapted} STILL TO BE CORRECTED and Definition \ref{}.
\end{proof}
\subsection{Impossibility in Proxy Voting}
We obtain the corresponding impossibility result (analogous to the result for binary judgment aggregation with abstention):
\begin{theorem}[proxy impossibility]
Let $\S$ be a BA structure such that the agenda induced by $\gamma$ is path-connected and evenly negatable.
Then if a proxy aggregator $\mathsf{pv}: \mathcal{P}^N \to \mathcal{O}^\ast$ is independent, unanimous and collectively rational, then it is oligarchic.
\end{theorem}
\begin{comment}
\begin{proof}
Assume that for all $p\in {\bf P}$, for all $S\subseteqN$ such that $S$ is a cycle in $G_{p}$, for all $i,j\in S$: $O_i(p)=O_j(p)$.
Consider an arbitrary $p\in {\bf P}$, and an arbitrary $i\in N$. Let $k$ be the distance from $i$ to $l$, where $l$ is the closest agent in a cycle $S\subseteqN$ of $G_p$. We show that for any such $k\in\mathbb{N}$, there exists an $n \in \mathbb{N}$, such that $O^{n}_ i(p)$ is stable.
\begin{itemize}
\item If $k=0$: $i\in S$, hence $O_{i}(p)$ is stable.
\item if $k=n+1$: Assume that for agent $j$ at distance $n$ from $l$,
for some $m \in \mathbb{N}$, $O^{m}_ j(p)$ is stable. We need to consider the following cases:
\begin{enumerate}
\item If $\bigwedge_{p \in {\bf P}} O^{m}_{R_p(i)}(p) \wedge \bigwedge_{\varphi \in {\bf C}}$ is consistent,
then $O^{m+1}_i(p)=O^m_j(p)$, and $O^{m+1}_i(p)$ is stable.
\item If $\bigwedge_{p \in {\bf P}} O^{m}_{R_p(i)}(p) \wedge \bigwedge_{\varphi \in {\bf C}}$ is not consistent, then:
\begin{enumerate}
\item if $O^{m}_i(p)=O^m_j(p)$, then $O^{m}_i(p)$ is stable.
\item if $O^{m}_i(p) \neq O^m_j(p)$, then:
\begin{enumerate}
\item If there is a $t\in\mathbb{N}$ such that $\bigwedge_{p \in {\bf P}} O^{m+t}_{R_p(i)}(p) \wedge \bigwedge_{\varphi \in {\bf C}}$ is consistent, then $O^{m+t+1}_i(p) = O^{m}_j(p)$, and $O^{m+t+1}_i(p)$ is stable.
\item If there is no such $t$, then $i$'s opinion on $p$ will never change: $O^{m}_i(p)$ is stable.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{itemize}
\end{proof}
\end{comment}
\begin{comment} BELOW IS THE OLD DEFINITION, FOR MEMORY
\begin{definition}[Quota rules]
Let $\S$ be an aggregation structure. An aggregator $F:(\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ is a {\em quota rule} if there exist $q_{\bf 1}, q_{\bf 0} \in \mathbb{Q}$ such that, for $x \in \set{{\bf 0}, {\bf 1}}$:
\begin{align}
0<q_x\leq 1 \label{eq:nontrivial}, \\
q_x > 1 - q_{({\bf 1} - x)}, \label{eq:constraint}
\end{align}
and for any issue $p\in{\bf P}$, and any opinion profile $\O\in\mathcal{O}^\ast$:
\begin{align}\label{eq:quotarules}
F(\O)(p)=
\begin{cases}
{\bf 1} & \mbox{ if } |\O(p)| \geq q_{\bf 1} |\O(\pm p)|\\
{\bf 0} & \mbox{ if } |\O(\neg p)| \geq q_{\bf 0} |\O(\pm p)| \\
\ast & \mbox{ otherwise } \\
\end{cases}
\end{align} \label{eq:quota}
If $q_{\bf 1} = q_{\bf 0}$ then the quota rule is called {\em symmetric}.
\end{definition}
\end{comment}
Let $F$ be a quota rule. The following statements are equivalent:
\begin{enumerate}
\item $F$ is unbiased. (assumption)
\item For all $\O,\O'$ in $\mathcal{O}^\ast$ such that, for all $i\in N$, $O_i(p)={\bf 1}$ iff $O'_i(p)={\bf 0}$:
$|\O(p)|\geq \lceil q_{\bf 1} |\O(\pm p)| \rceil$ iff
$|\O'(\neg p)|\geq \lceil q_{\bf 0} |\O'(\pm p)| \rceil$. (definitions of unbiasedness and quota rules)
\item $\lceil q_{\bf 1} |\O(\pm p)| \rceil = \lceil q_{\bf 0} |\O(\pm p)| \rceil$ for all $\O\in\mathcal{O}^\ast$ (arithmetic)
\item $F$ is symmetric. (definition of symmetric quota rule)
\end{enumerate}
Assume $F$ is a symmetric quota rule: for all $\O\in\mathcal{O}^\ast$, $\lceil q_{\bf 1} |\O(\pm p)| \rceil = \lceil q_{\bf 0} |\O(\pm p)| \rceil$. Assume that $q_{\bf 1}\leq\frac{1}{2}$. Then, by constraint \eqref{eq:constraint}, $q_{\bf 0} > \frac{1}{2}$, and $F$ is not symmetric. Contradiction. Hence, $q_{\bf 1}>\frac{1}{2}$. And by the same reasoning, $q_{\bf 0}>\frac{1}{2}$. Two cases:
\begin{itemize}
\item[i] $F$ is not $\mathsf{maj}$: $q_{\bf 1}, q_{\bf 0}>\frac{1}{2}+\frac{1}{2|N|}$.
Then $\{\O\in\mathcal{O}^\ast | F(\O)(p)= {\bf 1} \} \subset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= {\bf 1} \}$ and $\{\O\in\mathcal{O}^\ast | F(\O)(p)= {\bf 0} \} \subset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= {\bf 0} \}$, therefore $\{\O\in\mathcal{O}^\ast | F(\O)(p)= \ast \} \supset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= \ast \}$. Therefore, $F$ does not minimize abstention.
\item[ii] $F$ is $\mathsf{maj}$: $\frac{1}{2}<q_{\bf 1},q_{\bf 0} \leq \frac{1}{2}+\frac{1}{2|N|}$. Assume that for any symmetric quota rule $F'$ with quotas $\lceil q'_{\bf 1} |\O(\pm p)| \rceil = \lceil q'_{\bf 0} |\O(\pm p)| \rceil $: $|\{\O\in\mathcal{O}^\ast | F'(\O)(p)= \ast \}| \leq |\{\O\in\mathcal{O}^\ast \mid \mathsf{maj}(\O)(p)=\ast \}|$. Then, by the above reasoning, we know that $\frac{1}{2}<q'_{\bf 1},q'_{\bf 0}\leq \frac{1}{2}+\frac{1}{2|N|}$, and $F'$ is just $\mathsf{maj}$. Therefore, $F$ minimizes abstention.
\end{itemize}
Hence, $F$ minimizes abstention iff $F$ is $\mathsf{maj}$.
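A concrete count illustrates this argument (four agents, one issue, our own encoding; quotas are exact fractions to avoid rounding artefacts): every symmetric quota rule whose quota exceeds $\frac{1}{2}+\frac{1}{2|N|}$ produces strictly more abstaining profiles than majority.

```python
from fractions import Fraction
from itertools import product
from math import ceil

N = 4
VALUES = (1, 0, '*')

def symmetric_quota_rule(profile, q):
    """Symmetric quota rule with quota q on a single issue."""
    n1 = sum(1 for o in profile if o == 1)
    n0 = sum(1 for o in profile if o == 0)
    m = n1 + n0
    if m == 0:
        return '*'
    t = ceil(q * m)                 # the effective acceptance threshold
    return 1 if n1 >= t else 0 if n0 >= t else '*'

def abstaining_profiles(q):
    """How many of the 3^N profiles the rule maps to * on this issue."""
    return sum(symmetric_quota_rule(p, q) == '*'
               for p in product(VALUES, repeat=N))
```

With $|N|=4$, any quota in $(\frac{1}{2}, \frac{5}{8}]$ reproduces majority (abstention only on ties and on the all-abstain profile), whereas $q=\frac{3}{4}$ and $q=1$ abstain strictly more often.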
\begin{comment}
\fbox{Left-to-right}
Assume $F$ is $\mathsf{maj}$.
Assume, for contradiction, that there exists a quota rule $F'$ with quotas $\lceil q'_{\bf 1} \times |\O(\pm p)| \rceil = \lceil q'_{\bf 0} \times |\O(\pm p)| \rceil $ such that $|\{\O\in\mathcal{O}^\ast | F'(\O)(p)= \ast \}| < |\{\O\in\mathcal{O}^\ast \mid \mathsf{maj}(\O)(p)=\ast \}|$. Two cases:
\begin{itemize}
\item[i] $q_{\bf 1}'\leq \frac{1}{2}$. Then by constraint \eqref{eq:constraint}, $q'_{\bf 0} > \frac{1}{2}$, and $F'$ is not symmetric. Contradiction.
\item[ii] $q_{\bf 1}'>\frac{1}{2}+\frac{1}{2|N|}$.
Then $|\{\O\in\mathcal{O}^\ast | F'(\O)(p)= {\bf 1} \} \subset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= {\bf 1} \}$ and $\{\O\in\mathcal{O}^\ast | F'(\O)(p)= {\bf 0} \} \subset \{\O\in\mathcal{O}^\ast | F(\O)(p)= {\bf 0} \}$, therefore $\{\O\in\mathcal{O}^\ast | F'(\O)(p)= \ast \} \supset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= \ast \}$. Contradiction.\footnote{Another way to state this is: for all profiles $\O\in\mathcal{O}^\ast$, if $F'(\O)(p)={\bf 1}$, then $\mathsf{maj}(\O)(p)={\bf 1}$, and if $F'(\O)(p)={\bf 0}$, then $\mathsf{maj}(\O)(p)={\bf 0}$, but there exists some profiles where $p^\pm=N$ and $|O(p)|=\frac{1}{2}+\frac{1}{|N|}$, for which $\mathsf{maj}(\O)(p)={\bf 1}$ but $F'(\O)(p)=\ast$.}
\end{itemize}
Therefore, $\frac{1}{2}< q_{\bf 1}', q_{\bf 0}'\leq\frac{1}{2}+\frac{1}{2|N|}$, which implies that $F'(\O)(p)=\ast$ iff $\mathsf{maj}(\O)(p)=\ast$. Hence $|\{\O\in\mathcal{O}^\ast | F'(\O)(p)= \ast \}| = |\{\O\in\mathcal{O}^\ast \mid \mathsf{maj}(\O)(p)=\ast \}|$. Contradiction.
\fbox{Right-to-left}
Assume $F$ is a quota rule with $\lceil q_{\bf 1} \times |\O(\pm p)| \rceil = \lceil q_{\bf 0} \times |\O(\pm p)| \rceil $ which minimizes abstention. Assume, for contradiction, that $F$ is not $\mathsf{maj}$. Two cases:
\begin{itemize}
\item[i] $q_{\bf 1}\leq\frac{1}{2}$. Then, by \eqref{eq:constraint} $q_{\bf 0} \geq \frac{1}{2}$ and $F$ is not symmetric.
Contradiction.
\item[ii] $q_{\bf 1}>\frac{1}{2}+\frac{1}{2|N|}$. Then $|\{\O\in\mathcal{O}^\ast | F(\O)(p)= {\bf 1} \} \subset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= {\bf 1} \}$ and $\{\O\in\mathcal{O}^\ast | F(\O)(p)= {\bf 0} \} \subset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= {\bf 0} \}$, therefore $\{\O\in\mathcal{O}^\ast | F(\O)(p)= \ast \} \supset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= \ast \}$. Contradiction.
\end{itemize}
Hence, $\frac{1}{2}<q_{\bf 1}=q_{\bf 0} \leq \frac{1}{2}+\frac{1}{2|N|}$. Therefore, $F$ is $\mathsf{maj}$.
\medskip
\fbox{In one go}
Assume $F$ is a symmetric quota rule: for all $\O\in\mathcal{O}^\ast$, $\lceil q_{\bf 1} \times |\O(\pm p)| \rceil = \lceil q_{\bf 0} \times |\O(\pm p)| \rceil$. Assume that $q_{\bf 1}\leq\frac{1}{2}$. Then, by constraint \eqref{eq:constraint} $q_{\bf 0} \geq \frac{1}{2}$ and $F$ is not symmetric. Contradiction. Hence, $q_{\bf 1}>\frac{1}{2}$. By the same reasoning, $q_{\bf 0}>\frac{1}{2}$).Two cases:
\begin{itemize}
\item[i] $F$ is not $\mathsf{maj}$. Then, $q_{\bf 1}>\frac{1}{2}+\frac{1}{2|N|}$ or $q_{\bf 0}>\frac{1}{2}+\frac{1}{2|N|}$. Then $|\{\O\in\mathcal{O}^\ast | F(\O)(p)= {\bf 1} \} \subset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= {\bf 1} \}$ or $\{\O\in\mathcal{O}^\ast | F(\O)(p)= {\bf 0} \} \subset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= {\bf 0} \}$, therefore $\{\O\in\mathcal{O}^\ast | F(\O)(p)= \ast \} \supset \{\O\in\mathcal{O}^\ast | \mathsf{maj}(\O)(p)= \ast \}$. Therefore, $F$ does not minimize abstention.
\item[ii] $F$ is $\mathsf{maj}$. Then, $\frac{1}{2}<q_{\bf 1},q_{\bf 0} \leq \frac{1}{2}+\frac{1}{2|N|}$. Assume that for some symmetric quota rule $F'$ with quotas $\lceil q'_{\bf 1} \times |\O(\pm p)| \rceil = \lceil q'_{\bf 0} \times |\O(\pm p)| \rceil $: $|\{\O\in\mathcal{O}^\ast | F'(\O)(p)= \ast \}| \leq |\{\O\in\mathcal{O}^\ast \mid \mathsf{maj}(\O)(p)=\ast \}|$. Then, by the above reasoning, we know that $\frac{1}{2}<q'_{\bf 1},q'_{\bf 0}\leq \frac{1}{2}+\frac{1}{2|N|}$, and $F'$ is $\mathsf{maj}$. Therefore, $F$ minimizes abstention.
\end{itemize}
Hence, $F$ minimizes abstention iff $\frac{1}{2}<q_{\bf 1},q_{\bf 0} \leq \frac{1}{2}+\frac{1}{2|N|}$ iff $F$ is $\mathsf{maj}$.
\end{comment}
\begin{comment} HERE IS THE WHOLE THEOREM ADAPTED TO PROXY VERSION
\begin{theorem}\label{thm:quotarules}
Let $\S$ be a BA structure, let $F: (\mathcal{O}^\ast_{c})^N \to \mathcal{O}^\ast$ be an aggregator and let $\mathsf{pv}_F = t\circ F$, with $t$ the abovedefined translation function. Then, the following three claims hold:
\begin{enumerate}
\item $\mathsf{pv}_F$ is a proxy quota rule if and only if $F$ is anonymous, independent, monotonic, and responsive.
\item $\mathsf{pv}_F$ is a symmetric proxy quota rule if and only if $F$ is an unbiased proxy quota rule.
\item $\mathsf{pv}_F$ is the proxy majority rule $\mathsf{pv}_\mathsf{maj}$ if and only if $F$ is a symmetric proxy quota rule which minimizes collective abstention.
\end{enumerate}
\end{theorem}
\end{comment}
\section{Introduction}
The nature of phase separation and criticality in ionic fluids with
the dominant Coulomb interactions (e.g., molten salts and
electrolytes in solvents of low dielectric constant) has been an
outstanding experimental and theoretical issue for many years.
Electrostatic correlations are also known to play an important role
in many other technologically relevant systems such as
charge-colloidal suspensions, room-temperature ionic liquids and
micellar solutions of ionic surfactants.
Now, a generally accepted idea is that the gas-liquid and
liquid-liquid critical points in ionic fluids belong to the
universality class of the three-dimensional Ising model
\cite{Gutkowskii-Anisimov,Sengers_Shanks:09,Schroer:12}.
Nevertheless, the crossover from the mean-field-like behavior to
the Ising model criticality when approaching the critical point
remains a challenging problem for theory, simulations and
experiments \cite{Sengers_Shanks:09,Schroer:12}.
The most commonly studied theoretical model of ionic fluids is a restricted
primitive model (RPM), which consists of equal numbers of equisized positively and negatively charged
hard spheres immersed in a structureless dielectric continuum. The
RPM undergoes a gas-liquid-like phase transition at low
temperature and low density
\cite{stell1,levin-fisher,Cai-Mol1,patsahan_ion}.
Theoretical
\cite{patsahan:04:1,ciach:06:1,Parola-Reatto:11} and
numerical
\cite{caillol-levesque-weis:02,Hynnien-Panagiotopoulos,luijten,kim-fisher-panagiotopoulos:05}
investigations of the gas-liquid criticality in the RPM have provided
strong evidence for an Ising universal class. However,
an issue of the width of the critical region was not addressed in these works.
On the other hand, the Ginzburg criterion
\cite{levanyuk,ginzburg,Chaikin_Lubensky} was used in
Ref.~\cite{Fisher_Levin:93,evans,fisher3,schroer_1,schroer} in order
to study the crossover from the mean-field to the asymptotic
regime, but
the obtained results failed to give a clear answer to the question
of the extent of the crossover region in the model.
Recently, using the method of collective variables (CVs)
\cite{zubar,jukh,Yuk-Hol,Pat-Mryg-CM}, we have derived the
Landau-Ginzburg (LG) Hamiltonian for the model of ionic fluids which
includes, besides Coulomb interactions, short-range attractive
interactions \cite{Patsahan:13}. An important feature of the
developed approach is that it enables us to obtain all the
relevant coefficients, including the square-gradient term, within
the framework of the same approximation. The Ginzburg temperature
for the RPM, calculated using this theory turned out to be about
$20$ times smaller than for a one-component nonionic model.
Furthermore, the results obtained for the RPM supplemented by
short-range attractive interactions have shown that the Ginzburg
temperature approaches the value found for the RPM when the
strength of Coulomb interactions becomes sufficiently large. These
results suggest the key role of Coulomb interactions in the
reduction of the crossover region. Nevertheless, the study of the
effect of an interaction range on the Ginzburg temperature is needed
in order to gain a better understanding of the crossover behavior in
ionic fluids.
In the present work, we extend the theory to the binary ionic model with screened Coulomb interactions.
Specifically, we consider a two-component system of particles
labeled $1$ and $2$, such that the interaction potential between a
particle of species $\alpha$ and one of the species $\beta$ at a
distance $r$ apart is as follows:
\begin{eqnarray}
u_{\alpha\beta}(r) = \left\{
\begin{array}{ll}
\infty, & r<\sigma\\
(-1)^{\alpha+\beta}K\displaystyle\frac{\exp(-z(r/\sigma-1))}{r/\sigma},&
r\geqslant \sigma
\end{array}
\right. \,,
\label{int-YRPM}
\end{eqnarray}
where $\alpha,\beta=(1,2)$. For $K>0$, Eq.~(\ref{int-YRPM})
describes a symmetrical mixture of hard spheres of the same diameter
$\sigma$ in which the like particles interact through a repulsive
Yukawa potential for $r>\sigma$, and the unlike particles interact
through the opposite attractive Yukawa potential for $r>\sigma$. We
restrict our consideration to the case where the number densities
of species $1$ and $2$ are the same, i.e.,
$\rho_{1}=\rho_{2}=\rho/2$. For $K=q^{2}/\epsilon$, the model
(\ref{int-YRPM}) is called a Yukawa restricted primitive model
(YRPM). In this case, $q_{+}=-q_{-}=q$ is the charge magnitude and
$\epsilon$ is the dielectric constant of the medium. In the limit
$z\rightarrow \infty$, the YRPM reduces to a hard sphere model
whereas the RPM is recovered by taking the limit $z\rightarrow 0$.
Thus, the YRPM can provide a basis for the study of the nature of
phase and critical behavior in ionic fluids and in partially ionic
fluids.
It is worth noting that the YRPM is often used to model a system of oppositely
charged colloids \cite{Leunissen,Hynninen-06,Fortini:06,Bier-10}. The effective
(screened) colloid-colloid interactions in such a system are due to
the presence of coions and counterions in the solvent. In this case,
$K$ and $z$ take the form:
$K/k_{B}T=Z^{2}\lambda_{B}/(1+\kappa_{D}\sigma/2)^{2}/\sigma$ and
$z=\kappa_{D}\sigma$, where
$\kappa_{D}=\sqrt{8\pi\lambda_{B}\rho_{s}}$ is the inverse Debye
screening length, $\lambda_{B}=e^{2}/\epsilon_{s}k_{B}T$ is the
Bjerrum length, $\rho_{s}$ is the salt concentration and
$\epsilon_{s}$ is the dielectric constant of the solvent. In a
colloid system, the range of interaction can be modified by
changing the salt concentration.
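The mapping above is easy to tabulate. In the sketch below all lengths are measured in units of the colloid diameter $\sigma$, and the parameter values in the example are purely illustrative:

```python
import math

def colloid_yukawa_parameters(Z, lam_B, rho_s, sigma=1.0):
    """Return (K/k_B T, z) from the screened-colloid expressions:
    kappa_D = sqrt(8 pi lam_B rho_s), z = kappa_D sigma, and
    K/k_B T = Z^2 lam_B / ((1 + kappa_D sigma / 2)^2 sigma)."""
    kappa_D = math.sqrt(8.0 * math.pi * lam_B * rho_s)
    z = kappa_D * sigma
    beta_K = Z**2 * lam_B / ((1.0 + z / 2.0) ** 2 * sigma)
    return beta_K, z
```

Quadrupling the salt concentration doubles $z$ (shortening the interaction range) and weakens the contact coupling $K/k_BT$, which is precisely how the range is tuned experimentally in a colloid system.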
Whereas the effect of an interaction range on the gas-liquid phase
separation of a simple one-component fluid has been extensively studied (see
Ref.~\cite{Mendoub} and references therein), as far as we know there are only a few
works addressing this issue for the case of the YRPM
\cite{Parola-Reatto:11,Fortini:06,Carvalho_Evans:97,Mier-Y-Teran}.
In particular, the evolution of the gas-liquid phase diagram of the YRPM as a
function of the interaction range was theoretically studied using
the integral equation methods \cite{Carvalho_Evans:97,Mier-Y-Teran}
and the hierarchical reference theory (HRT) \cite{Parola-Reatto:11}.
The results obtained from the generalized mean-spherical
approximation (GMSA) show that both the critical density and the
critical temperature increase above the corresponding values for the
RPM when $z$ increases \cite{Carvalho_Evans:97}. Moreover, the GMSA
predicts a nonmonotonic behavior of the critical temperature as a
function of $z$: the critical temperature attains a maximum at
$z\approx 4$. In Ref.~\cite{Carvalho_Evans:97}, the attention was
focused on several values of $z$, $z=0$, $1.5075$, $3$, $4$, $5$
and $6$, and the gas-liquid coexistence was found for all the listed
values. The highest value for which the gas-liquid coexistence was
found within the framework of integral equation methods is
$z=25$ with the MSA \cite{Mier-Y-Teran}. In
Ref.~\cite{Parola-Reatto:11} the main emphasis is made on the
critical behavior of the model.
Simulations predict a rich phase diagram involving a gas-liquid
phase separation as well as several crystalline phases, which is in
agreement with experimental confocal microscopy data for
charge-stabilized colloidal suspensions
\cite{Leunissen,Hynninen-06,Fortini:06,Bier-10}. These studies
indicate a sensitivity of the phase diagram of the YRPM to the variation of $z$.
Unlike theoretical predictions \cite{Carvalho_Evans:97,Mier-Y-Teran}, it is found \cite{Fortini:06} that the gas-liquid
separation is not stable with respect to gas-solid coexistence for
$z>4$.
The purpose of the present paper is to study the effects of the
interaction range on the gas-liquid phase diagram and the Ginzburg
temperature of the YRPM. To this end, following
Ref.~\cite{Patsahan:13}, we find analytical expressions for all the relevant coefficients of the Landau-Ginzburg (LG)
Hamiltonian in a one-loop approximation. Based on these expressions,
first we calculate the gas-liquid critical parameters, spinodals
and coexistence curves of the model for $0.001\leq z\leq 2.781$. Remarkably, there is no gas-liquid
critical point for $z\geq 2.782$ in the approximation considered. Our
discussion also involves an analysis of the dependence of the
coefficients of the effective Hamiltonian on the interaction
range. Applying the Ginzburg criterion, we find that the
reduced Ginzburg temperature decreases with an increase of the
interaction range, approaching the RPM value for $z\simeq 0.01$. The present analysis also
indicates the presence of a tricritical point at $z=2.781$.
The paper is organized as follows. A brief description of the
formalism is given in Sec.~2. The results for the gas-liquid phase
diagram and the critical parameters are presented in Sec.~3. In
Sec.~4 we discuss the effect of the interaction range on the crossover
behavior of the YRPM. Concluding remarks are made in Sec.~5.
\section{Theory}
\subsection{Functional representation of the grand partition function}
We start with the YRPM and present the interaction potential (\ref{int-YRPM}) in the form:
\begin{equation}
u_{\alpha\beta}(r)=\phi^{\mathrm{HS}}(r)+\phi_{\alpha\beta}^{Y}(r),
\label{2.1}
\end{equation}
where $\phi^{\mathrm{HS}}(r)$ is the interaction potential between
the two hard spheres of diameter $\sigma$. Thermodynamic and structural properties of the system interacting
through the potential $\phi^{\mathrm{HS}}(r)$ are assumed to be
known. Therefore, the one-component hard-sphere model is regarded as
the reference system. $\phi_{\alpha\beta}^{Y}(r)$ are the screened Coulomb potentials. Figure~1 shows the shape
of the interaction potentials
$\phi_{\alpha\beta}^{Y}(r)/K$ for different values of the inverse screening length.
\begin{figure}[h]
\centering
\includegraphics[height=6cm]{patsahan_fig1.eps}
\caption{Interaction potentials $\phi_{\alpha\beta}^{Y}(r)/K$ for different values of the inverse screening length $z$.
} \label{fig1}
\end{figure}
The model under consideration is at equilibrium in the grand
canonical ensemble; $\beta=(k_{B}T)^{-1}$ is the inverse
temperature, and $\nu_{\alpha}=\beta\mu_{\alpha}$ ($\nu_{\alpha}=\nu_{\beta}=\nu$) is the dimensionless chemical potential of the
$\alpha$th species. Using the collective variables (CV) method we present the
grand partition function of the model in the form of a
functional integral \cite{Patsahan:13,Pat-Mryg-CM}:
\begin{eqnarray}
&&
\Xi=\Xi_{\text{HS}}\exp\left[\Delta\nu_{N}\langle
N\rangle_{\text{HS}}\right]
\int ({\rm d}\rho)({\rm d}\omega)\exp\left[\Delta\nu_{N}\rho_{0,N}
-\frac{\beta}{2V}\sum_{{\mathbf k}}\widetilde \phi^{Y}(k)\rho_{{\mathbf k},Q}\rho_{-{\mathbf k},Q}
\right. \nonumber\\
&& \left. +{\rm
i}\sum_{{\mathbf k}}\left(\omega_{{\mathbf k},N}\rho_{{\mathbf
k},N}+\omega_{{\mathbf k},Q}\rho_{{\mathbf
k},Q}\right)+\sum_{n\geq 2}\frac{(-{\rm i})^{n}}{n!}\sum_{i_{n}\geq 0}^{n}
\sum_{{\mathbf{k}}_{1},\ldots,{\mathbf{k}}_{n}}
{\mathfrak{M}}_{n}^{(i_{n})}(k_{1},\ldots,k_{n})
\right. \nonumber\\
&& \left.
\times
\omega_{{\bf{k}}_{1},Q}\ldots\omega_{{\bf{k}}_{i_{n}},Q}\,\omega_{{\bf{k}}_{i_{n+1}},N}\ldots\omega_{{\bf{k}}_{n},N}
\delta_{{\bf{k}}_{1}+\ldots +{\bf{k}}_{n}}
\right].
\label{Xi_full_1}
\end{eqnarray}
Here, $\rho_{{\mathbf k},N}$
and $\rho_{{\mathbf k},Q}$ are the CVs which describe fluctuations
of the total number density and the charge density (or relative number density), respectively:
\begin{equation*}
\rho_{{\mathbf k},N}=\rho_{{\mathbf k},+}+\rho_{{\mathbf k},-},
\qquad \rho_{{\mathbf k},Q}=\rho_{{\mathbf k},+}-\rho_{{\mathbf
k},-}.
\end{equation*}
CV $\rho_{{\mathbf k},\alpha}=\rho_{{\mathbf k},\alpha}^c-{\rm
i}\rho_{{\mathbf k},\alpha}^s$ describes the value of the $\mathbf
k$-th fluctuation mode of the number density of the $\alpha$th
species, the indices $c$ and $s$ denote real and imaginary parts of
$\rho_{{\mathbf k},\alpha}$; CVs $\omega_{N}$ and $\omega_{Q}$ are
conjugate to $\rho_{N}$ and $\rho_{Q}$, respectively. $({\rm
d}\rho)$ and $({\rm d}\omega)$ denote volume elements of the CV
phase space:
\begin{displaymath}
({\rm d}\rho)=\prod_{A=N,Q}{\rm d}\rho_{0,A}{\prod_{\mathbf
k\not=0}}' {\rm d}\rho_{\mathbf k,A}^{c}{\rm d}\rho_{\mathbf
k,A}^{s}, \quad ({\rm d}\omega)=\prod_{A=N,Q}{\rm
d}\omega_{0,A}{\prod_{\mathbf k\not=0}}' {\rm d}\omega_{\mathbf
k,A}^{c}{\rm d}\omega_{\mathbf k,A}^{s}
\end{displaymath}
and the product over ${\mathbf k}$ is performed in the upper
semi-space ($\rho_{-\mathbf k,A}=\rho_{\mathbf k,A}^{*}$,
$\omega_{-\mathbf k,A}=\omega_{\mathbf k,A}^{*}$).
$\widetilde\phi^{Y}(k)$ is the Fourier transform of
the repulsive potential $\phi_{\alpha\alpha}^{Y}(r)=\phi^{Y}(r)$, where $\phi^{Y}(r)=K\sigma\exp[-z(r/\sigma-1)]/r$.
Here we use the Weeks-Chandler-Andersen regularization of the potential $\phi^{Y}(r)$ inside the hard core \cite{wcha}.
In this case, $\widetilde \phi^{Y}(k)$ has the
form
\begin{equation}
\widetilde\phi^{Y}(x)=\frac{4\pi K\sigma^{3}}{x^{3}(z^{2}+x^{2})}\bar{f}(x),
\label{Yukawa}
\end{equation}
where
\begin{equation}
\bar{f}(x)=[z^{2}+x^{2}(1+z)]\sin(x)-xz^{2}\cos(x),
\label{f_x}
\end{equation}
and $x=k\sigma$. Due to symmetry in the YRPM, the Hamiltonian in (\ref{Xi_full_1}) does not include direct pair interactions
of number density fluctuations.
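For numerical work it is convenient to have Eqs.~(\ref{Yukawa}) and (\ref{f_x}) available as functions. The following sketch transcribes them in reduced units $x=k\sigma$ (with $K=\sigma=1$ by default):

```python
import math

def f_bar(x, z):
    """bar{f}(x) of Eq. (f_x)."""
    return (z**2 + x**2 * (1.0 + z)) * math.sin(x) - x * z**2 * math.cos(x)

def phi_Y(x, z, K=1.0, sigma=1.0):
    """Fourier transform of the WCA-regularized Yukawa potential,
    Eq. (Yukawa), with x = k*sigma."""
    return 4.0 * math.pi * K * sigma**3 * f_bar(x, z) / (x**3 * (z**2 + x**2))
```

In the long-wavelength limit one finds $\bar{f}(x)\to x^{3}(1+z+z^{2}/3)$ as $x\to 0$, so that $\widetilde\phi^{Y}(0)=4\pi K\sigma^{3}(1+z+z^{2}/3)/z^{2}$, which provides a convenient check of any implementation.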
$\Xi_{\rm{HS}}$ is the grand partition function of the one-component hard-sphere model with
the dimensionless chemical potential $\nu_{\text{HS}}$.
$\Delta\nu_{N}=\bar\nu-\nu_{\text{HS}}$,
where
\begin{eqnarray}
\bar\nu=\bar \nu_{\alpha}=\nu_{\alpha}+\frac{\beta}{2V}\sum_{{\mathbf
k}}\widetilde\phi^{Y}(k), \qquad
\alpha=(1,2).
\label{2.7}
\end{eqnarray}
Hereafter, the subscript $\text{HS}$ refers to the hard-sphere system.
The cumulants ${\mathfrak{M}}_{n}^{(i_{n})}$ are expressed in terms of the Fourier
transforms of the connected correlation functions of the
hard-sphere system \cite{Pat-Mryg-CM}.
$\delta_{{\bf{k}}_{1}+\ldots +{\bf{k}}_{n}}$ is the Kronecker symbol.
In the case of the YRPM, we have the following recurrence relations for the cumulants ${\mathfrak{M}}_{n}^{(i_{n})}$
\cite{Pat-Mryg-CM}:
\begin{eqnarray*}
{\mathfrak{M}}_{n}^{(0)}&=&{\widetilde G}_{n,{\text HS}}, \qquad
{\mathfrak{M}}_{n}^{(1)}=0, \nonumber \\
{\mathfrak{M}}_{n}^{(2)}&=& {\widetilde G}_{n-1,{\text HS}},
\qquad
{\mathfrak{M}}_{n}^{(3)}=0, \nonumber \\
{\mathfrak{M}}_{n}^{(4)}&=&3{\widetilde G}_{n-2,{\text HS}}-2{\widetilde G}_{n-1,{\text HS}},
\end{eqnarray*}
where ${\widetilde G}_{n,{\text
HS}}$ denotes the Fourier
transform of the $n$-particle connected correlation function of a
one-component hard-sphere system.
In general, the dependence of ${\widetilde G}_{n,{\text
HS}}$ on the wave numbers $k_{i}$ is very complicated. Hereafter we use the
following approximation for ${\widetilde G}_{n,{\text HS}}$
\begin{eqnarray*}
&{\widetilde G}_{2,{\text HS}}(k)\simeq {\widetilde G}_{2,{\text HS}}(0)+\displaystyle\frac{k^{2}}{2}
{\widetilde G}_{2,{\text HS}}^{(2)}, \\ \nonumber
&{\widetilde G}_{n,{\text HS}}(k_{1},\ldots,k_{n})\simeq
{\widetilde G}_{n,{\text HS}}(0,\ldots) \quad {\text{for}} \quad
n\geq 3, \nonumber
\end{eqnarray*}
where the superscript $(2)$ denotes the second-order derivative with respect to
the wave vector.
\subsection{Gaussian approximation}
Now we consider the Gaussian approximation of $\Xi$ setting in
Eq.~(\ref{Xi_full_1}) ${\mathfrak{M}}_{n}^{(i_{n})}\equiv 0$ for
$n\geq 3$. Then, after integration over $\omega_{{\bf{k}},N}$ and
$\omega_{{\bf{k}},Q}$ we obtain
\begin{eqnarray*}
\Xi_{{\text G}}&=&\Xi'\int ({\rm d}\rho)
\exp\Big\{\Delta\nu_{N}\rho_{0,N}-\frac{1}{2}\sum_{\bf
k}\left[a_{2}^{(0)}(k)\rho_{{\bf k},N}\rho_{-{\bf k},N} +
a_{2}^{(2)}(k)\rho_{{\bf k},Q}\rho_{-{\bf k},Q}\right]\Big\},
\end{eqnarray*}
where
\begin{displaymath}
\Xi'=\Xi_{\rm{HS}}\exp\left[\Delta\nu_{N}\langle
N\rangle_{\rm{HS}}\right]\prod_{\mathbf
k}\left[{\mathfrak{M}}_{2}^{(0)}{\mathfrak{M}}_{2}^{(2)}\right]^{-1/2},
\end{displaymath}
and
\begin{equation}
a_{2}^{(0)}(k)=\left[{\mathfrak{M}}_{2}^{(0)}(k)\right]^{-1},
\qquad a_{2}^{(2)}(k)=\frac{\beta}{V}\widetilde \phi^{Y}(k)+\left[{\mathfrak{M}}_{1}^{(0)}\right]^{-1}.
\label{C_gaus}
\end{equation}
It follows from Eq.~(\ref{C_gaus}) that $a_{2}^{(0)}(k)$ never vanishes for physical values of the density.
The fact that the YRPM, like the RPM, does not undergo a gas-liquid instability in the Gaussian approximation
is due to the absence of direct pair interactions of density fluctuations, as well as to the neglect, at this level of consideration, of indirect
correlations mediated by the charge subsystem.
By contrast, $a_{2}^{(2)}(k)$ can be equal to zero at $k=k^{*}\neq 0$, where $k^{*}$ is determined from the condition
$\partial a_{2}^{(2)}/\partial k=0$. The locus in the phase diagram at which $a_{2}^{(2)}(k=k^{*})=0$
is called the $\lambda$-line \cite{ciach1,patsaha_mryglod-04} in order to distinguish it from the spinodal line for which $k^{*}=0$.
On the $\lambda$-line the fluid becomes unstable with respect to the charge ordering indicating that
there can be a phase transition to an ordered phase.
For the RPM ($z=0$), it was found that in the presence of fluctuations
the $\lambda$-line disappears and, instead, a first-order phase transition to an ionic crystal appears \cite{ciach-patsahan:1}.
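Since $a_{2}^{(2)}(k)=\beta\widetilde\phi^{Y}(k)/V+[{\mathfrak{M}}_{1}^{(0)}]^{-1}$ by Eq.~(\ref{C_gaus}), the wavevector $k^{*}$ at which $a_{2}^{(2)}$ first vanishes upon cooling is the one minimizing $\widetilde\phi^{Y}(k)$. A simple grid search locates this minimum; the sketch below works in reduced units ($K=\sigma=1$), and the grid bounds are ad-hoc numerical choices:

```python
import math

def f_bar(x, z):
    """bar{f}(x) of Eq. (f_x)."""
    return (z**2 + x**2 * (1.0 + z)) * math.sin(x) - x * z**2 * math.cos(x)

def phi_Y(x, z):
    """Eq. (Yukawa) in reduced units (K = sigma = 1)."""
    return 4.0 * math.pi * f_bar(x, z) / (x**3 * (z**2 + x**2))

def lambda_line_wavevector(z, x_max=20.0, n=20000):
    """Grid search for the x = k*sigma minimizing phi_Y, which sets the
    charge-ordering wavevector k* associated with the lambda-line."""
    best_x, best_phi = None, float("inf")
    for i in range(1, n + 1):
        x = x_max * i / n
        p = phi_Y(x, z)
        if p < best_phi:
            best_x, best_phi = x, p
    return best_x, best_phi
```

Because $\widetilde\phi^{Y}(k)$ attains a negative minimum at finite $k$, the resulting $k^{*}$ is nonzero, in contrast to an ordinary spinodal.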
\subsection{Effective Ginzburg-Landau Hamiltonian }
We consider the model (\ref{2.1}) near the gas-liquid critical
point. In this case, the phase space of CVs $\rho_{{\bf k},N}$
contains CV $\rho_{0,N}$ related to the order parameter. In order
to obtain the effective Hamiltonian in terms of $\rho_{{\bf k},N}$,
one should integrate in Eq.~(\ref{Xi_full_1}) over CVs $\omega_{{\bf k},N}$, $\omega_{{\bf
k},Q}$, and $\rho_{{\bf k},Q}$. A detailed derivation of this type of Hamiltonian is
given in Ref.~\cite{Patsahan:13}. Using the results of
Ref.~\cite{Patsahan:13}, we can write an expression for the
effective $\varphi^{4}$ LG Hamiltonian of the model under
consideration
\begin{eqnarray}
&&{\cal H}^{eff}=a_{1,0}\rho_{0,N}+\frac{1}{2!\langle
N\rangle}\sum_{{\mathbf{k}}}\left(a_{2,0}+k^{2}a_{2,2}\right)\rho_{{\bf
k},N}\rho_{-{\bf k},N}+\frac{1}{3!\langle
N\rangle^{2}}\sum_{{\mathbf{k}}_{1},{\mathbf{k}}_{2}} a_{3,0}
\nonumber \\
&& \times\rho_{{\bf k_{1}},N}\rho_{{\bf k_{2}},N}\rho_{-{\bf
k_{1}}-{\bf k_{2}},N}+\frac{1}{4!\langle
N\rangle^{3}}\sum_{{\mathbf{k}}_{1},{\mathbf{k}}_{2},{\mathbf{k}}_{3}}a_{4,0}\rho_{{\bf
k_{1}},N}\rho_{{\bf k_{2}},N}\rho_{{\bf k_{3}},N}\rho_{-{\bf
k_{1}}-{\bf k_{2}}-{\bf k_{3}},N}
\label{H_eff}
\end{eqnarray}
with the coefficients having the following form in a one-loop
approximation:
\begin{eqnarray}
a_{1,0}&=&-\Delta\nu_{N}-\widetilde{\cal C}_{1,\text{Y}}
\label{a10}\\
a_{n,0}&=&-\rho^{n-1}\,\widetilde {\cal
C}_{n,\text{HS}}-\rho^{n-1}\,\widetilde {\cal
C}_{n,\text{Y}}
\label{an0} \\
a_{2,2}&=&-\frac{1}{2}\rho\,\widetilde {\cal
C}_{2,\text{HS}}^{(2)}-\frac{1}{4\langle
N\rangle}\sum_{\mathbf{q}}\widetilde g_{Y}^{(2)}(q)\left[1+\widetilde
g_{Y}(q)\right].
\label{a22}
\end{eqnarray}
Here, we introduce the following notations. The superscript $(2)$ in
Eq.~(\ref{a22}) denotes the second-order derivative with respect to
the wave vector. $\widetilde{\cal C}_{n,\text{HS}}$ is the Fourier
transform of the $n$-particle direct correlation function of a
one-component hard-sphere system at $k=0$, and $\rho=\langle N\rangle/V$ is the number density.
Explicit expressions for $\widetilde{\cal C}_{n,\text{HS}}$ and
$\widetilde {\cal C}_{2,\text{HS}}^{(2)}$ for $n\leq 4$ in the
Percus-Yevick (PY) approximation are given in
Ref.~\cite{Patsahan:13} (see Appendix in Ref.~\cite{Patsahan:13}).
The second term on the right-hand side of Eqs.~(\ref{a10})--(\ref{a22}) arises from the integration over
CVs $\rho_{{\bf k},Q}$ and $\omega_{{\bf k},Q}$.
In particular, $\rho^{n-1}\widetilde{\cal C}_{n,\text{Y}}$ reads
\begin{eqnarray}
\rho^{n-1}\widetilde{\cal C}_{n,
\text{Y}}&=&\frac{(n-1)!}{2}\frac{1}{\langle
N\rangle}\sum_{\mathbf{q}}\left[\widetilde g_{Y}(q)\right]^{n},
\label{Cn_C}
\end{eqnarray}
where
\begin{eqnarray}
\widetilde g_{Y}(q)&=&-\frac{\beta\rho \widetilde\phi^{Y}(q)}{1+\beta\rho
\widetilde\phi^{Y}(q)}
\label{g_q}
\end{eqnarray}
with $\widetilde\phi^{Y}(q)$ given by Eq.~(\ref{Yukawa}).
Taking into account Eqs.~(\ref{Yukawa}) and (\ref{g_q}), one can obtain
the following explicit expressions for $\rho^{n-1}\widetilde{\cal
C}_{n,\text{Y}}$:
\begin{equation}
-\rho^{n-1}\widetilde{\cal C}_{n,\text{Y}}=\frac{(n-1)!(-24\eta)^{n-1}}{\pi}\int_{0}^{\infty}\,x^{2}
\left[\frac{\bar{f}(x)}{T^{*}x^{3}(z^{2}+x^{2})
+24\eta\bar{f}(x)}\right]^{n}{\rm d}x,
\label{in}
\end{equation}
where $\bar{f}(x)$ is given by Eq.~(\ref{f_x}).
Hereafter, the following reduced units are introduced for the
temperature
\begin{equation}
T^{*}=(\beta K)^{-1}
\label{cr_temp}
\end{equation}
and for the density
\begin{equation}
\eta=\frac{\pi}{6}\rho^{*}, \quad
\rho^{*}=\rho\sigma^{3}.
\label{cr_dens}
\end{equation}
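With the reduced units in place, the integral in Eq.~(\ref{in}) can be evaluated by standard quadrature. The sketch below uses a composite Simpson rule; the cutoff \texttt{x\_max} and the node count \texttt{N} are ad-hoc numerical choices, not part of the model:

```python
import math

def f_bar(x, z):
    """bar{f}(x) of Eq. (f_x)."""
    return (z**2 + x**2 * (1.0 + z)) * math.sin(x) - x * z**2 * math.cos(x)

def minus_Cn_Y(n, T_star, eta, z, x_max=60.0, N=60000):
    """-rho^{n-1} * C_{n,Y} of Eq. (in), evaluated by composite Simpson
    quadrature over [0, x_max]."""
    pref = math.factorial(n - 1) * (-24.0 * eta) ** (n - 1) / math.pi
    h = x_max / N
    s = 0.0
    for i in range(N + 1):
        x = i * h
        if x == 0.0:
            g = 0.0  # the integrand vanishes as x^2 at the origin
        else:
            fb = f_bar(x, z)
            g = x * x * (fb / (T_star * x**3 * (z**2 + x**2)
                               + 24.0 * eta * fb)) ** n
        w = 1.0 if i in (0, N) else (4.0 if i % 2 else 2.0)
        s += w * g
    return pref * h * s / 3.0
```

For even $n$ the integrand is non-negative while the prefactor is negative, so, e.g., $-\rho\,\widetilde{\cal C}_{2,\text{Y}}<0$ for any state point where the denominator stays positive.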
The explicit expression for the second term in Eq.~(\ref{a22}) is too long to be presented herein. We only emphasize that
although the Hamiltonian in Eq.~(\ref{Xi_full_1}) does not include direct pair interactions of number density fluctuations, the effective
short-range attraction does appear in the effective Hamiltonian (\ref{H_eff}). Moreover, in
the limit of charged point particles, i.e., $z=0$ and $\sigma=0$, the expression for $a_{2,2}$ leads to the correct result for
the density-density correlation length (see Refs.~\cite{Patsahan:13,Lee_Fisher:96}).
The term $\Delta\nu_{N}$ in Eq.~(\ref{a10}) can be rewritten
as follows [see Eq.~(\ref{2.7})]:
\begin{equation}
\Delta\nu_{N}=\nu-\nu_{\text{HS}}+\frac{1}{2T^{*}}. \label{delta_nu}
\end{equation}
Summarizing, the expressions for coefficients $a_{2,0}$, $a_{3,0}$,
$a_{4,0}$, and $a_{2,2}$ consist of two terms. While the first term
depends solely on the characteristics of a hard-sphere system, the
second term is of a mixed type and takes into account the charge-charge
(concentration-concentration) correlations.
Coefficient $a_{1,0}$ is the excess part of the chemical potential
$\nu$, and the equation $a_{1,0}=0$ yields the chemical potential in
a one-loop approximation. It follows from Eqs.~(\ref{a10}), (\ref{in}) and
(\ref{delta_nu}) that
\begin{equation}
\nu= \nu_{\text{HS}}-\frac{1}{2T^{*}}+\frac{1}{\pi}\int_{0}^{\infty}\frac{x^{2}{\bar f}(x)}{T^{*}x^{3}(z^{2}+x^{2})
+24\eta\bar{f}(x)}{\rm d}x,
\label{nu_rpa}
\end{equation}
where $\nu_{\text{HS}}$ includes ideal and hard-sphere parts. Using the above equation, one can obtain the gas-liquid diagram in
the mean-field approximation.
\section{Gas-liquid phase transition}
In this section we study the gas-liquid phase diagram of the model (\ref{2.1}) using Eq.~(\ref{an0}) and
Eqs.~(\ref{in})-(\ref{nu_rpa}).
First, we consider the critical point. At the critical point, the system of equations
\begin{eqnarray}
a_{2,0}(\rho_{c},T_{c})=0, \qquad a_{3,0}(\rho_{c},T_{c}) =0
\label{cr-point}
\end{eqnarray}
holds, yielding the critical temperature and the critical density for a fixed value of $z$. Using
Eqs.~(\ref{f_x}) and (\ref{in}), these equations can be rewritten as follows:
\begin{eqnarray}
\frac{(1+2\eta)^{2}}{(1-\eta)^{4}}&-&\frac{24\eta}{\pi}\int_{0}^{\infty}\frac{x^{2}{\bar f}^{2}(x){\rm d}x}{[T^{*}x^{3}(z^{2}+x^{2})
+24\eta\bar{f}(x)]^{2}}=0, \label{a2_zero} \\
\frac{(1-7\eta-6\eta^{2})(1+2\eta)}{(1-\eta)^{5}}&-&\frac{1152\eta^{2}}{\pi}\int_{0}^{\infty}\frac{x^{2}{\bar f}^{3}(x){\rm d}x}
{[T^{*}x^{3}(z^{2}+x^{2})
+24\eta\bar{f}(x)]^{3}}=0. \label{a3_zero}
\end{eqnarray}
Here, the PY approximation is used for $\widetilde{\cal C}_{n,\text{HS}}$. It is worth noting that
Eq.~(\ref{a2_zero}) yields the spinodal curve.
Solving Eqs.~(\ref{a2_zero}) and (\ref{a3_zero}) we obtain the critical
temperature $T_{c}^{*}$ and the critical density $\rho_{c}^{*}$ for $z$ ranging from $z=0.001$
to $z=2.781$. At $z\geq 2.782$, the system of equations
(\ref{a2_zero}) and (\ref{a3_zero}) has no solution in the region of the
gas-liquid phase transition indicating a disappearance of the critical point.
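In practice, the left-hand sides of Eqs.~(\ref{a2_zero}) and (\ref{a3_zero}) are evaluated by quadrature and handed to a two-dimensional root finder (e.g., Newton iteration) to locate $(\eta_{c},T_{c}^{*})$. The sketch below implements only the residuals, with the PY hard-sphere parts exactly as written in the text; the quadrature cutoffs are ad-hoc numerical choices:

```python
import math

def f_bar(x, z):
    """bar{f}(x) of Eq. (f_x)."""
    return (z**2 + x**2 * (1.0 + z)) * math.sin(x) - x * z**2 * math.cos(x)

def _simpson(g, x_max=60.0, N=60000):
    """Composite Simpson rule on [0, x_max]."""
    h = x_max / N
    s = 0.0
    for i in range(N + 1):
        x = i * h
        w = 1.0 if i in (0, N) else (4.0 if i % 2 else 2.0)
        s += w * g(x)
    return h * s / 3.0

def a2_residual(eta, T_star, z):
    """Left-hand side of Eq. (a2_zero); its zero locus is the spinodal."""
    def g(x):
        if x == 0.0:
            return 0.0
        fb = f_bar(x, z)
        den = T_star * x**3 * (z**2 + x**2) + 24.0 * eta * fb
        return x * x * fb * fb / den**2
    return (1.0 + 2.0 * eta) ** 2 / (1.0 - eta) ** 4 \
        - 24.0 * eta / math.pi * _simpson(g)

def a3_residual(eta, T_star, z):
    """Left-hand side of Eq. (a3_zero)."""
    def g(x):
        if x == 0.0:
            return 0.0
        fb = f_bar(x, z)
        den = T_star * x**3 * (z**2 + x**2) + 24.0 * eta * fb
        return x * x * fb**3 / den**3
    return (1.0 - 7.0 * eta - 6.0 * eta**2) * (1.0 + 2.0 * eta) / (1.0 - eta) ** 5 \
        - 1152.0 * eta**2 / math.pi * _simpson(g)
```

At high $T^{*}$ the fluctuation integrals vanish and the PY hard-sphere terms dominate, so both residuals are positive; cooling grows the integral terms until the two zeros can coincide at the critical point.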
The dependence of $T_{c}^{*}$ and
$\rho_{c}^{*}$ on the parameter $z^{-1}$ measuring the interaction
range is displayed in Figs.~2 and 3, respectively. As is seen, the
reduced critical temperature $T_{c}^{*}$ rapidly decreases with an
increase of the interaction range for $z^{-1}\leq 20$ and then
slowly approaches the critical temperature of the RPM
($T_{c}^{*}=0.08446$). The reduced critical density $\rho_{c}^{*}$
demonstrates a sharp decrease in the region $z^{-1}\leq 10$ reaching
the RPM critical value for $z^{-1}\simeq 100$. A decrease of both the critical
temperature and the critical density expressed in the same reduced
units is observed in Ref.~\cite{Fortini:06}.
\begin{figure}[htbp]
\centering
\includegraphics[height=6cm]{patsahan_fig2.eps}
\caption{Reduced critical temperature $T_{c}^{*}$ [Eq.~(\ref{cr_temp})] of the YRPM as a function of the interaction range.
The inset shows
$T_{c}^{*}$ as a function of $z$. The line is a guide to the eye.} \label{fig2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=6cm]{patsahan_fig3.eps}
\caption{Reduced critical density [Eq.~(\ref{cr_dens})] of the YRPM as a function of the interaction range. The inset shows
$\rho_{c}^{*}$ as a function of $z$. The line is a guide to the eye.
} \label{fig3}
\end{figure}
We calculate the spinodal curves for
different values of $z$ using Eq.~(\ref{a2_zero}). The results are
presented in the ($T^{*}$,$\eta$) plane in Fig.~4. As is seen, the
spinodals change their shape with the variation of the interaction
range. For small values of $z$, the curves have a noticeable
maximum at small $\eta$ and then change their slope, passing through a
minimum. The maximum point of the spinodal coincides with the gas-liquid
critical point. The second positive slope of spinodal curves
appearing at higher densities indicates another type of phase
instability
induced by the charge ordering.
We suggest that this branch of the spinodal is an indication of pretransitional effects associated
with crystallization.
For the system of oppositely charged colloids, a
broad fluid--${\rm CsCl}$ crystal phase coexistence is found
experimentally \cite{Hynninen-06} and by computer simulations
\cite{Hynninen-06,Fortini:06}. Moreover, it is shown that
fluid-solid phase diagrams of the YRPM and the RPM are
qualitatively similar \cite{Hynninen-06}. When $z$ increases, the
maximum of spinodals moves to higher densities, becomes flatter
and finally disappears at $z>2.781$. At $z=2.781$, the gas-liquid
critical point merges with the spinodal branch induced by the
charge ordering.
\begin{figure}[htbp]
\centering
\includegraphics[height=6cm]{patsahan_fig4.eps}
\caption{Spinodal curves of the YRPM for $z$ ranging from $0.01$ to $2.7$
in the ($T^{*}$,$\eta$) representation.
} \label{fig4}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[height=6cm]{patsahan_fig5.eps}
\caption{Coexistence curves (solid) and spinodal curves (dashed) of the YRPM for $z$
ranging from $1.0$ to $2.7$ in the ($T^{*}$,$\eta$)
representation.
} \label{fig5}
\end{figure}
To calculate the coexistence curves, we use Eq.~(\ref{nu_rpa}) for the chemical potential and employ the Maxwell
double-tangent construction. Figure~5 shows both the coexistence curves (solid lines) and spinodals (dashed lines)
in the ($T^{*}$,$\eta$) plane
for a set of $z$ values. As is seen,
the region of gas-liquid coexistence shrinks as $z$ increases.
Furthermore, the coexistence curves become very flat for $z\geq 2.7$. This means that the liquid phase becomes
more and more difficult to observe in this domain of $z$.
For $z>2.781$, no critical point can be calculated and $z=2.781$ can be considered as the limit value for gas-liquid
phase separation in the approximation considered in this paper. We recall that the limit value for a stable gas-liquid
separation obtained in simulations
is $z=4$ \cite{Fortini:06}.
\section{The crossover temperature}
In this section, we study the effect of the interaction range on
the temperature region in which the crossover from classical
behavior to Ising-like critical behavior occurs. To this end, we
use the Ginzburg criterion \cite{levanyuk,ginzburg}. This criterion
defines the reduced Ginzburg temperature $t_{G}$ which marks a lower
bound of the temperature region where a mean-field description is
self-consistent. For $|t|\ll t_{G}$ where $|t|=|T-T_{c}|/T_{c}$,
Ising critical behavior should be exhibited. Therefore, it is
reasonable to take the reduced Ginzburg temperature as an estimate
of the crossover temperature \cite{Gutkowskii-Anisimov,fisher3,Chaikin_Lubensky}.
The Ginzburg temperature expressed in terms of coefficients of the Hamiltonian (\ref{H_eff}) reads~\cite{fisher3}
\begin{eqnarray}
t_{G}=\displaystyle\frac{1}{32\pi^{2}}\frac{a_{4,0}^{2}}{a_{2,t}
a_{2,2}^{3}},
\label{t_G}
\end{eqnarray}
where $a_{2,t}=\left.\partial a_{2,0}/\partial t\right|_{t=0}$. Taking into account Eqs.~(\ref{an0}) and (\ref{in}), one can
obtain for $a_{2,t}$
\begin{eqnarray}
a_{2,t}=\frac{48\eta T_{c}^{*}}{\pi}\,\int_{0}^{\infty}\frac{x^{5}(z^{2}+x^{2}){\bar{f}}^{2}(x)}{\left(T_{c}^{*}x^{3}(z^{2}+x^{2})
+24\eta\bar{f}(x)\right)^{3}}{\rm
d}x,
\label{a2t_yrpm}
\end{eqnarray}
where $\bar{f}$ is given by (\ref{f_x}).
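Given the coefficients, Eq.~(\ref{t_G}) is a one-line formula, and Eq.~(\ref{a2t_yrpm}) is one more quadrature of the same type as above. The sketch below assumes the reduced units of the text; the Simpson cutoffs are again ad-hoc:

```python
import math

def f_bar(x, z):
    """bar{f}(x) of Eq. (f_x)."""
    return (z**2 + x**2 * (1.0 + z)) * math.sin(x) - x * z**2 * math.cos(x)

def a2t(eta, Tc, z, x_max=60.0, N=60000):
    """Eq. (a2t_yrpm): a_{2,t} = d a_{2,0}/dt at t = 0."""
    h = x_max / N
    s = 0.0
    for i in range(N + 1):
        x = i * h
        if x == 0.0:
            g = 0.0
        else:
            fb = f_bar(x, z)
            den = Tc * x**3 * (z**2 + x**2) + 24.0 * eta * fb
            g = x**5 * (z**2 + x**2) * fb * fb / den**3
        w = 1.0 if i in (0, N) else (4.0 if i % 2 else 2.0)
        s += w * g
    return 48.0 * eta * Tc / math.pi * h * s / 3.0

def ginzburg_t(a40, a2t_val, a22):
    """Eq. (t_G): reduced Ginzburg temperature."""
    return a40**2 / (32.0 * math.pi**2 * a2t_val * a22**3)
```

The quadratic dependence of $t_{G}$ on $a_{4,0}$ makes explicit why $t_{G}\to 0$ when $a_{4,0}$ vanishes, i.e., at the tricritical point discussed below.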
\begin{figure}[h]
\centering
\includegraphics[height=6cm]{patsahan_fig6.eps}
\caption{The coefficient $a_{2,t}$ as a function of the interaction range
$z^{-1}$. The line is a guide to the eye.
} \label{fig6}
\end{figure}
The relevant coefficients of the LG Hamiltonian are calculated at $T^{*}=T_{c}^{*}$ and $\rho^{*}=\rho_{c}^{*}$ using
Eqs.~(\ref{an0}), (\ref{a22}), (\ref{in}), and
(\ref{a2t_yrpm}).
It is instructive to view the coefficients $a_{2,t}$, $a_{2,2}$, and $a_{4,0}$ as functions of $z^{-1}$.
Figures~6--8 show the dependence of coefficients
on the interaction range. While $a_{2,t}$ is a
decreasing function of $z^{-1}$, the other two coefficients
demonstrate a nonmonotonic behavior. It is worth noting that $a_{2,t}>1$ for the whole range of $z$ for which
coexistence exists. The coefficient $a_{2,2}$ corresponds to a squared range
of the effective density-density attraction.
Being nearly constant for
$z\leq 0.1$, $a_{2,2}$ decreases for larger values of $z$
and attains a minimum at $z\simeq 1.8$. Then, it slightly increases in the range $1.8< z<2.78$. The coefficient $a_{4,0}$
has a maximum at $z\simeq 1.5$ and then (for $z>1.5$) sharply tends
to zero indicating the
presence of a tricritical point at $z=2.781$ for which our estimate is $T_{c}^{*}=0.1709$,
$\rho_{c}^{*}=0.0718$.
For $z\lesssim 0.01$ ($z^{-1}\gtrsim 100$),
all three coefficients become equal to the
corresponding coefficients of the RPM \cite{Patsahan:13}.
\begin{figure}[h]
\centering
\includegraphics[height=6cm]{patsahan_fig7.eps}
\caption{Coefficient $a_{2,2}$ as a function of the interaction range $z^{-1}$. The line is a guide to the eye.
} \label{fig7}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[height=6cm]{patsahan_fig8.eps}
\caption{Coefficient $a_{4,0}$ as a function of the interaction range
$z^{-1}$. The line is a guide to the eye.
} \label{fig8}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[height=7cm]{patsahan_fig9.eps}
\caption{Reduced Ginzburg temperature as a function of the interaction range
$z^{-1}$. The line is a guide to the eye.
} \label{fig9}
\end{figure}
The dependence of the reduced Ginzburg temperature $t_{G}$ on the
interaction range is shown in Fig.~9. For $z\simeq 0.01$ ($z^{-1}\simeq 100$), the
reduced Ginzburg temperature approaches the value $t_{G}=0.0053$ obtained for the
RPM \cite{Patsahan:13}. For large values of $z$ (small $z^{-1}$), $t_{G}$ shows a nonmonotonic behavior
passing through a sharp maximum at $z\simeq 1.5$ and approaching
zero at $z\simeq 2.78$. Remarkably, the maximum value of $t_{G}$ is about $10$ times larger than that obtained for the RPM.
\section{Conclusions}
Using the approach that exploits the method of CVs we have studied
the gas-liquid coexistence and the associated crossover behavior in the
screened Coulomb restricted primitive model (YRPM). For this model, we have
obtained explicit expressions for all the relevant coefficients of
the LG Hamiltonian in a one-loop approximation. The gas-liquid phase diagram, critical parameters,
and Ginzburg temperature are calculated for $0.001\leq z\leq
2.781$ using these
expressions. It should be emphasized that the approximation considered produces
the mean-field phase diagram.
First, we have studied the dependence of critical temperature and critical
density on the interaction range of the Yukawa potential. The critical temperature scaled by the Yukawa potential
contact value increases with an increase
of the inverse screening length for the whole range of $z$ for which coexistence exists.
The reduced critical density shows a
similar trend. Both trends qualitatively agree with the results
of simulations \cite{Fortini:06}. A rapid increase in the critical temperature and density above
the corresponding values of the RPM (up to $z\approx 4$)
was also found theoretically using the MSA and the GMSA \cite{Carvalho_Evans:97,Mier-Y-Teran}.
As for the gas-liquid phase diagram, our results have shown that the region of coexistence
in the temperature-density plane reduces with an
increase of the inverse screening length $z$ and completely
disappears at $z> 2.78$.
The trend of the evolution of gas-liquid coexistence
with the variation of $z$ is generally consistent with the
results of computer simulations indicating a stable gas-liquid separation for $z\leq 4$ \cite{Fortini:06}.
However, the gas-liquid binodal obtained in simulations does not disappear but becomes
metastable with respect to the solid-fluid separation for $z>4$.
In this study, we have focused exclusively
on the gas-liquid equilibrium. The description of transitions involving a solid phase requires going beyond
the treatment we have presented here. This issue will be addressed elsewhere.
Finally, we have studied the effect of the interaction range on the
crossover behavior by applying the Ginzburg criterion. We have
analyzed the coefficients of the LG Hamiltonian as functions of the
interaction range. It is significant that for $z\leq 0.01$, all the
coefficients approach the values obtained for the RPM. It appears
that the coefficient $a_{4,0}$ decreases for $z>1.5$ and approaches
zero when $z\simeq 2.78$ indicating the existence of a tricritical
point. Accordingly, the reduced Ginzburg temperature tends to zero
in this domain of $z$. In this case, the tricritical point is the point where the
gas-liquid critical point merges with the spinodal branch induced
by the charge ordering. The possible existence of a tricritical
point for the YRPM with a large $z$ was discussed in
Ref.~\cite{stell1}. For $z<2.78$, $t_{G}$ shows a nonmonotonic
behavior: as $z$ decreases, $t_{G}$ first increases, reaching a maximum at $z\simeq
1.5$, and then, for $z<1.5$, decreases again, approaching the
RPM value at $z\simeq 0.01$. It is interesting to note that the
reduced Ginzburg temperature for the YRPM with $z=1.8$ is about $10$
times larger than $t_{G}$ for the RPM ($z=0$). Therefore, we have
found that an increase in the interaction range from the one
typical of nonionic fluids to the one typical of ionic fluids leads
to a decrease of the temperature region where the crossover from the
mean-field critical behavior to Ising model criticality occurs.
Extending our previous studies, we have clearly demonstrated that
the range of the interactions plays a crucial role in the crossover
behavior observed in ionic fluids.
\section{Introduction}
Many materials in nature are highly heterogeneous and their properties can vary at different scales.
Direct numerical simulations in such multiscale media are prohibitively expensive and some type
of model reduction is needed.
Multiscale approaches such as homogenization and numerical homogenization \cite{cao2005iterated,abdulle2006analysis,schroder2014numerical,buck2013multiscale,francfort1986homogenization,oleinik2009mathematical,vinh2011homogenized,liu2009multiscale} have been routinely
used to model macroscopic properties and macroscopic behavior of elastic materials.
These approaches compute the effective material properties based on representative volume simulations.
These properties are further used to solve macroscale equations. In this paper, our goal
is to design a multiscale method for elasticity equations in media whose properties
do not have scale separation and classical homogenization and numerical homogenization
techniques do not work.
We are motivated by seismic applications in which elastic wave propagation
is studied in heterogeneous subsurface formations whose
properties can contain vugs,
fractures, and cavities of different sizes. In this paper, we develop multiscale methods for
static problems and present their analysis.
In this paper, we design a multiscale model reduction technique using the Generalized Multiscale Finite Element Method (GMsFEM) for
the steady-state elasticity equation in heterogeneous media
\begin{equation}
\label{eq:elastic1}
{\partial \over \partial x_i} (c_{ijkl}(x) e_{kl}(u))=f_j(x),
\end{equation}
where $e_{kl}(u)={1\over 2}({\partial u_k \over \partial x_l}+{\partial u_l \over \partial x_k})$
and $c_{ijkl}(x)$ is a multiscale field with a high contrast.
GMsFEM has been studied for various applications related to flow problems
(see \cite{egh12, eglp13, cel14, Efendiev_LS_MSDG_2013, elms2014}).
In GMsFEM, we solve equation (\ref{eq:elastic1}) on a coarse grid
where each coarse-grid block consists of a union of fine-grid blocks. In particular, we
design
(1) a snapshot space and (2) an offline space for each coarse patch. The offline space
consists of multiscale basis functions that are coupled in a global formulation.
In this paper, we consider several choices for snapshot spaces, offline spaces, and global
coupling. The main idea of the snapshot space in each coarse patch is to provide
an exhaustive space where an appropriate spectral decomposition is performed.
This space contains local
functions that can mimic the global solution behavior in the coarse patch
for all right hand sides or boundary conditions. We consider two choices for
the snapshot space. The first one consists of all fine-grid functions in each coarse patch
and the second one consists of harmonic extensions. Next, we propose a local
spectral decomposition in the snapshot space which allows selecting multiscale basis
functions. This local spectral decomposition is based on the analysis and depends
on the global coupling mechanisms. We consider several choices for the local
spectral decomposition, including an oversampling approach where larger domains
are used in the eigenvalue problem. The oversampling technique uses
larger domains to compute snapshot vectors that are more consistent
with the local solution space and thus can have a much lower dimension.
To couple multiscale basis functions constructed in the offline space, we consider
two methods, conforming Galerkin (CG) approach and discontinuous Galerkin (DG) approach based
on symmetric interior penalty method for (\ref{eq:elastic1}).
These approaches are studied for linear elliptic equations in \cite{egh12,Efendiev_JCP_2013}.
Both approaches provide a global coupling
for multiscale basis functions where the solution is sought in the space spanned
by these multiscale
basis functions. This representation
allows approximating the solution
with a reduced number of degrees of freedom.
The constructions of the basis functions differ between the continuous
Galerkin and discontinuous Galerkin methods, as the local
spectral decomposition relies on the corresponding analysis. In particular, for the continuous Galerkin
approach, we use partition of unity functions and discuss several choices for them.
We provide an analysis of both approaches, on which the offline space construction
is based.
We present numerical results where we study the convergence of continuous and
discontinuous Galerkin
methods using various snapshot spaces as well as with and without the use of oversampling.
We consider highly heterogeneous coefficients that contain high contrast.
Our numerical
results show that the proposed approaches allow approximating the solution accurately with fewer degrees
of freedom. In particular, when using the snapshot space consisting of harmonic extension functions,
we obtain better convergence results. In addition, oversampling methods and the use of snapshot spaces
constructed in the oversampled domains can substantially improve the convergence.
The paper is organized as follows. In Section \ref{sec:prelim}, we state the problem and the notations for coarse and fine grids. In Section \ref{sec:gmsfem}, we give the construction of multiscale basis functions, snapshot spaces
and offline spaces, as well as global coupling via CG and DG.
In Section \ref{sec:gmsfem_num_res}, we present numerical results.
Sections \ref{sec:gmsfem_error}-\ref{sec:gmsfem_error_DG} are
devoted to the analysis of the methods.
\section{Preliminaries}
\label{sec:prelim}
In this section, we will present the general framework of GMsFEM
for linear elasticity in high-contrast media.
Let $D\subset\mathbb{R}^{2}$ (or $\mathbb{R}^3$) be a bounded domain representing the elastic body of interest,
and let $ {u} = (u_1,u_2)$ be the displacement field.
The strain tensor $ {\epsilon}( {u}) = (\epsilon_{ij}( {u}))_{1\leq i,j \leq 2}$
is defined by
\begin{equation*}
{\epsilon}( {u}) = \frac{1}{2} ( \nabla {u} + \nabla {u}^T ),
\end{equation*}
where $\displaystyle \nabla {u} = (\frac{\partial u_i}{\partial x_j})_{1\leq i,j \leq 2}$.
In the component form, we have
\begin{equation*}
\epsilon_{ij}( {u}) = \frac{1}{2} \Big( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \Big), \quad 1\leq i,j \leq 2.
\end{equation*}
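As a quick numerical illustration of the definition above (this sketch and the shear-displacement example are ours, not part of the paper), the strain tensor can be evaluated pointwise from a displacement gradient:

```python
import numpy as np

def strain_tensor(grad_u):
    """Symmetric strain eps(u) = (grad u + grad u^T) / 2.

    grad_u[i, j] approximates du_i/dx_j at a single point.
    """
    return 0.5 * (grad_u + grad_u.T)

# Example: the shear displacement u = (x_2, 0) has
# grad_u = [[0, 1], [0, 0]], so eps = [[0, 0.5], [0.5, 0]].
grad_u = np.array([[0.0, 1.0], [0.0, 0.0]])
eps = strain_tensor(grad_u)
```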
In this paper, we assume the medium is isotropic.
Thus, the stress tensor $ {\sigma}( {u}) = (\sigma_{ij}( {u}))_{1\leq i,j \leq 2}$
is related to the strain tensor $ {\epsilon}( {u})$ in the following way
\begin{equation*}
{\sigma} = 2\mu {\epsilon} + \lambda \nabla\cdot {u} \, {I},
\end{equation*}
where $\lambda>0$ and $\mu>0$ are the Lam\'e coefficients.
We assume that $\lambda$ and $\mu$ have highly heterogeneous spatial variations with high contrasts.
Given a forcing term $ {f} = (f_1,f_2)$, the displacement field $ {u}$ satisfies
the following
\begin{equation}
\label{ob_equ}
- \nabla \cdot {\sigma} = {f}, \quad\text{ in } \; D
\end{equation}
or in component form
\begin{equation}
- \Big(\frac{\partial \sigma_{i1}}{\partial x_1} + \frac{\partial \sigma_{i2}}{\partial x_2} \Big) = f_i, \quad \text{ in } \; D, \quad i=1,2.
\end{equation}
For simplicity, we will consider the homogeneous Dirichlet boundary condition $ {u} = {0}$ on $\partial D$.
Let $\mathcal{T}^H$ be a standard triangulation of the domain $D$
where $H>0$ is the mesh size. We call $\mathcal{T}^H$ the coarse grid
and $H$ the coarse mesh size.
Elements of $\mathcal{T}^H$ are called coarse grid blocks.
The set of all coarse grid edges is denoted by $\mathcal{E}^H$
and the set of all coarse grid nodes is denoted by $\mathcal{S}^H$.
We use $N_S$ to denote the number of coarse grid nodes and $N$ to denote the number of coarse grid blocks.
In addition, we let $\mathcal{T}^h$
be a conforming refinement of the triangulation $\mathcal{T}^H$.
We call $\mathcal{T}^h$ the fine grid and $h>0$ the fine mesh size.
We remark that the use of the conforming refinement is only to simplify the discussion
of the methodology and is not a restriction of the method.
Let $V^h$ be a finite element space defined on the fine grid.
The fine-grid solution $ {u}_h$ can be obtained as
\begin{equation}
\label{cg_fine_sol}
a( {u}_h, {v}) = ( {f}, {v}), \quad \forall {v}\in V^h,
\end{equation}
where
\begin{equation}
a( {u}, {v}) = \int_D \Big( 2\mu {\epsilon}( {u}) : {\epsilon}( {v})
+ \lambda \nabla\cdot {u} \, \nabla\cdot {v} \Big) \; d {x},
\quad
( {f}, {v}) = \int_D {f} \cdot {v} \; d {x}
\end{equation}
and
\begin{equation}
{\epsilon}( {u}) : {\epsilon}( {v}) = \sum_{i,j=1}^2 \epsilon_{ij}( {u}) \epsilon_{ij}( {v}),
\quad
{f} \cdot {v} = \sum_{i=1}^2 f_i v_i.
\end{equation}
Now, we present GMsFEM.
The discussion consists of two main steps, namely,
the construction of local basis functions
and the global coupling.
In this paper, we will develop and analyze two types of global coupling,
namely, the continuous Galerkin coupling and the discontinuous Galerkin coupling.
These two couplings will require two types of local basis functions.
In essence, the CG coupling will need vertex-based local basis functions
and the DG coupling will need element-based local basis functions.
\begin{figure}[ht]
\centering
\includegraphics[width=17cm,height=8cm]{over_sampled_cg_dg}
\caption{Illustration of a coarse neighborhood, oversampled coarse neighborhood, coarse block and oversampled coarse block.}
\label{fig:grid}
\end{figure}
For each vertex $ {x}_i \in \mathcal{S}^H$ in the coarse grid, we define
the coarse neighborhood $\omega_i$ by
\begin{equation*}
\omega_i = \bigcup \{ K_j \; : \; K_j \subset \mathcal{T}^H, \; {x}_i \in K_j \}.
\end{equation*}
That is, $\omega_i$ is the union of all coarse grid blocks $K_j$
having the vertex $ {x}_i$
(see Figure~\ref{fig:grid}).
A snapshot space $V^{i,\text{snap}}$ is constructed for each coarse neighborhood $\omega_i$.
The snapshot space contains a large set that represents the local solution space.
A spectral problem is then constructed to get a reduced dimensional space.
Specifically, the spectral problem is solved in the snapshot space
and eigenfunctions corresponding to dominant modes are used
as the final basis functions.
To obtain conforming basis functions, each of these selected modes
will be multiplied by a partition of unity function.
The resulting space is denoted by $V^{i,\text{off}}$,
which is called the offline space for the $i$-th
coarse neighborhood $\omega_i$.
The global offline space $V^{\text{off}}$
is then defined as the linear span of all these $V^{i,\text{off}}$,
for $i=1,2,\cdots, N_S$.
The CG coupling can be formulated as to find $ {u}_H^{\text{CG}} \in V^{\text{off}}$
such that
\begin{equation}
\label{cg_ms_sol}
a( {u}_H^{\text{CG}}, {v}) = ( {f}, {v}), \quad \forall {v}\in V^{\text{off}}.
\end{equation}
The DG coupling can be constructed in a similar fashion.
A snapshot space $V^{i,\text{snap}}$ is constructed for each coarse grid block $K_i$.
A spectral problem is then solved in the snapshot space
and eigenfunctions corresponding to dominant modes are used as the final basis functions.
This space is called the offline space $V^{i,\text{off}}$ for the $i$-th coarse grid block.
The global offline space $V^{\text{off}}$
is then defined as the linear span of all these $V^{i,\text{off}}$,
for $i=1,2,\cdots, N$.
The DG coupling can be formulated as: find $ {u}_H^{\text{DG}} \in V^{\text{off}}$
such that
\begin{equation}
a_{\text{DG}}( {u}_H^{\text{DG}}, {v}) = ( {f}, {v}), \quad \forall {v}\in V^{\text{off}},
\label{eq:ipdg}
\end{equation}
where the bilinear form $a_{\text{DG}}$ is defined as
\begin{equation}
a_{\text{DG}}( {u}, {v}) = a_H( {u}, {v})
- \sum_{E\in \mathcal{E}^H} \int_E \Big( \average{ {\sigma}( {u}) \, {n}_E} \cdot \jump{ {v}}
+ \average{ {\sigma}( {v}) \, {n}_E} \cdot \jump{ {u}} \Big) \; ds
+ \sum_{E\in\mathcal{E}^H} \frac{\gamma}{h} \int_E \average{\lambda+2\mu} \jump{ {u}} \cdot \jump{ {v}} \; ds
\label{eq:bilinear-ipdg}
\end{equation}
with
\begin{equation}
a_H( {u}, {v}) =
\sum_{K\in\mathcal{T}_{H}} a_H^K(u,v),
\quad
a_H^K(u,v) =
\int_{K} \Big( 2\mu {\epsilon}({u}): {\epsilon}({v})
+ \lambda \nabla\cdot {u} \nabla\cdot {v} \Big) \; d {x},
\end{equation}
where $\gamma > 0$ is a penalty parameter, $ {n}_E$ is a fixed unit normal vector defined on the coarse edge $E$
and $ {\sigma}( {u}) \, {n}_E$
is a matrix-vector product.
Note that, in (\ref{eq:bilinear-ipdg}), the average and the jump operators are defined
in the classical way.
Specifically, consider an interior coarse edge $E\in\mathcal{E}^H$
and let $K^{+}$ and $K^{-}$ be the two coarse grid blocks sharing the edge $E$.
For a piecewise smooth function $G$, we define
\begin{equation*}
\average{G} = \frac{1}{2}(G^{+} + G^{-}), \quad\quad \jump{G} = G^{+} - G^{-}, \quad\quad \text{ on } \, E,
\end{equation*}
where $G^{+} = G|_{K^{+}}$ and $G^{-} = G|_{K^{-}}$
and we assume that the normal vector $ {n}_E$
is pointing from $K^{+}$ to $K^{-}$.
For a coarse edge $E$ lying on the boundary $\partial D$, we define
\begin{equation*}
\average{G} = \jump{G} = G, \quad\quad \text{ on } \, E,
\end{equation*}
where we always assume that $ {n}_E$ is pointing outside of $D$.
For vector-valued functions, the above average and jump operators are defined component-wise.
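A small sketch of these edge operators (the scalar traces `g_plus` and `g_minus` are hypothetical sampled values on an edge; they are illustrative only):

```python
def average(g_plus, g_minus):
    # {G} = (G^+ + G^-) / 2 on an interior coarse edge
    return 0.5 * (g_plus + g_minus)

def jump(g_plus, g_minus):
    # [G] = G^+ - G^-, with n_E pointing from K^+ to K^-
    return g_plus - g_minus

def boundary_average_jump(g):
    # On a boundary edge, both operators reduce to the trace itself
    return g, g
```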
We note that the DG coupling (\ref{eq:ipdg})
is the classical interior penalty discontinuous Galerkin (IPDG) method
with our multiscale basis functions.
Finally, we remark that, we use the same notations $V^{i,\text{snap}}, V^{i,\text{off}}$ and $V^{\text{off}}$
to denote the local snapshot, local offline and global offline spaces
for both the CG coupling and the DG coupling to simplify notations.
\section{Construction of multiscale basis functions}
\label{sec:gmsfem}
This section is devoted to the construction of multiscale basis functions.
\subsection{Basis functions for CG coupling}\label{key:cg_coupling}
We begin by the construction of local snapshot spaces.
Let $\omega_i$ be a coarse neighborhood, $i=1,2,\cdots, N_S$.
We will define two types of local snapshot spaces.
The first type of local snapshot space is
\begin{equation*}
V_1^{i,\text{snap}} = V^h(\omega_i),
\end{equation*}
where $V^h(\omega_i)$ is the restriction of the conforming space to $\omega_i$.
Therefore, $V_1^{i,\text{snap}}$ contains all possible fine scale functions defined on $\omega_i$.
The second type of local snapshot space contains all possible harmonic extensions. To define it, let $V^h(\partial\omega_i)$ be the restriction of the conforming space to $\partial\omega_i$.
Then we define the fine-grid delta function $\delta_k \in V^h(\partial\omega_i)$ on $\partial\omega_i$ by
\begin{equation*}
\delta_k( {x}_l) =
\begin{cases}
1, \quad & l = k \\
0, \quad & l \ne k,
\end{cases}
\end{equation*}
where $ \{{x}_l\}$ are all fine grid nodes on $\partial\omega_i$.
Given $\delta_k$, we find $ {u}_{k1}$ and $ {u}_{k2}$ by
\begin{equation}
\begin{split}
- \nabla \cdot {\sigma}( {u}_{k1}) &= {0}, \quad\text{ in } \; \omega_i \\
{u}_{k1} &= (\delta_k, 0)^T, \quad\text{ on } \; \partial\omega_i
\end{split}
\label{eq:cg_snap_har_1}
\end{equation}
and
\begin{equation}
\begin{split}
- \nabla \cdot {\sigma}( {u}_{k2}) &= {0}, \quad\text{ in } \; \omega_i \\
{u}_{k2} &= (0,\delta_k)^T, \quad\text{ on } \; \partial\omega_i.
\end{split}
\label{eq:cg_snap_har_2}
\end{equation}
The linear span of the above harmonic extensions is our second type of local snapshot space $V^{i,\text{snap}}_2$.
To simplify the notations, we will use $V^{i,\text{snap}}$ to denote $V^{i,\text{snap}}_1$ or $V^{i,\text{snap}}_2$
when there is no need to distinguish the two types of spaces.
Moreover, we write
\begin{equation*}
V^{i,\text{snap}} = \text{span} \{ {\psi}^{i,\text{snap}}_k, \quad k=1,2,\cdots, M^{i,\text{snap}} \},
\end{equation*}
where $M^{i,\text{snap}}$ is the number of basis functions in $V^{i,\text{snap}}$.
We will perform a dimension reduction on the above snapshot spaces
by the use of a spectral problem.
First, we will need a partition of unity function $\chi_i$
for the coarse neighborhood $\omega_i$.
One choice of a partition of unity function
is the coarse grid hat function $\Phi_i$, that is, the piecewise bi-linear function on the coarse grid
having value $1$ at the coarse vertex $ {x}_i$
and value $0$ at all other coarse vertices.
The other choice is the multiscale partition of unity function, which is defined in the following way.
Let $K_j$ be a coarse grid block having the vertex $ {x}_i$. Then we consider
\begin{equation}
\begin{split}
- \nabla \cdot {\sigma}( {\zeta}_i) &= {0}, \quad\text{ in } \; K_j \\
{\zeta}_{i} &= (\Phi_i,0)^T, \quad\text{ on } \; \partial K_j.
\end{split}
\end{equation}
Then we define the multiscale partition of unity as $\widetilde{\Phi}_i = ( {\zeta}_i)_1$.
The values of $\widetilde{\Phi}_i $ on the other coarse grid blocks are defined similarly.
Based on our analysis to be presented in the next sections,
we define the spectral problem as
\begin{equation}
\int_{\omega_i} \Big( 2\mu {\epsilon}( {u}) : {\epsilon}( {v})
+ \lambda \nabla\cdot {u} \, \nabla\cdot {v} \Big) \; d {x}
= \xi \int_{\omega_i} \tilde{\kappa} {u} \cdot {v} \; d {x},
\label{eq:spec-cg}
\end{equation}
where $\xi$ denotes the eigenvalue and
\begin{equation}
\tilde\kappa = \sum_{i=1}^{N_S}(\lambda+2\mu) | \nabla \chi_i |^2.
\label{eq:kappa_tilda}
\end{equation}
The above spectral problem (\ref{eq:spec-cg}) is solved in the snapshot space.
We let $(\phi_k, \xi_k)$
be the eigenfunctions and the corresponding eigenvalues.
Assume that
\begin{equation*}
\xi_1 \leq \xi_2 \leq \cdots \leq\xi_{M^{i,\text{snap}}}.
\end{equation*}
Then the first $L_i$ eigenfunctions will be used to construct the local offline space.
We define
\begin{equation}
{\psi}^{i,\text{off}}_l = \sum_{k=1}^{M^{i,\text{snap}}} \phi_{lk} {\psi}^{i,\text{snap}}_k, \quad\quad l=1,2,\cdots, L_i,
\end{equation}
where $\phi_{lk}$ is the $k$-th component of $\phi_l$.
The local offline space is then defined as
\begin{equation*}
V^{i,\text{off}} = \text{span} \{ \chi_i {\psi}^{i,\text{off}}_l, \quad l=1,2,\cdots, L_i \}.
\end{equation*}
Next, we define the global continuous Galerkin offline space as
\begin{equation*}
V^{\text{off}} = \text{span} \{ V^{i,\text{off}}, \quad i=1,2,\cdots, N_S \}.
\end{equation*}
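The dimension reduction step, namely solving the generalized eigenvalue problem (\ref{eq:spec-cg}) in the snapshot space, keeping the eigenvectors with the smallest eigenvalues, and multiplying by the partition of unity, can be sketched as follows. The matrices `A` and `M` below are random SPD stand-ins for the assembled stiffness and weighted mass matrices on a coarse neighborhood, and `chi` is a stand-in for the nodal values of $\chi_i$; none of these come from the paper.

```python
import numpy as np
from scipy.linalg import eigh

def offline_basis(A, M, chi, L):
    """Select the L dominant modes of A phi = xi M phi (smallest xi)
    and multiply each mode by the partition-of-unity vector chi."""
    xi, phi = eigh(A, M)          # eigenvalues returned in ascending order
    modes = phi[:, :L]            # first L eigenvectors
    return chi[:, None] * modes   # pointwise product chi_i * psi_l

rng = np.random.default_rng(0)
n = 20
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)       # SPD stand-in for the stiffness matrix
M = np.eye(n)                     # stand-in for the weighted mass matrix
chi = np.linspace(0.0, 1.0, n)    # stand-in partition-of-unity values
V_off = offline_basis(A, M, chi, L=4)
```

The columns of `V_off` play the role of the local offline basis $\chi_i \psi^{i,\text{off}}_l$.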
\subsection{Basis functions for DG coupling}\label{key:dg_coupling}
We will construct the local basis functions required for the DG coupling.
As in the CG case, we provide two types of snapshot spaces.
The first type of local snapshot space consists of
all fine-grid bi-linear functions defined on $K_i$.
The second type of local snapshot space $V^{i,\text{snap}}$ for the coarse grid block $K_i$
is defined as the linear span of all harmonic extensions.
Specifically, given a fine-grid delta function $\delta_k$ defined on $\partial K_i$, we find $ {u}_{k1}$ and $ {u}_{k2}$ by
\begin{equation}
\begin{split}
- \nabla \cdot {\sigma}( {u}_{k1}) &= {0}, \quad\text{ in } \; K_i \\
{u}_{k1} &= (\delta_k, 0)^T, \quad\text{ on } \; \partial K_i
\end{split}
\label{eq:dg_snap_har_1}
\end{equation}
and
\begin{equation}
\begin{split}
- \nabla \cdot {\sigma}( {u}_{k2}) &= {0}, \quad\text{ in } \; K_i \\
{u}_{k2} &= (0,\delta_k)^T, \quad\text{ on } \; \partial K_i.
\end{split}
\label{eq:dg_snap_har_2}
\end{equation}
The linear span of the above harmonic extensions is the local snapshot space $V^{i,\text{snap}}$.
We also write
\begin{equation*}
V^{i,\text{snap}} = \text{span} \{ {\psi}^{i,\text{snap}}_k, \quad k=1,2,\cdots, M^{i,\text{snap}} \},
\end{equation*}
where $M^{i,\text{snap}}$ is the number of basis functions in $V^{i,\text{snap}}$.
We will perform a dimension reduction on the above snapshot spaces
by the use of a spectral problem.
Based on our analysis to be presented in the next sections,
we define the spectral problem as
\begin{equation}
\int_{K_i} \big(2\mu {\epsilon}( {u}): {\epsilon}( {v}) +
\lambda \nabla\cdot {u} \nabla\cdot {v}\big)d {x}
= \frac{\xi}{H} \int_{\partial K_i}\left\langle\lambda+2\mu\right\rangle {u}\cdot {v} \; ds,
\label{eq:spec-dg}
\end{equation}
where $\xi$ denotes the eigenvalue and $\left\langle\lambda+2\mu\right\rangle$ is the maximum value of $\average{ \lambda+2\mu}$ on $\partial K_i$.
The above spectral problem (\ref{eq:spec-dg}) is again solved in the snapshot space $V^{i,\text{snap}}$.
We let $(\phi_k, \xi_k)$, for $k=1,2,\cdots, M^{i,\text{snap}}$
be the eigenfunctions and the corresponding eigenvalues.
Assume that
\begin{equation*}
\xi_1\leq \xi_2 \leq \cdots \leq \xi_{M^{i,\text{snap}}}.
\end{equation*}
Then the first $L_i$ eigenfunctions will be used to construct the local offline space.
Indeed, we define
\begin{equation}
{\psi}^{i,\text{off}}_l = \sum_{k=1}^{M^{i,\text{snap}}} \phi_{lk} {\psi}^{i,\text{snap}}_k, \quad\quad l=1,2,\cdots, L_i,
\end{equation}
where $\phi_{lk}$ is the $k$-th component of $\phi_l$.
The local offline space is then defined as
\begin{equation*}
V^{i,\text{off}} = \text{span} \{ {\psi}^{i,\text{off}}_l, \quad l=1,2,\cdots, L_i \}.
\end{equation*}
The global offline space is also defined as
\begin{equation*}
V^{\text{off}} = \text{span} \{ V^{i,\text{off}}, \quad i=1,2,\cdots, N \}.
\end{equation*}
\subsection{Oversampling technique}
In this section,
we present an
oversampling technique for generating multiscale basis functions. The main idea
of
oversampling is to solve the local spectral problem in a larger domain.
This yields a snapshot space of smaller dimension, since the snapshot
vectors capture the solution oscillations near the boundaries. In our previous approaches,
the snapshot vectors can take arbitrary values on the boundaries
of the coarse blocks, which leads to large-dimensional coarse spaces.
For the harmonic extension snapshot case,
we solve equation (\ref{eq:cg_snap_har_1}) and (\ref{eq:cg_snap_har_2}) in $\omega_i^+$ (see Figure \ref{fig:grid})
instead of $\omega_i$ for CG case, and solve the equation (\ref{eq:dg_snap_har_1}) and (\ref{eq:dg_snap_har_2}) in $K_i^+$ instead of
$K_i$ for DG case. We denote the solutions as $\psi_i^{+,\text{snap}}$, and their restrictions on $\omega_i$ or $ K_i $ as $\psi_i^{\text{snap}}$.
We reorder these functions according to the eigenvalue behavior and write
$$
R_{\text{snap}}^+ = \left[ \psi_{1}^{+,\text{snap}}, \ldots, \psi_{M_{\text{snap}}}^{+,\text{snap}} \right] \quad \text{and} \quad R_{\text{snap}} = \left[ \psi_{1}^{\text{snap}}, \ldots, \psi_{M_{\text{snap}}}^{\text{snap}} \right],
$$
where $M_{\text{snap}}$ denotes the total number of functions kept in the snapshot space.
For the CG case, we define the following spectral problems in the snapshot space:
\begin{equation}
R_{\text{snap}}^TAR_{\text{snap}}\Psi_k=\zeta (R_{\text{snap}}^{+})^TM^+R_{\text{snap}}^+\Psi_k\label{cg_over_1},
\end{equation}
or \begin{equation}
(R_{\text{snap}}^{+})^TA^+R_{\text{snap}}^+\Psi_k=\zeta (R_{\text{snap}}^{+})^TM^+R_{\text{snap}}^+\Psi_k\label{cg_over_2},
\end{equation}
where
\begin{equation*}
\begin{split}
& A=[a_{kl}] =\int_{\omega_i} \Big( 2\mu {\epsilon}( {\psi_k^{\text{snap}}}) : {\epsilon}( {\psi_l^{\text{snap}}})
+ \lambda \nabla\cdot {\psi_k^{\text{snap}}} \, \nabla\cdot {\psi_l^{\text{snap}}} \Big) \; d {x} ,\\
& A^+=[a_{kl}^+] =\int_{\omega_i^+} \Big( 2\mu {\epsilon}( {\psi_k^{+,\text{snap}}}) : {\epsilon}( {\psi_l^{+,\text{snap}}})
+ \lambda \nabla\cdot {\psi_k^{+,\text{snap}}} \, \nabla\cdot {\psi_l^{+,\text{snap}}} \Big) \; d {x} ,\\
& M^+=[m_{kl}^+] =\int_{\omega_i^+} \tilde{\kappa} {\psi_k^{+,\text{snap}}} \cdot {\psi_l^{+,\text{snap}}} \; d {x},
\end{split}
\end{equation*}
where $\tilde{\kappa}$ is defined through (\ref{eq:kappa_tilda}).
The local spectral problem for DG coupling is defined as
\begin{equation}
(R_{\text{snap}}^{+})^TA^+R_{\text{snap}}^+\Psi_k=\zeta (R_{\text{snap}}^{+})^TM_1^+R_{\text{snap}}^+\Psi_k\label{dg_over_1}
\end{equation}
or
\begin{equation}
(R_{\text{snap}}^{+})^TA^+R_{\text{snap}}^+\Psi_k=\zeta (R_{\text{snap}}^{+})^TM_2^+R_{\text{snap}}^+\Psi_k\label{dg_over_2}
\end{equation}
in the snapshot space, where
\begin{equation*}
\begin{split}
& A^+=[a_{kl}^+] =\int_{K_i^+} \Big( 2\mu {\epsilon}( {\psi_k^{+,\text{snap}}}) : {\epsilon}( {\psi_l^{+,\text{snap}}})
+ \lambda \nabla\cdot {\psi_k^{+,\text{snap}}} \, \nabla\cdot {\psi_l^{+,\text{snap}}} \Big) \; d {x} ,\\
& M_1^+=[m_{1,kl}^+] =\frac{1}{H}\int_{K_i^+} \average{\lambda+2\mu} {\psi_k^{+,\text{snap}}} \cdot {\psi_l^{+,\text{snap}}} \; d {x}, \\
& M_2^+=[m_{2,kl}^+] =\frac{1}{H}\int_{\partial K_i^+} \average{\lambda+2\mu} {\psi_k^{+,\text{snap}}} \cdot {\psi_l^{+,\text{snap}}} \; d {x}.
\end{split}
\end{equation*}
After solving the above local spectral problems, we form
the offline space as in the no-oversampling case; see Section \ref{key:cg_coupling}
for CG coupling and Section \ref{key:dg_coupling} for DG coupling.
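A minimal algebraic sketch of the oversampled spectral problem (\ref{cg_over_1}): snapshot columns computed on the larger domain are restricted to the target domain (here through a hypothetical index set `local_dofs`), the two Gram matrices are formed, and a small generalized eigenproblem is solved. All matrices below are random stand-ins for the assembled operators, not the actual finite element matrices.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_plus, n_local, m = 30, 20, 8              # oversampled dofs, local dofs, snapshots
R_plus = rng.standard_normal((n_plus, m))   # snapshot columns on omega_i^+
local_dofs = np.arange(n_local)             # hypothetical restriction map
R = R_plus[local_dofs, :]                   # restriction to omega_i

B = rng.standard_normal((n_local, n_local))
A = B @ B.T + n_local * np.eye(n_local)     # SPD stand-in for stiffness on omega_i
Bp = rng.standard_normal((n_plus, n_plus))
M_plus = Bp @ Bp.T + n_plus * np.eye(n_plus)  # SPD stand-in for weighted mass on omega_i^+

lhs = R.T @ A @ R                  # R_snap^T A R_snap
rhs = R_plus.T @ M_plus @ R_plus   # (R_snap^+)^T M^+ R_snap^+
zeta, Psi = eigh(lhs, rhs)         # m-dimensional generalized eigenproblem
```

Note that the eigenproblem has only `m` unknowns, the number of retained snapshots, regardless of the fine-grid dimension.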
\section{Numerical results}
\label{sec:gmsfem_num_res}
In this section, we present numerical results for CG-GMsFEM and DG-GMsFEM with two models.
We consider different choices of snapshot spaces such as local-fine grid functions
and harmonic functions and use different
local spectral problems such as no-oversampling and oversampling described in the paper.
For the first model, we consider a medium that has no scale separation and contains features such
as high-conductivity channels and isolated inclusions.
The Young's modulus $E(x)$ is depicted in Figure \ref{fig:young_modulus}, and we set $\lambda(x)=\frac{\nu}{(1+\nu)(1-2\nu)}E(x)$ and $\mu(x)=\frac{1}{2(1+\nu)}E(x)$,
where the Poisson ratio $\nu$ is taken to be $0.22$.
For the second example, we use the model that is used in
\cite{gfgce14} for the simulation of subsurface elastic waves
(see Figure~\ref{fig:seg_model}).
In all numerical tests, we use a constant force and homogeneous Dirichlet boundary conditions. In all tables below, $\Lambda_*$ represents the minimum discarded eigenvalue of the corresponding spectral problem. We note that the first three eigenbasis functions are constant and linear functions; therefore, in all cases we present our numerical results starting from the fourth eigenbasis function.
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{youngmodulus}
\par\end{centering}
\caption{ $\text{Young's modulus (Model 1)}$ }
\label{fig:young_modulus}
\end{figure}
\begin{figure}[H]
\begin{centering}
\includegraphics[scale=0.5]{seg_lambda}\includegraphics[scale=0.5]{seg_mu}
\par\end{centering}
\caption{Left: $\lambda$ ~~Right: $\mu$ (Model 2) }
\label{fig:seg_model}
\end{figure}
Before presenting the numerical results, we summarize our numerical findings.
\begin{itemize}
\item We observe a fast decay in the error as more basis functions are added, in both CG-GMsFEM and DG-GMsFEM.
\item We observe that the use of the multiscale partition of unity improves the accuracy of
CG-GMsFEM compared to the use of piecewise bi-linear functions.
\item We observe an improvement in the accuracy (a slight improvement
in the CG case and a large improvement in the DG case)
when using oversampling for
the examples we considered, together with a decrease in the snapshot space dimension.
\end{itemize}
\subsection{Numerical results for Model 1 with conforming GMsFEM (CG-GMsFEM)}
For the first model, we divide the domain $D=[0,1]\times[0,1]$ into $10\times 10$ coarse grid blocks, and inside each coarse block we use $10\times 10$ fine-scale square blocks, which results in a $100\times100$ fine grid.
The dimension of the reference solution is 20402. We will show the performance of CG-GMsFEM with the use of local fine-scale snapshots and harmonic extension snapshots. Both bi-linear and multiscale partition of unity functions (see Section \ref{key:cg_coupling}) will be considered. For each case, we will provide a comparison between oversampling and no-oversampling.
For the error measure, we use the relative weighted $L^2$ norm error and the weighted $H^1$ norm error to compare the accuracy of CG-GMsFEM, which are defined as
\[
e_{L^2}=\cfrac{\|(\lambda+2\mu )( u_{H}- u_h)\|_{L^{2}(D)}}{\|(\lambda+2\mu ) u_h\|_{L^{2}(D)}},\quad
e_{H^{1}}=\sqrt{\cfrac{a( {u_{H}-u_h}, {u_{H}-u_h})}{a( {u_h}, {u_h})}}\]
where $ {u_H}$ and $ {u_h}$ are the CG-GMsFEM solution defined in (\ref{cg_ms_sol}) and the fine-scale CG-FEM solution defined in (\ref{cg_fine_sol}), respectively.
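The relative weighted $L^2$ error above can be evaluated from fine-grid data; the sketch below (with hypothetical scalar cell values, weights $\lambda+2\mu$, and quadrature volumes, all invented for illustration) shows the formula only, not the authors' implementation:

```python
import numpy as np

def weighted_l2_error(w, uH, uh, vol):
    """Relative weighted L2 error ||w (uH - uh)|| / ||w uh||,
    with w = lambda + 2 mu and cell volumes vol as quadrature weights.
    Scalar cell values are used here for simplicity."""
    num = np.sqrt(np.sum(vol * (w * (uH - uh)) ** 2))
    den = np.sqrt(np.sum(vol * (w * uh) ** 2))
    return num / den

w = np.array([1.0, 100.0, 1.0])    # high-contrast weight
vol = np.full(3, 1.0 / 3.0)        # cell volumes
uh = np.array([1.0, 2.0, 3.0])     # fine-scale values
uH = np.array([1.1, 2.0, 2.9])     # multiscale values
err = weighted_l2_error(w, uH, uh, vol)
```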
Tables \ref{lin_spec} and \ref{ms_spec} show the numerical results of using local fine-scale
snapshots with piecewise bi-linear functions and multiscale functions as the partition of unity, respectively. As we observe, when more multiscale basis functions are used, the errors decay rapidly, especially for the multiscale partition of unity. For example, the weighted $L^2$ error drops from 24.9\% to 1.1\% when bi-linear partition of unity functions are used without oversampling, while the dimension increases from 728 to 2672. If we use the multiscale partition of unity, the corresponding weighted $L^2$ error drops from 8.4\% to 0.6\%, which demonstrates a great advantage of the multiscale partition of unity. Oversampling helps improve the accuracy, as our results indicate. The local eigenvalue problem used for oversampling is Eq.(\ref{cg_over_2}).
Next, we present the numerical results when harmonic extensions are used as snapshots in Tables \ref{lin_har} and \ref{ms_har}. We observe similar trends as in the local fine-scale snapshot case. The errors decrease as the number of basis functions increases. The $L^2$ error is less than $1$\% when about $13$\% of the degrees of freedom are used. Similarly, the oversampling method helps to
improve the accuracy. In this case, the local eigenvalue problem used for oversampling is Eq.(\ref{cg_over_1}).
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\specialcell{Dimension}} & \multicolumn{2}{c|}{$1/\Lambda_*$} & \multicolumn{2}{c|}{$e_{L^{2}}$} & \multicolumn{2}{c|}{$e_{H^{1}}$}\tabularnewline
\cline{2-7}
& \specialcell{without\\oversampling} & \specialcell{with\\ oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} \tabularnewline
\hline
\hline
728 & 1.3e+07 & 1.4e+07 & 0.249 & 0.215 & 0.444 & 0.409 \tabularnewline
\hline
1214 & 3.1e+06 & 5.6e+06 & 0.048 & 0.047 & 0.220 & 0.213 \tabularnewline
\hline
1700 & 7.0e+05 & 2.7e+06 & 0.027 & 0.024& 0.162 & 0.153 \tabularnewline
\hline
2186 & 1.8e+00 & 1.7e+06 & 0.018 & 0.016 & 0.133 & 0.123 \tabularnewline
\hline
2672 & 9.9e-01 & 1.4e+06 & 0.011 & 0.010 & 0.105 & 0.099 \tabularnewline
\hline
\end{tabular}
\caption{Relative errors between CG-MsFEM solution and the fine-scale CG-FEM solution, piecewise bi-linear partition of unity functions are used. The case with local fine-scale snapshots.}
\label{lin_spec}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\specialcell{Dimension}} & \multicolumn{2}{c|}{$1/\Lambda_*$} & \multicolumn{2}{c|}{$e_{L^{2}}$} & \multicolumn{2}{c|}{$e_{H^{1}}$}\tabularnewline
\cline{2-7}
& \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} \tabularnewline
\hline
\hline
728 & 6.9e+06 & 6.2e+06 & 0.084& 0.110 & 0.254& 0.274 \tabularnewline
\hline
1214 & 5.8e+00& 3.2e+06 & 0.031& 0.028 & 0.166& 0.160 \tabularnewline
\hline
1700 & 2.1e+00 & 1.2e+06 & 0.015& 0.012& 0.111& 0.105 \tabularnewline
\hline
2186 & 1.3e+00 & 5.9e+05 & 0.009& 0.008 & 0.088& 0.083 \tabularnewline
\hline
2672 & 9.4e-01& 1.0e+01 & 0.006& 0.005 & 0.071& 0.066 \tabularnewline
\hline
\end{tabular}\caption{Relative errors between CG-MsFEM solution and the fine-scale CG-FEM solution, multiscale partition of unity functions are used. The case with local fine-scale snapshots.}
\label{ms_spec}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\specialcell{Dimension}} & \multicolumn{2}{c|}{$1/\Lambda_*$} & \multicolumn{2}{c|}{$e_{L^{2}}$} & \multicolumn{2}{c|}{$e_{H^{1}}$}\tabularnewline
\cline{2-7}
& \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} \tabularnewline
\hline
\hline
728 & 1.3e+07& 1.2e+07 & 0.254& 0.218 & 0.446& 0.418 \tabularnewline
\hline
1214 & 2.1e+06& 5.5e+06 & 0.047& 0.048 & 0.218& 0.217 \tabularnewline
\hline
1700 & 2.8e+05& 3.2e+06 & 0.024& 0.022& 0.153& 0.148 \tabularnewline
\hline
2186 & 1.2e+00& 9.8e+05 & 0.016& 0.015 & 0.124& 0.122 \tabularnewline
\hline
2672 & 5.8e-01& 2.1e+04 & 0.008& 0.010 & 0.102& 0.099 \tabularnewline
\hline
\end{tabular}\caption{Relative errors between CG-MsFEM solution and the fine-scale CG-FEM solution, piecewise bi-linear partition of unity functions are used. The case with harmonic snapshots.}
\label{lin_har}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Dimension} & \multicolumn{2}{c|}{$1/\Lambda_*$} & \multicolumn{2}{c|}{$e_{L^{2}}$} & \multicolumn{2}{c|}{$e_{H^{1}}$}\tabularnewline
\cline{2-7}
& \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} \tabularnewline
\hline
\hline
728 & 7.0e+06& 7.2e+06 & 0.087& 0.112 & 0.259& 0.291 \tabularnewline
\hline
1214 & 5.5e+00& 3.2e+06 & 0.034& 0.032 & 0.174& 0.169 \tabularnewline
\hline
1700 & 1.9e+00& 1.5e+06 & 0.015& 0.013& 0.115& 0.112 \tabularnewline
\hline
2186 & 1.0e+00& 2.5e+05 & 0.009& 0.008 & 0.090& 0.089 \tabularnewline
\hline
2672 & 7.1e-01& 1.7e+00 & 0.007& 0.006 & 0.075& 0.074 \tabularnewline
\hline
\end{tabular}\caption{Relative errors between CG-MsFEM solution and the fine-scale CG-FEM solution, multiscale partition of unity functions are used. The case with harmonic snapshots.}
\label{ms_har}
\end{table}
\subsection{Numerical results for Model 1 with DG-GMsFEM}
In this section, we consider numerical results for DG-GMsFEM discussed in Section \ref{key:dg_coupling}.
To show the performance of DG-GMsFEM, we use the same model (see Figure \ref{fig:young_modulus}) and the same coarse and fine grid settings as in the CG case. We will also present results using both harmonic extension and local fine-scale (eigenbasis) snapshot spaces. To measure the error,
we define the broken weighted $L^2$ norm error and $H^1$ norm error
\[
e_{L^2}=\sqrt{\frac{\sum_{K\in\mathcal{T}_{H}}\int_{K}(\lambda+2\mu)( u_{H}- u_h)^2dx}{\sum_{K\in\mathcal{T}_{H}}\int_{K} (\lambda+2\mu) u_h^2dx}} \quad
e_{H^1} =\sqrt{\frac{\sum_{K\in\mathcal{T}_{H}}\int_{K} {\sigma}( u_{H}- u_h): {\varepsilon}( u_{H}- u_h)\,dx}{\sum_{K\in\mathcal{T}_{H}}\int_{K} {\sigma}( u_h): {\varepsilon}( u_h)\,dx}}
\]
where $ {u_H}$ and $ {u_h}$ are the DG-GMsFEM solution defined in (\ref{eq:ipdg}) and the fine-scale DG-FEM solution defined in
(\ref{eq:ipdgfine}), respectively.
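For readers implementing these error measures, the broken relative norms amount to summing per-element quadrature contributions before taking square roots. The sketch below is a minimal illustration; the per-element integral values and field names (`wl2_err`, `en_ref`, etc.) are assumptions for demonstration, not part of the method.

```python
import math

def broken_relative_errors(elems):
    """Each entry of `elems` holds precomputed element integrals:
    wl2_err = int_K (lam+2mu)(u_H-u_h)^2 dx,  wl2_ref = int_K (lam+2mu) u_h^2 dx,
    en_err  = int_K sigma(u_H-u_h):eps(u_H-u_h) dx,  en_ref = int_K sigma(u_h):eps(u_h) dx."""
    num_l2 = sum(e["wl2_err"] for e in elems)
    den_l2 = sum(e["wl2_ref"] for e in elems)
    num_h1 = sum(e["en_err"] for e in elems)
    den_h1 = sum(e["en_ref"] for e in elems)
    return math.sqrt(num_l2 / den_l2), math.sqrt(num_h1 / den_h1)

# toy two-element "mesh" with made-up integral values
elems = [{"wl2_err": 0.02, "wl2_ref": 4.0, "en_err": 0.5, "en_ref": 10.0},
         {"wl2_err": 0.02, "wl2_ref": 4.0, "en_err": 0.5, "en_ref": 10.0}]
eL2, eH1 = broken_relative_errors(elems)
```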
In Table \ref{dg-spe}, the numerical results of DG-MsFEM with local fine-scale functions as the snapshot space are shown. We observe that DG-MsFEM gives a better approximation than CG-MsFEM when oversampling is used.
The error decreases more rapidly as we add basis functions. More specifically, the relative broken $L^2$ and $H^1$ errors decrease from $14.1$\% and $52.5$\% to $0.2$\% and $5.8$\%, respectively, while
the degrees of freedom of the coarse system increase from $728$ to $2696$, where the latter is only $13.2$\% of the dimension of the reference solution. The local eigenvalue problem used for oversampling is Eq.~(\ref{dg_over_1}).
Table \ref{dg-har} shows the corresponding results when harmonic functions are used to construct the snapshot space. We observe a similar error decay trend as with local fine-scale snapshots. Oversampling helps improve the results significantly. Although the error is very large when the dimension of the coarse system is $728$ ($4$ multiscale basis functions are used), the error becomes very small when the dimension reaches $1728$ ($9$ multiscale basis functions are used). The local eigenvalue problem used for oversampling here is Eq.~(\ref{dg_over_2}). We remark that oversampling not only helps decrease the error, but also greatly decreases the dimension of the snapshot space in the periodic case.
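The offline spaces behind these tables are obtained, on each local domain, by solving a generalized symmetric eigenvalue problem on the snapshot space and keeping the modes with the smallest eigenvalues. The sketch below illustrates this selection step only; the random SPD matrices stand in for the actual local stiffness and mass matrices, and all sizes (`n`, `L`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                                   # snapshot space dimension (illustrative)
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)   # local "stiffness" (SPD)
S = rng.standard_normal((n, n)); S = S @ S.T + n * np.eye(n)   # local "mass" (SPD)

# Reduce the generalized problem A x = xi S x to a standard symmetric
# eigenproblem via the Cholesky factorization S = Lc Lc^T.
Lc = np.linalg.cholesky(S)
M = np.linalg.solve(Lc, np.linalg.solve(Lc, A).T).T  # Lc^{-1} A Lc^{-T}, symmetric
xi, Y = np.linalg.eigh(M)                            # eigenvalues in ascending order
V = np.linalg.solve(Lc.T, Y)                         # back-transformed eigenvectors

L = 5
offline_basis = V[:, :L]   # modes with the smallest eigenvalues span the offline space
```

Oversampling changes only which local eigenvalue problem is posed (and on which enlarged domain), not this truncation step.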
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Dimension} & \multicolumn{2}{c|}{$1/\Lambda_*$} & \multicolumn{2}{c|}{$e_{L^{2}}$} & \multicolumn{2}{c|}{$e_{H^{1}}$}\tabularnewline
\cline{2-7}
& \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} \tabularnewline
\hline
\hline
728 & 4.9e-03& 1.5e-03 & 0.281& 0.141 & 0.554& 0.525 \tabularnewline
\hline
1184 & 3.0e-03& 8.5e-04 & 0.118& 0.019 & 0.439& 0.209 \tabularnewline
\hline
1728 & 2.1e-03& 5.6e-04 & 0.108& 0.012& 0.394& 0.145 \tabularnewline
\hline
2184& 1.2e-03& 3.5e-04 & 0.073& 0.007& 0.348& 0.096 \tabularnewline
\hline
2696 & 1.0e-03& 2.7e-04 & 0.056& 0.002 & 0.300& 0.058 \tabularnewline
\hline
\end{tabular}\caption{Relative errors between DG-MsFEM solution and the fine-scale DG-FEM solution. The case with local fine-scale snapshots. }
\label{dg-spe}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Dimension} & \multicolumn{2}{c|}{$1/\Lambda_*$} & \multicolumn{2}{c|}{$e_{L^{2}}$} & \multicolumn{2}{c|}{$e_{H^{1}}$}\tabularnewline
\cline{2-7}
& \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} & \specialcell{without\\oversampling} & \specialcell{with\\oversampling} \tabularnewline
\hline
\hline
728 & 2.9e-01& 1.6e-01 & 0.285& 0.149 & 0.557& 0.528 \tabularnewline
\hline
1184 & 1.6e-01& 6.5e-02 & 0.193& 0.076 & 0.515& 0.366\tabularnewline
\hline
1728 & 1.0e-01& 5.4e-02 & 0.114& 0.009& 0.432& 0.155 \tabularnewline
\hline
2184& 7.1e-02 & 3.9e-02 & 0.081& 0.004& 0.326& 0.078 \tabularnewline
\hline
2696 & 6.3e-02 & 2.8e-02& 0.043& 0.002 & 0.231& 0.060 \tabularnewline
\hline
\end{tabular}\caption{Relative errors between the DG-MsFEM solution and the fine-scale DG-FEM solution. The case with harmonic snapshots.}
\label{dg-har}
\end{table}
\subsection{Numerical results for Model 2}
The purpose of this example is to test the method on an earth model used in \cite{gfgce14}.
The domain for the second model is $D=(0,6000)^2$ (in meters), which is divided into $900=30\times30$ square coarse grid blocks; inside each coarse block we generate $20\times 20$ fine-scale square cells. The reference solution is computed by standard CG-FEM on the resulting $600\times 600$ fine grid. We note that the dimension of the reference solution is 722402. The numerical results for CG-MsFEM and DG-MsFEM are presented in Tables \ref{seg_lin_eigen} and \ref{dg_seg_eigen},
respectively. We observe relatively low errors compared to the high-contrast case, and the errors decrease as the dimension of the offline space increases. Both coupling methods (CG and DG) show very good approximation ability.
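As a sanity check on the grid sizes quoted above: a $600\times600$ bilinear quadrilateral mesh has $601\times601$ nodes, each carrying two displacement components, which reproduces the stated dimension $722402$ of the reference solution. A minimal arithmetic check:

```python
coarse = 30           # coarse blocks per direction
fine_per_coarse = 20  # fine cells per coarse block per direction
cells = coarse * fine_per_coarse  # 600 fine cells per direction
nodes = (cells + 1) ** 2          # bilinear (Q1) nodes on the fine grid
dofs = 2 * nodes                  # two displacement components per node
```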
\begin{table}[H]
\centering \begin{tabular}{|c|c|c|c|}
\hline
Dimension & $\frac{1}{\Lambda_*}$ & $e_{L^2}$ & $e_{H^{1}}$ \tabularnewline
\hline
6968 & 4.9e+00 & 3.1e-03 & 5.4e-02 \tabularnewline
\hline
8650 & 4.5e+00 & 2.7e-03 & 5.2e-02 \tabularnewline
\hline
10332 & 3.9e+00 & 2.5e-03& 4.9e-02 \tabularnewline
\hline
12014 & 3.6e+00 & 2.2e-03 & 4.7e-02 \tabularnewline
\hline
\end{tabular}\caption{Relative errors between CG-MsFEM solution and the fine-scale CG-FEM solution, piecewise bi-linear partition of unity functions are used. The case with local fine-scale snapshots.}
\label{seg_lin_eigen}
\end{table}
\begin{table}[H]
\centering \begin{tabular}{|c|c|c|c|}
\hline
Dimension & $\frac{1}{\Lambda_*}$ & $e_{L^2}$ & $e_{H^{1}}$ \tabularnewline
\hline
7200 & 6.3e-06 & 4.1e-03 & 7.1e-02 \tabularnewline
\hline
9000 & 6.0e-06 & 4.0e-03 & 6.6e-02 \tabularnewline
\hline
10800 & 4.6e-06 & 3.8e-03& 6.3e-02 \tabularnewline
\hline
12600 & 4.5e-06 & 3.1e-03 & 5.9e-02 \tabularnewline
\hline
\end{tabular}\caption{Relative errors between DG-MsFEM solution and the fine-scale DG-FEM solution. The case with local fine-scale snapshots.}
\label{dg_seg_eigen}
\end{table}
\section{Error estimate for CG coupling}
\label{sec:gmsfem_error}
In this section, we present the error analysis for both the no-oversampling
and oversampling cases. In the following, $a\preceq b$ means $a\leq Cb$, where $C$ is a constant that is independent of the mesh size and the contrast of the coefficients.
\subsection{No oversampling case}
\begin{lemma}
\label{cacc_lemma}
Let $\omega_n$ be a coarse neighborhood. For any $\psi\in H^1(\omega_n)$, we define $r=-div(\sigma(\psi))$. Then we have
\begin{equation}
\label{cacc}
\int_{\omega_n}2\mu \chi_n^2 \epsilon(\psi):\epsilon(\psi)+\int_{\omega_n}\lambda \chi_n^2 (\nabla\cdot{\psi})^2
\preceq |\int_{\omega_n} \chi_n^2{r}\cdot{{\psi}} |+\int_{\omega_n} (\lambda + 2\mu)|\nabla \chi_n|^2 {\psi}^2,
\end{equation}
where $\chi_n$ is a scalar partition of unity subordinated to the coarse neighborhood $\omega_n$.
\end{lemma}
{\it Proof}.
Multiplying both sides of $-div(\sigma(\psi))=r$ by $\chi_n^2 \psi$ and integrating by parts over $\omega_n$, we have
\begin{equation}
\begin{split}
\int_{\omega_n} \chi_n^2{r}\cdot{\psi} &=\int_{\omega_n} 2 \mu \epsilon(\psi):\epsilon(\chi_n^2\psi)+\int_{\omega_n}\lambda \nabla\cdot\psi \nabla\cdot(\chi_n^2{\psi}) \\
&= \int_{\omega_n} 2\mu \chi_n^2 \epsilon(\psi):\epsilon(\psi)+ \int_{\omega_n}2 \mu \chi_n \epsilon_{ij}({\psi}) ( \psi_i {\partial \chi_n\over \partial x_j} + \psi_j {\partial \chi_n\over \partial x_i})\\
&\quad+\int_{\omega_n} \lambda \chi_n^2 (\nabla\cdot{\psi})^2 + \int_{\omega_n}2 \lambda \nabla\cdot\psi \chi_n \psi\cdot\nabla\chi_n \\
&=\int_{\omega_n} 2\mu \chi_n^2 \epsilon(\psi):\epsilon(\psi) +\int_{\omega_n} 2 \left( \sqrt{2\mu} \chi_n \epsilon_{ij}({\psi})\right) \left(\sqrt{\mu/2} ( \psi_i {\partial \chi_n\over \partial x_j} + \psi_j {\partial \chi_n\over \partial x_i})\right)\\
&\quad+\int_{\omega_n}\lambda \chi_n^2 (\nabla\cdot\psi)^2 +
\int_{\omega_n} 2 \left( \sqrt{\lambda} \chi_n \nabla\cdot\psi\right) \left(\sqrt{\lambda}\psi\cdot\nabla\chi_n\right).
\end{split}
\end{equation}
Therefore,
\begin{equation}
\begin{split}\label{caco}
& \int_{\omega_n}2\mu \chi_n^2 \epsilon(\psi):\epsilon(\psi) +\int_{\omega_n}\lambda \chi_n^2 (\nabla\cdot\psi)^2\\
&\leq|\int_{\omega_n} \chi_n^2{r}\cdot{\psi} |+|
\int_{\omega_n} 2 \left( \sqrt{2\mu} \chi_n \epsilon_{ij}({\psi})\right) \left(\sqrt{\mu/2} ( \psi_i {\partial \chi_n\over \partial x_j} + \psi_j {\partial \chi_n\over \partial x_i})\right)+
\int_{\omega_n} 2 \left( \sqrt{\lambda} \chi_n \nabla\cdot\psi\right) \left(\sqrt{\lambda}\psi\cdot\nabla\chi_n\right)|\\
&\preceq |\int_{\omega_n} \chi_n^2{r}\cdot{{\psi}} |+\int_{\omega_n} (2\lambda + 4\mu)|\nabla \chi_n|^2 {\psi}^2\\
&\preceq |\int_{\omega_n} \chi_n^2{r}\cdot{{\psi}} |+\int_{\omega_n} (\lambda + 2\mu)|\nabla \chi_n|^2 {\psi}^2.
\end{split}
\end{equation}
In the last step, we have used $2ab\leq \epsilon a^2+\frac{1}{\epsilon}b^2$, and $(ab+cd)^2\leq (a^2+c^2)(b^2+d^2)$.
\begin{flushright}
$\square$
\end{flushright}
Next, we will show the convergence of the CG-GMsFEM solution defined in (\ref{cg_ms_sol}) without oversampling. We take $I^{\omega_n}{u_h}$ to be the first $L_n$ terms of the spectral expansion of $u_h$ in terms of the eigenfunctions of the problem
$ -\text{div}(\sigma(\phi_{n}))=\xi \tilde{\kappa}\phi_{n}$ solved in $V^h{(\omega_n)}$. Applying Cea's lemma, Lemma \ref{cacc_lemma} and the fact that $\chi_n\preceq 1$, we obtain
\begin{equation}
\label{cg_equ1}
\begin{split}
& \quad\int_D \left(2\mu \epsilon({u_h}-{u_H}):\epsilon({u_h}-{u_H})+\lambda (\nabla\cdot({u_h}-{u_H}))^2\right) \\
&\preceq \sum_{n=1}^{N_s} \int_{\omega_n}\left(2\mu \epsilon(\chi_n({u_h}-I^{\omega_n}{u_h})):\epsilon(\chi_n({u_h}-I^{\omega_n}{u_h})) +\lambda (\nabla\cdot(\chi_n({u_h}-I^{\omega_n}{u_h})))^2\right) \\
&\preceq\sum_{n=1}^{N_s}\int_{\omega_n}2\mu \chi_n^2 \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h}) +\sum_{n=1}^{N_s}\int_{\omega_n}\lambda \chi_n^2 (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2\\
&\quad+\sum_{n=1}^{N_s} \int_{\omega_n}(\lambda +2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2\\
&\preceq\sum_{n=1}^{N_s} \int_{\omega_n}(\lambda +2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2+\sum_{n=1}^{N_s}|\int_{\omega_n} \chi_n^2g\cdot({u_h}-I^{\omega_n}{u_h}) |\\
&\preceq\sum_{n=1}^{N_s} \int_{\omega_n}(\lambda +2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2+\sum_{n=1}^{N_s}\int_{\omega_n}((\lambda+2\mu) |\nabla\chi_n|^2)^{-1} {g}^2,
\end{split}
\end{equation}
where $g=f+\text{div}(\sigma(I^{\omega_n}{u_h}))$ and $f$ is the right hand side of (\ref{ob_equ}).
Using the properties of the eigenfunctions, we obtain
\begin{equation}
\int_{\omega_n}(\lambda+2\mu)\sum_{s=1}^{N_s}|\nabla\chi_s|^2 ({u_h}-I^{\omega_n}{u_h})^2\preceq \frac{1}{\xi_{L_n+1}^{\omega_n}}\left(\int_{\omega_n}2\mu \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h})+
\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2\right) .
\end{equation}
Then, the first term in the right hand side of (\ref{cg_equ1}) can be estimated as follows
\begin{equation}
\label{cg_eq2}
\begin{split}
&\quad\sum_{n=1}^{N_s} \int_{\omega_n}(\lambda +2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2
\preceq \sum_{n=1}^{N_s}\int_{\omega_n}(\lambda+2\mu)\sum_{s=1}^{N_s}|\nabla\chi_s|^2 ({u_h}-I^{\omega_n}{u_h})^2\\
&\preceq \sum_{n=1}^{N_s}\frac{1}{\xi_{L_n+1}^{\omega_n}}\left(\int_{\omega_n}2\mu \epsilon({u_h}-I^{\omega_n}{u_h}): \epsilon({u_h}-I^{\omega_n}{u_h})+\int_{\omega_n}\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2\right) \\
&\preceq \sum_{n=1}^{N_s}\frac{\alpha^{\omega_n}_{L_n+1}}{\xi_{L_n+1}^{\omega_n}}\left(\int_{\omega_n}2\mu \chi_n^2 \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h})+\int_{\omega_n}\lambda \chi_n^2 (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2\right) \\
&\preceq \sum_{n=1}^{N_s}\frac{\alpha^{\omega_n}_{L_n+1}}{\xi_{L_n+1}^{\omega_n}}\int_{\omega_n} (\lambda + 2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2+
\sum_{n=1}^{N_s}\frac{\alpha^{\omega_n}_{L_n+1}}{\xi_{L_n+1}^{\omega_n}}|\int_{\omega_n} \chi_n^2{g}\cdot({u_h}-I^{\omega_n}{u_h}) |\\
&\preceq \frac{1}{\Lambda_*}\left(\sum_{n=1}^{N_s}\int_{\omega_n}(\lambda + 2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2+\sum_{n=1}^{N_s}|\int_{\omega_n} \chi_n^2{g}\cdot({u_h}-I^{\omega_n}{u_h}) |\right),
\end{split}
\end{equation}
where
\begin{equation*}
\Lambda_*=\min_{\omega_n}\frac{\xi_{L_n+1}^{\omega_n}}{\alpha^{\omega_n}_{L_n+1}},
\end{equation*}
and
\begin{equation*}
{\alpha^{\omega_n}_{L_n+1}}=\frac{\int_{\omega_n} 2\mu \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h})+\int_{\omega_n}\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2} {\int_{\omega_n}2\mu \chi_n^2 \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h})+\int_{\omega_n}\lambda \chi_n^2 (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2}.
\end{equation*}
Applying inequality (\ref{cg_eq2}) $m$ times, we have
\begin{equation}
\begin{split}
&\quad\sum_{n=1}^{N_s} \int_{\omega_n}(\lambda +2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2\\
&\preceq \left(\frac{1}{\Lambda_*}\right)^m\sum_{n=1}^{N_s}\int_{\omega_n}(\lambda + 2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2+\sum_{l=1}^m \left(\frac{1}{\Lambda_*}\right)^l
\sum_{n=1}^{N_s}|\int_{\omega_n} \chi_n^2{g}\cdot({u_h}-I^{\omega_n}{u_h}) |\\
&\preceq \left(\frac{1}{\Lambda_*}\right)^m\sum_{n=1}^{N_s}\int_{\omega_n}(\lambda + 2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2+ (\Lambda_*)^m\left(\frac{1-\Lambda_*^{-m}}{\Lambda_*-1}\right)\sum_{n=1}^{N_s}\int_{\omega_n}((\lambda+2\mu)|\nabla\chi_n|^2)^{-1}{g}^2.
\end{split}
\end{equation}
Taking into account that
\begin{equation}
\sum_{n=1}^{N_s} \int_{\omega_n}(\lambda +2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2\preceq \sum_{n=1}^{N_s}\int_{\omega_n}(\lambda+2\mu)\sum_{s=1}^{N_s} |\nabla\chi_s|^2 ({u_h}-I^{\omega_n}{u_h})^2,
\end{equation}
and
\begin{equation}
\label{fem_estimate}
\sum_{n=1}^{N_s}\int_{\omega_n} \left(2\mu \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h})+\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2\right) \preceq \int_{D}\left(2\mu \epsilon({u_h}):\epsilon({u_h})+\lambda (\nabla\cdot{u_h})^2\right),
\end{equation}
inequality (\ref{cg_equ1}) becomes
\begin{equation}
\begin{split}
&\quad\int_D\left(2\mu \epsilon({u_h}-{u_H}): \epsilon({u_h}-{u_H})+\lambda (\nabla\cdot({u_h}-{u_H}))^2\right) \\
& \preceq\left(\frac{1}{\Lambda_*}\right)^{m+1}\left(\sum_{n=1}^{N_s}\int_{\omega_n}2\mu \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h}) +\sum_{n=1}^{N_s}\int_{\omega_n}\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2\right) \\
&\quad+\left(\Lambda_*^m\left(\frac{1-\Lambda_*^{-m}}{\Lambda_*-1}\right)+1\right)
\sum_{n=1}^{N_s}\int_{\omega_n}((\lambda+2\mu)|\nabla\chi_n|^2)^{-1}{g}^2\\
&\preceq\big(\frac{1}{\Lambda_*}\big)^{m+1}\int_{D} \left(2\mu \epsilon({u_h}):\epsilon({u_h})+\lambda(\nabla\cdot{u_h})^2\right)
+ \left((\Lambda_*)^m\left(\frac{1-(\Lambda_*)^{-m}}{\Lambda_*-1}\right)+1\right)R,
\end{split}
\end{equation}
where $R=\sum_{n=1}^{N_s}\int_{\omega_n}((\lambda+2\mu)|\nabla\chi_n|^2)^{-1}{g}^2$. If $|{g}|\preceq1$, then $\int_{\omega_n}((\lambda+2\mu)|\nabla\chi_n|^2)^{-1}{g}^2\preceq H^2$, from which we obtain
\begin{equation}
\begin{split}
\quad \int_D\left(2\mu \epsilon({u_h}-{u_H}):\epsilon({u_h}-{u_H})+\lambda (\nabla\cdot({u_h}-{u_H}))^2\right) &\preceq
\big(\frac{1}{\Lambda_*}\big)^{m+1} \int_{D}\left( 2\mu \epsilon({u_h}):\epsilon({u_h})+\lambda(\nabla\cdot{u_h})^2\right)\\
& \quad+\left((\Lambda_*)^m\left(\frac{1-(\Lambda_*)^{-m}}{\Lambda_*-1}\right)+1\right)H^2.
\end{split}
\end{equation}
Combining results above, we have
\begin{theorem}
Let $u_h\in V^h_{CG}$ be the fine-scale CG-FEM solution defined in (\ref{cg_fine_sol}) and $u_H$ be the CG-GMsFEM solution defined in (\ref{cg_ms_sol}) without oversampling. If $\Lambda_*\ge1$ and $\int_D (\lambda+2\mu)^{-1}g^2\preceq1$, then taking $ m=-\frac{\log H}{\log\Lambda_*}$ yields
\begin{equation*}
\quad\int_D\left(2\mu \epsilon({u_h}-{u_H}):\epsilon({u_h}-{u_H})+\lambda (\nabla\cdot({u_h}-{u_H}))^2\right) \preceq\left(\frac{H}{\Lambda_*}\right)\left(\int_{D}\left(2\mu \epsilon({u_h}):\epsilon({u_h})+\lambda (\nabla\cdot{u_h})^2 \right)+1\right).
\end{equation*}
\end{theorem}
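The choice $m=-\frac{\log H}{\log\Lambda_*}$ in the theorem is what turns the contraction factor $(1/\Lambda_*)^{m}$ into $H$, leaving the stated factor $H/\Lambda_*$ after one more application. A quick numerical check with illustrative values of $H$ and $\Lambda_*$ (the specific numbers are assumptions for demonstration):

```python
import math

H, Lam = 0.05, 4.0                 # illustrative coarse mesh size and spectral quantity
m = -math.log(H) / math.log(Lam)   # number of iterations chosen in the theorem
contraction = Lam ** (-m)          # equals H by construction
final_factor = Lam ** (-(m + 1))   # equals H / Lam, the rate in the theorem
```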
\subsection{Oversampling case}
In this subsection, we will analyze the convergence of CG-GMsFEM solution defined in (\ref{cg_ms_sol}) with oversampling. We define $I^{\omega_n^{+}}{u_h}$ as an interpolation of
${u_h}$ in $\omega_n^{+}$ using the first $L_n$ modes for the eigenvalue problem (\ref{cg_over_1}).
Let $\chi_n^+$ be a partition of unity subordinated to the coarse neighborhood $\omega_n^+$. We require $\chi_n^+$ to be zero on $\partial\omega_n^+$ and
\begin{equation*}
|\nabla\chi_n|^2\preceq|\nabla\chi_n^+|^2.
\end{equation*}
Using the same argument as Lemma \ref{cacc_lemma}, it is easy to deduce
\begin{equation}
\begin{split}\label{cao}
&\quad\int_{\omega_n^+}\left(2\mu |\chi_n^+|^2\epsilon({u_h}-I^{\omega_n^+}{u_h}):\epsilon({u_h}-I^{\omega_n^+}{u_h})+\lambda |\chi_n^+|^2 (\nabla\cdot({u_h}-I^{\omega_n^+}{u_h}))^2 \right)\\
&\preceq |\int_{\omega_n^+} |\chi_n^+|^2{g}\cdot({u_h}-I^{\omega_n^+}{u_h})|+
\int_{\omega_n^+} (\lambda + 2\mu)|\nabla \chi_n^+|^2 ({u_h}-I^{\omega_n^+}{u_h})^2,
\end{split}
\end{equation}
where $g=f+\text{div}(\sigma(I^{\omega_n}{u_h}))$ and $I^{\omega_n}{u_h} =I^{\omega_n^{+}}{u_h}$ in $\omega_n$.
Applying eigenvalue problem (\ref{cg_over_1}), we obtain
\begin{equation}\label{olocal}
\int_{\omega_n^+}(\lambda+2\mu) |\nabla\chi_n^+|^2 ({u_h}-I^{\omega_n^+}{u_h})^2\preceq \frac{1}{\xi_{L_n+1}^{\omega_n}}\int_{\omega_n}
\left(2\mu \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h}) +\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2\right).
\end{equation}
Using the definition of interpolation $I^{\omega_n^+}{u_h}$, we have
\begin{equation}
\begin{split}
&\quad\sum_{n=1}^{N_s}\int_{\omega_n}(\lambda+2\mu) |\nabla\chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2
\preceq\sum_{n=1}^{N_s}\int_{\omega_n^+}(\lambda+2\mu) |\nabla\chi_n^+|^2 ({u_h}-I^{\omega_n^+}{u_h})^2\\
&\preceq\sum_{n=1}^{N_s} \frac{1}{\xi_{L_n+1}^{\omega_n}}\left(\int_{\omega_n}2\mu \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h})+\int_{\omega_n}
\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2\right) \\
&\preceq\sum_{n=1}^{N_s} \frac{1}{\xi_{L_n+1}^{\omega_n}}\left(\int_{\omega_n^+} 2\mu |\chi_n^+|^2\epsilon({u_h}-I^{\omega_n^+}{u_h}):\epsilon({u_h}-I^{\omega_n^+}{u_h})+\int_{\omega_n^+}
\lambda |\chi_n^+|^2(\nabla\cdot({u_h}-I^{\omega_n^+}{u_h}))^2\right) \\
&\preceq\sum_{n=1}^{N_s}\frac{1}{\xi_{L_n+1}^{\omega_n}}\int_{\omega_n^+} (\lambda + 2\mu)|\nabla \chi_n^+|^2 ({u_h}-I^{\omega_n^+}{u_h})^2+\sum_{n=1}^{N_s} \frac{1}{\xi_{L_n+1}^{\omega_n}}|\int_{\omega_n^+}|\chi_n^+|^2{g}\cdot({u_h}-I^{\omega_n^+}{u_h})|\\
&\preceq\frac{1}{\Lambda_*^+}\left(\sum_{n=1}^{N_s}\int_{\omega_n^+} (\lambda + 2\mu)|\nabla \chi_n^+|^2 ({u_h}-I^{\omega_n^+}{u_h})^2+\sum_{n=1}^{N_s} |\int_{\omega_n^+} |\chi_n^+|^2{g}\cdot({u_h}-I^{\omega_n^+}{u_h})|\right)\\
&\preceq\frac{1}{\Lambda_*^+}\sum_{n=1}^{N_s}\left(\frac{1}{\xi_{L_n+1}^{\omega_n}}\int_{\omega_n} \big(2\mu \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h}) +
\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2\big) + |\int_{\omega_n^+} |\chi_n^+|^2g\cdot({u_h}-I^{\omega_n^+}{u_h})|\right),
\end{split}
\end{equation}
where ${\Lambda_*^+}=\min_{\omega_n}\xi_{L_n+1}^{\omega_n}$.
Applying the last inequality $m$ times with (\ref{olocal}), we get
\begin{equation}
\begin{split}
&\quad\sum_{n=1}^{N_s}\int_{\omega_n^+}(\lambda+2\mu) |\nabla\chi_n^+|^2 ({u_h}-I^{\omega_n^+}{u_h})^2\\
&\preceq \big(\frac{1}{\Lambda_*^+}\big)^m\sum_{n=1}^{N_s}\left(\frac{1}{\xi_{L_n+1}^{\omega_n}}\int_{\omega_n} 2\mu \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h}) + \frac{1}{\xi_{L_n+1}^{\omega_n}}\int_{\omega_n}\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2 \right)\\
&\quad+\sum_{l=1}^{m}\big(\frac{1}{\Lambda_*^+}\big)^l\sum_{n=1}^{N_s} |\int_{\omega_n^+}|\chi_n^+|^2 {g}\cdot({u_h}-I^{\omega_n^+}{u_h})|\\
&\preceq \big(\frac{1}{\Lambda_*^+}\big)^{m+1}\sum_{n=1}^{N_s}\left(\int_{\omega_n} 2\mu \epsilon({u_h}-I^{\omega_n}{u_h}):\epsilon({u_h}-I^{\omega_n}{u_h})+\int_{\omega_n}\lambda (\nabla\cdot({u_h}-I^{\omega_n}{u_h}))^2 \right)\\
&\quad+(\Lambda_*^+)^m\left(\frac{1-(\Lambda_*^+)^{-m}}{\Lambda_*^+-1}\right)\sum_{n=1}^{N_s}\int_{\omega_n^+}((\lambda+2\mu)|\nabla\chi_n^+|^2)^{-1}{g}^2.
\end{split}
\end{equation}
Taking into account inequality (\ref{fem_estimate}), we have
\begin{equation}
\begin{split}
&\int_{D}\left(2\mu \epsilon({u_h}-{u_H}):\epsilon({u_h}-{u_H})+\lambda (\nabla\cdot({u_h}-{u_H}))^2\right)\\
\preceq&\sum_{n=1}^{N_s} \int_{\omega_n}(\lambda +2\mu)|\nabla \chi_n|^2 ({u_h}-I^{\omega_n}{u_h})^2+\sum_{n=1}^{N_s}|\int_{\omega_n} \chi_n^2g\cdot({u_h}-I^{\omega_n}{u_h}) |\\
\preceq& \big(\frac{1}{\Lambda_*^+}\big)^{m+1}\int_{D} \left( 2\mu \epsilon({u_h}):\epsilon({u_h})+\lambda (\nabla\cdot{u_h})^2\right)
+ \left((\Lambda_*^+)^m\left(\frac{1-(\Lambda_*^+)^{-m}}{\Lambda_*^+-1}\right)+1\right)R,
\end{split}
\end{equation}
where $R=\sum_{n=1}^{N_s}\int_{\omega_n}((\lambda+2\mu)|\nabla\chi_n^+|^2)^{-1}{g}^2$.
Therefore, similarly to the no-oversampling case, we have
\begin{theorem}
Let $u_h\in V^h_{CG}$ be the fine-scale CG-FEM solution defined in (\ref{cg_fine_sol}) and $u_H$ be the CG-GMsFEM solution defined in (\ref{cg_ms_sol}) with oversampling. If $\Lambda_*^+\ge1$ and $\int_D (\lambda+2\mu)^{-1}g^2\preceq1$, then taking $ m=-\frac{\log H}{\log\Lambda_*^+}$ yields
\begin{equation*}
\quad\int_{D}\left(2\mu \epsilon({u_h}-{u_H}):\epsilon({u_h}-{u_H})+\lambda (\nabla\cdot({u_h}-{u_H}))^2\right) \preceq\frac{H}{\Lambda_*^+}\left(\int_{D} \left( 2\mu \epsilon({u_h}):\epsilon({u_h})+\lambda (\nabla\cdot{u_h})^2\right)+1\right).
\end{equation*}
\end{theorem}
\section{Error estimate for DG coupling}
\label{sec:gmsfem_error_DG}
In this section, we will analyze the DG coupling of the GMsFEM (\ref{eq:ipdg}).
For any $ {u}$, we define the DG-norm by
\begin{equation*}
\| {u} \|_{\text{DG}}^2 = a_H( {u}, {u})
+ \sum_{E\in\mathcal{E}^{H}}\frac{\gamma}{h}\int_{E} \average{\lambda+2\mu} \jump{{u}}^2 \; ds.
\end{equation*}
Let $K$ be a coarse grid block and let $ {n}_{\partial K}$ be the unit outward normal vector on $\partial K$.
We denote by $V^h(\partial K)$
the restriction of the conforming space $V^h$ to $\partial K$.
The normal flux $ {\sigma}( {u}) \, {n}_{\partial K}$
is understood as an element of $V^h(\partial K)$, defined by
\begin{equation}
\int_{\partial K} ( {\sigma}( {u}) \, {n}_{\partial K}) \cdot {v} =
\int_{K} \Big( 2\mu {\epsilon}( {u}): {\epsilon}(\widehat{ {v}})
+ \lambda \nabla\cdot {u} \nabla\cdot \widehat{ {v}} \Big) \; d {x}, \quad {v} \in V^h(\partial K),
\label{eq:flux}
\end{equation}
where $\widehat{ {v}}$ is the harmonic extension of $ {v}$ in $K$.
By the Cauchy-Schwarz inequality,
\begin{equation*}
\int_{\partial K} ( {\sigma}( {u}) \, {n}_{\partial K}) \cdot {v} \leq
a_H^K(u,u)^{\frac{1}{2}} \, a_H^K(\widehat{v},\widehat{v})^{\frac{1}{2}}.
\end{equation*}
By an inverse inequality and the fact that $\widehat{v}$ is the harmonic extension of $v$, we have
\begin{equation*}
a_H^K(\widehat{v},\widehat{v}) \leq \kappa_K C^2_{\text{inv}} h^{-1} \int_{\partial K} |v|^2 \; ds,
\end{equation*}
where $\kappa_K = \max_K \{ \lambda+2\mu \}$ and $C_{\text{inv}} > 0$ is the constant from the inverse inequality.
Thus,
\begin{equation*}
\int_{\partial K} ( {\sigma}( {u}) \, {n}_{\partial K}) \cdot {v} \leq \kappa_K^{\frac{1}{2}} C_{\text{inv}} h^{-\frac{1}{2}} \| v\|_{L^2(\partial K)} \,
a_H^K(u,u)^{\frac{1}{2}}.
\end{equation*}
This shows that
\begin{equation*}
\int_{\partial K} | {\sigma}( {u}) \, {n}_{\partial K} |^2 \; ds
\leq \kappa_K C^2_{\text{inv}} h^{-1} a_H^K(u,u).
\end{equation*}
Our first step in the convergence analysis
is to establish the continuity and the coercivity
of the bilinear form (\ref{eq:bilinear-ipdg})
with respect to the DG-norm.
\begin{lemma}
Assume that the penalty parameter $\gamma$ is chosen so that $\gamma > 2 C_{\text{inv}}^2$.
The bilinear form $a_{\text{DG}}$ defined in (\ref{eq:bilinear-ipdg})
is continuous and coercive, that is,
\begin{eqnarray}
a_{\text{DG}}( {u}, {v})
&\leq& \left(1+\sqrt{2} C_{\text{inv}} \gamma^{-\frac{1}{2}}\right) \| {u} \|_{\text{DG}} \, \| {v} \|_{\text{DG}}, \\
a_{\text{DG}}( {u}, {u})
&\geq& a_0 \| {u} \|_{\text{DG}}^2,
\end{eqnarray}
for all $ {u}, {v}$, where $a_0 = 1 - \sqrt{2} C_{\text{inv}} \gamma^{-\frac{1}{2}} >0$.
\end{lemma}
{\it Proof}. By the definition of $a_{\text{DG}}$, we have
\begin{equation*}
a_{\text{DG}}( {u}, {v}) = a_H( {u}, {v})
- \sum_{E\in \mathcal{E}^H} \int_E \Big( \average{ {\sigma}( {u}) \, {n}_E} \cdot \jump{ {v}}
+ \average{ {\sigma}( {v}) \, {n}_E} \cdot \jump{ {u}} \Big) \; ds
+ \sum_{E\in\mathcal{E}^H} \frac{\gamma}{h} \int_E \average{\lambda+2\mu} \jump{ {u}} \cdot \jump{ {v}} \; ds.
\end{equation*}
Notice that
\begin{equation*}
a_H( {u}, {v}) + \sum_{E\in\mathcal{E}^H} \frac{\gamma}{h} \int_E \average{\lambda+2\mu} \jump{ {u}} \cdot \jump{ {v}} \; ds
\leq \| u\|_{\text{DG}} \, \| v\|_{\text{DG}}.
\end{equation*}
For an interior coarse edge $E \in\mathcal{E}^H$,
we let $K^{+}, K^{-}\in \mathcal{T}^H$ be the two coarse grid blocks sharing the edge $E$.
By the Cauchy-Schwarz inequality,
we have
\begin{equation}
\int_E \average{ {\sigma}( {u}) \, {n}_E} \cdot \jump{ {v}} \; ds
\leq \Big( h\int_E \average{ {\sigma}( {u}) \, {n}_E}^2 \average{\lambda+2\mu}^{-1} \; ds \Big)^{\frac{1}{2}}
\Big( \frac{1}{h} \int_E \average{\lambda+2\mu} \jump{ {v}}^2 \; ds \Big)^{\frac{1}{2}}.
\label{eq:cont1}
\end{equation}
Notice that
\begin{equation*}
\begin{split}
&\: h\int_E \average{ {\sigma}( {u}) \, {n}_E}^2 \average{\lambda+2\mu}^{-1} \; ds \\
\leq &\: h\Big( \int_E ( {\sigma}( {u}^{+}) \, {n}_E)^2 (\lambda^{+} + 2\mu^{+})^{-1} \; ds
+ \int_E ( {\sigma}( {u}^{-}) \, {n}_E)^2 (\lambda^{-} + 2\mu^{-})^{-1} \; ds
\Big),
\end{split}
\end{equation*}
where $ {u}^{\pm} = {u}|_{K^{\pm}}$, $\lambda^{\pm} = \lambda|_{K^{\pm}}$
and $\mu^{\pm} = \mu|_{K^{\pm}}$.
So, we have
\begin{equation*}
h\int_E \average{ {\sigma}( {u}) \, {n}_E}^2 \average{\lambda+2\mu}^{-1} \; ds
\leq C_{\text{inv}}^2 \Big( a_H^{K^{+}}(u^{+},u^{+}) + a_H^{K^{-}}(u^{-},u^{-}) \Big).
\end{equation*}
Thus (\ref{eq:cont1}) becomes
\begin{equation}
\int_E \average{ {\sigma}( {u}) \, {n}_E} \cdot \jump{ {v}} \; ds \leq
C_{\text{inv}} \Big( a_H^{K^{+}}(u^{+},u^{+}) + a_H^{K^{-}}(u^{-},u^{-}) \Big)^{\frac{1}{2}}
\Big( \frac{1}{h} \int_E \average{\lambda+2\mu} \jump{ {v}}^2 \; ds \Big)^{\frac{1}{2}}.
\label{eq:cont2}
\end{equation}
When $E$ is a boundary edge, we have
\begin{equation}
\int_E \average{ {\sigma}( {u}) \, {n}_E} \cdot \jump{ {v}} \; ds \leq
C_{\text{inv}} a_H^{K}(u,u)^{\frac{1}{2}}
\Big( \frac{1}{h} \int_E \average{\lambda+2\mu} \jump{ {v}}^2 \; ds \Big)^{\frac{1}{2}},
\label{eq:cont3}
\end{equation}
where $K$ denotes the coarse grid block having the edge $E$.
Summing (\ref{eq:cont2}) and (\ref{eq:cont3}) for all edges $E\in\mathcal{E}^H$,
we have
\begin{equation*}
\sum_{E\in \mathcal{E}^H} \int_E \average{ {\sigma}( {u}) \, {n}_E} \cdot \jump{ {v}} \; ds
\leq \sqrt{2} C_{\text{inv}} a_H(u,u)^{\frac{1}{2}} \Big( \sum_{E\in \mathcal{E}^H} \frac{1}{h} \int_E \average{\lambda+2\mu} \jump{ {v}}^2 \; ds \Big)^{\frac{1}{2}}.
\end{equation*}
Similarly, we have
\begin{equation*}
\sum_{E\in \mathcal{E}^H} \int_E \average{ {\sigma}( {v}) \, {n}_E} \cdot \jump{ {u}} \; ds
\leq \sqrt{2} C_{\text{inv}} a_H(v,v)^{\frac{1}{2}} \Big( \sum_{E\in \mathcal{E}^H} \frac{1}{h} \int_E \average{\lambda+2\mu} \jump{ {u}}^2 \; ds \Big)^{\frac{1}{2}}.
\end{equation*}
Hence
\begin{equation}
\sum_{E\in \mathcal{E}^H} \int_E \Big( \average{ {\sigma}( {u}) \, {n}_E} \cdot \jump{ {v}}
+ \average{ {\sigma}( {v}) \, {n}_E} \cdot \jump{ {u}} \Big) \; ds
\leq \sqrt{2} C_{\text{inv}} \gamma^{-\frac{1}{2}} \| u\|_{\text{DG}} \, \|v\|_{\text{DG}}.
\label{eq:cont4}
\end{equation}
This proves the continuity.
For coercivity, we have
\begin{equation*}
a_{\text{DG}}( {u}, {u}) = \| u\|_{\text{DG}}^2
- 2\sum_{E\in \mathcal{E}^H} \int_E \average{ {\sigma}( {u}) \, {n}_E} \cdot \jump{ {u}} \; ds.
\end{equation*}
By (\ref{eq:cont4}), we have
\begin{equation*}
a_{\text{DG}}( {u}, {u}) \geq (1 - \sqrt{2} C_{\text{inv}} \gamma^{-\frac{1}{2}} ) \| u\|_{\text{DG}}^2,
\end{equation*}
which gives the desired result.
\begin{flushright}
$\square$
\end{flushright}
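The threshold $\gamma > 2C_{\text{inv}}^2$ in the lemma is precisely the condition that makes $a_0 = 1-\sqrt{2}\,C_{\text{inv}}\gamma^{-1/2}$ positive; at $\gamma = 2C_{\text{inv}}^2$ the coercivity constant degenerates to zero. A numerical check with an illustrative value of $C_{\text{inv}}$ (the value itself is an assumption for demonstration):

```python
import math

C_inv = 3.0                    # illustrative inverse-inequality constant
threshold = 2.0 * C_inv ** 2   # smallest admissible penalty parameter

for gamma in (threshold * 1.01, threshold * 4.0):
    a0 = 1.0 - math.sqrt(2.0) * C_inv / math.sqrt(gamma)
    assert a0 > 0.0            # coercivity constant is positive above the threshold

# exactly at the threshold the constant vanishes
a0_at_threshold = 1.0 - math.sqrt(2.0) * C_inv / math.sqrt(threshold)
```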
We will now prove the convergence of the method (\ref{eq:ipdg}).
Let $u_h \in V^h_{\text{DG}}$ be the fine grid solution which satisfies
\begin{equation}
a_{\text{DG}}(u_h, v) = (f,v), \quad\forall v\in V^h_{\text{DG}}.
\label{eq:ipdgfine}
\end{equation}
It is well-known that $u_h$ converges to the exact solution $u$ in the DG-norm as the fine mesh size $h\rightarrow 0$.
Next, we define a projection $u_S\in V^{\text{snap}}$ of $u_h$ in the snapshot space
by the following construction.
For each coarse grid block $K$, the restriction of $u_S$ on $K$
is defined as the harmonic extension of $u_h$, that is,
\begin{equation}
\begin{split}
- \nabla \cdot {\sigma}( {u}_{S}) &= {0}, \quad\text{ in } \; K, \\
{u}_{S} &= u_h, \quad\text{ on } \; \partial K.
\end{split}
\label{eq:proj}
\end{equation}
Now, we prove the following estimate for the projection $u_S$.
\begin{lemma}
Let $u_h\in V^h_{\text{DG}}$ be the fine grid solution defined in (\ref{eq:ipdgfine})
and $u_S\in V^{\text{snap}}$ be the projection of $u_h$ defined in (\ref{eq:proj}).
Then we have
\begin{equation*}
\| u_h - u_S \|_{\text{DG}} \leq CH \Big( \max_{K\in\mathcal{T}^H} \eta_K \Big) \| f \|_{L^2(\Omega)},
\end{equation*}
where $\eta_K = \min_K \{\lambda+2\mu\}$.
\end{lemma}
{\it Proof}. Let $K$ be a given coarse grid block.
Since $u_S=u_h$ on $\partial K$,
the jump terms in the DG-norm vanish.
Thus, the DG-norm can be written as
\begin{equation*}
\| u_h - u_S \|_{\text{DG}}^2 = \sum_{K\in\mathcal{T}^H} a_H^K(u_h-u_S,u_h-u_S).
\end{equation*}
Since $u_S$ satisfies (\ref{eq:proj}) and $u_h-u_S=0$ on $\partial K$, we have
\begin{equation*}
a_H^K(u_S, u_h-u_S) = 0.
\end{equation*}
So,
\begin{equation*}
\| u_h - u_S \|_{\text{DG}}^2 = \sum_{K\in\mathcal{T}^H} a_H^K(u_h,u_h-u_S)
= a_{\text{DG}}(u_h, u_h-u_S)
= (f, u_h-u_S).
\end{equation*}
By the Poincar\'e inequality, we have
\begin{equation*}
\| u_h - u_S \|^2_{L^2(K)} \leq C H^2 \eta^2_K a_H^K(u_h-u_S,u_h-u_S),
\end{equation*}
where $\eta_K = \min_K \{ \lambda+2\mu \}$.
Hence, we have
\begin{equation*}
\| u_h - u_S \|_{\text{DG}} \leq CH \Big( \max_{K\in\mathcal{T}^H} \eta_K \Big) \| f \|_{L^2(\Omega)}.
\end{equation*}
\begin{flushright}
$\square$
\end{flushright}
In the following theorem, we will state and prove the convergence
of the GMsFEM (\ref{eq:ipdg}).
\begin{theorem}
Let $u_h\in V^h_{\text{DG}}$ be the fine grid solution defined in (\ref{eq:ipdgfine})
and $u_H$ be the GMsFEM solution defined in (\ref{eq:ipdg}). Then we have
\begin{equation*}
\| u_h - u_H \|_{\text{DG}}^2
\leq C\Big( \sum_{i=1}^{N_E} \frac{H}{\left\langle\lambda+2\mu\right\rangle\xi_{L_i+1}} ( 1 + \frac{\gamma H}{h \xi_{L_i+1}} ) \int_{\partial K_i} ( \sigma(u_S)\cdot n_{\partial K})^2 \; ds +
H^2 \Big( \max_{K\in\mathcal{T}^H} \eta_K^2 \Big) \| f \|^2_{L^2(\Omega)} \Big),
\end{equation*}
where $u_S$ is defined in (\ref{eq:proj}).
\end{theorem}
{\it Proof}.
First, we will define a projection $\widehat{u}_S \in V^{\text{off}}$
of $u_S$ in the offline space.
Notice that, on each $K_i$, $u_S$ can be represented by
\begin{equation*}
u_S = \sum_{l=1}^{M_i} c_l \psi_l^{i,\text{off}},
\end{equation*}
where $M_i = M^{i,\text{snap}}$ and
we assume that the functions $\psi_l^{i,\text{off}}$
are normalized so that
\begin{equation*}
\int_{\partial K_i} \left\langle\lambda+2\mu\right\rangle (\psi_l^{i,\text{off}})^2 \; ds = 1.
\end{equation*}
Then the function $\widehat{u}_S$ is defined by
\begin{equation*}
\widehat{u}_S = \sum_{l=1}^{L_i} c_l \psi_l^{i,\text{off}}.
\end{equation*}
We will find an estimate of $\| u_S - \widehat{u}_S\|_{\text{DG}}$.
Let $K$ be a given coarse grid block.
Recall that the spectral problem is
\begin{equation*}
\int_{K} 2\mu {\epsilon}({u}): {\epsilon}({v})dx
+ \int_K \lambda \nabla\cdot {u} \nabla\cdot {v}
= \frac{\xi}{H} \int_{\partial K} \left\langle\lambda+2\mu\right\rangle {u} {v} \; ds.
\end{equation*}
By the definition of the flux (\ref{eq:flux}), the spectral problem can be represented as
\begin{equation*}
\int_{\partial K} (\sigma(u)\cdot n_{\partial K}) v \; ds= \frac{\xi}{H} \int_{\partial K}\left\langle\lambda+2\mu\right\rangle {u} {v} \; ds.
\end{equation*}
By the definition of the DG-norm, the error $\| u_S - \widehat{u}_S\|_{\text{DG}}$ can be bounded as
\begin{equation*}
\| \widehat{{u}}_S - {u}_S \|_{\text{DG}}^2
\leq \sum_{K} \Big( \int_K 2\mu { \epsilon}(\widehat{{u}}_S - {u}_S )^2
+ \int_K \lambda (\nabla \cdot (\widehat{{u}}_S - {u}_S) )^2
+ \frac{\gamma}{h} \int_{\partial K} \average{\lambda+2\mu} (\widehat{{u}}_S - {u}_S)^2 \Big).
\end{equation*}
Note that, by the spectral problem and the normalization of $\psi_l^{i,\text{off}}$,
\begin{equation*}
\int_{K_i} 2\mu { \epsilon}(\widehat{{u}}_S - {u}_S )^2
+ \int_{K_i} \lambda (\nabla \cdot (\widehat{{u}}_S - {u}_S) )^2
= \sum_{l={L_i}+1}^{M_i} \frac{\xi_l}{H} c_l^2
\leq \frac{H}{\xi_{L_i+1}} \sum_{l=L_i+1}^{M_i} (\frac{\xi_l}{H})^2 c_l^2.
\end{equation*}
Also,
\begin{equation*}
\frac{1}{h} \int_{\partial K_i} \average{\lambda+2\mu} (\widehat{{u}}_S - {u}_S)^2
= \frac{1}{h} \sum_{l=L_i+1}^{M_i} c_l^2
\leq \frac{H^2}{h \xi_{L_i+1}^2} \sum_{l=L_i+1}^{M_i} (\frac{\xi_l}{H})^2 c_l^2.
\end{equation*}
Moreover,
\begin{equation*}
\sum_{l=L_i+1}^{M_i} (\frac{\xi_l}{H})^2 c_l^2 \leq \sum_{l=1}^{M_i} (\frac{\xi_l}{H})^2 c_l^2 \leq \frac{1}{\left\langle\lambda+2\mu\right\rangle}\int_{\partial K_i} ( \sigma(u_S)\cdot n_{\partial K})^2 \; ds.
\end{equation*}
Consequently, we obtain the following bound
\begin{equation*}
\| u_S - \widehat{{u}}_S \|^2_{\text{DG}}
\leq \sum_{i=1}^{N_E} \frac{H}{\left\langle\lambda+2\mu\right\rangle\xi_{L_i+1}} ( 1 + \frac{\gamma H}{h \xi_{L_i+1}} ) \int_{\partial K_i} ( \sigma(u_S)\cdot n_{\partial K})^2 \; ds.
\end{equation*}
Next, we will prove the required error bound.
By coercivity,
\begin{equation*}
\begin{split}
a_0 \| \widehat{u}_S - u_H \|_{\text{DG}}^2
&= a_{\text{DG}}(\widehat{u}_S-u_H, \widehat{u}_S-u_H) \\
&= a_{\text{DG}}(\widehat{u}_S-u_H, \widehat{u}_S-u_S) + a_{\text{DG}}(\widehat{u}_S-u_H, u_S-u_h) + a_{\text{DG}}(\widehat{u}_S-u_H, u_h-u_H).
\end{split}
\end{equation*}
Note that $a_{\text{DG}}(\widehat{u}_S-u_H, u_h-u_H) = 0$ since $\widehat{u}_S-u_H \in V^{\text{off}}$.
Using the above results,
\begin{equation*}
\| \widehat{u}_S - u_H \|_{\text{DG}}^2
\leq C\Big( \sum_{i=1}^{N_E} \frac{H}{\left\langle\lambda+2\mu\right\rangle\xi_{L_i+1}} ( 1 + \frac{\gamma H}{h \xi_{L_i+1}} ) \int_{\partial K_i} ( \sigma(u_S)\cdot n_{\partial K})^2 \; ds +
H^2 \Big( \max_{K\in\mathcal{T}^H} \eta_K^2 \Big) \| f \|^2_{L^2(\Omega)} \Big).
\end{equation*}
Finally, the desired bound is obtained by the triangle inequality
\begin{equation*}
\| u_h - u_H\|_{\text{DG}} \leq \| u_h - u_S\|_{\text{DG}} + \| u_S - \widehat{u}_S\|_{\text{DG}} + \| \widehat{u}_S - u_H\|_{\text{DG}}.
\end{equation*}
\begin{flushright}
$\square$
\end{flushright}
\section{Conclusions}
In this paper, we designed a multiscale model reduction method based on the GMsFEM
for elasticity equations in heterogeneous media.
Guided by the analysis, we constructed a snapshot space and an offline space,
and presented two approaches, a continuous Galerkin method and a discontinuous
Galerkin method, for coupling the multiscale basis functions of the offline space.
Both approaches were analyzed.
We also presented oversampling studies in which larger domains are used for
computing the snapshot space, and reported numerical
results that illustrate the performance of the method.
\bibliographystyle{plain}
\section{Introduction}
Industrial robots are mainly designed to perform repetitive tasks in controlled environments. Recent years have seen increasing interest in deploying service robots to human-centric environments. In such unstructured settings, object grasping is a challenging task due to the high demand for real-time and accurate responses for a vast number of objects with a wide variety of shapes and sizes under various clutter and occlusion conditions.
Although several grasping approaches have been developed successfully for 4DoF $(x, y, z, \theta)$ grasping, many challenges still remain on the way to robust grasping. One major obstacle to achieving robust object grasping is the cost of collecting sufficient training data with real robots through self-supervised trial-and-error experiments. To address this challenge, recent works on grasp synthesis have mainly focused on developing end-to-end deep convolutional learning-based approaches that plan grasp configurations directly from sensor data. Although it has been proven that these approaches can outperform hand-crafted grasping methods, grasp planning is mainly constrained to top-down grasps from a single depth sensor~\cite{mahler2017dex,morrison2020learning}. These approaches assume a favorable global camera placement and force the robot to grasp objects from a single direction, perpendicular to the image plane. Such constraints limit the flexibility of the robot, which will then not be able to grasp a range of household objects, e.g., bottles and boxes, robustly. Furthermore, certain objects have parts that are convenient to grasp, e.g., the handle of a mug, and in some situations it is easier to approach a household object from a different direction.
In this work, we propose a multi-view deep learning approach to handle real-time object grasping. Figure~\ref{system_overview} depicts an overview of our work. In particular, our approach takes a partial point cloud of a scene as input. We then generate multi-view depth images of the objects and feed them into a view selection function. The best view is then passed to a deep network that estimates a pixel-wise grasp synthesis (grasp quality, orientation, and gripper width). It is worth mentioning that we trained the network end-to-end on a very small isolated-object grasping dataset. Our approach is based on the hypothesis that considering multiple views of the objects forces the network to capture collisions among the gripper, the objects, and the environment, which is necessary for object manipulation in human-centric environments. We validate this hypothesis by performing several experiments in both simulated and real-robot settings. More specifically, we evaluate the performance of the proposed method in isolated, pile, and packed object scenarios. Extensive quantitative and qualitative evaluations show that our approach significantly outperforms the state-of-the-art methods. Our approach, on average, estimates stable grasp configurations in less than $22$\,ms without the need for explicit collision checking. Therefore, the proposed approach is suitable for real-time robotic applications that need closed-loop grasp planning.
\section{Related work}
Traditional object grasping approaches explicitly model how to grasp different objects, mainly by considering prior knowledge about object shape and pose~\cite{bohg2013data}. It has been proven that such prior information is hard to obtain for never-seen-before objects in human-centric environments~\cite{kalashnikov2018qt}. More recent approaches tackle this limitation by formulating object grasping as an \textit{object-agnostic} problem, in which grasp synthesis is detected based on learned visual features without considering prior \textit{object-specific} information. Therefore, these approaches are able to generalize the learned grasping policies to new objects without the need for the full model of the object in advance. In this vein, much attention has been given to object grasping approaches based on Convolutional Neural Networks (CNNs)~\cite{lenz2015deep,mahler2017dex,morrison2018closing,breyer2020volumetric}.
Deep-learning approaches for object grasping fall into three main categories depending on the input to the network.
\textbf{Volume-based approaches} represent objects as a 3D voxel grid and feed them to a CNN with 3D filter banks~\cite{lundell2020beyond,breyer2020volumetric,li2020learning,varley2017shape}. In particular, these approaches first estimate the complete shape of the target object using a variational autoencoder network and then sample depth images of the object from different directions. Finally, all the obtained views are used to generate grasp syntheses for the given object. These approaches are computationally very expensive, as they incorporate a shape-completion network. Furthermore, training such a network requires an enormous amount of data.
\noindent\textbf{Pointset-based methods} directly use point cloud data as input~\cite{liang2019pointnetgpd,mousavian20196,gualtieri2016high}. The biggest bottlenecks of these approaches are the execution time and the sensitivity to calibration errors. Unlike these methods, our approach generates three virtual depth images of the object and then predicts grasp syntheses for the obtained views.
\noindent\textbf{View-based approaches} receive a depth image as the input of the network. For instance, DexNet~\cite{mahler2017dex} and QT-Opt~\cite{kalashnikov2018qt} learn only top-down grasping based on images from a fixed static camera, and together with GG-CNN~\cite{morrison2018closing} they generate a grasp map per scene. Unlike these approaches, our approach generates a grasp synthesis map per object. Furthermore, these approaches only work for top-down camera settings and have mainly focused on solving 4DoF grasping, where the gripper is forced to approach objects from above. The major drawback of these approaches is the inevitably restricted set of ways to interact with objects. Moreover, the robot is not able to generalize immediately to different task configurations without extensive retraining. We tackle these problems by proposing a multi-view approach for object grasping in highly crowded scenarios.
More recent approaches receive a 3D representation of an object, e.g., in the form of a point cloud, and generate multiple views of the object. Our work belongs to this category. These approaches appear to be among the most effective in 3D object recognition and grasping, as shown by \cite{kanezaki2018rotationnet,qi2016volumetric,shi2015deeppano,mahler2017dex,morrison2018closing,zeng2017multi,yan2018learning,de2016robotfusion}. In these approaches, 2D images are extracted from the point cloud of the object by projecting the object's points onto 2D planes. The obtained views are then fed to a CNN to generate grasp syntheses, and the robot executes the highest-ranked grasp~\cite{mahler2017dex}. Most of these approaches take a very long time to sample and rank grasp candidates individually~\cite{lenz2015deep,mahler2017dex}. Unlike our approach, they are used in open-loop control scenarios and are not suitable for real-time applications.
Recent research on multi-view grasping~\cite{morrison2019multi,ten2017grasp} has been done using a single real camera moving over a predefined trajectory, where each point of the trajectory is considered a view. In \cite{morrison2019multi}, the most informative view is chosen by means of entropy measures. In our work, we instead generate multiple views for each object and select the most informative one, based on a viewpoint entropy measure, for grasping.
Some approaches address closed-loop object grasping. These methods frequently receive visual input and update the grasp configuration during execution, mainly based on visual servoing. Most of these methods impose several constraints (e.g., top-down grasps only) to reduce the amount of required training data. For example, Morrison et al.~\cite{morrison2018closing} proposed the Generative Grasping CNN (GG-CNN), a solution in which grasp poses are generated for every pixel using a small CNN. Similar to our approach, GG-CNN is designed to use visual feedback in real-time applications. Unlike GG-CNN, which uses eye-in-hand visual feedback, our approach works in an eye-to-hand setting and uses a wider view of the environment rather than a narrow top-down view. For service robots, a wider view of the environment is better suited for household tasks, since the robot has to consider the entire scene for motion planning.
\section{Problem Formulation}
In this work, the robot uses a single Kinect camera to perceive the world. We formulate grasp synthesis as a learning problem of planning parallel-jaw grasps for objects in clutter. In particular, we intend to learn a function that receives a collection of rendered depth images of a 3D object as input, and returns (\textit{i}) the best approaching direction, and (\textit{ii}) a grasp map representing per-pixel grasp configuration for a selected view. The grasp map is then used to find the best grasp configuration for removing a target object from the workspace. In this work, we assume that the object has been segmented from a scene, and refer the reader to our previous works for this purpose~\cite{kasaei2018towards,kasaei2019local}.
\subsection {Generating multiple views of objects}
A three-dimensional (3D) object is usually represented as a point cloud, $p_i : i \in \{1,\dots,n\}$, where each point is described by its 3D coordinates $[x, y, z]$. To capture depth images of a 3D object, we need to set up a set of virtual cameras around the object, where the $\textbf{Z}$ axes of the cameras point towards the centroid of the object. We first calculate the geometric center of the object as the average of all its points. Then, a local reference frame is created by applying principal component analysis to the normalized covariance matrix, $\Sigma$, of the object, i.e., $\Sigma\textbf{V}=\textbf{V}\textbf{E}$, where $\textbf{E} = \text{diag}(e_1, e_2, e_3)$ contains the eigenvalues sorted in descending order, and $\textbf{V} = [\vec{v}_1, \vec{v}_2, \vec{v}_3]$ contains the corresponding eigenvectors. Therefore, $\vec{v}_1$ points in the direction of largest variance of the points of the object. In this work, $\vec{v}_1$ and the negative of the gravity vector are considered as the $\textbf{X}$ and $\textbf{Z}$ axes, respectively. We define the $\textbf{Y}$ axis as the cross product $\vec{v}_1 \times \textbf{Z}$. The object is then transformed into this reference frame.
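This construction can be sketched with NumPy as follows (an illustrative version, not the paper's implementation; we assume gravity along the world $-z$ axis and project $\vec{v}_1$ onto the horizontal plane so that the resulting frame is orthonormal):

```python
import numpy as np

def local_reference_frame(points):
    """Object-centered frame: X ~ first principal axis, Z ~ -gravity.

    points: (n, 3) array in camera coordinates. Returns the centroid and a
    3x3 matrix whose rows are the X, Y, Z axes of the local frame.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)   # normalized covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
    v1 = eigvecs[:, -1]                         # largest-variance direction
    z_axis = np.array([0.0, 0.0, 1.0])          # assumed -gravity direction
    x_axis = v1 - np.dot(v1, z_axis) * z_axis   # assumption: project v1 so
    x_axis /= np.linalg.norm(x_axis)            # that X is perpendicular to Z
    y_axis = np.cross(x_axis, z_axis)           # Y = X x Z, as in the text
    return centroid, np.stack([x_axis, y_axis, z_axis])

def to_object_frame(points, centroid, R):
    """Express the points in the local reference frame."""
    return (points - centroid) @ R.T
```

The function names here are ours; the paper does not specify an implementation.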
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth, trim = 0cm 0cm 0cm 1cm, clip=true]{img/multi_views.PNG}
\vspace{-7mm}
\caption{Two examples of generating bounding box, local reference frame, and three projected views for: (\textit{left}) a \texttt{glass-cleaner}; (\textit{right}) a \texttt{juice-box}.
}
\label{projections}
\vspace{-4mm}
\end{figure}
From each camera pose, we map the point cloud of the object into a virtual depth image based on z-buffering. In particular, we first project the object onto a square plane, $M$, centered on the camera's center. The projection area is then divided into an $l \times l$ square grid, where each bin is considered a pixel. Finally, the minimum $z$ value of all points falling into a bin is taken as the pixel value. The size of the projection area, $l \times l$, is an important factor for object grasping tasks. In the case of object-agnostic grasping, since the grasp configurations depend on the pose and size of the target object, a view of the object should not be scale-invariant; we therefore consider a fixed-size projection plane. In our setup, the size of each bin is defined as $f_g \times f_g$, where $f_g$ represents the size of the head of the robot's finger, and the length of each side of the projection plane, $M$, is defined as $l \times f_g$.
The number of object views is an important parameter for object grasping. In this work, we consider orthographic views including XoY, XoZ, and YoZ projections. Figure \ref{projections} shows two examples of this procedure for a \texttt{\small glass-cleaner} and a \texttt{\small juice-box}\footnote{Video of this procedure is available: \href{https://youtu.be/0UWXIlg2WC8}{\cblue{\texttt{\scriptsize https://youtu.be/0UWXIlg2WC8}}}}.
The intuition behind this selection is that most household objects are stably graspable from the \textit{top}, \textit{side}, or \textit{front} view~\cite{robocup}. The obtained projections are then normalized (i.e., mean-subtracted and divided by the standard deviation) and fed to the network.
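The z-buffer projection described above can be sketched as follows (an illustrative NumPy version; empty bins are marked with $0$, and the view axis is assumed to be one of the object-frame axes):

```python
import numpy as np

def depth_view(points, axis, l=32, f_g=0.01):
    """Render one orthographic depth view of an object point cloud.

    points : (n, 3) array already expressed in the object reference frame.
    axis   : 0, 1, or 2 -- the axis the virtual camera looks along
             (0 -> YoZ view, 1 -> XoZ view, 2 -> XoY view).
    l      : bins per side; f_g : bin size (finger-head size), so the
             projection plane has fixed physical size l * f_g.
    Returns an (l, l) image whose pixels hold the minimum depth of the
    points falling into each bin (z-buffering); empty bins are 0.
    """
    plane = [i for i in range(3) if i != axis]
    depth = points[:, axis]
    uv = points[:, plane]
    idx = np.floor(uv / f_g + l / 2).astype(int)   # bin index, plane centered
    img = np.full((l, l), np.inf)
    keep = ((idx >= 0) & (idx < l)).all(axis=1)    # drop out-of-plane points
    for (u, v), d in zip(idx[keep], depth[keep]):
        img[u, v] = min(img[u, v], d)              # keep the closest point
    img[np.isinf(img)] = 0.0
    return img
```

This is our reading of the procedure; the exact binning conventions of the original implementation may differ.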
We develop a function to determine which of these views is the most informative for approaching and grasping the target object. To this end, we use the viewpoint entropy, which nicely takes into account both the number of occupied pixels and their values. We calculate the entropy of a view $M$ as $H (M) = - \sum_{i=1}^{l^2} p_i \log_2(p_i)$, where $p_i$ represents the value of the $i$th pixel, and then sort the views by their entropy values. The view with maximum entropy is considered the most informative one. We also consider the kinematic feasibility and the distance that the robot would have to travel in configuration space. In the case of large objects or a pile of objects, there is a clear advantage (e.g., being collision-free) in grasping from above, while for an isolated object (e.g., cans, bottles, boxes, toys),
\begin{wrapfigure}{r}{0.45\columnwidth}
\vspace{-2mm}
\centering
\includegraphics[width=0.95\linewidth]{img/colgates.png}
\vspace{-3mm}
\caption{Example of grasping a \textit{Colgate} box in two different situations
}
\vspace{-1mm}
\label{different_grasps}
\end{wrapfigure}
it completely depends on the pose of the object relative to the camera (see Fig.~\ref{different_grasps}).
The gripper finally approaches the object in a direction orthogonal to the most informative view. It should be noted that the view selection function can easily be adapted to other task criteria.
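The entropy-based view selection can be sketched as follows; as an assumption of this illustration, the non-zero pixel values of each view are normalized into a distribution before the entropy is computed:

```python
import numpy as np

def viewpoint_entropy(view):
    """Viewpoint entropy of one depth view, H = -sum p_i log2 p_i.

    Assumption of this sketch: the non-zero pixel values are normalized
    into a distribution, so both occupancy and depth affect the score.
    """
    p = view[view > 0].astype(float)
    if p.size == 0:                       # empty view carries no information
        return 0.0
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())

def most_informative_view(views):
    """Index of the view with maximum entropy."""
    return int(np.argmax([viewpoint_entropy(v) for v in views]))
```

In practice, the selected view would additionally be filtered by the kinematic-feasibility criteria discussed above.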
\vspace{-2mm}
\subsection {Network architecture}
We aim to learn a function that maps an input object view to multiple outputs representing pixel-wise antipodal grasp configurations, $f_\theta: \mathcal{X} \rightarrow \mathbf{\mathcal{Y}}$. Towards this goal, we designed a convolutional autoencoder that receives a depth image of height $h$ and width $w$ pixels as input, $x_{(i,j)} \in \mathbb{R}^{h \times w}$, and returns a pixel-wise grasp configuration map, $\textbf{G}$, i.e., $y_{(i,j)} = [\textbf{G}_{(i,j)}]$. Since we want to use the network in a closed-loop control scenario, we constrain the number of trainable parameters to be less than $120$K.
The network is parameterized by its weights $\theta$. Our model is a single-input multiple-output network and is constructed using three types of layers: \texttt{convolution}, \texttt{deconvolution}, and \texttt{batch normalization}.
The encoder part is composed of six convolutional layers (C$1$ to C$6$). We use the Rectified Linear Unit (\texttt{ReLU}) as the activation function in all layers of the encoder to force negative values to zero and to eliminate the vanishing gradient problem observed with other types of activation functions. We add a batch normalization layer after each convolutional layer to stabilize the learning process and reduce the number of training epochs by keeping the output mean and standard deviation close to $0$ and $1$, respectively. The decoder part is composed of six transposed convolutional layers (T$1$ to T$6$), followed by three separate output layers for predicting grasp quality, width, and rotation. As in the encoder, we use the ReLU activation function in all layers and add batch normalization after each layer. We use `same' padding in all convolution and transposed convolution layers so that the input and output have the same size (see Section~\ref{exprimental_result}).
In this work, an antipodal grasp is represented as a tuple, $g_i = \langle (u, v), \phi_i, w_i, q_i\rangle$, where $(u, v)$ denotes the center of the grasp in the image frame, $\phi_i$ represents the rotation of the gripper around the Z axis in the range $[\frac{-\pi}{2}, \frac{\pi}{2}]$, $w_i$ denotes the width of the gripper with $w_i \in [0, w_{max}]$, and $q_i \in [0,1]$ represents the success probability of the grasp. Given an input view, the network generates multiple outputs, $(\boldsymbol{\phi}, \textbf{W}, \textbf{Q}) \in \mathbb{R}^{h \times w}$, whose pixel values indicate the measures of $\phi_i$, $w_i$, and $q_i$, respectively. Therefore, given $f_\theta (I_i) = \mathbf{G}_i$, the best grasp configuration, $\operatorname{g^*}$, is the one with maximum quality, and its coordinates indicate the center of the grasp, i.e., $(u,v) \leftarrow \operatorname{g^*} = \operatorname*{argmax}_\mathbf{Q} ~ \mathbf{G}_i$. Given a grasp dataset, $\mathcal{D}$, containing $n_d$ images, $D = \{(x_i, y_i) | 1 \le i \le n_d\}$, our model can be trained end-to-end to learn $f_\theta (\cdot)$.
After obtaining the grasp map of an input view, the Cartesian position of the selected grasp point, $(u,v)$, can be transformed to object's reference frame since the transformation of the orthographic view relative to the object is known. The depth value of the grasp point is estimated based on the minimum depth value of the surrounding neighbors of $(u,v)$ that are within a radius of $\Delta$, where $\Delta = 5$mm.
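Reading off the final grasp from the network outputs can be sketched as follows (a minimal NumPy illustration; the neighbourhood radius is given in pixels here, standing in for the $5$\,mm radius $\Delta$):

```python
import numpy as np

def select_grasp(Q, Phi, W, depth_view, delta_px=2):
    """Extract the best grasp from the per-pixel output maps.

    Q, Phi, W  : (h, w) quality, rotation and width maps.
    depth_view : (h, w) input depth image, with 0 marking empty pixels.
    delta_px   : neighbourhood radius in pixels (stand-in for the 5 mm
                 radius used in the text).
    Returns ((u, v), angle, width, quality, depth).
    """
    u, v = np.unravel_index(np.argmax(Q), Q.shape)   # highest-quality pixel
    patch = depth_view[max(0, u - delta_px):u + delta_px + 1,
                       max(0, v - delta_px):v + delta_px + 1]
    occupied = patch[patch > 0]
    # Grasp depth: minimum depth among the occupied neighbouring pixels.
    depth = float(occupied.min()) if occupied.size else 0.0
    return (u, v), float(Phi[u, v]), float(W[u, v]), float(Q[u, v]), depth
```

The selected pixel would then be transformed into the object's reference frame as described above.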
\section{Experimental Results}
\label{exprimental_result}
We performed several rounds of simulation and real-robot experiments to evaluate the performance of the proposed approach. We pursued three goals in these experiments: (\textit{i}) evaluating the performance of object grasping in three scenarios; (\textit{ii}) investigating the usefulness of formulating object grasping as an object-agnostic problem for general-purpose tasks; and (\textit{iii}) determining whether the same network can be used in both simulation and real-robot systems without additional fine-tuning. Towards this goal, we employed the same code and network (trained on the Cornell dataset) in both real and simulation experiments.
\subsection{Ablation study}
We trained several networks with the proposed architecture but different parameters, including filter size, dropout rate, number of units in fully connected layers, loss function, optimizer, and learning rate, for $100$ epochs each. We used the extended version of the Cornell dataset~\cite{cornellgrasp2011}, comprising $1035$ RGB-D images of $240$ household objects.
In this work, we considered the $5110$ positive grasp configurations and discarded all the negative labels. Furthermore, since the Cornell dataset is small, we augmented the data by zooming, random cropping, and rotation to generate $51100$ images. We used $80\%$ of the augmented data for training and the remaining $20\%$ as the evaluation set.
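The augmentation scheme can be sketched roughly as follows; this is a simplified stand-in (quarter-turn rotations plus a small random crop acting as a translation), and the exact transformations used here may differ from ours. Note that rotating the image must also rotate the grasp angle label:

```python
import numpy as np

def augment(image, angle, rng):
    """One random (depth image, grasp angle) augmentation.

    A simplified stand-in for the zoom/crop/rotate scheme: a quarter-turn
    rotation plus a small random crop. The angle label is rotated with the
    image and wrapped back into [-pi/2, pi/2).
    """
    k = int(rng.integers(0, 4))                  # 0, 90, 180 or 270 degrees
    out = np.rot90(image, k)
    angle = angle + k * np.pi / 2
    angle = (angle + np.pi / 2) % np.pi - np.pi / 2
    m = int(rng.integers(0, 3))                  # crop offset: 0..2 pixels
    out = out[m:out.shape[0] - (2 - m), m:out.shape[1] - (2 - m)]
    return out, angle
```
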
We report the obtained results based on the Intersection over Union (IoU) metric and speed. A grasp pose is considered valid if the intersection over union of the predicted grasp rectangle and the ground-truth rectangle is more than $25\%$ and the orientation difference between the two rectangles is less than $30$ degrees. The final architecture is shaped as: C$_{(9 \times 9 \times 8,~S_3)}$, C$_{(5 \times 5 \times 16,~S_2)}$, C$_{(5 \times 5 \times 16,~S_2)}$, C$_{(3 \times 3 \times 32)}$, C$_{(3 \times 3 \times 32)}$, C$_{(3 \times 3 \times 32)}$, T$_{(3 \times 3 \times 32)}$, T$_{(3 \times 3 \times 32)}$, T$_{(3 \times 3 \times 32)}$, T$_{(5 \times 5 \times 16,~S_2)}$, T$_{(5 \times 5 \times 32,~S_2)}$, T$_{(9 \times 9 \times 32,~S_3)}$, where $S$ stands for strides. We used the Adam optimizer with a learning rate of $0.001$ and the Mean Squared Error loss function. The number of trainable parameters of our network is $116$K, which makes it suitable for closed-loop control at rates of up to $45$\,Hz.
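The rectangle-based evaluation criterion above can be made concrete with a small rasterization sketch (our illustration, not the original evaluation code): both rectangles are drawn onto a discrete grid, the IoU is computed from the two masks, and the angle difference is taken modulo $\pi$:

```python
import numpy as np

def _rect_mask(center, angle, w, h, grid=100):
    """Rasterize a rotated w x h rectangle onto a grid x grid mask."""
    ys, xs = np.mgrid[0:grid, 0:grid]
    dx, dy = xs - center[0], ys - center[1]
    u = dx * np.cos(angle) + dy * np.sin(angle)    # rectangle-frame coords
    v = -dx * np.sin(angle) + dy * np.cos(angle)
    return (np.abs(u) <= w / 2) & (np.abs(v) <= h / 2)

def grasp_is_valid(pred, truth, iou_thr=0.25, ang_thr=np.deg2rad(30)):
    """Rectangle metric: IoU > 25% and angle difference < 30 degrees.

    pred, truth: (center_xy, angle, width, height) in grid units.
    """
    d_ang = abs(pred[1] - truth[1]) % np.pi        # grasp angles are mod pi
    d_ang = min(d_ang, np.pi - d_ang)
    if d_ang >= ang_thr:
        return False
    a, b = _rect_mask(*pred), _rect_mask(*truth)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum() > iou_thr
```

Exact polygon intersection would avoid the rasterization error, but on a fine grid the difference is negligible for this threshold.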
We compared our approach with six visual grasp detection baselines: Lenz et al.~\cite{lenz2015deep}, Redmon et al.~\cite{redmon2015real}, Wang et al.~\cite{wang2016robot}, DexNet~\cite{mahler2017dex}, GG-CNN~\cite{morrison2018closing}, and GG-CNN2~\cite{morrison2020learning}. Results are reported in Table~\ref{table:isolated_exp}. Our approach achieved state-of-the-art performance and outperformed the selected approaches by a large margin. Concerning the IoU metric, our approach achieved $89.51\%$, which was $15.61$, $1.51$, $4.21$, $0.51$, $16.51$, and $14.31$ percentage points (p.p.) better than Lenz et al.~\cite{lenz2015deep}, Redmon et al.~\cite{redmon2015real}, Wang et al.~\cite{wang2016robot}, DexNet~\cite{mahler2017dex}, GG-CNN~\cite{morrison2018closing}, and GG-CNN2~\cite{morrison2020learning}, respectively. In addition to the IoU metric, we also computed the average grasp prediction time for each of the mentioned approaches. The obtained results indicate that the methods of DexNet~\cite{mahler2017dex}, Lenz et al.~\cite{lenz2015deep}, Wang et al.~\cite{wang2016robot}, and Redmon et al.~\cite{redmon2015real} need a very long time to predict grasp poses for a given object. Hence, these approaches are computationally expensive and not useful for real-time robotic applications, where a closed-loop controller is usually needed. In contrast, the inference times of GG-CNN~\cite{morrison2018closing}, GG-CNN2~\cite{morrison2020learning}, and the proposed method were less than $25$\,ms, which indicates that these approaches are suitable for real-time object grasping (i.e., a closed-loop controller can run steadily at $\ge 45$\,Hz). The underlying reason for the fast inference is that these approaches use relatively small CNNs, while the other selected approaches are based on very deep neural networks with millions of parameters (e.g., DexNet has 18 million parameters). Although GG-CNN~\cite{morrison2018closing} and GG-CNN2~\cite{morrison2020learning} achieved slightly better execution times than our approach (i.e., $3$\,ms and $1$\,ms faster, respectively), the grasp performance of our approach is significantly better. Note that GG-CNN~\cite{morrison2018closing} and GG-CNN2~\cite{morrison2020learning} have $62$k and $66$k network parameters, respectively, whereas the proposed network has $116$k parameters, which is almost twice as many.
\begin{table}[!t]
\begin{center}
\caption {Result of object grasping on the Cornell dataset~\cite{cornellgrasp2011}.}
\label{table:isolated_exp}
\vspace{-1mm}
\resizebox{0.8\linewidth}{!}{
\begin{tabular}{ |c|c|c|c| }
\hline
\small
\textbf{approach} & \textbf{input data} & \textbf{IoU} (\%) & \textbf{speed} (ms)\\
\hline\hline
Lenz et al.~\cite{lenz2015deep} & RGB-D & 73.9 & 1350 \\\hline
Redmon et al.~\cite{redmon2015real} & RGB-D & 88.0 & 76\\\hline
Wang et al.~\cite{wang2016robot} & RGB-D & 85.3 & 140 \\\hline
DexNet~\cite{mahler2017dex}$^*$ & depth image & 89.0 & 2500 \\\hline
GG-CNN~\cite{morrison2018closing} & depth image & 73.0 & \textbf{19} \\\hline
GG-CNN2~\cite{morrison2020learning}$^*$ & depth image & 75.2 & 21 \\\hline
\textbf{Our approach} & depth image & \textbf{89.51} & 22 \\\hline
\end{tabular}}
\end{center}
\quad\quad\quad$*$ retrained
\vspace{-3mm}
\end{table}
\begin{figure}[!b]
\includegraphics[width=\linewidth, trim = 0cm 0cm 0cm 0cm, clip=true]{img/setup3.png}\vspace{1mm}\\
\vspace{-7mm}
\caption{Our experimental setups in: (\textit{left}) the simulation environment; and (\textit{right}) the real-world setting. It should be noted that both the simulated and real sets of objects used for the evaluations are shown in these figures.}
\label{exp_setup}
\end{figure}
\subsection{Grasp evaluation}
\label{grasp_results}
Figure \ref{exp_setup} shows our experimental setup in simulation (\textit{left}) and real-robot (\textit{right}). In this work, we have developed a simulation environment in Gazebo similar to our real-robot setup to extensively evaluate our approach. The robot and the camera in the simulated environment were placed according to the real-robot setup to obtain consistent performance.
Our setup comprises a Universal Robot (UR5e) with a two-fingered Robotiq 2F-140 gripper, a Kinect V1 camera mounted on a tripod, and a user interface to start and stop the experiments.
To evaluate the performance of the proposed approach, we designed \textit{(i) isolated}, \textit{(ii) packed}, and \textit{(iii) pile} removal tasks (see Fig.~\ref{three_grasp_scenario}). In each round, we randomly generated a new scene consisting of four to six objects in the case of the packed and pile scenarios, while for the isolated object scenario, we placed a randomly selected object in an arbitrary pose inside the robot's workspace. In all experiments, the robot knows in advance the pose of the \textit{bin} object as the placing pose, while it has to detect and track the pose of the target objects to be removed from the table. The robot then needs to predict grasp syntheses, select the best graspable pose of the target object, pick it up, and put it in the \textit{bin}. This procedure is repeated until all objects are removed from the table. Note that, at the beginning of each experiment, we set the robot to a predefined configuration and randomly place objects on the table. We used a set of $20$ simulated objects, imported from different sources (e.g., the YCB dataset~\cite{calli2017yale} and the Gazebo repository), and $20$ real daily-life objects with different materials, shapes, sizes, and weights (see Fig.~\ref{exp_setup}). All objects were inspected to make sure that at least one side fits within the gripper. We assessed the performance of our approach by measuring the success rate, i.e., $\frac{number~of~successful~grasps}{number~of~attempts}$. We benchmark our approach against the Grasp Pose Detection (GPD)~\cite{gualtieri2016high} (an analytical approach) and GG-CNN (a learning-based approach) baselines.
\begin{figure}[!t]
\includegraphics[width=\linewidth, trim = 0cm 0cm 0cm 0cm, clip=true]{img/three_scenarios.png}
\vspace{-7mm}
\caption{Illustrative examples of three evaluation scenarios in Gazebo: (\textit{left}) isolated object; (\textit{center}) packed objects; (\textit{right}) pile of objects. }
\label{three_grasp_scenario}
\vspace{-3mm}
\end{figure}
\subsubsection{Isolated object scenario}
In this round of experiments, each object was tested in the real-world and simulation environments $10$ and $50$ times, respectively. A grasp was recorded as a success if the object was inside the bin at the end of the experiment. The best grasp point is defined as the collision-free and kinematically feasible one with the highest grasp quality. Note that, to speed up the real-robot experiments, we randomly placed five objects on the table. In each execution cycle, the robot selected the object nearest to its base and tried to remove it from the table. Results are reported in Table~\ref{table:isolated_exps}. Comparing the real and simulation experiments, it is visible that our approach outperformed GPD by a large margin. In particular, in the case of the simulation experiments, we achieved a grasp success rate of $89.7\%$ (i.e., $897$ successes out of $1000$ trials), while GPD and GG-CNN obtained $78.7\%$ and $72.6\%$, respectively. We visualize the best grasp configurations for $10$ simulated objects in Fig.~\ref{isolated_exps}.
In the case of the real-robot experiments, the success rate of our approach was $90.5\%$ ($181$ successes out of $200$ attempts), which was $9.5$ and $12.0$ percentage points better than GPD and GG-CNN, respectively. The underlying reason is that the proposed approach generates pixel-wise grasp configurations for the most informative view, resulting in a diverse set of grasps for the target object, from which we then select the best kinematically feasible one. This is not the case for GPD and GG-CNN. More specifically, GPD generated only a few grasps for a target object, and sometimes,
\begin{wraptable}{r}{0.5\linewidth}
\vspace{-3mm}
\caption{Isolated Scenario}
\label{table:isolated_exps}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
\textbf{Method} & \textbf{Type} & \textbf{Success rate ($\%$)}\\
\hline\hline
GPD & sim & 78.7 (787/1000) \\\hline
GG-CNN & sim & 72.6 (726/1000) \\\hline
Our & sim & \textbf{89.7} (897/1000) \\\hline
Our~(top-down) & sim & 73.2 (732/1000) \\\hline
Our~(random) & sim & 51.3 (513/1000) \\\hline\hline
GPD & real & 81.0 (162/200) \\\hline
GG-CNN & real & 78.5 (157/200) \\\hline
Our & real & \textbf{90.5} (181/200) \\\hline
Our~(top-down) & real & 67.5 (135/200) \\\hline
Our~(random) & real & 49.0 (98/200) \\\hline
\end{tabular}}
\vspace{-2mm}
\end{wraptable}
none of the configurations led to a successful attempt.
In the case of GGCNN and our approach with top-only view selection, failures mainly happened when grasping the soda can, bottle, human toy, and mustard objects, since the supporting area around the selected grasp point was too small and the object therefore slipped and fell during manipulation. In the case of our approach with random view selection, the main failures were due to collision with the table, e.g., when grasping a toppled soda can from the side. Some failures also occurred when one of the fingers of the gripper was tangent to the surface of the target object, which pushed the object away. In the case of our approach, the failed attempts were mainly due to inaccurate bounding box estimation, very low grasp quality scores for some objects in specific poses, and collisions between the object and the bin (which mainly happened for large objects, e.g., \textit{Pringles} and \textit{Juice box}).
\begin{figure}[!t]
\vspace{-1mm}
\includegraphics[width=\linewidth, trim = 0cm 0cm 0cm 0cm, clip=true]{img/isolated_grasp_exps.png}
\vspace{-7mm}
\caption{Qualitative results for 10 never-seen-before household objects in the isolated scenario: visualizing objects in Gazebo and their best grasp configurations. These results show that our approach learned the intended object-agnostic grasp function very well.}
\label{isolated_exps}
\vspace{-3mm}
\end{figure}
\subsubsection{Pile and packed scenarios}
To generate a simulated scene containing a pile of objects, we randomly spawned objects into a box placed on top of the table, waited a couple of seconds until all objects became stable, and then removed the box, resulting in a cluttered pile of objects. To generate a packed scenario, we iteratively placed a set of objects next to each other in the workspace. An example of each scenario is shown in Fig.~\ref{three_grasp_scenario} (\textit{center} and \textit{right}). In the case of real-robot experiments, we randomly put four to six objects in a box, shook the box to remove bias, and finally poured the box out in front of the robot to make a pile of objects. In the case of packed experiments, we manually generated scenes by putting several objects together.
In this round of evaluation, in addition to the success rate, we report the average percentage of objects removed from the workspace. An experiment continues until either all objects are removed from the workspace, three consecutive failures occur, or the quality of the best grasp candidate falls below a pre-defined threshold, $\tau$. This parameter plays an important role. We performed $100$ packed-removal tests to tune the grasp quality threshold, setting $\tau \in \{ 0.8, 0.9\}$, i.e., $50$ simulation experiments for each $\tau$. In each iteration, we randomly generated a scene including four objects. In each execution cycle, the robot was instructed to execute the feasible grasp synthesis with the maximum grasp quality. We also performed experiments with GPD and GGCNN.
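The selection and stopping rules above can be sketched as follows. This is a minimal illustration only: the \texttt{Grasp} record, the quality values, and the feasibility flag are hypothetical stand-ins for the actual grasp pipeline, not its implementation.

```python
from dataclasses import dataclass

@dataclass
class Grasp:
    pose: tuple      # hypothetical 6-DoF grasp pose
    quality: float   # predicted grasp quality in [0, 1]
    feasible: bool   # collision-free and kinematically feasible

def select_grasp(candidates, tau=0.8):
    """Return the feasible candidate with the highest grasp quality,
    or None when no feasible candidate reaches the threshold tau
    (which terminates the removal experiment)."""
    viable = [g for g in candidates if g.feasible and g.quality >= tau]
    if not viable:
        return None
    return max(viable, key=lambda g: g.quality)
```

Raising $\tau$ in this sketch mirrors the trade-off studied in the experiments: fewer candidates pass the threshold, so fewer objects get removed, but each executed grasp has a higher predicted quality.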
\begin{figure}[!b]
\vspace{-7mm}
\includegraphics[width=\linewidth, trim = 0cm 0cm 0cm 0cm, clip=true]{img/packed_exps2.png}
\vspace{-7mm}
\caption{Qualitative results on packed scenarios: visualizing the top-three grasp configurations on four different densely-packed objects. }
\label{packed_removing}
\end{figure}
\textbf{Packed experiments}: Results are reported in Table~\ref{packed_exps}. We observed that setting $\tau=0.9$ led the robot to remove fewer objects from the workspace while achieving a higher success rate, whereas setting $\tau$ to $0.8$ led to a good balance between success rate and percentage of objects removed.
\begin{wraptable}{r}{0.65\linewidth}
\vspace{-3mm}
\caption{Performance on Packed Scenario}
\label{packed_exps}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Type} & \textbf{Success rate} & \textbf{Percent cleared}\\
\hline\hline
GPD & sim & 0.54 (139/256) & 0.70 (139/200)\\\hline
GGCNN & sim & \textbf{0.55} (144/262) & 0.72 (144/200)\\\hline
$\tau =0.8$ & sim & 0.84 (176/210) & \textbf{0.88} (175/200)\\\hline
$\tau = 0.9$ & sim & \textbf{0.94} (141/150) & 0.71 (141/200)\\\hline\hline
GPD & real & 0.64 (31/48) & 0.76 (31/40)\\\hline
GGCNN & real & 0.46 (25/54) & 0.63 (25/40)\\\hline
$\tau =0.8$ & real & \textbf{0.91} (38/42) & \textbf{0.88} (38/40)\\\hline
\end{tabular}}
\vspace{-2mm}
\end{wraptable}
Our approach outperformed both GPD and GGCNN by a large margin ($>25\%$) in both simulated and real-robot experiments. On closer inspection of the real-robot experiments, the proposed method successfully grasped $38$ objects out of $42$ attempts, resulting in a $91\%$ success rate and $88\%$ cleared, while GPD resulted in a $64\%$ success rate and $76\%$ cleared. In the case of GGCNN, the success rate and percent cleared degraded to $46\%$ and $63\%$, respectively, with $29$ unsuccessful grasp attempts.
We found that mug-like objects were difficult to grasp for GPD and bottle-like objects for GGCNN, as the target object mostly slipped out of the gripper during the manipulation phase. We observed that the proposed approach predicts robust grasp quality scores for a given object. Figure~\ref{packed_removing} illustrates four examples of packed-removal experiments.
\textbf{Pile experiments}: The obtained results are summarized in Table~\ref{pile_exps}.
We use an example to explain the results. Figure~\ref{pile_removing} depicts a successful sequence of removing a pile of four objects using the proposed approach. It was observed that after removing \textit{Mustard} and \textit{Colgate} objects from the workspace (Fig.~\ref{pile_removing} \textit{a, b}), the complexity of the scene reduced significantly. Therefore, the robot could find more grasp configurations whose grasp quality exceeds the threshold (Fig.~\ref{pile_removing} \textit{c, d}).
As shown in this example, while the robot was interacting with an object, the poses of the other objects sometimes changed completely, resulting in situations in which the target object was no longer graspable (e.g., a toppled Oreo box). Such situations were one of the main reasons for unsuccessful attempts.
\begin{wraptable}{r}{0.65\linewidth}
\vspace{-3mm}
\caption{Performance on Pile Scenario}
\label{pile_exps}
\resizebox{\linewidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Type} & \textbf{Success rate} & \textbf{Percent cleared}\\
\hline\hline
GPD & sim & 0.61 (131/214) & 0.66 (131/200)\\\hline
GGCNN & sim & 0.64 (143/223) & 0.72 (143/200)\\\hline
$\tau =0.8$ & sim & \textbf{0.80} (153/192) & \textbf{0.77} (153/200)\\\hline\hline
GPD & real & 0.72 (31/43) & 0.78 (31/40)\\\hline
GGCNN & real & 0.78 (32/41) & 0.80 (32/40)\\\hline
$\tau =0.8$ & real & \textbf{0.92} (35/38) & \textbf{0.88} (35/40)\\\hline
\end{tabular}}
\vspace{-2mm}
\end{wraptable}
Other failures occurred due to lack of friction, limited force applied to the object, collisions with other objects, and unstable grasp predictions.
A video summarizing these experiments is available online: \href{https://youtu.be/rBDEy5f7z0E}{\cblue{\texttt{\small https://youtu.be/rBDEy5f7z0E}}}
\begin{figure}[!t]
\includegraphics[width=\linewidth, trim = 0cm 0cm 0cm 0cm, clip=true]{img/pile_exp2.png}\vspace{1mm}\\
\vspace{-7mm}
\caption{An example of a successful sequence of removing a pile of objects: in this experiment, the robot successfully removed the \textit{Mustard}, \textit{Colgate}, \textit{Juice box}, and \textit{Coke can} objects one by one. In each execution cycle, grasp configurations that are collision-free and kinematically feasible are shown in green.}
\label{pile_removing}
\vspace{-3mm}
\end{figure}
\section{Conclusion}
In this paper, we proposed a deep learning approach for real-time multi-view 3D object grasping in highly cluttered environments. We trained the approach in an end-to-end manner. The proposed approach allows robots to robustly interact with their environments in both isolated and highly crowded scenarios. In particular, for a given scene, our approach first generates three orthographic views. The best view is then selected and fed to the network to predict a pixel-wise grasp configuration for the given object. The robot is finally commanded to execute the highest-ranked grasp synthesis. To validate the performance of the proposed method, we performed extensive sets of real and simulation experiments in three scenarios: \textit{isolated}, \textit{packed}, and \textit{pile of objects}. Experimental results showed that the proposed method worked very well in all three scenarios and outperformed the selected state-of-the-art approaches. In the continuation of this work, we would like to investigate the possibility of improving grasp performance by learning a shape completion function that receives a partial point cloud of a target object and generates a complete model. We would then use the full model of the object to estimate the grasp synthesis map. Another direction would be to extend the proposed approach with an eye-in-hand system, gaining more flexibility in reconstructing the parts of the environment we are interested in. In particular, by moving the sensor to the desired areas, we can capture significantly more details that would otherwise not be visible. This information can be very helpful and lead to better grasp planning.
\bibliographystyle{IEEEtran}
According to \cite[Page~3]{16PFinHB},
most studies have found
Cattell's comprehensive 16-Personality-Factor (16PF) Profiles \cite{16PF_Questionnaire}
``to be among the top five most commonly used normal-range instruments in both research and practice''
with culturally adapted translations into over 35 languages world-wide.
Further,
``[t]he 16PF traits also appear in the PsychEval Personality Questionnaire \cite{PsychEval_Questionnaire},
a comprehensive instrument which includes both normal and abnormal personality dimensions.''
Note that according to \cite[Page~4]{16PFinHB},
``[i]nstead of being developed to measure preconceived dimensions of interest to a particular author,
the instrument was developed from the unique perspective of a scientific quest to try to discover
the basic structural elements of personality.''
Notwithstanding, and further exemplifying our general methodology introduced in \cite{arXiv:1403.2000v1},
we propose in the present paper a computable Galois-connection \cite{DaveyPriestley} between
\emph{PsychEval Personality Profiles (PPPs),} which contain the 16PFs, and
\emph{Szondi's Personality Profiles (SPPs)} \cite{Szondi:ETD:Band1},
a less well-known but, as we show, finer personality measure for
psychiatric as well as non-psychiatric populations, and
conceived as a unification \cite{Szondi:IchAnalyse} of the depth psychology of
S.\ Freud, C.G.\ Jung, and A.\ Adler.
This paper being a further illustration of our general methodology introduced in \cite{arXiv:1403.2000v1},
our presentation here thus closely follows the one in \cite{arXiv:1403.2000v1}, even in wording.
The generality of our mathematical methodology may be obvious to the (order-theoretic) mathematician, but
may well not be so to the general psychologist.
Just like \cite{arXiv:1403.2000v1},
our present result
is a contribution to \emph{mathematical psychology} in the area of
\emph{personality assessment.}
It is also meant as a contribution towards
practicing psychological research with the methods of
the exact sciences, for
obvious ethical reasons.
The practical significance of our result is that
our Galois-connection provides a pair of computable,
interpreting translations between the two personality spaces of PPPs and SPPs
(and thus hopefully also between their respective academic and non-academic communities):
one \emph{concrete} translation from PPP-space to SPP-space (because SPPs are finer than PPPs) and
one \emph{abstract} translation from SPP-space to PPP-space (because PPPs are coarser than SPPs).
Thus Cattell's and Szondi's personality-test results are
mutually interpretable and inter-translatable,
even automatically by computers.
The only restriction to this mutuality is
the subjective interpretation of the faithfulness of these translations.
In our interpretation,
we intentionally restrict the translation from SPP-space to PPP-space, and only that one,
in order to preserve (our perception of) its faithfulness.
More precisely,
we choose to map some SPPs to the empty set in PPP-space
(but every PPP to a non-empty set in SPP-space).
Of course just like in \cite{arXiv:1403.2000v1},
our readers can
experiment with their own interpretations,
as we explain again in the following paragraph.
We stress that
our Galois-connection between the spaces of PPPs and SPPs is
independent of their respective \emph{test,} which
evaluate their testees in terms of
\emph{structured result values}---the PPPs and SPPs---in the respective space.
Both tests are preference-based, more precisely,
test evaluation is based
on choices of preferred questions in the case of the PsychEval-test \cite{PsychEval_Questionnaire} and
on choices of preferred portraits in the case of the Szondi-test \cite{Szondi:ETD:Band1,SzondiTestWebApp}.
Due to the independence of our Galois-connection from these tests,
their exact nature need not concern us here.
All what we need to be concerned about is the nature of the structured result values that these tests generate.
(Other test forms can generate the same form of result values, e.g.~\cite{Kenmo:Szondi}.)
We also stress
that our proposed Galois-connection is
what we believe to be an interesting candidate brain child for adoption by the community, but
that there are other possible candidates, which our readers are empowered to explore themselves.
In fact,
not only
do we propose a candidate Galois-connection between PPP-space and SPP-space, but also
do we further illustrate the whole \emph{methodology} introduced in \cite{arXiv:1403.2000v1} for
generating such candidates.
All what
readers interested in generating such connections themselves need to do is
map their own intuition about
the meaning of PPPs to a standard interlingua,
called \emph{Logical Pivot Language (LPL)} here, and check that
their mapping has a single simple property,
namely the one stated as Fact~\ref{fact:FactsAboutip}.1 about
our mapping $\mathrm{f}$ in
Figure~\ref{figure:MappingsAndMorphisms}.
Their desired Galois-connection is then automatically induced jointly by
their chosen mapping and
a mapping, called $\mathrm{p}$, from SPP-space to LPL that
we chose in \cite{arXiv:1403.2000v1} once and for all possible Galois-connections of interest.
What is more, and as already mentioned in \cite{arXiv:1403.2000v1} and evidenced here,
our methodology is applicable even more generally to the generation of Galois-connections between
pairs of result spaces of other personality tests.
SPPs just happen to have a finer structure than
other personality-test values that we are aware of, and
so are perhaps best suited to play
the distinguished role of explanatory semantics for result values of other personality tests.
Of course our readers are still free to choose their own preferred semantic space.
An SPP can be conceived as a tuple of eight,
so-called \emph{signed factors} whose signatures can in turn
take \emph{12 partially ordered} values.
So SPPs live in an eight-dimensional space.
On the other hand,
a PPP can be conceived as a (16+12=28)-tuple of so-called \emph{personality traits,} which
can take \emph{10 totally ordered} values.
So PPPs live in an apparently finer, 28-dimensional space.
Nevertheless,
we are going to show that actually the opposite is true, that is,
SPPs are finer than PPPs.
In particular,
SPPs can account for \emph{ambiguous personality traits} thanks to the partiality of their ordering, whereas
PPPs cannot due to the totality of theirs.
Moreover,
a lot of Cattell's personality traits turn out to be definable in terms of
a combination of Szondi's signed factors, which
means that
a lot of Cattell's personality traits can be understood as
(non-atomic/-primitive) \emph{psychological syndromes.}
SPPs being finer than PPPs,
the translation from SPPs to PPPs must be a projection (and thus surjection) of SPP-space onto PPP-space.
Another insight gained in the finer referential system of SPPs is that
PPPs are confirmed to be non-orthogonal or not independent as also mentioned in \cite{16PFinHB}.
Of course our readers are still free to disagree on the value of these insights by
giving a convincing argument for why SPP-space would be an inappropriate semantics for PPP-space.
After all,
Szondi conceived his theory of human personality as
a unifying theory.
We now put forward our own argument for why we believe SPP-space is indeed
an appropriate---though surely not the only---semantics for PPP-space.
In Section~\ref{section:Structures},
we present the defining mathematical structures for each space, and
in Section~\ref{section:MappingsAndMorphisms},
the defining mathematical mappings for their translation.
No prior knowledge of either PPPs or SPPs is required to appreciate the results of this paper, but
the reader might appreciate them even more when comparing them also with those in \cite{arXiv:1403.2000v1}.
\section{The connection}
In this section,
we present
the defining mathematical structures for
PPP-space, the interlingua LPL, and SPP-space, as well as
the defining mathematical mappings for
the concrete translation of PPP-space to SPP-space and
the abstract translation of SPP-space back to PPP-space, both via LPL,
see Figure~\ref{figure:MappingsAndMorphisms}.
\begin{figure}
\caption{Mappings between personality spaces and interlingua}
$$\begin{tikzcd}
\mathcal{PPP} \arrow[swap]{ddr}{\mathrm{f}} \arrow[yshift=1ex, dashed]{rr}{\rightG{}} & & \mathcal{SPP} \arrow[tail]{ddl}{\mathrm{p}} \arrow[dashed, yshift=-0.5ex, two heads, name=T, below]{ll}{\leftG{}}\\
&&\\
& \mathcal{LPL} &
\end{tikzcd}$$
\label{figure:MappingsAndMorphisms}
\end{figure}
\subsection{Structures}\label{section:Structures}
In this section,
we present
the defining mathematical structures for
PPP-space, the interlingua LPL, and SPP-space.
We start with defining PPP-space.
\begin{definition}[The PsychEval Personality Profile Space]
Let
\begin{itemize}
\item $\mbox{$16\mathbb{PF}$}=\{\, \textsf{A}, \textsf{B}, \textsf{C}, \textsf{E}, \textsf{F}, \textsf{G}, \textsf{H}, \textsf{I}, \textsf{L}, \textsf{M}, \textsf{N}, \textsf{O}, \textsf{Q1}, \textsf{Q2}, \textsf{Q3}, \textsf{Q4}\, \}$ be the set of the \emph{16 Personality Factors} (the normal traits), with
\textsf{A}\ meaning ``warmth,''
\textsf{B}\ ``reasoning,''
\textsf{C}\ ``emotional stability,''
\textsf{E}\ ``dominance,''
\textsf{F}\ ``liveliness,''
\textsf{G}\ ``rule-consciousness,''
\textsf{H}\ ``social boldness,''
\textsf{I}\ ``sensitivity,''
\textsf{L}\ ``vigilance,''
\textsf{M}\ ``abstractness,''
\textsf{N}\ ``privateness,''
\textsf{O}\ ``apprehension,''
\textsf{Q1}\ ``openness to change,''
\textsf{Q2}\ ``self-reliance,''
\textsf{Q3}\ ``perfectionism,'' and
\textsf{Q4}\ ``tension;''
\item $\mathbb{PEPF}=\{\, \textsf{PS}, \textsf{HC}, \textsf{ST}, \textsf{AD}, \textsf{LE}, \textsf{SR}, \textsf{AW}, \textsf{PI}, \textsf{OT}, \textsf{AP}, \textsf{TS}, \textsf{TI}\, \}$ be the set of the \emph{12 PsychEval abnormal traits,} with
\textsf{PS}\ meaning ``psychological inadequacy,''
\textsf{HC}\ ``health concerns,''
\textsf{ST}\ ``suicidal thinking,''
\textsf{AD}\ ``anxious depression,''
\textsf{LE}\ ``low energy state,''
\textsf{SR}\ ``self-reproach,''
\textsf{AW}\ ``apathetic withdrawal,''
\textsf{PI}\ ``paranoid ideation,''
\textsf{OT}\ ``obsessional thinking,''
\textsf{AP}\ ``alienation/perceptual distortion,''
\textsf{TS}\ ``thrill seeking,'' and
\textsf{TI}\ ``threat immunity;''
\item $\mathbb{PF}=\mbox{$16\mathbb{PF}$}\cup\mathbb{PEPF}$\,.
\end{itemize}
Then,
$$\textrm{PPP}=\{\; \begin{array}[t]{@{}l@{}}
(\begin{array}[t]{@{}l@{}}
(\textsf{A},v_{1}),(\textsf{B},v_{2}),(\textsf{C},v_{3}),(\textsf{E},v_{4}),(\textsf{F},v_{5}),(\textsf{G},v_{6}),(\textsf{H},v_{7}),(\textsf{I},v_{8}),(\textsf{L},v_{9}),\\
(\textsf{M},v_{10}),(\textsf{N},v_{11}),(\textsf{O},v_{12}),(\textsf{Q1},v_{13}),(\textsf{Q2},v_{14}),(\textsf{Q3},v_{15}),(\textsf{Q4},v_{16}),\\
(\textsf{PS},v_{17}),(\textsf{HC},v_{18}),(\textsf{ST},v_{19}),(\textsf{AD},v_{20}),(\textsf{LE},v_{21}),(\textsf{SR},v_{22}),\\ (\textsf{AW},v_{23}),(\textsf{PI},v_{24}),(\textsf{OT},v_{25}),(\textsf{AP},v_{26}),(\textsf{TS},v_{27}),(\textsf{TI},v_{28}))\mid
\end{array}\\[12.5\jot]
v_{1},\ldots,v_{28}\in\{1,2,3,4,5,6,7,8,9,10\}\;\}
\end{array}$$
is the set of PsychEval Personality Profiles (PPPs) \cite{16PF_Questionnaire,PsychEval_Questionnaire}, and
$$\mathcal{PPP}=\langle\, 2^{\textrm{PPP}},\emptyset,\cap,\cup,\textrm{PPP},\overline{\,\cdot\,},\subseteq\,\rangle$$
defines our \emph{PsychEval Personality Profile Space,} that is,
the (inclusion-ordered, Boolean) powerset algebra \cite{DaveyPriestley} on \textrm{PPP}\
(the set of all subsets of \textrm{PPP}).
\end{definition}
\noindent
Note that
we do need to define $\mathcal{PPP}$ as the set of all \emph{subsets} of $\textrm{PPP}$ and
not simply as the set of all elements of $\textrm{PPP}$.
The reason is the aforementioned fact that
in the finer referential system of SPP-space (see Definition~\ref{definition:SPP}),
PPPs turn out to be non-orthogonal or not independent, and thus
a PPP may have to be mapped to a proper set of SPPs (see Table~\ref{table:PPPtoLPL}).
So the proper setting for SPP-space is a set of \emph{subsets} of SPPs, which
in turn, via the backward translation from SPP-space to $\mathcal{PPP}$, means that
the proper setting for $\mathcal{PPP}$, as the target of a mapping of subsets,
is also a set of subsets.
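As a concrete illustration, a PPP can be represented in code as a mapping from the 28 trait names to scores in $\{1,\ldots,10\}$. The dictionary encoding below is our own choice, made purely for illustration; it is not part of Cattell's or the PsychEval instrument's specification.

```python
# The 16 normal-range Personality Factors and the 12 PsychEval
# abnormal traits, in the order of the definition above.
NORMAL_TRAITS = ["A", "B", "C", "E", "F", "G", "H", "I",
                 "L", "M", "N", "O", "Q1", "Q2", "Q3", "Q4"]
ABNORMAL_TRAITS = ["PS", "HC", "ST", "AD", "LE", "SR",
                   "AW", "PI", "OT", "AP", "TS", "TI"]
TRAITS = NORMAL_TRAITS + ABNORMAL_TRAITS  # 28 traits in total

def is_ppp(profile):
    """Check that `profile` assigns every one of the 28 traits
    a value in the totally ordered range 1..10."""
    return (set(profile) == set(TRAITS)
            and all(v in range(1, 11) for v in profile.values()))
```

In this encoding, an element of $\textrm{PPP}$ is a dictionary passing \texttt{is\_ppp}, and an element of $2^{\textrm{PPP}}$ (the carrier of $\mathcal{PPP}$) is a set of such dictionaries.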
We continue to define SPP-space.
\begin{definition}[The Szondi Personality Profile Space]\label{definition:SPP}
Let us consider the Hasse-diagram \cite{DaveyPriestley} in Figure~\ref{figure:SzondiSignatures}
\begin{figure}[t]
\centering
\caption{Hasse-diagram of Szondi's signatures}
\medskip
\fbox{\begin{tikzpicture}
\node (pbbb) at (0,4) {$+!!!$};
\node (pbb) at (0,3) {$+!!$};
\node (pb) at (0,2) {$+!$};
\node (p) at (0,1) {$+$};
\node (n) at (0,0) {$0$};
\node (m) at (0,-1) {$-$};
\node (mb) at (0,-2) {$-!$};
\node (mbb) at (0,-3) {$-!!$};
\node (mbbb) at (0,-4) {$-!!!$};
\draw (mbbb) -- (mbb) -- (mb) -- (m) -- (n) -- (p) -- (pb) -- (pbb) -- (pbbb);
\node (pmub) at (1,1) {$\pm^{!}$};
\node (pm) at (1,0) {$\pm$};
\node (pmlb) at (1,-1) {$\pm_{!}$};
\draw (pmlb) -- (pm) -- (pmub);
\end{tikzpicture}}
\label{figure:SzondiSignatures}
\end{figure}
of the partially ordered set of \emph{Szondi's twelve signatures} \cite{Szondi:ETD:Band1} of
human reactions, which are:
\begin{itemize}
\item approval: from strong $+!!!$\,, $+!!$\,, and $+!$ to weak $+$\,;
\item indifference/neutrality: $0$\,;
\item rejection: from weak $-$\,, $-!$\,, and $-!!$ to strong $-!!!$\,; and
\item ambivalence: $\pm^{!}$ (approval bias), $\pm$ (no bias), and $\pm_{!}$ (rejection bias).
\end{itemize}
(Szondi calls the exclamation marks in his signatures \emph{quanta.})
Further let us call this set of signatures $\mathbb{S}$, that is,
$$\mathbb{S}=\{\,-!!!,-!!,-!,-,0,+,+!,+!!,+!!!,\pm_{!},\pm,\pm^{!}\,\}.$$
Now let us consider \emph{Szondi's eight factors and four vectors} of
human personality \cite{Szondi:ETD:Band1} as summarised in Table~\ref{table:SzondiFactors}.
\begin{table}[t]
\centering
\caption{Szondi's factors and vectors}
\medskip
{\small
\begin{tabular}{|c|c||C|C|}
\hline
\multirow{2}{12.5ex}{\centering \textbf{Vector}} & \multirow{2}{7.75ex}{\textbf{Factor}} & \multicolumn{2}{c|}{\textbf{Signature}}\\
\cline{3-4}
&& $+$ & $-$\\
\hline
\hline
\multirow{2}{12.5ex}{\centering \textsf{S} (Id)} & \textsf{h} (love) & physical love & platonic love\\
\cline{2-4}
& \textsf{s} (attitude) & (proactive) activity & (receptive) passivity\\
\hline
\multirow{2}{12.75ex}{\centering \textsf{P}\\[-\jot] (Super-Ego)} & \textsf{e} (ethics) & ethical behaviour & unethical behaviour\\
\cline{2-4}
& \textsf{hy} (morality) & immoral behaviour & moral behaviour\\
\hline
\multirow{2}{12.5ex}{\centering \textsf{Sch} (Ego)} & \textsf{k} (having) & having more & having less\\
\cline{2-4}
& \textsf{p} (being) & being more & being less\\
\hline
\multirow{2}{12.5ex}{\centering \textsf{C} (Id)} & \textsf{d} (relations) & unfaithfulness & faithfulness\\
\cline{2-4}
& \textsf{m} (bindings) & dependence & independence\\
\hline
\end{tabular}}
\label{table:SzondiFactors}
\end{table}
(Their names are of clinical origin and need not concern us here.)
And let us call the set of factors $\mathbb{F}$, that is,
$$\mathbb{F}=\{\,\F{h}{},\F{s}{},\F{e}{},\F{hy}{},\F{k}{},\F{p}{},\F{d}{},\F{m}{}\,\}.$$
Then,
$$\textrm{SPP}=\{\; \begin{array}[t]{@{}l@{}}
((\F{h}{,s_{1}}), (\F{s}{,s_{2}}), (\F{e}{,s_{3}}), (\F{hy}{,s_{4}}),
(\F{k}{,s_{5}}), (\F{p}{,s_{6}}), (\F{d}{,s_{7}}), (\F{m}{,s_{8}})) \mid\\
s_{1},\ldots,s_{8}\in\mathbb{S}\;\}
\end{array}$$
is the set of Szondi's personality profiles, and
$$\mathcal{SPP}=\langle\, 2^{\textrm{SPP}},\emptyset,\cap,\cup,\textrm{SPP},\overline{\,\cdot\,},\subseteq\,\rangle$$
defines our \emph{Szondi Personality Profile Space,} that is,
the (inclusion-ordered, Boolean) powerset algebra \cite{DaveyPriestley} on \textrm{SPP}\
(the set of all subsets of \textrm{SPP}).
\end{definition}
\noindent
As an example of an SPP,
consider the \emph{norm profile} for the Szondi-test \cite{Szondi:ETD:Band1}:
$$((\F{h}{,+}), (\F{s}{,+}), (\F{e}{,-}), (\F{hy}{,-}),
(\F{k}{,-}), (\F{p}{,-}), (\F{d}{,+}), (\F{m}{,+}))$$
Spelled out,
this norm profile describes the personality of a human being who
approves of physical love,
has a proactive attitude,
has unethical but moral behaviour,
wants to have and be less, and
is unfaithful and dependent.
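For illustration, the twelve signatures, the covering relation read off the Hasse-diagram above, and the norm profile can be encoded as follows. The ASCII rendering of the ambivalence signatures ($\pm_{!}$ as \texttt{pm\_!}, $\pm$ as \texttt{pm}, $\pm^{!}$ as \texttt{pm\^{}!}) is our own convention, not Szondi's notation.

```python
# Szondi's 12 signatures and his 8 factors.
SIGNATURES = ["-!!!", "-!!", "-!", "-", "0", "+", "+!", "+!!", "+!!!",
              "pm_!", "pm", "pm^!"]
FACTORS = ["h", "s", "e", "hy", "k", "p", "d", "m"]

# Covering pairs (lower, upper) of the Hasse diagram: one chain from
# -!!! up to +!!!, and a separate chain for the ambivalence signatures.
COVERS = [("-!!!", "-!!"), ("-!!", "-!"), ("-!", "-"), ("-", "0"),
          ("0", "+"), ("+", "+!"), ("+!", "+!!"), ("+!!", "+!!!"),
          ("pm_!", "pm"), ("pm", "pm^!")]

def leq(a, b):
    """Partial order on signatures: a <= b iff b is reachable
    from a by following covering pairs upward."""
    if a == b:
        return True
    return any(lo == a and leq(hi, b) for lo, hi in COVERS)

# The norm profile from the text, as a factor -> signature mapping:
NORM_SPP = {"h": "+", "s": "+", "e": "-", "hy": "-",
            "k": "-", "p": "-", "d": "+", "m": "+"}
```

Note that \texttt{leq} is only a partial order: an ambivalence signature such as \texttt{pm} is incomparable with, say, \texttt{+}, which is exactly the feature that lets SPPs express ambiguous traits.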
We conclude this subsection with the definition of our interlingua LPL.
\begin{definition}[The Logical Pivot Language]
Let
$$\mathbb{A}=\{\,\F{h}{s_{1}}, \F{s}{s_{2}}, \F{e}{s_{3}}, \F{hy}{s_{4}}, \F{k}{s_{5}}, \F{p}{s_{6}}, \F{d}{s_{7}}, \F{m}{s_{8}} \mid
s_{1},\ldots,s_{8}\in\mathbb{S}\,\}$$
be our set of atomic logical formulas, and
$\textrm{LPL}(\mathbb{A})$ the classical propositional language over $\mathbb{A}$, that is,
the set of sentences constructed from the elements in $\mathbb{A}$ and
the classical propositional connectives
$\neg$ (negation, pronounced ``not''),
$\land$ (conjunction, pronounced ``and''),
$\lor$ (disjunction, pronounced ``or''), etc.
Then,
$$\mathcal{LPL}=\langle\,\textrm{LPL}(\mathbb{A}),\Rightarrow\,\rangle$$
defines our \emph{logical pivot language,} with
$\Rightarrow$ being logical consequence.
Logical equivalence $\equiv$ is defined in terms of $\Rightarrow$ such that
for every $\phi,\varphi\in\textrm{LPL}(\mathbb{A})$,
$\phi\equiv\varphi$ by definition if and only if
$\phi\Rightarrow\varphi$ and $\varphi\Rightarrow\phi$.
\end{definition}
\subsection{Mappings between structures}\label{section:MappingsAndMorphisms}
In this section,
we present
the defining mathematical mappings for
the concrete translation $\rightG{}$ of $\mathcal{PPP}$ to $\mathcal{SPP}$ via $\mathcal{LPL}$ and
the abstract translation $\leftG{}$ of $\mathcal{SPP}$ back to $\mathcal{PPP}$ again via $\mathcal{LPL}$ by
means of the auxiliary mappings $\mathrm{f}$ and $\mathrm{p}$.
We also prove that the ordered pair $(\,\rightG{},\leftG{}\,)$ is a Galois-connection, as promised.
\begin{definition}[Mappings]\label{definition:MappingsAndMorphisms}
Let the mapping (total function)
\begin{itemize}
\item $\mathrm{f}$ be defined in
\begin{itemize}
\item the function space $((\mathbb{PF}\times\{1,\ldots,10\})\to\textrm{LPL}(\mathbb{A}))$ as in Table~\ref{table:PPPtoLPL},
\item the function space $(\textrm{PPP}\to\textrm{LPL}(\mathbb{A}))$ such that
$$\begin{array}{@{}r@{}}
\mathrm{f}((\begin{array}[t]{@{}l@{}}
(\textsf{A},v_{1}),(\textsf{B},v_{2}),(\textsf{C},v_{3}),(\textsf{E},v_{4}),(\textsf{F},v_{5}),(\textsf{G},v_{6}),(\textsf{H},v_{7}),(\textsf{I},v_{8}),(\textsf{L},v_{9}),\\
(\textsf{M},v_{10}),(\textsf{N},v_{11}),(\textsf{O},v_{12}),(\textsf{Q1},v_{13}),(\textsf{Q2},v_{14}),(\textsf{Q3},v_{15}),(\textsf{Q4},v_{16}),\\
(\textsf{PS},v_{17}),(\textsf{HC},v_{18}),(\textsf{ST},v_{19}),(\textsf{AD},v_{20}),(\textsf{LE},v_{21}),(\textsf{SR},v_{22}),\\ (\textsf{AW},v_{23}),(\textsf{PI},v_{24}),(\textsf{OT},v_{25}),(\textsf{AP},v_{26}),(\textsf{TS},v_{27}),(\textsf{TI},v_{28})))=
\end{array}\\[13\jot]
\begin{array}[t]{@{}l@{}}
\mathrm{f}((\textsf{A},v_{1}))\land\mathrm{f}((\textsf{B},v_{2}))\land\mathrm{f}((\textsf{C},v_{3}))\land\mathrm{f}((\textsf{E},v_{4}))\,\land\\
\mathrm{f}((\textsf{F},v_{5}))\land\mathrm{f}((\textsf{G},v_{6}))\land\mathrm{f}((\textsf{H},v_{7}))\land\mathrm{f}((\textsf{I},v_{8}))\,\land\\
\mathrm{f}((\textsf{L},v_{9}))\land\mathrm{f}((\textsf{M},v_{10}))\land\mathrm{f}((\textsf{N},v_{11}))\land\mathrm{f}((\textsf{O},v_{12}))\,\land\\
\mathrm{f}((\textsf{Q1},v_{13}))\land\mathrm{f}((\textsf{Q2},v_{14}))\land\mathrm{f}((\textsf{Q3},v_{15}))\land\mathrm{f}((\textsf{Q4},v_{16}))\,\land\\
\mathrm{f}((\textsf{PS},v_{17}))\land\mathrm{f}((\textsf{HC},v_{18}))\land\mathrm{f}((\textsf{ST},v_{19}))\land\mathrm{f}((\textsf{AD},v_{20}))\,\land\\
\mathrm{f}((\textsf{LE},v_{21}))\land\mathrm{f}((\textsf{SR},v_{22}))\land\mathrm{f}((\textsf{AW},v_{23}))\land\mathrm{f}((\textsf{PI},v_{24}))\,\land\\
\mathrm{f}((\textsf{OT},v_{25}))\land\mathrm{f}((\textsf{AP},v_{26}))\land\mathrm{f}((\textsf{TS},v_{27}))\land\mathrm{f}((\textsf{TI},v_{28}))\,,
\end{array}
\end{array}$$
\begin{sidewaystable}
\centering
\caption{The translation $\mathrm{f}$ of $\mathbb{PF}\times\{1,2,3,4,5,6,7,8,9,10\}$ to $\textrm{LPL}(\mathbb{A})$}
\medskip
\resizebox{\textwidth}{!}{
$\begin{array}{|c|c|c|c|c||c||c|c|c|c|c|}
\hline
\multicolumn{5}{|c||}{\text{Low Range}} &
\multirow{2}{4ex}{\centering $\mathbb{PF}$} &
\multicolumn{5}{c|}{\text{High Range}} \\
\cline{1-5}\cline{7-11}
1 & 2 & 3 & 4 & 5 & & 6 & 7 & 8 & 9 & 10 \\
\hline
\hline
\F{h}{-!!} & \F{h}{-!} & \F{h}{-} & \F{h}{-} & \F{h}{0} & \textsf{A} &
\F{h}{0} & \F{h}{+} & \F{h}{+} & \F{h}{+!} & \F{h}{+!!}\\
\hline
\F{k}{+!!}\land\F{p}{-!!} & \F{k}{+!}\land\F{p}{-!} & \F{k}{+}\land\F{p}{-} & \F{k}{+}\land\F{p}{-} & \F{k}{0}\land\F{p}{0} & \textsf{B} &
\F{k}{0}\land\F{p}{0} & \F{k}{-}\land\F{p}{+} & \F{k}{-}\land\F{p}{+} & \F{k}{-!}\land\F{p}{+!} & \F{k}{-!!}\land\F{p}{+!!}\\
\hline
\F{d}{+!!} & \F{d}{+!} & \F{d}{+} & \F{d}{+} & \F{d}{0} & \textsf{C} &
\F{d}{0} & \F{d}{-} & \F{d}{-} & \F{d}{-!} & \F{d}{-!!}\\
\hline
\F{s}{-!!} & \F{s}{-!} & \F{s}{-} & \F{s}{-} & \F{s}{0} & \textsf{E} &
\F{s}{0} & \F{s}{+} & \F{s}{+} & \F{s}{+!} & \F{s}{+!!}\\
\hline
\F{k}{-!!} & \F{k}{-!} & \F{k}{-} & \F{k}{-} & \F{k}{0} & \textsf{F} &
\F{k}{0} & \F{k}{+} & \F{k}{+} & \F{k}{+!} & \F{k}{+!!}\\
\hline
\F{e}{-!!}\land\F{hy}{+!!}\land\F{k}{+!!} & \F{e}{-!}\land\F{hy}{+!}\land\F{k}{+!} & \F{e}{-}\land\F{hy}{+}\land\F{k}{+} & \F{e}{-}\land\F{hy}{+}\land\F{k}{+} & \F{e}{0}\land\F{hy}{0}\land\F{k}{0} & \textsf{G} &
\F{e}{0}\land\F{hy}{0}\land\F{k}{0} & \F{e}{+}\land\F{hy}{-}\land\F{k}{-} & \F{e}{+}\land\F{hy}{-}\land\F{k}{-} & \F{e}{+!}\land\F{hy}{-!}\land\F{k}{-!} & \F{e}{+!!}\land\F{hy}{-!!}\land\F{k}{-!!}\\
\hline
\F{hy}{-!!}\land\F{d}{-!!} & \F{hy}{-!}\land\F{d}{-!} & \F{hy}{-}\land\F{d}{-} & \F{hy}{-}\land\F{d}{-} & \F{hy}{0}\land\F{d}{0} & \textsf{H} &
\F{hy}{0}\land\F{d}{0} & \F{hy}{+}\land\F{d}{+} & \F{hy}{+}\land\F{d}{+} & \F{hy}{+!}\land\F{d}{+!} & \F{hy}{+!!}\land\F{d}{+!!}\\
\hline
\F{h}{-!!}\land\F{hy}{+!!}\land\F{p}{+!!} & \F{h}{-!}\land\F{hy}{+!}\land\F{p}{+!} & \F{h}{-}\land\F{hy}{+}\land\F{p}{+} & \F{h}{-}\land\F{hy}{+}\land\F{p}{+} & \F{h}{0}\land\F{hy}{0}\land\F{p}{0} & \textsf{I} &
\F{h}{0}\land\F{hy}{0}\land\F{p}{0} & \F{h}{+}\land\F{hy}{-}\land\F{p}{-} & \F{h}{+}\land\F{hy}{-}\land\F{p}{-} & \F{h}{+!}\land\F{hy}{-!}\land\F{p}{-!} & \F{h}{+!!}\land\F{hy}{-!!}\land\F{p}{-!!}\\
\hline
\F{k}{+!!}\land\F{p}{+!!} & \F{k}{+!}\land\F{p}{+!} & \F{k}{+}\land\F{p}{+} & \F{k}{+}\land\F{p}{+} & \F{k}{0}\land\F{p}{0} & \textsf{L} &
\F{k}{0}\land\F{p}{0} & \F{k}{-}\land\F{p}{-} & \F{k}{-}\land\F{p}{-} & \F{k}{-!}\land\F{p}{-!} & \F{k}{-!!}\land\F{p}{-!!}\\
\hline
\F{p}{-!!} & \F{p}{-!} & \F{p}{-} & \F{p}{-} & \F{p}{0} & \textsf{M} &
\F{p}{0} & \F{p}{+} & \F{p}{+} & \F{p}{+!} & \F{p}{+!!}\\
\hline
\F{hy}{+!!} & \F{hy}{+!} & \F{hy}{+} & \F{hy}{+} & \F{hy}{0} & \textsf{N} &
\F{hy}{0} & \F{hy}{-} & \F{hy}{-} & \F{hy}{-!} & \F{hy}{-!!}\\
\hline
\F{p}{+!!} & \F{p}{+!} & \F{p}{+} & \F{p}{+} & \F{p}{0} & \textsf{O} &
\F{p}{0} & \F{p}{-} & \F{p}{-} & \F{p}{-!} & \F{p}{-!!}\\
\hline
\F{d}{-!!} & \F{d}{-!} & \F{d}{-} & \F{d}{-} & \F{d}{0} & \textsf{Q1} &
\F{d}{0} & \F{d}{+} & \F{d}{+} & \F{d}{+!} & \F{d}{+!!}\\
\hline
\F{d}{+!!}\land\F{m}{+!!} & \F{d}{+!}\land\F{m}{+!} & \F{d}{+}\land\F{m}{+} & \F{d}{+}\land\F{m}{+} & \F{d}{0}\land\F{m}{0} & \textsf{Q2} &
\F{d}{0}\land\F{m}{0} & \F{d}{-}\land\F{m}{-} & \F{d}{-}\land\F{m}{-} & \F{d}{-!}\land\F{m}{-!} & \F{d}{-!!}\land\F{m}{-!!}\\
\hline
\F{k}{+!!} & \F{k}{+!} & \F{k}{+} & \F{k}{+} & \F{k}{0} & \textsf{Q3} &
\F{k}{0} & \F{k}{\pm} & \F{k}{\pm} & \F{k}{\pm^{!}} & \F{k}{\pm_{!}}\\
\hline
\F{e}{+!!} & \F{e}{+!} & \F{e}{+} & \F{e}{+} & \F{e}{0} & \textsf{Q4} &
\F{e}{0} & \F{e}{-} & \F{e}{-} & \F{e}{-!} & \F{e}{-!!}\\
\hline
\hline
\F{k}{0}\land\F{p}{+!!} & \F{k}{0}\land\F{p}{+!} & \F{k}{0}\land\F{p}{+} & \F{k}{0}\land\F{p}{+} & \F{k}{0}\land\F{p}{0} & \textsf{PS} & \F{k}{0}\land\F{p}{0} & \F{k}{0}\land\F{p}{-} & \F{k}{0}\land\F{p}{-} & \F{k}{0}\land\F{p}{-!} & \F{k}{0}\land\F{p}{-!!}\\
\hline
\F{hy}{+!!}\land\F{p}{+!!} & \F{hy}{+!}\land\F{p}{+!} & \F{hy}{+}\land\F{p}{+} & \F{hy}{+}\land\F{p}{+} & \F{hy}{0}\land\F{p}{0} & \textsf{HC} &
\F{hy}{0}\land\F{p}{0} & \F{hy}{-}\land\F{p}{-} & \F{hy}{-}\land\F{p}{-} & \F{hy}{-!}\land\F{p}{-!} & \F{hy}{-!!}\land\F{p}{-!!}\\
\hline
\F{s}{+!!}\land\F{k}{+!!} & \F{s}{+!}\land\F{k}{+!} & \F{s}{+}\land\F{k}{+} & \F{s}{+}\land\F{k}{+} & \F{s}{0}\land\F{k}{0} & \textsf{ST} &
\F{s}{0}\land\F{k}{0} & \F{s}{-}\land\F{k}{-} & \F{s}{-}\land\F{k}{-} & \F{s}{-!}\land\F{k}{-!} & \F{s}{-!!}\land\F{k}{-!!}\\
\hline
\F{p}{+!!}\land\F{d}{-!!} & \F{p}{+!}\land\F{d}{-!} & \F{p}{+}\land\F{d}{-} & \F{p}{+}\land\F{d}{-} & \F{p}{0}\land\F{d}{0} & \textsf{AD} &
\F{p}{0}\land\F{d}{0} & \F{p}{-}\land\F{d}{+} & \F{p}{-}\land\F{d}{+} & \F{p}{-!}\land\F{d}{+!} & \F{p}{-!!}\land\F{d}{+!!}\\
\hline
\F{s}{+!!!}\lor\F{s}{-!!!} & \F{s}{+!!!}\lor\F{s}{-!!!} & \F{s}{+!!}\lor\F{s}{-!!} & \F{s}{+!!}\lor\F{s}{-!!} & \F{s}{+!}\lor\F{s}{-!} & \textsf{LE} & \F{s}{+!}\lor\F{s}{-!} & \F{s}{+}\lor\F{s}{-} & \F{s}{+}\lor\F{s}{-} & \F{s}{0} & \F{s}{0}\\
\hline
\F{s}{+!!}\land\F{k}{+!!}\land\F{p}{-!!} & \F{s}{+!}\land\F{k}{+!}\land\F{p}{-!} & \F{s}{+}\land\F{k}{+}\land\F{p}{-} & \F{s}{+}\land\F{k}{+}\land\F{p}{-} & \F{s}{0}\land\F{k}{0}\land\F{p}{0} & \textsf{SR} &
\F{s}{0}\land\F{k}{0}\land\F{p}{0} & \F{s}{-}\land\F{k}{-}\land\F{p}{+} & \F{s}{-}\land\F{k}{-}\land\F{p}{+} & \F{s}{-!}\land\F{k}{-!}\land\F{p}{+!} & \F{s}{-!!}\land\F{k}{-!!}\land\F{p}{+!!}\\
\hline
\F{d}{+!!}\land\F{m}{+!!} & \F{d}{+!}\land\F{m}{+!} & \F{d}{+}\land\F{m}{+} & \F{d}{+}\land\F{m}{+} & \F{d}{0}\land\F{m}{0} & \textsf{AW} &
\F{d}{0}\land\F{m}{0} & \F{d}{-}\land\F{m}{-} & \F{d}{-}\land\F{m}{-} & \F{d}{-!}\land\F{m}{-!} & \F{d}{-!!}\land\F{m}{-!!}\\
\hline
(\F{k}{\pm_{!}}\lor\F{k}{\pm^{!}})\land\F{p}{0} & (\F{k}{\pm_{!}}\lor\F{k}{\pm^{!}})\land\F{p}{0} & \F{k}{\pm}\land\F{p}{0} & \F{k}{\pm}\land\F{p}{0} & \F{k}{0}\land\F{p}{0} & \textsf{PI} &
\F{k}{0}\land\F{p}{0} & \F{k}{0}\land\F{p}{\pm} & \F{k}{0}\land\F{p}{\pm} & \F{k}{0}\land(\F{p}{\pm_{!}}\lor\F{p}{\pm^{!}}) & \F{k}{0}\land(\F{p}{\pm_{!}}\lor\F{p}{\pm^{!}})\\
\hline
\F{k}{0}\land\F{p}{-!!} & \F{k}{0}\land\F{p}{-!} & \F{k}{0}\land\F{p}{-} & \F{k}{0}\land\F{p}{-} & \F{k}{0}\land\F{p}{0} & \textsf{OT} &
\F{k}{0}\land\F{p}{0} & \F{k}{\pm}\land\F{p}{+} & \F{k}{\pm}\land\F{p}{+} & (\F{k}{\pm_{!}}\lor\F{k}{\pm^{!}})\land\F{p}{+!} & (\F{k}{\pm_{!}}\lor\F{k}{\pm^{!}})\land\F{p}{+!!}\\
\hline
\F{k}{+!!}\land\F{p}{0} & \F{k}{+!}\land\F{p}{0} & \F{k}{+}\land\F{p}{0} & \F{k}{+}\land\F{p}{0} & \F{k}{0}\land\F{p}{0} & \textsf{AP} &
\F{k}{0}\land\F{p}{0} & \F{k}{-}\land\F{p}{\pm} & \F{k}{-}\land\F{p}{\pm} & \F{k}{-!}\land(\F{p}{\pm_{!}}\lor\F{p}{\pm^{!}}) & \F{k}{-!!}\land(\F{p}{\pm_{!}}\lor\F{p}{\pm^{!}})\\
\hline
\F{e}{+!!}\land\F{d}{-!!} & \F{e}{+!}\land\F{d}{-!} & \F{e}{+}\land\F{d}{-} & \F{e}{+}\land\F{d}{-} & \F{e}{0}\land\F{d}{0} & \textsf{TS} &
\F{e}{0}\land\F{d}{0} & \F{e}{-}\land\F{d}{+} & \F{e}{-}\land\F{d}{+} & \F{e}{-!}\land\F{d}{+!} & \F{e}{-!!}\land\F{d}{+!!}\\
\hline
\F{hy}{-!!}\land\F{p}{-!!}\land\F{d}{-!!} & \F{hy}{-!}\land\F{p}{-!}\land\F{d}{-!} & \F{hy}{-}\land\F{p}{-}\land\F{d}{-} & \F{hy}{-}\land\F{p}{-}\land\F{d}{-} & \F{hy}{0}\land\F{p}{0}\land\F{d}{0} & \textsf{TI} &
\F{hy}{0}\land\F{p}{0}\land\F{d}{0} & \F{hy}{+}\land\F{p}{+}\land\F{d}{+} & \F{hy}{+}\land\F{p}{+}\land\F{d}{+} & \F{hy}{+!}\land\F{p}{+!}\land\F{d}{+!} & \F{hy}{+!!}\land\F{p}{+!!}\land\F{d}{+!!}\\
\hline
\end{array}$}
\label{table:PPPtoLPL}
\end{sidewaystable}
\item the function space $(2^{\textrm{PPP}}\to\textrm{LPL}(\mathbb{A}))$ such that for every $F\in2^{\textrm{PPP}}$,
$$\mathrm{f}(F) = \bigwedge\{\,\mathrm{f}(f) \mid f\in F\,\}\,;$$
\end{itemize}
\item $\mathrm{p}$ be defined in the function space $(\textrm{SPP}\to\textrm{LPL}(\mathbb{A}))$ such that
$$\begin{array}{@{}r@{}}
\mathrm{p}(((\F{h}{,s_{1}}), (\F{s}{,s_{2}}), (\F{e}{,s_{3}}), (\F{hy}{,s_{4}}),
(\F{k}{,s_{5}}), (\F{p}{,s_{6}}), (\F{d}{,s_{7}}), (\F{m}{,s_{8}})))=\\
\F{h}{s_{1}}\land\F{s}{s_{2}}\land\F{e}{s_{3}}\land\F{hy}{s_{4}}\land\F{k}{s_{5}}\land\F{p}{s_{6}}\land\F{d}{s_{7}}\land\F{m}{s_{8}}
\end{array}$$
and in the function space $(2^{\textrm{SPP}}\to\textrm{LPL}(\mathbb{A}))$ such that for every $P\in2^{\textrm{SPP}}$,
$$ \mathrm{p}(P) = \bigvee\{\, \mathrm{p}(p) \mid p\in P\,\}\,.$$
\end{itemize}
Then, the mapping
\begin{itemize}
\item $\rightG{}:\mathcal{PPP}\to\mathcal{SPP}$ defined such that for every $F\in2^{\textrm{PPP}}$,
$$\rightG{F} = \{\,p\in\textrm{SPP} \mid \mathrm{p}(p)\Rightarrow\mathrm{f}(F)\,\}$$
is the so-called \emph{right polarity} and
\item $\leftG{}:\mathcal{SPP}\to\mathcal{PPP}$ defined such that for every $P\in2^{\textrm{SPP}}$,
$$\leftG{P} = \{\,f\in\textrm{PPP} \mid \mathrm{p}(P)\Rightarrow\mathrm{f}(f)\,\}$$
is the so-called \emph{left polarity} of the ordered pair $(\,\rightG{},\leftG{}\,)$.
\end{itemize}
\end{definition}
\noindent
Spelled out,
(1) the result of
applying the mapping $\mathrm{f}$ to
a set $F$ of PPPs $f$ as defined in Definition~\ref{definition:MappingsAndMorphisms} is
the conjunction of the results of
applying $\mathrm{f}$ to
each one of these $f$, which in turn
is the conjunction of the results of
applying $\mathrm{f}$ to
each one of the factor-value pairs in $f$ as
defined in Table~\ref{table:PPPtoLPL};
(2) the result of
applying the mapping $\mathrm{p}$ to
a set $P$ of SPPs $p$ as defined in Definition~\ref{definition:MappingsAndMorphisms} is
the disjunction of the results of
applying $\mathrm{p}$ to
each one of these $p$, which
simply is the conjunction of
all signed factors in $p$ taken each one as an atomic proposition;
(3) the result of
applying the mapping $\rightG{}$ to
a set $F$ of PPPs is
the set of all those SPPs $p$ whose
mapping under $\mathrm{p}$ implies the mapping of $F$ under $\mathrm{f}$;
(4) the result of
applying the mapping $\leftG{}$ to
a set $P$ of SPPs is
the set of all those PPPs $f$ whose
mapping under $\mathrm{f}$ is implied by the mapping of $P$ under $\mathrm{p}$.
Thus from a computer science perspective \cite[Section~7.35]{DaveyPriestley},
PPPs are specifications of SPPs and
SPPs are implementations or refinements of PPPs.
The Galois-connection then connects correct implementations to their respective specification by
stipulating that a correct implementation imply its specification.
By convention,
$\bigwedge\emptyset=\top$ and $\bigvee\emptyset=\bot$\,, that is,
the conjunction over the empty set $\emptyset$ is tautological truth $\top$\,, and
the disjunction over $\emptyset$ is tautological falsehood $\bot$\,, respectively.
Note that an example of an SPP that
maps to the empty set under $\leftG{}$ happens to be the Szondi norm profile mentioned before, because
its mapping under $\mathrm{p}$
$$\begin{array}{r}
\mathrm{p}(((\F{h}{,+}), (\F{s}{,+}), (\F{e}{,-}), (\F{hy}{,-}),
(\F{k}{,-}), (\F{p}{,-}), (\F{d}{,+}), (\F{m}{,+})))=\\
\F{h}{+}\land\F{s}{+}\land\F{e}{-}\land\F{hy}{-}\land\F{k}{-}\land\F{p}{-}\land\F{d}{+}\land\F{m}{+}\,,
\end{array}$$
does not meet any of our translations of Cattell's personality traits
$\textsf{B}$, $\textsf{G}$, $\textsf{H}$, $\textsf{M}$, $\textsf{Q3}$, $\textsf{PS}$, $\textsf{ST}$, $\textsf{LE}$, $\textsf{SR}$, $\textsf{PI}$, $\textsf{OT}$, $\textsf{AP}$, $\textsf{TS}$, and $\textsf{TI}$, as can be seen by inspecting Table~\ref{table:PPPtoLPL}.
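The two polarities can also be computed concretely. When the translations involved are plain conjunctions of atomic signed factors (unlike rows such as $\textsf{LE}$ or $\textsf{Q3}$ in Table~\ref{table:PPPtoLPL}, which contain disjunctions), the implication tests reduce to set inclusion among atom sets. The following Python sketch illustrates this special case on small hypothetical atom sets; it is not an implementation of the full table:

```python
# Profiles are modeled as frozensets of atomic signed factors, e.g. "h+", "k-".
# For pure conjunctions of atoms, "p(p) => f(F)" holds iff atoms(f(F)) is a
# subset of atoms(p(p)).

def right_polarity(spec_atoms, all_spps):
    """SPPs whose conjunction implies the specification f(F)."""
    return {p for p in all_spps if spec_atoms <= p}

def left_polarity(chosen_spps, all_ppps):
    """PPPs implied by p(P); since p(P) is a disjunction over P,
    every member of P must imply f(f) individually."""
    return {f for f in all_ppps if all(f <= p for p in chosen_spps)}
```

The conventions $\bigwedge\emptyset=\top$ and $\bigvee\emptyset=\bot$ fall out automatically here: an empty specification accepts every SPP, and an empty set of SPPs accepts every PPP.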
As can also be seen in Table~\ref{table:PPPtoLPL},
our interpretation of Cattell's scale is mostly the following:
Cattell's value $1$ becomes Szondi's signature $-!!$,
$2$ becomes $-!$,
$3$ and $4$ become $-$,
$5$ and $6$ become $0$,
$7$ and $8$ become $+$,
$9$ becomes $+!$, and
$10$ becomes $+!!$.
This corresponds to
how Szondi accounts for
the corresponding number of
portrait choices of the same kind in his test \cite{Szondi:ETD:Band1}:
the low range $1$--$5$ corresponds to the numbers $1$--$5$ of antipathy choices (portrait dislikes), respectively, and
the high range $6$--$10$ to the numbers $1$--$5$ of sympathy choices (portrait likes), respectively.
Of course, our readers may experiment with their own interpretation and accounting.
For example,
they might want to take into account also
Szondi's signatures $-!!!$ and $+!!!$ for pathologically strong, unambiguous negative and positive choices, respectively, and
adapt the scale accordingly.
Szondi's signatures $-$ and $+$ account for normally strong, unambiguous negative and positive choices, respectively.
Szondi's test also allows for ambiguous sets of (portrait) choices
(noted---``signed'' in Szondi's terminology---as $\pm$, $\pm_{!}$, and $\pm^{!}$).
This ambiguity turns out to be also useful in our translation in Table~\ref{table:PPPtoLPL}.
Observe that
(1) in the translation of the low-high range opposition,
we have made use of
signature opposition (polarity, e.g., $\F{h}{-}$ versus $\F{h}{+}$);
(2) abnormal personality traits translate all into psychological syndromes, that is,
conjunctions of signed factors; and
(3) any conjunctive low-range translation is
the conjunction of the opposed factors of the corresponding high range translation.
This last observation makes PPPs appear quite rigid, but
is justified by the (natural-language) definition---``descriptors'' in Cattell's terminology---of \mbox{Cattell's} personality traits \cite[Table~7.1]{16PFinHB}, which
we recall by annotating them with Szondi's signed factors (Cattell's commas correspond to conjunctions here):
\begin{enumerate}
\item Reserved [\F{h}{-}], Impersonal [\F{h}{-}], Distant [\F{h}{-}]---Warmth
(\textsf{A})---Warm-\linebreak hearted [\F{h}{+}], Caring [\F{h}{+}], Attentive To Others [\F{h}{+}];
\item Concrete [\F{k}{+}, having, matter], Lower Mental Capacity [\F{p}{-}, psychological projection, subjectivity]---Reasoning
(\textsf{B})---Abstract [\F{p}{+}, being, ideas], Bright [\F{k}{-}, \F{p}{+}], Fast-Learner [\F{p}{+}, intuition];
\item Reactive [\F{s}{-}, \F{d}{+}], Affected By Feelings [\F{d}{+}, depression]---Emotional Stability
(\textsf{C})---Emotionally Stable [\F{d}{-}], Adaptive Mature [\F{d}{\pm}];
\item Deferential [\F{s}{-}], Cooperative [\F{s}{-}], Avoids Conflict [\F{s}{-}]---Dominance\linebreak
(\textsf{E})---Dominant [\F{s}{+}], Forceful [\F{s}{+}], Assertive [\F{s}{+}];
\item Serious [\F{k}{-}], Restrained [\F{k}{-}], Careful [\F{k}{-}]---Liveliness
(\textsf{F})---Enthusiastic [\F{k}{+}], Animated [\F{k}{+}], Spontaneous [\F{k}{+}];
\item Expedient [\F{e}{-}, \F{hy}{+}, \F{k}{+}], Nonconforming [\F{e}{-}, \F{hy}{+}]---Rule-Consciousness
(\textsf{G})---Rule-Conscious [\F{e}{+}, \F{hy}{-}, \F{k}{-}], Dutiful [\F{e}{+}];
\item Shy [\F{hy}{-}], Timid [\F{hy}{-}], Threat-Sensitive [\F{d}{-}]---Social Boldness
(\textsf{H})---Socially Bold [\F{hy}{+}], Venturesome [\F{d}{+}], Thick-Skinned [\F{h}{0}];
\item Tough [\F{h}{0}, \F{hy}{+}, \F{p}{+}], Objective [\F{p}{+}], Unsentimental [\F{h}{-}]---Sensitivity
(\textsf{I})---Sensitive [\F{h}{+}, \F{hy}{-}, \F{p}{-}], Aesthetic [\F{h}{+}], Tender-Minded [\F{h}{+}, \F{p}{-}];
\item Trusting [\F{p}{+}, \F{m}{+}], Unsuspecting [\F{p}{+}], Accepting [\F{k}{+}]---Vigilance
(\textsf{L})---Vigilant [\F{p}{-}], Suspicious [\F{p}{-}], Skeptical [\F{k}{-}], Wary [\F{p}{-}];
\item Practical [\F{p}{-}], Grounded [\F{p}{-}], Down-To-Earth [\F{p}{-}]---Abstractedness\linebreak
(\textsf{M})---Abstracted [\F{p}{+}], Imaginative [\F{p}{+}], Idea-Oriented [\F{p}{+}];
\item Forthright [\F{hy}{+}], Genuine [\F{hy}{+}], Artless [\F{hy}{+}]---Privateness
(\textsf{N})---Private [\F{hy}{-}], Discreet [\F{hy}{-}], Non-Disclosing [\F{hy}{-}];
\item Self-Assured [\F{p}{+}], Unworried [\F{p}{+}], Complacent [\F{p}{+}]---Apprehension\linebreak
(\textsf{O})---Apprehensive [\F{p}{-}], Self-Doubting [\F{p}{-}], Worried [\F{p}{-}];
\item Traditional [\F{d}{-}], Attached To Familiar [\F{d}{-}]---Openness to Change
(\textsf{Q1})---Open To Change [\F{d}{+}], Experimenting [\F{d}{+}];
\item Group-Oriented [\F{d}{+}, \F{m}{+}], Affiliative [\F{d}{+}, \F{m}{+}]---Self-Reliance
(\textsf{Q2})---Self-Reliant [\F{d}{-}, \F{m}{-}], Solitary [\F{d}{-}, \F{m}{-}], Individualistic [\F{d}{-}, \F{m}{-}];
\item Tolerates Disorder [\F{k}{0}], Unexacting [\F{k}{0}], Flexible [\F{k}{0}]---Perfectionism\linebreak
(\textsf{Q3})---Perfectionistic [\F{k}{\pm}], Organized [\F{k}{-}], Self-Disciplined [\F{k}{\pm}];
\item Relaxed [\F{e}{+}], Placid [\F{e}{+}], Patient [\F{e}{+}]---Tension
(\textsf{Q4})---Tense [\F{e}{-}], High Energy [\F{e}{-}], Driven [\F{e}{-}].
\end{enumerate}
Cattell's global personality factors
(Cattell's ``Big Five''),
defined as groups of 16PF primary traits \cite[Table~7.2]{16PFinHB},
can then simply be translated as
disjunctions of the translations of
the corresponding primary traits.
That is, for every value $v\in\{1,2,3,4,5,6,7,8,9,10\}$ (reversed traits enter with the reflected value $11-v$, which again lies in $\{1,\ldots,10\}$):
\begin{eqnarray*}
\text{Extraversion $v$} & = &
\begin{array}[t]{@{}l@{}}
\mathrm{f}((\textsf{A},v))\lor\mathrm{f}((\textsf{F},v))\lor\mathrm{f}((\textsf{H},v))\,\lor\\
\mathrm{f}((\textsf{N},11-v))\lor\mathrm{f}((\textsf{Q2},11-v))
\end{array}\\
\text{High Anxiety $v$} & = &
\mathrm{f}((\textsf{C},v))\lor\mathrm{f}((\textsf{L},v))\lor\mathrm{f}((\textsf{O},v))\lor\mathrm{f}((\textsf{Q4},v))\\
\text{Tough-Mindedness $v$} & = &
\mathrm{f}((\textsf{A},11-v))\lor\mathrm{f}((\textsf{I},11-v))\lor\mathrm{f}((\textsf{M},v))\lor\mathrm{f}((\textsf{Q1},v))\\
\text{Independence $v$} & = &
\mathrm{f}((\textsf{E},v))\lor\mathrm{f}((\textsf{H},v))\lor\mathrm{f}((\textsf{L},11-v))\lor\mathrm{f}((\textsf{Q1},v))\\
\text{Self-Control $v$} & = &
\mathrm{f}((\textsf{F},11-v))\lor\mathrm{f}((\textsf{G},v))\lor\mathrm{f}((\textsf{M},11-v))\lor\mathrm{f}((\textsf{Q3},v))
\end{eqnarray*}
Like in \cite{arXiv:1403.2000v1},
we now prove in two intermediate steps that
the pair $(\,\rightG{},\leftG{}\,)$ is indeed a Galois-connection.
The first step is the following announced fact, from which
the second step, Lemma~\ref{lemma:Properties}, follows, from which in turn
the desired result, Theorem~\ref{theorem:Galois}, then follows---easily.
As announced,
all that our readers need to check on their own analog of our mapping $\mathrm{f}$ is
that it has the property stated as Fact~\ref{fact:FactsAboutip}.1.
Their own Galois-connection is then automatically induced.
\begin{fact}[Some facts about $\mathrm{f}$ and $\mathrm{p}$]\label{fact:FactsAboutip}\
\begin{enumerate}
\item if $F\subseteq F'$ then $\mathrm{f}(F')\Rightarrow\mathrm{f}(F)$
\item if $P\subseteq P'$ then $\mathrm{p}(P)\Rightarrow\mathrm{p}(P')$
\item The function $\mathrm{p}$ but not the function $\mathrm{f}$ is injective, and
neither is surjective.
\end{enumerate}
\end{fact}
\begin{proof}
By inspection of Definition~\ref{definition:MappingsAndMorphisms} and Table~\ref{table:PPPtoLPL}.
\end{proof}
\noindent
Like in \cite{arXiv:1403.2000v1},
we need
Fact~\ref{fact:FactsAboutip}.1 and \ref{fact:FactsAboutip}.2 but
not Fact~\ref{fact:FactsAboutip}.3 in the following development.
Therefore, note the two macro-definitions
$\rightleftG{}:=\rightG{}\circ\leftG{}$ and
$\leftrightG{}:=\leftG{}\circ\rightG{}$ with
$\circ$ being function composition, as usual (from right to left, as usual too).
\begin{lemma}[Some useful properties of $\rightG{}$ and $\leftG{}$]\label{lemma:Properties}\
\begin{enumerate}
\item if $F\subseteq F'$ then $\rightG{F'}\subseteq\rightG{F}$\quad(\;$\rightG{}$ is antitone)
\item if $P\subseteq P'$ then $\leftG{P'}\subseteq\leftG{P}$\quad(\,$\leftG{}$ is antitone)
\item $P\subseteq\rightG{(\leftG{P})}$\quad(\;$\rightleftG{}$ is inflationary)
\item $F\subseteq\leftG{(\rightG{F})}$\quad(\,$\leftrightG{}$ is inflationary)
\end{enumerate}
\end{lemma}
\begin{proof}
Like in \cite{arXiv:1403.2000v1}.
\end{proof}
\noindent
We are now ready to make the final step.
\begin{theorem}[The Galois-connection property of $(\,\rightG{},\leftG{}\,)$]\label{theorem:Galois}
The ordered pair $(\,\rightG{},\leftG{}\,)$ is an \emph{antitone} or \emph{order-reversing Galois-connection} between $\mathcal{PPP}$ and $\mathcal{SPP}$.
%
That is,
for every $F\in2^{\textrm{PPP}}$ and $P\in2^{\textrm{SPP}}$,
$$\text{$P\subseteq\rightG{F}$ if and only if $F\subseteq\leftG{P}$.}$$
\end{theorem}
\begin{proof}
Like in \cite{arXiv:1403.2000v1}.
\end{proof}
\noindent
Thus from a computer science perspective \cite[Section~7.35]{DaveyPriestley},
smaller (larger) sets of PPPs and thus less (more) restrictive specifications correspond to
larger (smaller) sets of SPPs and thus more (fewer) possible implementations.
Note that Galois-connections are
connected to \emph{residuated mappings} \cite{LatticesAndOrderedAlgebraicStructures}.
Further,
natural notions of equivalence on $\mathcal{PPP}$ and $\mathcal{SPP}$ are given by
the \emph{kernels} of $\rightG{}$ and $\leftG{}$, respectively, which are, by definition:
$$\begin{array}{rcl}
F\equiv F' &\text{if and only if}& \rightG{F}=\rightG{F'}\;;\\[\jot]
P\equiv P' &\text{if and only if}& \leftG{P}=\leftG{P'}\,.
\end{array}$$
\begin{proposition}[The computability of $(\,\rightG{},\leftG{}\,)$]\
\begin{enumerate}
\item Given $F\in2^{\textrm{PPP}}$, $\rightG{F}$ is computable.
\item Given $P\in2^{\textrm{SPP}}$, $\leftG{P}$ is computable.
\end{enumerate}
\end{proposition}
\begin{proof}
Similar to \cite{arXiv:1403.2000v1}, but
with the difference that
the Galois-connection there is efficiently computable,
whereas the one here is only so for small sets $F$ and $P$
(which in practice are usually singletons containing a single personality profile).
\end{proof}
\section{Conclusion}
We have proposed a computable Galois-connection between
PsychEval Personality Profiles (including the 16PF Personality Profiles) and
Szondi's personality profiles,
as promised in the abstract and
as a further illustration of
our simple methodology introduced in \cite{arXiv:1403.2000v1}
for generating such Galois-connections.
\paragraph{Acknowledgements}
The \LaTeX-package TikZ was helpful for graph drawing.
\bibliographystyle{plain}
\section{Introduction}\label{sec:1}
As a major part of beyond 5G (B5G) or 6G network scenarios, autonomous urban aerial mobility (UAM) systems are widely and actively discussed by industry and academia~\cite{nm20saad}. Based on this huge interest, many research contributions are available nowadays on UAM trajectory optimization~\cite{tvt19yin}, energy-efficient operations~\cite{tvt19shin}, and so forth.
The trajectory optimization and energy-efficient operations fundamentally control the mobility of autonomous UAM systems.
Therefore, conducting visual simulations of the proposed learning-based trajectory optimization and energy-efficient operation algorithms is essential for intuitively understanding the behaviors of the UAM algorithms.
In addition, most trajectory optimization algorithms are designed via deep reinforcement learning (DRL), because DRL algorithms are fundamentally designed for sequential stochastic decision making that maximizes cumulative expected rewards. Therefore, UAM simulations should efficiently identify the DRL-based autonomous UAM flying trajectories and operations, and visual representation of the simulations clearly helps in intuitively understanding these algorithms.
In this paper, we implement our own 3D visualization software platform for simulating DRL-based autonomous trajectory control using Unity. In order to conduct more precise simulations, we add buildings to model an urban environment, because we assume urban aerial mobility providing smart-city services such as surveillance and flexible mobile access.
\section{Unity Implementation and 3D Visualization}\label{sec:2}
\subsection{Unity Implementation}
Fig.~\ref{fig:1} illustrates the system architecture for conducting DRL in the Unity environment. With the Unity API, it is possible to model the environment, dynamic models and features, and DRL elements (\textit{i.e.}, states, actions, transitions, and rewards); these are called \textit{Assets} in Unity.
With \texttt{mlagents} (\textit{i.e.}, the Unity library for DRL implementation), training DRL agents and visualizing the training results can be realized because 1) a \textit{Communicator} exists which realizes the interaction with the Python API and 2) \textit{Assets} can be loaded. For the simulations of UAM systems, the Unity \textit{Asset} named \textit{Drone Flight} is used.
The aerial mobility system under consideration is UAM; thus, the simulations should be performed in urban areas that contain numerous buildings and skyscrapers. Therefore, we implement buildings and skyscrapers in \textit{Drone Flight}. Furthermore, the environment information observable by the agent consists of the current position, goal position, current velocity, current angular velocity, altitude vector, and building/skyscraper position vectors. The actions control the UAM motors toward desired directions; thus, 3D Cartesian coordinates are used, \textit{i.e.}, $(x,y,z)$. Lastly, the reward is positive when the agent arrives at the destination, whereas it is negative when 1) the agent moves away from the
goal or 2) the agent comes closer to buildings, skyscrapers, and obstacles.
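This reward design can be sketched as follows; the arrival bonus, goal radius, and safety radius are illustrative assumptions rather than the exact constants of our implementation:

```python
import numpy as np

def uam_reward(pos, goal, prev_pos, obstacles, safe_radius=5.0, goal_radius=1.0):
    """Illustrative shaping of the rewards described above (constants are assumptions)."""
    pos, goal, prev_pos = map(np.asarray, (pos, goal, prev_pos))
    if np.linalg.norm(goal - pos) < goal_radius:
        return 10.0                                    # positive: arrived at the goal
    r = 0.0
    # negative: the agent moved farther away from the goal
    r -= np.linalg.norm(goal - pos) - np.linalg.norm(goal - prev_pos)
    # negative: the agent is closer to a building/obstacle than the safety radius
    for ob in obstacles:
        d = np.linalg.norm(np.asarray(ob) - pos)
        if d < safe_radius:
            r -= safe_radius - d
    return r
```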
Fig.~\ref{fig:2} shows the Unity implementation results in this UAM environment. Note that several buildings are added for urban scenario construction.
\begin{figure}[t!]
\centering
\includegraphics[width=0.81\columnwidth]{PPT-1.pdf}
\caption{Software architecture for learning environment.}
\label{fig:1}
\vspace{-2mm}
\end{figure}
\subsection{Visualization}
Based on our Unity implementation on top of \textit{Drone Flight}, we conduct DRL-based agent trajectory training and performance evaluation, and the results are visualized.
For the DRL training of the agent, proximal policy optimization (PPO) is used~\cite{ppo}. The agent's policy is trained by a deep neural network with two dense layers of 128 units each. In addition, $\epsilon$-greedy exploration is used during DRL training, where $\epsilon=0.2$.
Furthermore, the multi-agent parallel processing supported by \texttt{mlagents} is utilized; thus, parallel accelerated training computation is realized. The number of learning iterations is set to $3,000$\,K, and the detailed hardware/software specification is summarized in Table~\ref{tab:param1}.
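For reference, PPO's clipped surrogate objective~\cite{ppo}, which underlies the trainer used here, can be sketched in a few lines of numpy. This is a schematic illustration, not \texttt{mlagents} code, and the clip parameter $0.2$ is the conventional default:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, clip_eps=0.2):
    """PPO clipped surrogate (negated, so minimizing it maximizes the objective).
    ratio = pi_new(a|s) / pi_old(a|s); advantage = estimated advantage A(s, a)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))
```

The clipping removes the incentive to move the policy ratio outside $[1-\epsilon, 1+\epsilon]$, which keeps each policy update conservative.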
\begin{table}[t
\caption{Hardware and software specification}
\footnotesize
\label{tab:param1}
\begin{center}
\centering
\begin{tabular}{l|l}
\toprule[1.0pt]
\centering
System & Specification \\
\midrule[1.0pt]
CPU & $\bullet$ Intel(R) Core(TM) i7-9700k, 3.60GHz@2 \\
& \hspace{1mm} RAM: 64\,GB \\
\midrule
GPU & $\bullet$ NVIDIA GeForce GTX 1660 super \\
& \hspace{1mm} The number of cores: $1,408$ \\
& \hspace{1mm} Frame buffer: 6GB GDDR6 \\
& \hspace{1mm} Memory speed: 14 Gbps \\
\midrule
Software & $\bullet$ Unity: 20.1.17f1 \\
Ver. & $\bullet$ \texttt{mlagents}: 0.7 \\
& $\bullet$ tensorflow: 1.12 \\
\bottomrule[1.0pt]
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.8\linewidth]{infocom_fig2.pdf}
\caption{Unity implementation in UAM environment.}
\label{fig:2}
\vspace{-2mm}
\end{figure*}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{2_fig3.png}
\caption{Rewards of autonomous aerial mobility learning.}
\label{fig:3}
\vspace{-2mm}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\columnwidth]{infocom-fig4_size_modified.pdf}
\caption{Visualization of autonomous aerial mobility learning.}
\label{fig:4}
\vspace{-2mm}
\end{figure}
Fig.~\ref{fig:3} and Fig.~\ref{fig:4} show the simulation results. In Fig.~\ref{fig:3}, the reward convergence is plotted, and we can confirm that the reward eventually converges.
Fig.~\ref{fig:4} shows the time-series visual simulations of the DRL-trained UAM agent's behaviors.
It can be observed that 1) the UAM agent moves toward its own goal when $t \in [0, T]$ and 2) the UAM tries to avoid buildings, skyscrapers, and obstacles when $t \in [0, \frac{2}{8}T]$, as designed in the rewards. Note that the behavior of moving toward the goal can be observed during all time steps.
Finally, we observe that the DRL reward converges and the corresponding DRL-based agent controls its own trajectory based on the positive and negative reward settings. Therefore, we can confirm that our DRL-based agent works as desired, and the results are simulated and visualized via our own Unity-based visual simulation software platform.
Note that a video demonstration of our simulation and 3D visualization results is available in~\cite{youtube}.
\section{Conclusions and Future Work}\label{sec:3}
This demo abstract presents the implementation and visualization of DRL-based autonomous UAM simulations. Furthermore, various buildings can be placed for smart-city urban environment simulations. As future work, various urban scenarios can also be considered.
\section*{Acknowledgment}
This research is supported by National Research Foundation of Korea (2019R1A2C4070663 and 2019M3E4A1080391).
S. Jung, J. Kim, and J.-H. Kim are corresponding authors.
\section{Introduction}\label{introduction}
As a powerful tool of machine learning, an artificial neural network (ANN) can closely approximate any function with multiple hidden-layer nodes and nonlinear activation functions \cite{Goodfellow-et-al-2016}. Owing to its capability of universal approximation \cite{HORNIK1991251}, an ANN can be trained as an excellent classifier using the famous back-propagation (BP) algorithm driven by ``big data''; thus, the ANN has in recent years been applied to various fields, like computer vision \cite{LDCT19}, natural language processing \cite{tan2019multilingual} and so on.
Efforts have also been made to use ANN for solving some signal processing problems in communications. For example, in \cite{YLJ18} and \cite{SDW19}, ANN has been used to obtain the channel state information. This kind of methods can have good performance in certain scenarios, but commonly lacks interpretability and generalization capability, probably because the {\it nonlinear} activation functions in the ANN neurons, such as sigmoid, softmax, ReLU, are defined from a mathematical perspective but lack clear physical meanings. For the same reason, an ANN is usually regarded as a data-driven tool rather than a model-based one, despite some recent efforts to combine the ANN with certain model-based domain knowledge as done in \cite{liaozhaogaoli2020,shlezinger2020viterbinet,yangdujiang2021}.
In this paper, we propose the concept of ``model-based neural network'' (MNN), which stems from the first author's previous work in relay network optimization \cite{WangJiang2020}\cite{WangJiang2022} (but termed as {\it quasi-neural network} therein). The MNN has the same layered form as the ANN but with the input, the output, and the activation functions being artfully designed to have explicit physical meanings. Hence, the MNN can be regarded as an evolution of the classic ANN towards a fully interpretable modeling tool. The MNN has the key features as follows.
\begin{itemize}
\item Different from an ANN as a data-driven classifier, which lacks interpretability and generalization capability \cite{FXLW21, ZLGW03}, the MNN is a modeling tool with clear physical meaning, and hence is fully interpretable;
\item Similar to an ANN as a universal approximator, the MNN has the layered structure and can be a universal ``modeller'' by artfully choosing the input, the output, and the nonlinear activation functions;
\item Owing to the layered structure, the MNN can be efficiently optimized using the BP algorithm based on the chain rule of derivative.
\end{itemize}
As a showcase application of the MNN,
we apply it to the classic problem of spectral estimation \cite{stoica2005spectral}.
Spectral estimation has been actively researched for several decades, due to its wide applications in radar \cite{YLSXB2010}, medical imaging \cite{LDSP2008}, wireless communications \cite{RB1998}, and autonomous-driving vehicles \cite{SPP20}, etc.
Three categories of methods have been developed for spectral estimation in the past several decades. The first category is non-parametric, among which the fast Fourier transform (FFT) is the simplest and most widely-used. This category of methods, however, often suffers from low resolution and high false alarm probability.
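For concreteness, this simplest FFT-based estimate, which also serves later as our initialization, takes only a few lines of numpy (the padding factor below is an illustrative choice):

```python
import numpy as np

def periodogram(y, pad_factor=4):
    """FFT-based (non-parametric) spectral estimate; zero padding only refines
    the evaluation grid, not the resolution."""
    M = pad_factor * len(y)
    Y = np.fft.fft(y, M) / len(y)
    freqs = 2.0 * np.pi * np.arange(M) / M      # digital angular frequency grid
    return freqs, np.abs(Y) ** 2                # estimated power on the grid
```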
The second category parameterizes the signal and estimates the parameters based on the maximum likelihood (ML) criterion. The ML estimation is asymptotically statistically efficient under white Gaussian noise if the number of the sinusoids is known and the global optimum is achieved \cite{stoica2005spectral}, but it is computationally very involved due to the non-convexity of the problem. The state-of-the-art methods include the RELAX method \cite{listoica1996efficient,liuli1998implement}, the atomic norm based method \cite{bhaskar2013atomic}\cite{yang2015gridless}, and the Newtonized Orthogonal Matching Pursuit (NOMP) method \cite{MRM16}. The NOMP method also proposes an approach to determine the number of the sinusoids, i.e., the model order. Both the RELAX and NOMP methods are essentially coordinate descent methods and hence can be time-consuming when the model order is large. The atomic norm based methods rely on semidefinite programming (SDP) and hence are computationally complicated, especially in high-dimension scenarios. The MUSIC algorithm \cite{R1986} and the ESPRIT algorithm \cite{RoyKailath1989}, as two famous parametric methods, cannot yield real ML estimates even with a known number of sinusoids, and hence are not statistically efficient.
The third category is the semi-parametric methods, intermediate between the first and the second categories. These methods utilize signal sparsity and solve a convex problem. They can often achieve higher resolution and lower false-alarm probability than the non-parametric methods, but they are not as theoretically robust as the parametric ones, because asymptotic efficiency is not guaranteed \cite{SBL11,SZL14}. Moreover, the semi-parametric methods are grid-based, so their performance is confined to the granularity of the grid points.
In this paper, we apply the MNN as a new solution to the problem of line spectral estimation. It belongs to the second category, i.e., it is a parametric method. But it can solve the non-convex problem efficiently and outperform the state-of-the-art methods. Specifically, we use the time index as the MNN's input, use the complex amplitudes and digital angular frequencies as the network's weights, and use the complex exponential function as the nonlinear activation function. Based on the cost function of fitting residual, the BP method \cite{Goodfellow-et-al-2016} is then used to train this MNN. We first obtain the coarse initial estimates of the frequencies and the amplitudes using the simple FFT-based spectral estimation method, which usually provides good initialization for the BP algorithm to find a global optimum. The resultant optimized weights of the MNN are nothing but the optimal estimates of the frequencies and the amplitudes of the sinusoids.
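This training loop can be sketched in a few lines of numpy. In this sketch the amplitude weights are re-fit in closed form by least squares at each iteration instead of by a gradient step, and the per-node step size for the frequency weights is a heuristic normalization; both are illustrative choices rather than the exact procedure detailed later:

```python
import numpy as np

def mnn_spectral_fit(y, K, iters=2000):
    """Fit y[n] ~ sum_k a_k exp(j w_k n); the w_k and a_k play the role of weights."""
    N = len(y)
    n = np.arange(N)
    pad = 8 * N                                    # FFT-based initialization
    Y = np.fft.fft(y, pad) / N
    mag, peaks = np.abs(Y), []
    for _ in range(K):
        p = int(np.argmax(mag))
        peaks.append(p)
        mag[max(p - pad // N, 0):p + pad // N + 1] = 0.0   # crude peak exclusion
    w = 2.0 * np.pi * np.array(peaks, dtype=float) / pad   # coarse frequency weights
    for _ in range(iters):
        E = np.exp(1j * np.outer(n, w))            # activation: complex exponentials
        a = np.linalg.lstsq(E, y, rcond=None)[0]   # amplitude weights (least squares)
        r = E @ a - y                              # fitting residual
        # back-propagated gradient of ||r||^2 with respect to each frequency weight
        g = 2.0 * np.real(np.sum(np.conj(r)[:, None] * 1j * n[:, None] * E * a, axis=0))
        w -= g / (np.abs(a) ** 2 * N ** 3 + 1e-12)         # heuristic step size
    E = np.exp(1j * np.outer(n, w))
    return w, np.linalg.lstsq(E, y, rcond=None)[0]
```

With noiseless, well-separated sinusoids this refines the coarse FFT grid estimates to essentially exact frequencies; under noise, the converged weights approach the (local) ML estimates discussed above.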
The contributions of this paper are summarized as follows:
1) We introduce the concept of MNN as a universal modeler with clear physical meaning, which may motivate future research beyond the spectral estimation and relay communications studied in \cite{WangJiang2022}.
2) We apply the MNN to solve the classic line spectral estimation problem. The detailed network structure and the derivation of the updating formulas are provided in this paper. Numerical examples show the feasibility of the MNN and its superior performance over some widely-used spectral estimation methods, such as the FFT, MUSIC \cite{R1986}, and the grid-based semi-parametric methods \cite{SBL11,SZL14}.
3) Model-order selection, i.e., determining the number of sinusoids, is a challenging aspect of spectral estimation. The classic model-order selection methods, such as the AIC \cite{A74} and the BIC \cite{SS2004}, need exhaustive searches across different model orders until finding the best one, which entails formidable computational complexity. For the MNN, each hidden node corresponds to a sinusoid; thus, model-order selection can be easily achieved via merging and pruning of the hidden-layer nodes. We present theoretically solid criteria for node merging and pruning.
The rest of this paper is organized as follows. In Section \ref{sec:model}, we introduce the signal model and formulate the problem. In Section \ref{sec:BP}, we apply the MNN to the classic spectral estimation problem and use the BP to train the network. We also show how to use FFT to initialize the MNN and how to determine the number of sinusoidal components in the line spectral signal by merging and pruning the network nodes. In Section \ref{sec:simulation}, we provide numerical simulation results to verify the effectiveness of our proposed method.
\textit{Notation:} We denote vectors and matrices by boldface lower-case and upper-case letters, respectively. $(\cdot)^T$, $(\cdot)^*$, and $(\cdot)^H$ denote the transpose, complex conjugate, and conjugate transpose, respectively. $\|\xbf\|_2$ denotes the $\ell_2$ norm of the vector $\xbf$. $\odot$ denotes the element-wise product of two matrices or two vectors. $\mathcal{CN}(0, \sigma^2)$ denotes the complex Gaussian distribution with zero mean and variance $\sigma^2$. $\mathcal{E}(1/\theta)$ denotes the exponential distribution with mean $\theta$. $\mathcal{X}^2_{\nu}$ is the chi-square distribution with $\nu$ degrees of freedom. $F_{\nu_1, \nu_2}$ denotes the F-distribution with degrees of freedom $\nu_1$ and $\nu_2$.
\section{Signal Model and Model-based Neural Network}\label{sec:model}
We first introduce the concept of the MNN using an example of signal processing for relay network communications, before showing that the MNN can also be applied to line spectral estimation.
\subsection{The MNN for Modeling Relay Networks}
\begin{figure}[htb]
\centering
\includegraphics[width=3.5in]{neuralNet.pdf}
\caption{A relay network shown in the upper subplot is analogous to a four-layer ANN shown in the lower subplot. }
\label{fig.neuralNet}
\end{figure}
As an illustrative example of the MNN, we recall the nonlinear relay beamforming network studied in \cite{WangJiang2020}. As shown in the upper subplot of Fig. \ref{fig.neuralNet}, we considered in \cite{WangJiang2020} the optimization of the precoding of the transmitter, the relay beamforming weights, and the receiver beamforming, denoted by $\ubf$, $\Vbf$, and $\wbf$, respectively, according to the minimum mean squared error (MMSE) criterion. The instantaneous power constraint
per transmit antenna is modeled by the {\it nonlinear} Soft Envelope Limiter (SEL) function
\begin{equation}\label{eq.sigma}
\sigma(x)\triangleq
\begin{cases}
x & {|x|\leq1}\\
e^{j\angle(x)} & {|x|>1}.
\end{cases}
\end{equation}
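For concreteness, the SEL nonlinearity (\ref{eq.sigma}) is a one-liner in numpy (a sketch of our own; the function name is ours):

```python
import numpy as np

def sel(x):
    """Soft Envelope Limiter: identity inside the unit circle, phase-only outside."""
    x = np.asarray(x, dtype=complex)
    return np.where(np.abs(x) <= 1, x, np.exp(1j * np.angle(x)))
```

Inputs within the unit circle pass through unchanged, while larger inputs keep only their phase, i.e., they are projected onto the unit circle.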
Then the nonlinear SEL is analogous to a nonlinear activation function of the conventional ANN. Combining $\Vbf$ with the source-to-relay channel (denoted as $\Hbf_r$ in the upper subplot of Fig. \ref{fig.neuralNet}) and combining $\wbf$ with the relay-to-destination channel (denoted as $\Hbf_d$), we can view the relay network as a four-layer ANN as illustrated in the lower subplot of Fig. \ref{fig.neuralNet}. Such a network has the same layered form as the ANN but with activation functions artfully designed with clear physical meaning. Owing to the layered form, the network can be optimized by the classic BP algorithm based on some pilot sequences.
\subsection{The MNN for Modeling Line Spectral Signals}
The classic problem of line spectral estimation relies on the signal model \cite{stoica2005spectral}:
\begin{equation}
\ybf = \xbf + \ebf \in\mathbb{C}^{N\times 1},
\label{equ.yn}
\end{equation}
where $\xbf$ is the sum of $K$ complex-valued sinusoidal signals, i.e., $x(n) = \sum_{k=1}^K\alpha_ke^{j\omega_kn},\ n=0,\dots,N-1$; $\alpha_k$ and $\omega_k\in \left[0,2\pi\right]$ are the complex-valued amplitude and digital angular frequency of the $k$-th complex exponential component, respectively; $\ebf$ is complex i.i.d. additive white Gaussian noise (AWGN) with zero mean and unknown variance $\sigma^2$, i.e., $e(n)\sim\mathcal{CN}(0, \sigma^2)$.
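As a concrete illustration, the model (\ref{equ.yn}) can be synthesized in a few lines of numpy; the frequencies below match the simulation of Section \ref{sec:simulation}, while the amplitudes and the noise level are arbitrary choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32
freqs = np.array([0.10, 0.115, 0.37])      # normalized frequencies (cycles/sample)
omegas = 2 * np.pi * freqs                 # digital angular frequencies omega_k
alphas = np.array([1.0 + 0.5j, 0.8 - 0.3j, 1.2 + 0.2j])  # complex amplitudes alpha_k
sigma2 = 0.1                               # noise variance sigma^2

n = np.arange(N)
# x(n) = sum_k alpha_k e^{j omega_k n}
x = np.exp(1j * np.outer(n, omegas)) @ alphas
# circularly symmetric complex AWGN: real and imaginary parts each have variance sigma2/2
e = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = x + e
```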
We propose to model the signal $\xbf$ (\ref{equ.yn}) using a network as shown in Fig. \ref{fig.NN}, where the input is the sequence
\begin{equation}
\nbf \triangleq \left[0,1,\dots, N-1 \right]^T\in{\mathbb R}^{N\times1},
\end{equation}
and the activation function of the $M$ neurons of the hidden layer is
\begin{equation} \sigma(z) = e^{jz}. \end{equation}
\begin{figure}[htb]
\centering
\includegraphics[width=3.2in]{oneHiddenLayerNNx.pdf}
\caption{The MNN for modeling the superimposed sinusoids.}
\label{fig.NN}
\end{figure}
Denote $\tilde{\omega}$'s as the weights connecting the input layer and the hidden layer, and $\tilde{\alpha}$'s as the weights connecting the hidden layer and the output layer. Then
we have
\begin{equation}
\begin{split}
\zbf_{i} = \tilde{\omega}_i\nbf,\quad &
\abf(\tilde{\omega}_i) = \sigma(\zbf_{i}) = \begin{pmatrix}
e^{jz_{i,1}} \\
e^{jz_{i,2}} \\
\vdots \\
e^{jz_{i,N}}
\end{pmatrix}
=\ \begin{pmatrix}
1 \\
e^{j \tilde{\omega}_i} \\
\vdots \\
e^{j\tilde{\omega}_i(N-1)}
\end{pmatrix}
\label{equ.ziai}
\end{split}
\end{equation}
and
\begin{equation}
\xbf = \sum_{i=1}^M\tilde{\alpha}_i\abf(\tilde{\omega}_i).
\label{equ.tildeyv1}
\end{equation}
Denote $\tilde{\alphabf} = [\tilde{\alpha}_1,\tilde{\alpha}_2,\dots,\tilde{\alpha}_M]^T\in{\mathbb C}^{M\times 1}$, and for notational simplicity, denote $\abf_{i} = \abf(\tilde{\omega}_i)$. Hence,
\begin{equation}
\Abf(\tilde{\omegabf}) = [\abf_{1},\abf_{2},\dots,\abf_{M}]\in{\mathbb C}^{N\times M}, \label{equ.Aalpha}
\end{equation}
and
\begin{equation}
\xbf = \Abf(\tilde{\omegabf})\tilde{\alphabf}.
\label{equx2}
\end{equation}
Note that $\xbf$ in (\ref{equx2}) is the same as that in (\ref{equ.yn}) except that $K$ is replaced by $M$, since the number of sinusoids $K$ is usually unknown in practice. Indeed, the estimation of the model order is challenging, which will be addressed in Section \ref{sec:PM}.
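In code, the matrix (\ref{equ.Aalpha}) and the forward pass (\ref{equx2}) amount to the following sketch (the function name and the example weights are ours):

```python
import numpy as np

def steering_matrix(omegas, N):
    """A(omega): the i-th column is [1, e^{j w_i}, ..., e^{j w_i (N-1)}]^T (N x M)."""
    n = np.arange(N)
    return np.exp(1j * np.outer(n, omegas))

# forward pass of the MNN: x = A(omega) alpha
N = 32
omegas = np.array([0.6, 0.9])                 # hidden-layer weights (frequencies)
alphas = np.array([1.0 + 1.0j, 0.5 - 0.2j])   # output-layer weights (amplitudes)
A = steering_matrix(omegas, N)
x = A @ alphas
```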
To estimate $\alpha_k$ and $\omega_k$, we adopt the cost function
\begin{equation}
C(\tilde{\omegabf},\tilde{\alphabf}) \triangleq ||\ybf-\Abf(\tilde{\omegabf})\tilde{\alphabf}||_2^{2},
\label{equ.NNCostFunc}
\end{equation}
to train the network. Once the training process converges to the global optimum, the weights, i.e., $\tilde{\omega}_i,\ \tilde{\alpha}_i,\ i= 1,2,\dots,M,$ are naturally the ML estimates of the signal model parameters and contain the complete information of the spectrum of $\ybf$.
The network is similar to a three-layer ANN (with one hidden layer) in its layered form; thus, the MNNs in both Figs. \ref{fig.neuralNet} and \ref{fig.NN} can be optimized by the BP algorithm. But the MNN differs from a conventional ANN in that its weights and activation functions have clear physical meaning; thus, the MNN can serve as a modeling tool rather than a data-driven classifier.
\section{Line Spectral Estimation Using MNN} \label{sec:BP}
\subsection{Network Optimization Using BP Algorithm}
To train the MNN, we calculate the gradients of (\ref{equ.NNCostFunc}) with respect to $\tilde{\omega}_i,\tilde{\alpha}_i$ using the BP algorithm, which is essentially a gradient descent method explained as follows.
First, by using the chain rule we have from (\ref{equ.NNCostFunc}) that
\begin{equation}
\frac{\partial C}{\partial \tilde{\alphabf}^*} = \frac{\partial \xbf}{\partial \tilde{\alphabf}^*}\frac{\partial C}{\partial \xbf}+\frac{\partial \xbf^*}{\partial \tilde{\alphabf}^*}\frac{\partial C}{\partial \xbf^*}.
\end{equation}
It follows from (\ref{equ.tildeyv1}) that
\begin{equation}
\frac{\partial \xbf}{\partial \tilde{\alphabf}^*} = {\bf 0} \;\; {\rm and} \;\; \frac{\partial \xbf^*}{\partial \tilde{\alphabf}^*} = \Abf^H;
\end{equation}
it follows from (\ref{equ.NNCostFunc}) that
\begin{equation}
\frac{\partial C}{\partial \xbf} = (\xbf - \ybf)^* \;\; {\rm and} \;\; \frac{\partial C}{\partial \xbf^*} = (\xbf - \ybf).
\end{equation}
Thus, we obtain
\begin{equation}
\frac{\partial C}{\partial \tilde{\alphabf}^*} = \Abf^H(\xbf - \ybf).
\label{equ.deralpha}
\end{equation}
Second, we have
\begin{equation}
\begin{split}
\frac{\partial C}{\partial \tilde{\omega}_i}
=& \left[\frac{\partial \abf_i^*}{\partial \tilde{\omega}_i}\frac{\partial \xbf}{\partial \abf_i^*} + \frac{\partial \abf_i}{\partial \tilde{\omega}_i}\frac{\partial \xbf}{\partial \abf_i}\right]\frac{\partial C}{\partial \xbf} \\
&+ \left[\frac{\partial \abf_i^*}{\partial \tilde{\omega}_i}\frac{\partial \xbf^*}{\partial \abf_i^*} + \frac{\partial \abf_i}{\partial \tilde{\omega}_i}\frac{\partial \xbf^*}{\partial \abf_i}\right]\frac{\partial C}{\partial \xbf^*},\ i = 1,2,\dots,M.
\end{split}
\label{equ.dertildeomegav1}
\end{equation}
Knowing from (\ref{equ.tildeyv1}) that
\begin{equation}
\frac{\partial \xbf^*}{\partial \abf_i} = {\bf 0}_M\;\; {\rm and} \;\; \frac{\partial \xbf}{\partial \abf_i^*} = {\bf 0}_M,
\end{equation}
we can rewrite (\ref{equ.dertildeomegav1}) as
\begin{equation}
\frac{\partial C}{\partial \tilde{\omega}_i}
= \frac{\partial \abf_i}{\partial \tilde{\omega}_i}\frac{\partial \xbf}{\partial \abf_i}\frac{\partial C}{\partial \xbf}
+ \frac{\partial \abf_i^*}{\partial \tilde{\omega}_i}\frac{\partial \xbf^*}{\partial \abf_i^*}\frac{\partial C}{\partial \xbf^*}.
\label{equ.dertildeomegav2}
\end{equation}
From (\ref{equ.ziai}), (\ref{equ.tildeyv1}), we can obtain
\begin{equation} \label{eqaw}
\begin{split}
\frac{\partial \abf_i}{\partial \tilde{\omega}_i} &= \nbf^T\odot
\left[\frac{\partial a_{i,1}}{\partial z_{i,1}},\frac{\partial a_{i,2}}{\partial z_{i,2}},\dots,\frac{\partial a_{i,N}}{\partial z_{i,N}}\right], \\
\frac{\partial \abf_i^*}{\partial \tilde{\omega}_i} &= \nbf^T\odot
\left[\frac{\partial a^*_{i,1}}{\partial z_{i,1}},\frac{\partial a^*_{i,2}}{\partial z_{i,2}},\dots,\frac{\partial a^*_{i,N}}{\partial z_{i,N}}\right],
\end{split}
\end{equation}
and
\begin{equation} \label{eqxa}
\frac{\partial \xbf}{\partial \abf_i} = \tilde{\alpha}_i\Ibf_N, \frac{\partial \xbf^*}{\partial \abf_i^*} = \tilde{\alpha}^*_i\Ibf_N,
\end{equation}
where
\begin{equation}
\begin{split}
\frac{\partial a_{i,n}}{\partial z_{i,n}} = je^{jz_{i,n}},\quad &\frac{\partial a_{i,n}^*}{\partial z_{i,n}} = -je^{-jz_{i,n}}, \\
&n = 1,2,\dots,N.
\end{split}
\end{equation}
Thus, substituting (\ref{eqaw}) and (\ref{eqxa}), together with $\partial C/\partial \xbf$ and $\partial C/\partial \xbf^*$, into (\ref{equ.dertildeomegav2}) yields
\begin{equation}
\frac{\partial C}{\partial \tilde{\omegabf}} = 2{\rm Im}\left\{\tilde{\alphabf}\odot\left[\Abf^T\left[\nbf\odot(\ybf - \xbf)^*\right]\right]\right\}.
\label{equ.deromega}
\end{equation}
Given the gradients (\ref{equ.deralpha}) and (\ref{equ.deromega}), we then use the momentum method \cite{Goodfellow-et-al-2016} to choose the search direction and the learning rate, since it usually outperforms the method of steepest descent, especially for a non-convex problem. In the $t$-th iteration, the network weights are updated as
\begin{equation}
\begin{split}
\tilde{\alphabf}(t) &= \tilde{\alphabf}(t-1) - \gamma\dbf_{\tilde{\alphabf}}(t), \\
\tilde{\omegabf}(t) &= \tilde{\omegabf}(t-1) - \gamma\dbf_{\tilde{\omegabf}}(t),
\end{split}
\label{equ.alphaOmegaUpdate}
\end{equation}
where $\gamma$ is the learning rate, and $\dbf_{\tilde{\alphabf}}(t)$ and $\dbf_{\tilde{\omegabf}}(t)$ are the momentum terms defined as
\begin{equation}
\begin{split}
\dbf_{\tilde{\alphabf}}(t) &= \lambda\dbf_{\tilde{\alphabf}}(t-1)+(1-\lambda)\frac{\partial C}{\partial \tilde{\alphabf}^*}(t), \\
\dbf_{\tilde{\omegabf}}(t) &= \lambda\dbf_{\tilde{\omegabf}}(t-1)+(1-\lambda)\frac{\partial C}{\partial \tilde{\omegabf}}(t),
\end{split}
\label{equ.momentum}
\end{equation}
with $\dbf_{\tilde{\alphabf}}(0)={\bf 0}_M$, $\dbf_{\tilde{\omegabf}}(0) = {\bf 0}_M$. Here $\lambda$ is the momentum parameter.
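Putting the gradient (\ref{equ.deromega}) and the momentum update (\ref{equ.alphaOmegaUpdate})--(\ref{equ.momentum}) together, the following sketch recovers the frequency of a single noiseless sinusoid. For compactness, the amplitude is here refreshed in closed form by least squares at each step rather than by its own gradient step, and the step sizes are illustrative.

```python
import numpy as np

N = 16
n = np.arange(N)
w_true, a_true = 1.10, 2.0 + 0.5j
y = a_true * np.exp(1j * w_true * n)       # noiseless single sinusoid

w = 1.08            # coarse initial frequency (e.g., from the FFT stage)
d_w = 0.0           # momentum accumulator, d(0) = 0
gamma, lam = 1e-5, 0.9                     # illustrative learning rate and momentum

for t in range(2000):
    avec = np.exp(1j * w * n)              # a(w)
    alpha = np.vdot(avec, y) / N           # least-squares amplitude for current w
    x = alpha * avec
    g_w = 2 * np.imag(alpha * np.sum(n * avec * np.conj(y - x)))   # dC/dw
    d_w = lam * d_w + (1 - lam) * g_w      # momentum update
    w = w - gamma * d_w                    # weight update
```

Since the initial guess lies within the main lobe of the periodogram, the iteration converges to the true frequency.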
Note that the $\tilde{\omega}_i,\ i = 1,2,\dots,M$ obtained by the BP algorithm are not necessarily confined to $\left[0, 2\pi\right]$, which is fine because at the end we can simply take $\tilde{\omega}_i$ modulo $2\pi$, i.e., $\tilde{\omega}_i \leftarrow \text{mod}(\tilde{\omega}_i, 2\pi),\ i = 1,2,\dots, M$.
The initialization of $\tilde{\alphabf}(0)$ and $\tilde{\omegabf}(0)$ is explained next.
\subsection{Initialization Using FFT}\label{sec.initial}
Due to the non-convexity of (\ref{equ.NNCostFunc}), a random initialization of the weights $\tilde{\alphabf}(0)$ and $\tilde{\omegabf}(0)$ often leads to a local optimum; thus, the BP algorithm may require too many random restarts before finding a global optimum. To avoid this issue, we use the FFT to obtain the initial parameter estimates.
First, apply a zero-padded FFT to the sequence $\ybf$ to obtain an $L$-point frequency-domain sequence $\ybf^f$; second, locate the peaks of $|\ybf^f|$ and add the corresponding frequency points to the initial frequency set $\tilde{\omegabf}(0) \in {\mathbb R}^P$, where $P$ is the number of peaks. Due to the low resolution of the FFT spectrum, one peak may be due to two or more sinusoids with frequencies close to each other. To obtain a higher frequency resolution, we also check the frequency points adjacent to the peaks $\tilde{\omega}_i, i=1,\dots,P$, i.e., we compare the FFT power spectrum at the frequencies $\tilde{\omega}_i \pm \frac{2\pi}{L}$ and augment the vector $\tilde{\omegabf}(0)$ with $\tilde{\omega}_i + \frac{2\pi}{L}$ or $\tilde{\omega}_i - \frac{2\pi}{L}$, whichever corresponds to the higher power. After removing the repeated elements, the cardinality of $\tilde{\omegabf}(0)$ is denoted by $M$. Finally, $\tilde{\alphabf}$ can be initialized by the least squares method using the frequency points in $\tilde{\omegabf}(0)$, i.e.,
\begin{equation}\label{equ:a0}
\tilde{\alphabf}(0) = [\Abf^H(\tilde{\omegabf}(0))\Abf(\tilde{\omegabf}(0))]^{-1}\Abf^H(\tilde{\omegabf}(0)) \ybf.
\end{equation}
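The procedure above can be sketched as follows; the function name and the relative peak-picking threshold \texttt{rel\_thresh} are assumptions of this sketch (the text simply locates the peaks of $|\ybf^f|$).

```python
import numpy as np

def fft_init(y, L=None, rel_thresh=0.3):
    """Coarse initialization (omega(0), alpha(0)) from a zero-padded FFT."""
    N = y.size
    L = L if L is not None else 4 * N
    mag = np.abs(np.fft.fft(y, L))
    omega_set = set()
    for k in range(L):
        # strict local maximum above the relative threshold (threshold is our choice)
        if mag[k] > mag[k - 1] and mag[k] >= mag[(k + 1) % L] and mag[k] > rel_thresh * mag.max():
            omega_set.add(2 * np.pi * k / L)
            # also keep the stronger adjacent bin, to help separate close sinusoids
            k2 = (k + 1) % L if mag[(k + 1) % L] > mag[k - 1] else (k - 1) % L
            omega_set.add(2 * np.pi * k2 / L)
    w0 = np.array(sorted(omega_set))
    A = np.exp(1j * np.outer(np.arange(N), w0))
    a0, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares amplitudes
    return w0, a0

n = np.arange(32)
y = 2.0 * np.exp(1j * 0.8 * n)                   # one noiseless sinusoid
w0, a0 = fft_init(y)
```

For the noiseless example, the recovered frequency set contains a point within one FFT bin of the true frequency $0.8$.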
\subsection{Model Order Selection}\label{sec:PM}
As mentioned earlier, it is a nontrivial task to determine the number of sinusoids. As each node in the hidden layer of the MNN corresponds to a sinusoid, we can merge or prune nodes to adjust the model order conveniently while conducting the BP algorithm. The merging and pruning criteria are explained as follows.
\subsubsection{Criterion of nodes merging}
Determining whether two nodes should be merged is essentially a hypothesis testing problem:
\begin{equation}
\begin{aligned}
&H_0: \omega_j - \omega_i \leq \Delta\omega_{\min}\\
&H_1: \omega_j - \omega_i > \Delta\omega_{\min},
\end{aligned}
\end{equation}
with $\omega_j>\omega_i$. Here $\Delta\omega_{\min}\geq 0$ is some prescribed number.
To solve this problem, we derive the posterior probability of $\Delta\omega_{ij}\triangleq\omega_j-\omega_i$ conditioned on the ML estimate $\tilde{\omega}_i,\tilde{\omega}_j$, i.e.,
\begin{equation}
{\rm Pr}(\Delta\omega_{ij}|\tilde{\omega}_i,\tilde{\omega}_j).
\end{equation}
We first use the Cram\'er--Rao bound (CRB) to characterize the distribution of the ML estimates. Consider the simplified scenario where only two sinusoids exist, i.e.,
\begin{equation}\label{equ.2sin}
\begin{split}
& y(n) = {\alpha}_ie^{j{\omega}_in} + {\alpha}_je^{j{\omega}_jn} + e(n), \\
& \hspace{2em}{\omega}_i<{\omega}_j, n = 0, \dots, N-1, e(n)\sim\mathcal{CN}(0, \sigma^2).
\end{split}
\end{equation}
Then the ML estimates of both frequencies should be unbiased with variance approaching the CRB \cite{ME83}. That is,
\begin{equation}
\left.\begin{pmatrix}
\tilde{\omega}_i\\
\tilde{\omega}_j
\end{pmatrix}\right|\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix} \sim \mathcal{N}\left(\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix},{\rm CRB}^{ij}\right),
\end{equation}
where
\begin{equation}\label{equ.CRBomega}
\begin{split}
&{\rm CRB}^{ij} = \frac{\sigma^2}{2}\frac{1}{|\alpha_i|^2|\alpha_j|^2\rho_1^2 - {\rm Re}\left[\alpha_i^*\alpha_j\rho_2\right]^2}\times \\
&\hspace{1em}\begin{bmatrix}
|\alpha_j|^2\rho_1 & -{\rm Re}\left[\alpha_i^*\alpha_j\rho_2\right]\\
-{\rm Re}\left[\alpha_i^*\alpha_j\rho_2\right]& |\alpha_i|^2\rho_1
\end{bmatrix},
\end{split}
\end{equation}
with $\rho_1$ and $\rho_2$ being defined as
\begin{equation}
\rho_1 = \sum_{n=0}^{N-1}n^2, \quad
\rho_2 = \sum_{n=0}^{N-1}n^2e^{j(\omega_j - \omega_i)n}.
\end{equation}
The derivation of (\ref{equ.CRBomega}) is relegated to the Appendix.
Next, we assume that $\omega_i$ and $\omega_j$ are a priori independent and uniformly distributed in $[0,2\pi]$, i.e.,
\begin{equation}
p\left(\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix}\right) = \frac{1}{4\pi ^2}, \omega_i,\omega_j\in[0,2\pi ].
\end{equation}
Then the posterior distribution of $\omega_i, \omega_j$ can be obtained by
\begin{equation}
\begin{split}
&p\left(\left.\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix}\right|\begin{pmatrix}
\tilde{\omega}_i\\
\tilde{\omega}_j
\end{pmatrix}\right) \\
&= \frac{p\left(\left.\begin{pmatrix}
\tilde{\omega}_i\\
\tilde{\omega}_j
\end{pmatrix}\right|\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix}\right)p\left(\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix}\right)}{\displaystyle\int_0^{2\pi}\int_0^{2\pi} p\left(\left.\begin{pmatrix}
\tilde{\omega}_i\\
\tilde{\omega}_j
\end{pmatrix}\right|\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix}\right)p\left(\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix}\right)d{\omega}_id{\omega}_j}\\
&=p\left(\left.\begin{pmatrix}
\tilde{\omega}_i\\
\tilde{\omega}_j
\end{pmatrix}\right|\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix}\right).
\end{split}
\end{equation}
Hence, the posterior distribution of the frequencies conditioned on the ML estimates is
\begin{equation}
\left.\begin{pmatrix}
{\omega}_i\\
{\omega}_j
\end{pmatrix}
\right|\begin{pmatrix}
\tilde{\omega}_i\\
\tilde{\omega}_j
\end{pmatrix}\sim \mathcal{N}\left(\begin{pmatrix}
\tilde{\omega}_i\\
\tilde{\omega}_j
\end{pmatrix},{\rm CRB}^{ij}\right).
\end{equation}
It follows from (\ref{equ.CRBomega}) that the statistic $\Delta{\omega}_{ij} \triangleq {\omega}_{j}-{\omega}_{i}$ satisfies
\begin{equation}
\Delta{\omega}_{ij} \sim \mathcal{N}(\tilde{\omega}_{j}-\tilde{\omega}_{i}, {\rm CRB}^{ij}_{\Delta}),
\end{equation}
where
\begin{equation}\label{equ.CRBdomega}
\begin{split}
{\rm CRB}^{ij}_{\Delta} &= \begin{bmatrix}-1, 1\end{bmatrix}{\rm CRB}^{ij}\begin{bmatrix}-1 \\ 1\end{bmatrix} \\
& = \frac{\sigma^2}{2}\frac{(|\alpha_i|^2+|\alpha_j|^2)\rho_1 + 2{\rm Re}[\alpha_i^*\alpha_j\rho_2]}{|\alpha_i|^2|\alpha_j|^2\rho_1^2 - {\rm Re}[\alpha_i^*\alpha_j\rho_2]^2}.
\end{split}
\end{equation}
If the probability of $\Delta{\omega}_{ij}\leq \Delta\omega_{\min}$ is larger than a small value $\epsilon_f$ (e.g., $1\times10^{-6}$), i.e.,
\begin{equation}\label{equ:Prmerge}
{\rm Pr}(\Delta{\omega}_{ij} \leq \Delta\omega_{\min} )> \epsilon_f,
\end{equation}
we accept the hypothesis $H_0$ and propose to merge the two hidden-layer nodes since they cannot be separated correctly with high probability (see Fig. \ref{fig.merge}).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{Prmerge.pdf}
\caption{Illustration of determining merging criterion.}
\label{fig.merge}
\end{figure}
By some simple calculations, we find that if
\begin{equation}\label{equ:merge}
\tilde{\omega}_j - \tilde{\omega}_i < \Delta\omega_{\min} -\sqrt{{\rm CRB}^{ij}_{\Delta}}\mathcal{N}^{-1}(\epsilon_f),
\end{equation}
where $\mathcal{N}^{-1}(\cdot)$ denotes the inverse of the cumulative distribution function (CDF) of the standard Gaussian distribution, we can merge the two corresponding nodes into one, average their frequencies to $(\tilde{\omega}_i+\tilde{\omega}_j)/2$, and combine the amplitudes into $\tilde{\alpha}_i + \tilde{\alpha}_j$ before the next iteration.
Note that although in real applications we do not know the true values of the complex amplitudes, the frequencies, and the Gaussian noise variance, we can use the estimated values $\tilde{\alpha}_i$, $\tilde{\alpha}_j$, $\tilde{\omega}_i$, $\tilde{\omega}_j$ and $\hat{\sigma}^2 = \|\xbf - \ybf\|_2^2/N$ instead of the true values to calculate ${\rm CRB}^{ij}_{\Delta}$, since the ML estimate of the MNN is unbiased. Also, in practice we commonly set $\Delta\omega_{\min}=0$ for simplicity without loss of performance.
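Both the bound (\ref{equ.CRBdomega}) and the merging test (\ref{equ:merge}) are easy to evaluate; a sketch with our own naming, using scipy's \texttt{norm.ppf} as $\mathcal{N}^{-1}$:

```python
import numpy as np
from scipy.stats import norm

def crb_delta(alpha_i, alpha_j, w_i, w_j, sigma2, N):
    """CRB of Delta omega = omega_j - omega_i for two sinusoids in AWGN."""
    n = np.arange(N)
    rho1 = np.sum(n ** 2)
    rho2 = np.sum(n ** 2 * np.exp(1j * (w_j - w_i) * n))
    c = np.real(np.conj(alpha_i) * alpha_j * rho2)
    num = (abs(alpha_i) ** 2 + abs(alpha_j) ** 2) * rho1 + 2 * c
    den = abs(alpha_i) ** 2 * abs(alpha_j) ** 2 * rho1 ** 2 - c ** 2
    return (sigma2 / 2) * num / den

def should_merge(w_i, w_j, crb_d, dw_min=0.0, eps_f=1e-6):
    """Merge if w_j - w_i < dw_min - sqrt(CRB_Delta) * Ninv(eps_f), with w_i < w_j."""
    return (w_j - w_i) < dw_min - np.sqrt(crb_d) * norm.ppf(eps_f)

def merge(w_i, w_j, a_i, a_j):
    """Replace the two nodes by one: averaged frequency and summed amplitude."""
    return (w_i + w_j) / 2, a_i + a_j
```

Since $\mathcal{N}^{-1}(10^{-6})\approx-4.75$, with $\Delta\omega_{\min}=0$ two nodes are merged when their spacing falls below roughly $4.75$ posterior standard deviations of $\Delta\omega_{ij}$.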
\subsubsection{Criterion of node pruning}\label{sec.prunecriterion}
Determining whether a node should be pruned is also a hypothesis testing problem:
\begin{equation}
\begin{aligned}
&H_0: \mbox{a sinusoid does not exist at } \tilde{\omega} \\
&H_1: \mbox{a sinusoid exists at } \tilde{\omega} .
\end{aligned}
\end{equation}
The idea behind the node pruning criterion is that if the power of a sinusoidal component exceeds a certain threshold, we keep this component; otherwise, it is pruned.
Consider the statistic
\begin{equation}\label{equ:prune_stat}
\xi \triangleq \frac{|\abf^H(\tilde{\omega}) \ybf|^2}{\|\ybf - \Abf(\tilde{\omegabf}) {\tilde\alphabf}\|_2^2},
\end{equation}
where $\tilde{\omegabf}$ and ${\tilde\alphabf}$ are the current weights of the MNN. This statistic is the power at frequency $\tilde{\omega}$ normalized by the noise power. We propose to prune the node from the network if $\xi$ is less than some threshold $\Xi$ after some iterations. We show next that the distribution of $\xi$ under $H_0$ is independent of the noise power; thus, the threshold $\Xi$ can be derived according to a prescribed constant false alarm rate (CFAR).
With this statistic, we can use the following false alarm rate criterion:
\begin{equation}
{\rm Pr}\left(\frac{|\abf^H_i\ybf|^2}{\|\ybf - \Abf{\tilde\alphabf}\|_2^2}>\Xi\right)<\epsilon_a, i = 1, \dots, M.
\end{equation}
It means that when $\ybf$ does not contain a sinusoid at frequency $\tilde{\omega}_i$, the probability of false alarm, i.e., the corresponding statistic (\ref{equ:prune_stat}) being larger than a certain threshold $\Xi$, should be less than a small value $\epsilon_a$, e.g., $1\times 10^{-6}$.
To find $\Xi$, we need to derive the distribution of (\ref{equ:prune_stat}). For simplicity, we consider the case where $\ybf$ does not contain any sinusoid, i.e., $\ybf = \ebf$ with $e(n)\sim \mathcal{CN}(0, \sigma^2)$. We first derive the distributions of the numerator and the denominator of (\ref{equ:prune_stat}) separately and find that both are (scaled) chi-square distributed. Thus, a scaled version of (\ref{equ:prune_stat}) obeys an F-distribution.
Because $e(n)\sim \mathcal{CN}(0, \sigma^2)$, $|y(n)|^2$ obeys the exponential distribution with mean $\sigma^2$, i.e., $|y(n)|^2\sim \mathcal{E}(1/\sigma^2)$. Using the property that $\mathcal{E}(1/2)$ is identical to $\mathcal{X}_2^2$, we have
\begin{equation}
\frac{2}{\sigma^2}|y(n)|^2\sim\mathcal{X}^2_2.
\end{equation}
Then, we can obtain the distribution of $\|\ybf\|_2^2$ as follows:
\begin{equation}
\frac{2}{\sigma^2}\|\ybf\|_2^2 = \sum_{n=0}^{N-1}\frac{2}{\sigma^2}|y(n)|^2\sim\mathcal{X}^2_{2N}.
\end{equation}
Because the already-estimated $\Abf{\tilde\alphabf}$ cancels $2M$ degrees of freedom, approximately, we have
\begin{equation}
\frac{2}{\sigma^2}\|\ybf - \Abf{\tilde\alphabf}\|_2^2\sim\mathcal{X}^2_{2(N-M)}.
\end{equation}
Next, we derive the distribution of $|\abf^H_i\ybf|^2$. Because $\mathcal{CN}(0,\sigma^2)$ is rotationally invariant, $|\abf^H_i\ybf|^2$ has the same distribution as $\|\abf_i\|_2^2|y(0)|^2=N|y(0)|^2\sim\mathcal{E}(1/(N\sigma^2))$. Thus,
\begin{equation}
\frac{2}{N\sigma^2}|\abf^H_i\ybf|^2\sim\mathcal{X}_2^2.
\end{equation}
Then, (\ref{equ:prune_stat}) can be viewed as the quotient of two chi-square distributions:
\begin{equation}
\frac{\frac{2}{N\sigma^2}|\abf^H_i\ybf|^2}{\frac{2}{\sigma^2}\|\ybf - \Abf{\tilde\alphabf}\|_2^2}=\frac{1}{N}\frac{|\abf^H_i\ybf|^2}{\|\ybf - \Abf{\tilde\alphabf}\|_2^2}\sim\frac{\mathcal{X}_2^2}{\mathcal{X}^2_{2(N-M)}}.
\end{equation}
Note that the statistic $\abf^H_i\ybf$ can be viewed as the projection of the Gaussian vector $\ybf$ onto the space spanned by $\abf_i$, i.e., ${\rm span}(\abf_i)$, and apparently ${\rm span}(\abf_i)\subseteq{\rm span}(\Abf)$. Also note that $\ybf - \Abf{\tilde\alphabf}$ can be viewed as the projection of $\ybf$ onto the orthogonal complement of ${\rm span}(\Abf)$. Thus, $|\abf^H_i\ybf|^2$ and $\|\ybf - \Abf{\tilde\alphabf}\|_2^2$ are statistically independent.
Using the property that the ratio of two independent chi-square variables, each divided by its degrees of freedom, obeys an F-distribution, i.e.,
\begin{equation}
\frac{\mathcal{X}_2^2/2}{\mathcal{X}^2_{2(N-M)}/[2(N-M)]} \sim F_{2, 2(N-M)},
\end{equation}
we obtain the distribution of a scaled version of (\ref{equ:prune_stat}):
\begin{equation}\label{equ.xi_dist}
\frac{N-M}{N}\frac{|\abf^H_i\ybf|^2}{\|\ybf - \Abf{\tilde\alphabf}\|_2^2}\sim F_{2, 2(N-M)}.
\end{equation}
Given the distribution of (\ref{equ:prune_stat}), the threshold can be easily obtained as
\begin{equation}
\Xi=\frac{N}{N-M}F^{-1}_{2,2(N-M)}(1-\epsilon_a),
\end{equation}
where $F^{-1}_{2,2(N-M)}(\cdot)$ is the inverse of the CDF of $F_{2, 2(N-M)}$. If the statistic (\ref{equ:prune_stat}) of the estimated frequency $\tilde{\omega}_i$ is smaller than $\Xi$, we prune the corresponding network node because most probably there is only noise at frequency $\tilde{\omega}_i$.
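The threshold and the pruning decision can be computed with scipy's F-distribution; a sketch with our own naming:

```python
import numpy as np
from scipy.stats import f as f_dist

def prune_threshold(N, M, eps_a=1e-6):
    """CFAR pruning threshold: Xi = N/(N - M) * F^{-1}_{2, 2(N-M)}(1 - eps_a)."""
    return N / (N - M) * f_dist.ppf(1 - eps_a, 2, 2 * (N - M))

def keep_node(a_i, y, residual, Xi):
    """Keep the node iff xi = |a_i^H y|^2 / ||residual||_2^2 is at least Xi."""
    xi = np.abs(np.vdot(a_i, y)) ** 2 / np.real(np.vdot(residual, residual))
    return xi >= Xi

Xi = prune_threshold(N=32, M=3)   # e.g., N = 32 samples, M = 3 hidden nodes
```

For $N=32$, $M=3$, and $\epsilon_a=10^{-6}$, the threshold evaluates to about $\Xi\approx 19.5$.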
Note that the performance of the pruning criterion can be analyzed theoretically by plotting the corresponding receiver operating characteristic (ROC) curves, for which we need to compute the false alarm rate (FAR) and the probability of detection (PD). Considering the MNN with one hidden-layer node ($M=1$), we define the PD as the probability of keeping the node when the corresponding sinusoid exists, and the FAR as the probability of keeping the node when the signal does not contain any sinusoid. We derive the FAR and the PD separately. For the FAR, we consider a signal containing only white Gaussian noise. It is easy to see that when $\ybf = \ebf$,
\begin{equation}
{\rm FAR} = {\rm Pr}\left(\frac{|\abf^H_i\ybf|^2}{\|\ybf - \Abf{\tilde\alphabf}\|_2^2}\geq\Xi\right) = \epsilon_a.
\end{equation}
To derive the PD, we consider a signal containing one sinusoid with frequency $\omega_1$ and amplitude $\alpha_1$, i.e., $\ybf = \alpha_1\abf_1+\ebf$. We again derive the distributions of the numerator and the denominator of $\xi$ separately. Because the sinusoid in $\ybf$ can be subtracted using the unbiased estimates $\tilde{\alpha}_1,\tilde{\omega}_1$, we still have
\begin{equation}
\frac{2}{\sigma^2}\|\ybf -\Abf{\tilde\alphabf}\|_2^2 =\frac{2}{\sigma^2}\|\ybf -\tilde{\alpha}_1\abf(\tilde{\omega}_1)\|_2^2 \sim\mathcal{X}^2_{2(N-M)}.
\end{equation}
Next, because $\abf^H_1\ybf =\abf^H_1(\xbf + \ebf) =N\alpha_1 + \abf^H_1\ebf\sim\mathcal{CN}(N\alpha_1, N\sigma^2)$, $\frac{2}{N\sigma^2}|\abf^H_1\ybf|^2$ obeys the non-central chi-square distribution with $2$ degrees of freedom and non-centrality parameter $2N|\alpha_1|^2/\sigma^2$. Similar to (\ref{equ.xi_dist}), the scaled version of the statistic, i.e.,
\begin{equation}
\frac{N-M}{N}\xi \triangleq \frac{N-M}{N}\frac{|\abf^H_1\ybf|^2}{\|\ybf - \Abf{\tilde\alphabf}\|_2^2},
\end{equation}
obeys the non-central F-distribution with degrees of freedom $2$ and $2(N-M)$ and non-centrality parameter $2N|\alpha_1|^2/\sigma^2$. The probability of detection can then be obtained by calculating ${\rm Pr}(\xi \geq \Xi)$. Denoting the signal-to-noise ratio (SNR) as $10\log_{10}\frac{|\alpha_1|^2}{\sigma^2}$, we show the ROC curves under different SNRs in Fig. \ref{fig.pruneROC_theory}. The large area under the ROC curves indicates the good performance of our pruning method. More numerical analysis can be found in Section \ref{sec:simulation}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{pruneROC_theory.pdf}
\caption{Theoretical ROC curves of the pruning method under different SNRs.}
\label{fig.pruneROC_theory}
\end{figure}
After the merging and pruning, the corresponding weights of the reduced number of nodes, i.e., the amplitudes and frequencies, will further be updated by the BP algorithm until convergence.
With FFT initialization and merging and pruning steps, the whole procedure of training the neural network is summarized in Algorithm \ref{Algo.1}.
\begin{algorithm}[ht]
\caption{Training of MNN}
\label{Algo.1}
\begin{algorithmic}[1]
\Require Received data sequence $\ybf$; learning rate $\gamma$; momentum parameter $\lambda$;
\Ensure The optimized weights of the MNN $\tilde{\alphabf},\tilde{\omegabf}$;
\State Apply an $L$-point FFT to $\ybf$ to obtain $\ybf^f$, where $L$ is commonly set to $4N$;
\State Locate the peaks of $|\ybf^f|$ as $\omega_1,..., \omega_P$, put these frequency points into the initial frequency set $\tilde{\omegabf}^0(0)$.
\State For each $\omega_i, i=1,\dots, P$, also add its adjacent frequency point with the larger FFT amplitude into the initial frequency set $\tilde{\omegabf}^0(0)\in\mathbb{R}^M$.
\State Obtain the initial amplitude set $\tilde{\alphabf}^0(0)$ by the least squares method using the frequency points in $\tilde{\omegabf}^0(0)$ (see (\ref{equ:a0})).
\State $\tau=1$;
\Do
\State Initialize the NN with $\tilde{\alphabf}^{\tau-1}(0)$ and
$\tilde{\omegabf}^{\tau-1}(0)$ (when\Statex\quad\ $\tau>1$, $\tilde{\alphabf}^{\tau-1}(0) = \tilde{\alphabf}^{\tau-1}, \tilde{\omegabf}^{\tau-1}(0) = \tilde{\omegabf}^{\tau-1}$);
\State $t = 0$;
\Do
\State Calculate $\zbf_i^{\tau}(t)$, $\abf_i^{\tau}(t)$ and $\xbf^{\tau}(t)$ by (\ref{equ.ziai})-(\ref{equx2});
\State Calculate $(\frac{\partial C}{\partial \tilde{\alphabf}^*})^{\tau}(t+1)$ and $(\frac{\partial C}{\partial \tilde{\omegabf}})^{\tau}(t+1)$ using \Statex \quad \quad \quad (\ref{equ.deralpha}) and (\ref{equ.deromega});
\State Update $\tilde{\alphabf}^{\tau}(t+1)$ and $\tilde{\omegabf}^{\tau}(t+1)$ using (\ref{equ.alphaOmegaUpdate}) and \Statex \quad \quad \quad (\ref{equ.momentum});
\State $t = t + 1$;
\State ${\bar C} =\frac{1}{N}||\ybf-\xbf^{\tau}(t)||_2^{2}$;
\doWhile{the change in ${\bar C}$ from the previous iteration is
\Statex \quad \ \ less than a pre-set tolerance $\epsilon$ (e.g., $\epsilon = 10^{-5}$)}
\State {\it Merging}: If $\tilde{\omega}_j^{\tau}-\tilde{\omega}_i^{\tau}<\Delta\omega_{\min}-\sqrt{{\rm CRB}^{ij}_{\Delta}}\mathcal{N}^{-1}(\epsilon_f)$,
\Statex \quad \ \ merge them into one, average their frequencies to be
\Statex \quad \ \ $(\tilde{\omega}_i^{\tau}+\tilde{\omega}_j^{\tau})/2$, and combine the amplitudes into $\tilde{\alpha}_i^{\tau}+\tilde{\alpha}_j^{\tau}$.
\State {\it Pruning}: If $\xi$ of $\tilde{\omega}_i$ is smaller than $\Xi$, prune the
\Statex \quad \ \ corresponding node.
\State Take the modulo: $\tilde{\omega}_i^{\tau}\!\leftarrow\!\text{mod}(\tilde{\omega}_i^{\tau}, 2\pi)$.
\State Result: $\tilde{\bm\omega}^{\tau}, \tilde{\bm\alpha}^{\tau}$ after merging and pruning.
\State $\tau=\tau+1$.
\doWhile{no sinusoidal components are merged or pruned.}
\end{algorithmic}
\end{algorithm}
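As a companion to Algorithm \ref{Algo.1}, the following pure-Python sketch illustrates one forward pass of the network (hidden nodes $e^{j\omega_i n}$ weighted by complex amplitudes, so $x(n)=\sum_i \alpha_i e^{j\omega_i n}$), the cost $\bar C = \frac{1}{N}\|\ybf-\xbf\|_2^2$, and a simplified version of the merging step. The fixed threshold used here (a plain $\Delta\omega_{\min}$, without the CRB-dependent correction term of the full criterion) and the real-valued amplitudes are simplifying assumptions for illustration only.

```python
import cmath

def forward(alphas, omegas, N):
    """One forward pass: x(n) = sum_i alpha_i * exp(j*omega_i*n), n = 0..N-1."""
    return [sum(a * cmath.exp(1j * w * n) for a, w in zip(alphas, omegas))
            for n in range(N)]

def cost(y, x):
    """Mean squared error C_bar = (1/N) * ||y - x||_2^2."""
    return sum(abs(yi - xi) ** 2 for yi, xi in zip(y, x)) / len(y)

def merge(alphas, omegas, dw_min):
    """Simplified merging step: fuse adjacent nodes closer than dw_min in
    frequency, averaging their frequencies and summing their amplitudes
    (the CRB correction of the full criterion is omitted here)."""
    pairs = sorted(zip(omegas, alphas), key=lambda p: p[0])
    merged = [list(pairs[0])]
    for w, a in pairs[1:]:
        if w - merged[-1][0] < dw_min:
            merged[-1][0] = (merged[-1][0] + w) / 2  # average frequencies
            merged[-1][1] += a                       # combine amplitudes
        else:
            merged.append([w, a])
    return [a for _, a in merged], [w for w, _ in merged]
```

With the true parameters the cost is exactly zero, and two nearly coincident nodes collapse into a single node carrying the combined amplitude.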
\subsection{Complexity Analysis}
From Lines 10--12 in Algorithm \ref{Algo.1}, (\ref{equ.ziai}) needs $MN$ multiplications. The complexities of (\ref{equx2}) and (\ref{equ.deralpha}) are both ${\cal O}(MN)$. In (\ref{equ.deromega}), $\nbf\odot(\ybf- \xbf)^*$ needs $N$ multiplications, multiplying $\Abf^T$ by $[\nbf\odot(\ybf- \xbf)^*]$ costs ${\cal O}(MN)$, and the element-wise multiplication between $2\tilde{\alphabf}$ and $\left[\Abf^T[\nbf\odot(\ybf- \xbf)^*]\right]$ costs ${\cal O}(2M)$. Thus, the total computational complexity of (\ref{equ.deromega}) is ${\cal O}(2M+N+MN)$. Finally, (\ref{equ.alphaOmegaUpdate}) and (\ref{equ.momentum}) need $2M$ and $4M$ multiplications, respectively. Thus, the whole process takes ${\cal O}(I(4MN+8M+N))$, with $I$ being the total number of iterations. Moreover, the neural-network-like structure of the MNN lends itself to highly efficient parallel implementation on a GPU, which is beyond the scope of this paper and left to future investigation.
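The operation count above can be collected into a small helper. The function below simply evaluates the stated total $I(4MN+8M+N)$, treating the big-O expression as an exact multiplication count, which is of course only an approximation of the true cost:

```python
def mnn_mult_count(I, M, N):
    """Approximate multiplication count for I training iterations:
    4MN for the forward/backward matrix products, 8M for the parameter
    and momentum updates, and N for the element-wise residual product."""
    return I * (4 * M * N + 8 * M + N)
```

For large $N$ the $MN$ term dominates, so doubling the number of hidden nodes $M$ roughly doubles the per-iteration cost.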
\section{Numerical Simulation}\label{sec:simulation}
In this section, we provide several simulation examples to verify the effectiveness of the line spectral estimation using the MNN. For all the cases, the SNR is defined as follows:
\begin{equation}
{\rm SNR} = 10\log_{10}\frac{\|\xbf\|_2^2}{\|\ebf\|_2^2}{\rm (dB)}.
\end{equation}
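As a quick sanity check, the SNR defined above can be computed directly from the clean-signal and noise samples:

```python
import math

def snr_db(x, e):
    """SNR = 10*log10(||x||_2^2 / ||e||_2^2), where x is the clean signal
    and e the additive noise (samples may be real or complex)."""
    px = sum(abs(v) ** 2 for v in x)
    pe = sum(abs(v) ** 2 for v in e)
    return 10.0 * math.log10(px / pe)
```

For example, noise with one hundredth of the signal power gives 20 dB.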
\subsection{Comparison of Estimation Precision}
We first compare the performance of different spectral estimation methods, including our MNN and several other widely-used methods. We simulate an $N=32$-point signal which contains $K=3$ complex sinusoidal components with normalized digital frequencies $0.1, 0.115$ and $0.37$. The signal is contaminated by zero-mean white Gaussian noise, and the signal-to-noise ratio (SNR) is 10 dB. The amplitudes of the signal are marked by red dots in Fig. \ref{fig:spectral30db}. For comparison, we also provide the results of other spectral estimation algorithms, including the FFT, MUSIC and two grid-based methods, i.e., SPICE and IAA \cite{SZL14, SPP20}. The number of grid points in the frequency domain used by the grid-based methods is equal to the signal length, i.e., $32$. Fig. \ref{fig:spectral30db} shows the spectral estimation results obtained by the different algorithms with $M=6$. It is clear that FFT and MUSIC cannot distinguish the two complex sinusoids with frequencies $0.1$ and $0.115$. Although the grid-based methods can distinguish these two signals, their performance is limited by the granularity of the grids. Our MNN-based method estimates the frequencies and amplitudes of all three sinusoids more accurately. Moreover, only our MNN-based method does not need to know the number of sinusoidal components beforehand: when we set $M=6$ instead of $3$, the extra components are merged and pruned by our MNN-based method and the number of sinusoidal components is correctly estimated, which means a lower false-alarm probability and no need to know the exact $K$ in advance.
\begin{figure}[htb]
\centering
\includegraphics[width=3.2in]{result_fig_2021_12_4_20_16_13.pdf}
\caption{Spectrum estimation results of different algorithms when SNR is 10 dB with $M=6$.}
\label{fig:spectral30db}
\end{figure}
We then compare the Cram\'er--Rao bound (CRB) and the normalized mean square errors (MSE) of the amplitudes and frequencies obtained by different algorithms under different SNRs. For each SNR, we conduct 1000 Monte Carlo runs. We use a 32-point time series which contains three complex sinusoidal components with digital angular frequencies $2\pi\times[0.1, 0.22, 0.37]$, and the amplitudes are randomly generated in each Monte Carlo run. Fig. \ref{fig:MSE} shows that as the SNR increases, the performance of SPICE and IAA \cite{SZL14, SPP20} does not improve. This is because the performance of the grid-based methods is severely limited by the finite grid, whereas our MNN-based method eliminates this problem and outperforms its four counterparts. Additionally, as the SNR increases, only the MSE of our MNN-based method comes close to the CRB.
\begin{figure}[htbp]
\centering
\subfloat[]{\includegraphics[width=0.45\textwidth]{montecarlo2_amp.pdf}}\\
\subfloat[]{\includegraphics[width=0.45\textwidth]{montecarlo2_freq.pdf}}\\
\caption{Normalized a) amplitude and b) frequency MSE versus SNR for the parameter estimation problem.}\label{fig:MSE}
\end{figure}
\subsection{Validation of Merging and Pruning}
In this subsection, we validate the performance of our merging and pruning methods. We first investigate the merging and pruning steps separately by plotting the corresponding ROC curves. The required PD and FAR values are obtained from 1000 Monte Carlo trials.
To plot the ROC curve of the merging criterion, we consider an MNN with two hidden-layer nodes corresponding to two close frequencies and check whether they are merged by our method. The threshold $\Delta\omega_{\min}$ is set to zero in all cases. The PD of the ROC curve is defined as the probability of correctly keeping the two nodes when there are two sinusoids, and the FAR is defined as the probability of erroneously keeping the two nodes when there is only one sinusoid. To numerically obtain the PD, we simulate a $32$-point input signal which contains two complex sinusoidal components with frequencies $[0.5, 0.5+\frac{1}{16N}]\times2\pi$ and the same power level. Note that the two sinusoids are extremely close in the frequency domain and their frequency difference is much smaller than the FFT resolution. We initialize the corresponding MNN with two hidden-layer nodes by the method in Section \ref{sec.initial}. To obtain the FAR, we simulate an input signal which contains only one sinusoid with frequency $0.5\times2\pi$, and the MNN is again initialized with two hidden-layer nodes. By varying $\epsilon_f$ from $0$ to $1$, we can plot the ROC curve under different SNRs (shown in Fig. \ref{fig.mergeROC}). It is clear that the area under the ROC curves is close to one, which shows the good performance of our merging criterion. Also, as the SNR increases, our merging criterion performs better, which conforms to our intuition.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{mergeResultROC_2021_12_17_14_35_17.pdf}
\caption{ROC curve of the merging criterion under different SNR cases.}\label{fig.mergeROC}
\end{figure}
Next, we investigate the performance of the pruning method. We consider two different scenarios. We first consider the scenario where the MNN has only one hidden-layer node and check whether it is pruned by our pruning method. In this scenario, we define the PD as the probability of correctly keeping the node when there is a sinusoid, and the FAR as the probability of erroneously keeping the node when the signal contains only white Gaussian noise. To obtain the PD, we simulate a $32$-point time series which contains a sinusoid with normalized frequency $0.5$. To obtain the FAR, we simulate a time series consisting of white Gaussian noise only. The MNN is initialized with a single hidden-layer node, and the PD and FAR are obtained by Monte Carlo trials. By varying $\epsilon_a$ from $0$ to $1$, we can plot the ROC curves under different SNRs (see Fig. \ref{fig.pruneROC1}). Note that in this scenario, the SNR is only defined for the signal containing one sinusoid. Fig. \ref{fig.pruneROC1} shows that the area under the ROC curves is $1$, which indicates the excellent performance of our pruning method when SNR $>10$ dB. This conforms to our theoretical analysis in Section \ref{sec.prunecriterion}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{pruneResultROC2_2021_12_20_14_0_56.pdf}
\caption{ROC curve of pruning the single hidden-layer node under different SNR cases.}\label{fig.pruneROC1}
\end{figure}
We next consider the scenario where the MNN has two hidden-layer nodes and investigate whether the node corresponding to the weaker sinusoidal component is pruned. We define the PD as the probability of correctly keeping the two nodes when there are two sinusoids, and the FAR as the probability of erroneously keeping the node corresponding to the weaker sinusoid when there is only one sinusoid. To obtain the PD, we simulate a $32$-point signal with two sinusoids. The normalized frequencies $\omega_1, \omega_2$ are $0.5$ and $0.8$, respectively, and the absolute amplitudes $|\alpha_1|, |\alpha_2|$ are $1$ and $0.1$, respectively. Note that the second sinusoid is much weaker than the first. To obtain the FAR, we simulate a signal which contains one sinusoid with frequency $\omega_1 = 0.5\times2\pi$ and absolute amplitude $|\alpha_1|=1$. The MNN is initialized with two hidden-layer nodes, and the PD and FAR are obtained by Monte Carlo trials. By varying $\epsilon_a$, we can plot the ROC curves under different SNRs (shown in Fig. \ref{fig.pruneROC}). It shows that even when $|\alpha_2|\ll|\alpha_1|$, our pruning method performs well when SNR $>10$ dB.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{pruneResultROC_2021_12_17_21_33_22.pdf}
\caption{ROC curve of pruning the hidden-layer node corresponding to the weaker sinusoid under different SNR cases.}\label{fig.pruneROC}
\end{figure}
To investigate the performance of our merging and pruning methods when they are used together, we consider cases with the number of sinusoids varying from $1$ to $5$, and for each case we conduct 1000 Monte Carlo runs. We keep $N=32$, $\epsilon_f =1\times10^{-6}$, $\epsilon_a=1\times10^{-6}$ and SNR $=10$ dB in each Monte Carlo run. We use two widely-used model-order selection methods, AIC and BIC \cite{SS2004}, as benchmarks. Specifically, we assume different model orders, design MNNs with the corresponding numbers of hidden-layer nodes, train these MNNs, and substitute the results into the AIC and BIC metrics. The estimated model order is the one that minimizes the AIC or BIC metric. The results are shown in Fig. \ref{fig:model_order}. It is clear that AIC performs the worst among all the methods and that BIC slightly outperforms our merging and pruning method. However, our method is computationally much more efficient than BIC, which needs to try different numbers of sinusoids before finding the one minimizing its metric.
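The AIC/BIC benchmarks admit a compact implementation. The sketch below assumes the standard Gaussian-likelihood forms $\mathrm{AIC}=N\ln\hat\sigma_k^2+2k$ and $\mathrm{BIC}=N\ln\hat\sigma_k^2+k\ln N$, where $\hat\sigma_k^2$ is the mean residual power of the trained order-$k$ model; the exact per-sinusoid parameter count used in \cite{SS2004} may differ, so this is an illustrative assumption rather than the paper's precise metric:

```python
import math

def pick_order(residual_vars, N, penalty="bic"):
    """Return the model order k (1-based) minimizing the chosen criterion.
    residual_vars[k-1] is the mean residual power after fitting k sinusoids."""
    best_k, best_score = None, float("inf")
    for k, s2 in enumerate(residual_vars, start=1):
        pen = k * math.log(N) if penalty == "bic" else 2 * k
        score = N * math.log(s2) + pen
        if score < best_score:
            best_k, best_score = k, score
    return best_k
```

The residual power drops sharply up to the true order and then plateaus; because BIC's penalty $k\ln N$ is heavier than AIC's $2k$ for $N>e^2$, BIC tends to select a smaller order than AIC once the residual gain flattens.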
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{orderMC_2021_12_10_21_43_3.pdf}
\caption{Results of different methods of estimating the number of sinusoids.}\label{fig:model_order}
\end{figure}
\subsection{Convergence Performance}
To investigate the convergence of our MNN-based method on the non-convex spectral estimation problem, we show the evolution of the cost function during the iterations for different learning rates $\gamma$ (see Fig. \ref{fig:convergence1}) under different SNRs. It is clear that our MNN-based method always converges for the different learning rates. With the momentum parameter fixed, a larger learning rate yields faster convergence, which conforms to our intuition.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{fixLambda.pdf}
\caption{The variation of the cost function value during the iterations with different learning rates and fixed momentum parameter under different SNR cases.}\label{fig:convergence1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{fixGamma.pdf}
\caption{The variation of the cost function value during the iterations with fixed learning rate and varying momentum parameters under different SNR cases.}\label{fig:convergence2}
\end{figure}
We also show the evolution of the cost function with a fixed learning rate and varying momentum parameters (see Fig. \ref{fig:convergence2}). With different momentum parameters, our method also converges. We can further observe that the cost function descends more smoothly with small momentum parameters, while large momentum parameters may occasionally cause the cost function to increase during the iterations, which is precisely how the momentum method helps to escape local minima.
\subsection{A Challenging Case}
In the last example we simulate a more challenging scenario where there are clusters of closely-spaced sinusoids.
We consider an $N=128$-point time series with two clusters. Each cluster contains five sinusoids, with digital angular frequencies $2\pi\times[0.3, 0.3+0.75/N, 0.3-0.75/N, 0.3+1.8/N, 0.3-1.8/N]$ and $2\pi\times [0.7, 0.7+0.8/N, 0.7-0.8/N, 0.7+2/N, 0.7-2/N]$, respectively. The amplitudes of the sinusoids are shown in Fig. \ref{fig:clusterEst}, and the SNR of the input signal is $20$ dB. Note that, to resolve the closely-spaced sinusoids within each cluster, we initialize the MNN with the frequencies $\tilde{\omega}_i, \tilde{\omega}_i+\frac{2\pi}{L}, \tilde{\omega}_i-\frac{2\pi}{L}$, where $\tilde{\omega}_i, i=1,\dots,4$, denote the FFT spectrum peaks, as mentioned in Section \ref{sec.initial}. Thus, the initial number of hidden-layer nodes is $12$ instead of the true value $10$. Fig. \ref{fig:clusterEst} shows that our MNN-based method correctly estimates all the sinusoids in the two clusters with relatively low estimation error. Also, by using the merging and pruning methods, the model order is correctly estimated. The performance of the MNN in this complicated scenario demonstrates its validity and its potential for wide application.
\begin{figure}[htb]
\centering
\includegraphics[width=3.2in]{figure_cluster_test_2022_1_3_15_46_36.pdf}
\caption{Spectrum estimation results of different algorithms of the signal with two clusters when SNR $=20$ dB. }
\label{fig:clusterEst}
\end{figure}
\section{Conclusions} \label{sec:con}
In this paper, we present a novel signal modeling tool named the model-based neural network (MNN) and solve the classic line spectral estimation problem as a showcase of the MNN. By choosing the complex exponential function as the activation function and viewing the complex amplitudes and digital angular frequencies as the network weights, we model the signal by a three-layer neural network and use the back-propagation (BP) algorithm to train this network. To overcome the non-convexity of the line spectral estimation problem, we use the momentum method to escape local minima and the FFT to obtain a good initialization. To determine the number of sinusoids in the signal, we also carefully design rules for merging and pruning the hidden-layer nodes of the MNN.
The simulations show that, compared to many existing and widely-used spectral estimation methods, our proposed method achieves near-optimal estimation performance at a low computational cost.
\section*{Appendix: The derivation of (\ref{equ.CRBomega})}
According to \cite{stoica2005spectral}, the Fisher Information Matrix (FIM) of (\ref{equ.2sin}) is
\begin{equation}
{\rm FIM}^{ij} = \frac{2}{\sigma^2}{\rm Re}\begin{bmatrix}
\Pi_{ii} & \Pi_{ij} \\
\Pi_{ji} & \Pi_{jj}
\end{bmatrix},
\end{equation}
where
\begin{equation}
\begin{split}
&\Pi_{ii} = \left(\frac{\partial\ybf}{\partial\omega_i}\right)^H\left(\frac{\partial\ybf}{\partial\omega_i}\right) = |\alpha_i|^2\sum_{n=0}^{N-1}n^2,\\
&\Pi_{jj} = \left(\frac{\partial\ybf}{\partial\omega_j}\right)^H\left(\frac{\partial\ybf}{\partial\omega_j}\right) = |\alpha_j|^2\sum_{n=0}^{N-1}n^2,\\
&\Pi_{ij} = \left(\frac{\partial\ybf}{\partial\omega_i}\right)^H\left(\frac{\partial\ybf}{\partial\omega_j}\right) = \alpha_i^*\alpha_j\sum_{n=0}^{N-1}n^2e^{j(\omega_j - \omega_i)n},\\
&\Pi_{ji} = \left(\frac{\partial\ybf}{\partial\omega_j}\right)^H\left(\frac{\partial\ybf}{\partial\omega_i}\right) = \alpha_i\alpha_j^*\sum_{n=0}^{N-1}n^2e^{j(\omega_i - \omega_j)n}.
\end{split}
\end{equation}
Then
\begin{equation}
\begin{split}
&{\rm Re}[\Pi_{ii}] = |\alpha_i|^2\rho_1,\\
&{\rm Re}[\Pi_{jj}] = |\alpha_j|^2\rho_1,\\
&{\rm Re}[\Pi_{ij}] = {\rm Re}[\Pi_{ji}] = {\rm Re}\left[\alpha_i^*\alpha_j\rho_2\right],
\end{split}
\end{equation}
with $\rho_1 = \sum_{n=0}^{N-1} n^2$ and $\rho_2 = \sum_{n=0}^{N-1}n^2e^{j(\omega_j - \omega_i)n}$.
Thus, the CRB matrix of (\ref{equ.2sin}) is
\begin{equation}
\begin{split}
&{\rm CRB}^{ij} = [{\rm FIM}^{ij}]^{-1} \\
&= \frac{\sigma^2}{2}\frac{1}{{\rm Re}[\Pi_{ii}]{\rm Re}[\Pi_{jj}] - {\rm Re}[\Pi_{ij}]^2}\begin{bmatrix}
{\rm Re}[\Pi_{jj}] & -{\rm Re}[\Pi_{ij}] \\
-{\rm Re}[\Pi_{ji}] & {\rm Re}[\Pi_{ii}]
\end{bmatrix}.
\end{split}
\end{equation}
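The closed form above can be checked numerically. The snippet below builds the four $\Pi$ terms for a pair of frequencies, assembles the real $2\times 2$ FIM, and inverts it; unit noise variance $\sigma^2 = 1$ and the specific test frequencies are illustrative assumptions.

```python
import cmath

def crb_2x2(alpha_i, alpha_j, w_i, w_j, N, sigma2=1.0):
    """CRB matrix for two frequencies, via the real 2x2 FIM of the appendix."""
    rho1 = sum(n * n for n in range(N))                                 # sum n^2
    rho2 = sum(n * n * cmath.exp(1j * (w_j - w_i) * n) for n in range(N))
    p_ii = (abs(alpha_i) ** 2) * rho1                                   # Re[Pi_ii]
    p_jj = (abs(alpha_j) ** 2) * rho1                                   # Re[Pi_jj]
    p_ij = (alpha_i.conjugate() * alpha_j * rho2).real                  # Re[Pi_ij]
    det = p_ii * p_jj - p_ij ** 2
    c = sigma2 / (2.0 * det)
    # Inverse of the 2x2 FIM, scaled by sigma^2/2 as in the closed form.
    return [[c * p_jj, -c * p_ij], [-c * p_ij, c * p_ii]]
```

For well-separated frequencies $|\rho_2| \ll \rho_1$, so the diagonal terms approach the single-sinusoid bound $\sigma^2/(2|\alpha|^2\rho_1)$, and the matrix is symmetric by construction.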
\bibliographystyle{ieeetr}
We consider the usual fixed-design linear regression model:
\[Y = X\beta + \epsilon,\]
where $X$ is the fixed design matrix and $(\epsilon_{i})_{i \in \mathbb{Z}}$ is a stationary process. This model is commonly used in time series regression.
Our work is based on the paper by Hannan \cite{hannan73clt}, who proved a Central Limit Theorem for the usual least square estimator under general conditions on the design and on the error process.
Most short-range dependent processes satisfy the conditions on the error process, for instance the class of linear processes with summable coefficients and square integrable innovations, a large class of functions of linear processes, and many processes under various mixing conditions (see for instance \cite{dmv2007weak}, and also \cite{dedecker2015optimality} for the optimality of Hannan's condition).
In this paper, it is shown that for a large class of designs satisfying the conditions of Hannan, the covariance matrix of the limit distribution of the least square estimator is the same as in the i.i.d. case, up to the usual error variance term, which should be replaced by the covariance series of the error process.
We shall refer to this very large class of designs as « regular designs » (see Section $2.3$ for the precise definition).
It includes many interesting examples, for instance the ANOVA type designs or the designs whose columns are regularly varying (such as the polynomial regression type designs).
For this class of regular designs, any consistent estimator of the covariance series of $(\epsilon_{i})_{i \in \mathbb{Z}}$ may be used to obtain a Gaussian limit distribution with explicit covariance matrix for the normalized least square estimator. Doing so, it is then possible to obtain confidence regions and test procedures for the unknown parameter $\beta$. In this paper, assuming only that Hannan's condition on $(\epsilon_{i})$ is satisfied, we propose a consistent estimator of the spectral density of $(\epsilon_{i})$ (as a byproduct, we get an estimator of the covariance series).
Wu and Liu \cite{wuspectraldensity} considered the problem of estimating the spectral density for a large class of short-range dependent processes. They proposed a consistent estimator for the spectral density, and gave some conditions under which the centered estimator satisfies a Central Limit Theorem.
These results are based on the asymptotic theory of stationary processes developed by Wu \cite{wudependence}.
This framework makes it possible to deal with most of the statistical procedures from time series, including the estimation of the spectral density. However, the class of processes satisfying Wu's $\mathbb{L}^{2}$ "physical dependence measure" condition is included in the class of processes satisfying Hannan's condition.
In this paper, we prove the consistency of an estimator of the spectral density of the error process under Hannan’s condition.
Compared to Wu’s precise results on the estimation of the spectral density (Central Limit Theorem, rates of convergence, deviation inequalities), our result is only a consistency result, but it holds under Hannan’s condition, that is for most of short-range dependent processes.
Finally, we use these general results to modify the usual Fisher tests in cases where $(\epsilon_{i})_{i \in \mathbb{Z}}$ and the design satisfy the conditions of Hannan, and we perform simulations with different models.
For these simulations, we need to choose how many covariance terms have to be estimated. In this paper, this number is chosen by considering only the autocovariance graph of the residuals.
Developing a data-driven criterion would be more satisfying. This is probably a very difficult question in such a general context; for this reason it is left out of the scope of the present paper.
The paper is organized as follows. In Section $2$, we recall Hannan’s Central Limit Theorem for the least square estimator, and we define the class of « regular designs » (we also give many examples of such designs).
In Section $3$, we focus on the estimation of the spectral density of the error process under Hannan's condition.
In Section $4$, some examples of stationary processes satisfying Hannan's condition are presented.
Finally, Section $5$ is devoted to the correction of the usual Fisher tests in our dependent context, and some simulations are presented.
\section{Hannan's theorem and regular design}
\subsection{Notations and definitions}
Let us recall the equation of the linear regression model:
\begin{equation}
Y = X\beta + \epsilon,
\label{-1}
\end{equation}
where $X$ is a deterministic design matrix and $\epsilon$ is an error process defined on a probability space ($\Omega, \mathcal{F}, \mathbb{P}$). Let $X_{.,j}$ be the column $j$ of the matrix $X$, and $x_{i,j}$ the real number at the row $i$ and the column $j$, where $j$ is in $\{1, \ldots, p\}$ and $i$ in $\{1, \ldots, n\}$. The random vectors $Y$ and $\epsilon$ belong to $\mathbb{R}^{n}$ and $\beta$ is a $p \times 1$ vector of unknown parameters.
Let $\left \| . \right \|_{2}$ be the usual euclidean norm on $\mathbb{R}^{n}$, and $\left \| . \right \|_{\mathbb{L}^{p}}$ be the $\mathbb{L}^{p}$-norm on $\Omega$, defined for any random variable $Z$ by: $\left \| Z \right \|_{\mathbb{L}^{p}} = \left[ \mathbb{E} \left( |Z|^{p} \right) \right]^{\frac{1}{p}}$. We say that $Z$ is in $\mathbb{L}^{p}(\Omega)$ if $\left[ \mathbb{E} \left( |Z|^{p} \right) \right]^{\frac{1}{p}} < \infty$.
The error process $(\epsilon_{i})_{i \in \mathbb{Z}}$ is assumed to be strictly stationary with zero mean. Moreover, for all $i$ in $\mathbb{Z}$, $\epsilon_{i}$ is supposed to be in $\mathbb{L}^{2}(\Omega)$. More precisely, the error process satisfies, for all $i$ in $\mathbb{Z}$:
\[\epsilon_{i} = \epsilon_{0} \circ \mathbb{T}^{i},\]
where $\mathbb{T}: \Omega \rightarrow \Omega$ is a bijective bimeasurable transformation preserving the probability measure $\mathbb{P}$. Note that any strictly stationary process can be represented in this way.
Let ($\mathcal{F}_{i}$)$_{i \in \mathbb{Z}}$ be a non-decreasing filtration built as follows, for all $i$:
\[\mathcal{F}_{i} = \mathbb{T}^{-i}(\mathcal{F}_{0}).\]
where $\mathcal{F}_{0}$ is a sub-$\sigma$-algebra of $\mathcal{F}$ such that $\mathcal{F}_{0} \subseteq \mathbb{T}^{-1}(\mathcal{F}_{0})$. For instance, one can choose the past $\sigma$-algebra before time $0$: $\mathcal{F}_{0} = \sigma(\epsilon_{k}, k \leq 0)$, and then $\mathcal{F}_{i} = \sigma(\epsilon_{k}, k \leq i)$. In that case, $\epsilon_{0}$ is $\mathcal{F}_{0}$-measurable.
As in Hannan, we shall always suppose that $\mathcal{F}_{-\infty} = \underset{i \in \mathbb{Z}}{\bigcap} \mathcal{F}_{i}$ is trivial. Moreover, $\epsilon_{0}$ is assumed to be $\mathcal{F}_{\infty}$-measurable. These assumptions imply that the $\epsilon_{i}$'s are all regular random variables in the following sense:
\begin{defi}[Regular random variable]
Let $Z$ be a random variable in $\mathbb{L}^{1}(\Omega)$. We say that $Z$ is regular with respect to the filtration $(\mathcal{F}_{i})_{i \in \mathbb{Z}}$ if $\mathbb{E}(Z | \mathcal{F}_{-\infty}) = \mathbb{E}(Z)$ almost surely and if $Z$ is $\mathcal{F}_{\infty}$-measurable.
\end{defi}
This implies that there exists a spectral density $f$ for the error process, defined on $[-\pi, \pi]$. The autocovariance function $\gamma$ of the process $\epsilon$ then satisfies:
\[\gamma(k) = \mathrm{Cov} (\epsilon_{m}, \epsilon_{m+k}) = \mathbb{E}(\epsilon_{m}\epsilon_{m+k}) = \int_{-\pi}^{\pi} e^{i k \lambda} f(\lambda) d\lambda.\]
\subsection{Hannan's Central Limit Theorem}
Let $\hat{\beta}$ be the usual least square estimator for the unknown vector $\beta$.
Hannan \cite{hannan73clt} has shown a Central Limit Theorem for $\hat{\beta}$ when the error process is stationary. In this section, the conditions for applying this theorem are recalled.
Let $(P_{j})_{j \in \mathbb{Z}}$ be a family of projection operators, defined for all $j$ in $\mathbb{Z}$ and for any $Z$ in $\mathbb{L}^{2}(\Omega)$ by:
\[P_{j}(Z) = \mathbb{E}(Z | \mathcal{F}_{j}) - \mathbb{E}(Z | \mathcal{F}_{j-1}).\]
We shall always assume that Hannan's condition on the error process is satisfied:
\begin{equation}
\sum_{i \in \mathbb{Z}} \left \| P_{0}(\epsilon_{i}) \right \|_{\mathbb{L}^{2}} < +\infty.
\tag{C1}
\label{0}
\end{equation}
Note that this condition implies that:
\begin{equation}
\sum_{k \in \mathbb{Z}} \left| \gamma(k) \right| < \infty,
\label{0bis}
\end{equation}
(see for instance \cite{dmv2007weak}).
Hannan's condition provides a very general framework for stationary processes. The hypothesis~\eqref{0} is a sharp condition to have a Central Limit Theorem for the partial sum sequence (see the paper of Dedecker, Merlevède and Voln\'y \cite{dmv2007weak} for more details). Notice that the condition~\eqref{0bis} implies that the error process is short-range dependent.
However, Hannan's condition is satisfied for most short-range dependent stationary processes. In particular, it is less restrictive than the well-known condition of Gordin \cite{gordin1969central}. Moreover, the property of $2$-strong stability introduced by Wu \cite{wu2005nonlinear} is more restrictive than Hannan's condition. This property of $2$-strong stability will be recalled in Section $4$, where large classes of examples will be fully described.
Let us now recall Hannan’s assumptions on the design. Let us introduce:
\begin{equation}
d_{j}(n) = \left \| X_{.,j} \right \|_{2} = \sqrt{\sum_{i=1}^{n} x_{i, j}^{2}},
\end{equation}
and let $D(n)$ be the diagonal matrix with diagonal term $d_{j}(n)$ for $j$ in $\{1, \ldots, p\}$.
Following Hannan, we also require that the columns of the design $X$ satisfy the following conditions:
\begin{equation}
\forall j \in \{1, \ldots,p\}, \qquad \lim_{n \rightarrow \infty} d_{j}(n) = \infty,
\tag{C2}
\label{1}
\end{equation}
and:
\begin{equation}
\forall j, l \in \{1, \ldots, p\}, \qquad \lim_{n \rightarrow \infty} \frac{\sup_{1 \leq i \leq n} \left | x_{i,j} \right |}{d_{j}(n)} = 0.
\tag{C3}
\label{2}
\end{equation}
Moreover, we assume that the following limits exist:
\begin{equation}
\forall j, l \in \{1, \ldots, p\}, \qquad \rho_{j,l}(k) = \lim_{n \rightarrow \infty} \sum_{m=1}^{n-k} \frac{x_{m, j} x_{m+k,l}}{d_{j}(n)d_{l}(n)}.
\tag{C4}
\label{3}
\end{equation}
Notice that there is a misprint in Hannan’s paper (the supremum is missing on
condition~\eqref{2}). Note that Conditions~\eqref{1} and~\eqref{2} correspond to the usual Lindeberg conditions for linear statistics in the i.i.d. case. In the dependent case, we also need Condition~\eqref{3}.
The $p \times p$ matrix formed by the coefficients $\rho_{j,l}(k)$ is called $R(k)$:
\begin{equation}
R(k) = [\rho_{j,l}(k)] = \int_{-\pi}^{\pi} e^{i k \lambda} F_{X}(d\lambda),
\label{4}
\end{equation}
where $F_{X}$ is the spectral measure associated with the matrix $R(k)$. The matrix $R(0)$ is supposed to be positive definite:
\begin{equation}
R(0) > 0.
\tag{C5}
\label{4bis}
\end{equation}
Let then $F$ and $G$ be the matrices:
\begin{equation}
F = \frac{1}{2\pi} \int_{-\pi}^{\pi} F_{X}(d\lambda),
\label{5}
\end{equation}
\begin{equation}
G = \frac{1}{2\pi} \int_{-\pi}^{\pi} F_{X}(d\lambda) \otimes f(\lambda).
\label{6}
\end{equation}
The Central Limit Theorem for the regression parameter, due to Hannan \cite{hannan73clt}, can be stated as follows:
\begin{theo}
Let $(\epsilon_{i})_{i \in \mathbb{Z}}$ be a stationary process with zero mean. Assume that $\mathcal{F}_{-\infty}$ is trivial, $\epsilon_{0}$ is $\mathcal{F}_{\infty}$-measurable, and that the sequence $(\epsilon_{i})_{i \in \mathbb{Z}}$ satisfies Hannan's condition~\eqref{0}. Assume that the design $X$ satisfies the conditions~\eqref{1},~\eqref{2},~\eqref{3} and~\eqref{4bis}.
Then:
\begin{equation}
D(n)(\hat{\beta} - \beta) \xrightarrow[n \rightarrow \infty]{\mathcal{L}} \mathcal{N}(0, F^{-1}GF^{-1}).
\label{7}
\end{equation}
Furthermore, there is the convergence of second order moment: \footnote{The transpose of a matrix $X$ is denoted by $X^{t}$.}
\begin{equation}
\mathbb{E} \left( D(n) (\hat{\beta} -\beta) (\hat{\beta} -\beta)^{t} D(n)^{t} \right) \xrightarrow[n \rightarrow \infty]{} F^{-1}GF^{-1}.
\label{9}
\end{equation}
\label{8}
\end{theo}
\subsection{Regular design}
Theorem~\ref{8} is very general because it includes a very large class of designs. In this paper, we will focus on the case where the design is regular in the following sense:
\begin{defi}[Regular design]
A fixed design $X$ is called regular if, for any $j, l$ in $\{1, \ldots, p\}$, the coefficients $\rho_{j,l}(k)$ do not depend on $k$.
\end{defi}
A large class of regular designs is the one for which the columns are regularly varying sequences. Let us recall the definition of regularly varying sequences:
\begin{defi}[Regularly varying sequence \cite{seneta2006regularly}]
A sequence $S(\cdot)$ is regularly varying if and only if it can be written as:
\[S(i) = i^{\alpha} L(i),\]
where $-\infty < \alpha < \infty$ and $L(\cdot)$ is a slowly varying sequence.
\end{defi}
This includes the case of polynomial regression, where the columns are of the form: $x_{i,j} = i^{j}$.
\begin{prop}
Assume that each column $X_{.,j}$ is regularly varying with parameter $\alpha_{j}$.
If the parameters $\alpha_{j}$ are all strictly greater than $-\frac{1}{2}$, then Conditions~\eqref{1}, \eqref{2} and \eqref{3} on the design are satisfied. Moreover, for all $j$ and $l$ in $\{1, \ldots, p\}$, the coefficients $\rho_{j,l}(k)$ do not depend on $k$ and are equal to $\frac{\sqrt{2\alpha_{j}+1} \sqrt{2\alpha_{l}+1}}{\alpha_{j}+\alpha_{l}+1}$. Thereby, the design is regular, and~\eqref{4bis} is satisfied provided $\alpha_{j} \neq \alpha_{l}$ for any distinct $j, l$ in $\{1, \ldots, p\}$.
\label{9ajout}
\end{prop}
Another important class of regular designs is the class of ANOVA-type designs. An ANOVA design is represented by a matrix whose column vectors are orthogonal to one another. The coordinates of each column are either $0$ or $1$, with consecutive sequences of $1$'s. The numbers of $0$'s and $1$'s in each column tend to infinity as $n$ tends to infinity.
Note that a design whose columns are either ANOVA or regularly varying is again a regular design.
\subsection{The asymptotic covariance matrix for regular design}
For regular design, the asymptotic covariance matrix is easy to compute. Actually, we shall see that it is the same as in the case where the errors are independent up to a multiplicative factor.
More precisely, the usual variance term $\sigma^{2} = \mathbb{E}(\epsilon_{0}^{2})$ should be replaced by the sum of the covariances: $\sum_{k} \gamma(k)$.
Since the coefficients $\rho_{j,l}(k)$ do not depend on $k$, the matrix $R(k) = [\rho_{j,l}(k)]$ is equal to $R(0) = [\rho_{j,l}(0)]$ for every $k$, and the spectral measure $F_{X}$ is the product of a Dirac mass at $0$, denoted by $\delta_{0}$, with the matrix $R(0)$; consequently $F_{X} = \delta_{0}R(0)$.
Thereby the matrix $F$ and $G$ can be computed explicitly:
\begin{equation}
F = \frac{1}{2\pi} \int_{-\pi}^{\pi} F_{X}(d\lambda) = \frac{1}{2\pi} \int_{-\pi}^{\pi} R(0) \delta_{0}(d\lambda) = \frac{1}{2\pi} R(0),
\label{10}
\end{equation}
\begin{equation}
G = \frac{1}{2\pi} \int_{-\pi}^{\pi} F_{X}(d\lambda) \otimes f(\lambda) = \frac{1}{2\pi} \int_{-\pi}^{\pi} R(0) \otimes f(\lambda) \delta_{0}(d\lambda) = \frac{1}{2\pi} R(0) \otimes f(0) = f(0)F.
\label{11}
\end{equation}
Thus, using~\eqref{10} and~\eqref{11}, the covariance matrix can be written as:
\[F^{-1}GF^{-1} = f(0)F^{-1}.\]
The connection between the spectral density and the autocovariance function is well known:
\[f(\lambda) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \gamma(k) e^{- i k \lambda}, \qquad \lambda \in [-\pi, \pi],\]
and at the point $0$:
\[f(0) =\frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \gamma(k).\]
Thereby the covariance matrix can be written:
\[f(0) F^{-1} = \left( \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \gamma(k) \right) F^{-1} = \left( \sum_{k=-\infty}^{\infty} \gamma(k) \right) R(0)^{-1},\]
since $F = \frac{R(0)}{2 \pi}$ and $F^{-1} = 2 \pi R(0)^{-1}$.\\
In conclusion, for regular design the following corollary holds:
\begin{Cor}
Under the assumptions of Theorem~\ref{8}, if moreover the design $X$ is regular, then:
\begin{equation}
D(n)(\hat{\beta} - \beta) \xrightarrow[n \rightarrow \infty]{\mathcal{L}} \mathcal{N} \left( 0, \left( \sum_{k=-\infty}^{\infty} \gamma(k) \right) R(0)^{-1} \right),
\label{15}
\end{equation}
and we have the convergence of the second order moment:
\begin{equation}
\mathbb{E} \left( D(n) (\hat{\beta} - \beta) (\hat{\beta} - \beta)^{t} D(n)^{t} \right) \xrightarrow[n \rightarrow \infty]{} \left( \sum_{k=-\infty}^{\infty} \gamma(k) \right) R(0)^{-1}.
\label{15bis}
\end{equation}
\label{15ter}
\end{Cor}
One can see that, in the case of regular design, the asymptotic covariance matrix is similar to the one obtained when the random variables $(\epsilon_{i})$ are i.i.d.; the variance term $\sigma^{2}$ is simply replaced by the series of covariances.
Actually the matrix $R(0)^{-1}$ is the normalised limit of the matrix $(X^{t}X)^{-1}$. It is formed by the coefficients $\rho_{j,l}(0)$, which are, in this case, the limit of the normalised scalar products between the columns of the design.
Thus, to obtain confidence regions and tests for $\beta$, an estimator of the covariance matrix is needed. More precisely, it is necessary to estimate the quantity:
\begin{equation}
\sum_{k=-\infty}^{\infty} \gamma(k).
\label{16}
\end{equation}
\section{Estimation of the series of covariances}
Properties of spectral density estimates have been discussed in many classical textbooks on time series; see, for instance, Anderson \cite{anderson2011statistical}, Brillinger \cite{brillinger2001time}, Brockwell and Davis \cite{brockwell2013time}, Grenander and Rosenblatt \cite{grenander2008statistical}, Priestley \cite{priestley1981spectral} and Rosenblatt \cite{rosenblatt2012stationary} among others. But many of these results require restrictive conditions on the underlying processes (linear structure or strong mixing conditions). Wu \cite{wuspectraldensity} has developed an asymptotic theory for the spectral density estimate $f_{n}(\lambda)$, defined in~\eqref{17}, which extends the applicability of spectral analysis to nonlinear and/or non-strong mixing processes. In particular, he proved a Central Limit Theorem and deviation inequalities for $f_{n}(\lambda)$. However, to show his results, Wu uses a notion of dependence that is more restrictive than Hannan's.
In this section, we propose an estimator of the spectral density under Hannan's dependence condition. Here, contrary to the precise results of Wu (Central Limit Theorem, deviation inequalities), we shall only focus on the consistency of the estimator.
Let us first consider a preliminary random function defined as follows, for $\lambda$ in $[-\pi,\pi]$:
\begin{equation}
f_{n}(\lambda) = \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left( \frac{|k|}{c_{n}} \right) \hat{\gamma}_{k} e^{i k \lambda},
\label{17}
\end{equation}
where:
\begin{equation}
\hat{\gamma}_{k} = \frac{1}{n} \sum_{j=1}^{n-|k|} \epsilon_{j} \epsilon_{j+|k|}, \qquad 0 \leq |k| \leq (n-1),
\label{18}
\end{equation}
and $K$ is the kernel defined by:
\[
\left\{
\begin{array}{r c l}
K(x) &=& 1 \qquad \qquad \text{if } |x| \leq 1\\
K(x) &=& 2 - |x| \qquad \text{if } 1 \leq |x| \leq 2\\
K(x) &=& 0 \qquad \qquad \text{if } |x| > 2.\\
\end{array}
\right.
\]
The sequence of positive integers $c_{n}$ is such that $c_{n}$ tends to infinity and $\frac{c_{n}}{n}$ tends to $0$ when $n$ tends to infinity.
In our context, $(\epsilon_{i})_{i \in \{1, \ldots, n\}}$ is not observed. Only the residuals are available:
\[\hat{\epsilon}_{i} = Y_{i} - (x_{i})^{t} \hat{\beta} = Y_{i} - \sum_{j=1}^{p} x_{i,j} \hat{\beta}_{j},\]
because only the data $Y$ and the design $X$ are observed. Consequently, we consider the following estimator:
\begin{equation}
f_{n}^{\ast}(\lambda) = \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left( \frac{|k|}{c_{n}} \right) \hat{\gamma}_{k}^{\ast} e^{i k \lambda}, \qquad \lambda \in [-\pi,\pi],
\label{19}
\end{equation}
where:
\[\hat{\gamma}_{k}^{\ast} = \frac{1}{n} \sum_{j=1}^{n-|k|} \hat{\epsilon}_{j} \hat{\epsilon}_{j+|k|}, \qquad 0 \leq |k| \leq (n-1).\]
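A direct implementation of~\eqref{19} is straightforward. The sketch below (in Python; the function names are ours, not from the paper) evaluates the kernel $K$ and the estimator $f_{n}^{\ast}(\lambda)$ from the residuals, following~\eqref{18} and~\eqref{19}.

```python
import numpy as np

def K(x):
    """Trapezoidal kernel of (19): 1 on [-1, 1], linear down to 0 on [1, 2]."""
    ax = np.abs(x)
    return np.where(ax <= 1, 1.0, np.where(ax <= 2, 2.0 - ax, 0.0))

def spectral_density_estimate(resid, c_n, lam):
    """Evaluate f_n^*(lambda) from the residuals hat{eps}_1, ..., hat{eps}_n."""
    n = len(resid)
    ks = np.arange(-(n - 1), n)
    # hat{gamma}_k^* = (1/n) * sum_{j=1}^{n-|k|} resid[j] * resid[j+|k|]
    gammas = np.array([resid[: n - abs(k)] @ resid[abs(k):] / n for k in ks])
    # f_n^*(lambda) = (1/2pi) * sum_k K(|k|/c_n) * hat{gamma}_k^* * e^{ik lambda}
    return np.real(np.sum(K(ks / c_n) * gammas * np.exp(1j * ks * lam))) / (2 * np.pi)
```

Since $\hat{\gamma}_{-k}^{\ast} = \hat{\gamma}_{k}^{\ast}$ and the kernel is even, the sum is real; note that, as for any truncated kernel estimator, the result may be negative for small samples.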
Theorem~\ref{50} concludes this section:
\begin{theo}
Let $c_{n}$ be a sequence of positive integers such that $c_{n} \rightarrow \infty$ as $n$ tends to infinity, and:
\begin{equation}
c_{n} \mathbb{E} \left( \left| \epsilon_{0} \right|^{2} \left( 1 \wedge \frac{c_{n}}{n} \left| \epsilon_{0} \right|^{2} \right) \right) \xrightarrow[n \rightarrow \infty]{} 0.
\label{48}
\end{equation}
Then, under the assumptions of Theorem~\ref{8}:
\begin{equation}
\sup_{\lambda \in [-\pi,\pi]} \left \| f_{n}^{\ast}(\lambda) - f(\lambda) \right \|_{\mathbb{L}^{1}} \xrightarrow[n \rightarrow \infty]{} 0.
\label{49}
\end{equation}
\label{50}
\end{theo}
\begin{Rem}
If $\epsilon_{0}$ is in $\mathbb{L}^{2}$, then there exists $c_{n} \rightarrow \infty$ such that~\eqref{48} holds.
\end{Rem}
\begin{Rem}
Let us suppose that the random variable $\epsilon_{0}$ is such that $\mathbb{E} \left( \left | \epsilon_{0} \right |^{\delta+2} \right) < \infty$, with $\delta \in ]0,2]$. Since for all real $x$, $1 \wedge |x|^{2} \leq |x|^{\delta}$, we have:
\[c_{n} \mathbb{E} \left( \left| \epsilon_{0} \right|^{2} \left( 1 \wedge \frac{c_{n}}{n} \left| \epsilon_{0} \right|^{2} \right) \right) \leq c_{n} \mathbb{E} \left( \left| \epsilon_{0} \right|^{2} \frac{c_{n}^{\delta/2}}{n^{\delta/2}} |\epsilon_{0}|^{\delta} \right) \leq \frac{c_{n}^{1+\delta/2}}{n^{\delta/2}} \mathbb{E} \left( \left | \epsilon_{0} \right |^{\delta+2} \right).\]
Thus if $c_{n}$ satisfies $\frac{c_{n}^{1+\delta/2}}{n^{\delta/2}} \xrightarrow[n \rightarrow \infty]{} 0$, then~\eqref{48} holds.
In particular, if the random variable $\epsilon_{0}$ has a fourth order moment, then the condition on $c_{n}$ is $\frac{c_{n}^{2}}{n} \xrightarrow[n \rightarrow \infty]{} 0$.
\end{Rem}
Combining Corollary~\ref{15ter} and Theorem~\ref{50}, we obtain the following result:
\begin{Cor}
Under the assumptions of Corollary \ref{15ter}, and if $f(0) > 0$, then:
\begin{equation}
\frac{R(0)^{\frac{1}{2}}}{\sqrt{2\pi f_{n}^{\ast}(0) }} D(n)(\hat{\beta} - \beta) \xrightarrow[n \rightarrow \infty]{\mathcal{L}} \mathcal{N}(0,I_{p}),
\label{51}
\end{equation}
where $I_{p}$ is the $p \times p$ identity matrix.
\label{52}
\end{Cor}
\section{Examples of stationary processes}
In this section, we present some classes of stationary processes satisfying Hannan's condition.
\subsection{Functions of Linear processes}
A large class of stationary processes for which one can check Hannan's condition is the class of smooth functions of linear processes generated by i.i.d. random variables.
Let us take $\Omega = \mathbb{R}^{\mathbb{Z}}$ and $\mathbb{P} = \mu^{\otimes \mathbb{Z}}$, where $\mu$ is a probability measure on $\mathbb{R}$. Let $(\eta_{i})_{i \in \mathbb{Z}}$ be a sequence of i.i.d. random variables with marginal distribution $\mu$. Let $(a_{i})_{i \in \mathbb{Z}}$ be a sequence of real numbers in $l^{1}$, and assume that $\sum_{i \in \mathbb{Z}} a_{i} \eta_{i}$ is defined almost surely. We focus on functions of real-valued linear processes:
\[\epsilon_{k} = f \left( \sum_{i \in \mathbb{Z}} a_{i} \eta_{k-i} \right) - \mathbb{E} \left( f \left( \sum_{i \in \mathbb{Z}} a_{i} \eta_{k-i} \right) \right).\]
The random variable $\epsilon_{0}$ is assumed to be square integrable, and it is regular with respect to the $\sigma$-algebras $\mathcal{F}_{i} = \sigma (\eta_{j}, j \leq i)$.
Let us define the modulus of continuity of $f$ on the interval $[-M, M]$ by:
\[\omega_{\infty,f}(h,M) = \sup_{|t| \leq h, |x| \leq M, |x+t| \leq M} \left | f(x+t) -f(x) \right | .\]
Let $(\eta_{i}')_{i \in \mathbb{Z}}$ be an independent copy of $(\eta_{i})_{i \in \mathbb{Z}}$, and let:
\[M_{k} = \max \left\{ \left | \sum_{i \in \mathbb{Z}} a_{i} \eta_{i}' \right |, \left | a_{k} \eta_{0} + \sum_{i \neq k} a_{i} \eta_{i}' \right | \right\}.\]
According to Section $5$ in the paper of Dedecker, Merlevède, Voln\'y \cite{dmv2007weak}, if the following condition holds:
\begin{equation}
\sum_{k \in \mathbb{Z}} \Big \| \omega_{\infty,f}(|a_{k}| |\eta_{0}|, M_{k}) \wedge \left \| \epsilon_{0} \right \|_{\infty} \Big \|_{\mathbb{L}^{2}} < \infty,
\label{80}
\end{equation}
then Hannan's condition holds.
An interesting application arises when the function $f$ is $\gamma$-Hölder on any compact set: if $\omega_{\infty,f}(h,M) \leq C h^{\gamma} M^{\alpha}$ for some $C > 0$, $\gamma \in ]0,1]$ and $\alpha \geq 0$, then~\eqref{80} holds as soon as $\sum |a_{k}|^{\gamma} < \infty$ and $\mathbb{E}(|\eta_{0}|^{2(\alpha + \gamma)}) < \infty$.
\subsection{$2$-strong stability}
Let us recall in this section the framework used by Wu. We consider stationary processes of the form:
\[\epsilon_{i} = H(\ldots, \eta_{i-1}, \eta_{i}),\]
where $\eta_{i}$, $i$ in $\mathbb{Z}$, are i.i.d. random variables and $H$ is a measurable function.
Assume that $\epsilon_{0}$ belongs to $\mathbb{L}^{2}$, and let $\eta'_{0}$ be distributed as $\eta_{0}$ and independent of $(\eta_{i})$. Let us define the physical dependence measure in $\mathbb{L}^{2}$ \cite{wudependence}, for $j \geq 0$:
\[\delta_{2}(j) = \left \| \epsilon_{j} - \epsilon_{j}^{\ast} \right \|_{\mathbb{L}^{2}},\]
where $\epsilon_{j}^{\ast}$ is a coupled version of $\epsilon_{j}$ with $\eta_{0}$ in the latter being replaced by $\eta'_{0}$:
\[\epsilon_{j}^{\ast} = H(\ldots, \eta_{-1}, \eta'_{0}, \eta_{1}, \ldots, \eta_{j-1}, \eta_{j}).\]
The sequence $(\epsilon_{i})_{i \in {\mathbb Z}}$ is said to be $2$-strong stable if:
\[\Delta_{2} = \sum_{j=0}^{\infty} \delta_{2}(j) < \infty.\]
As a consequence of Theorem $1$, $(i)-(ii)$ of Wu \cite{wu2005nonlinear}, we infer that if
$(\epsilon_{i})_{i \in {\mathbb Z}}$ is $2$-strong stable, then it satisfies Hannan's condition with respect to the filtration $\mathcal{F}_{i} = \sigma(\eta_{j}, j \leq i)$.
Many examples of $2$-strong stable processes are presented in the paper by Wu \cite{wu2005nonlinear}. We also refer to \cite{wudependence} for other examples.
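For a causal linear process $\epsilon_{j} = \sum_{i \geq 0} a_{i} \eta_{j-i}$, replacing $\eta_{0}$ by the independent copy $\eta_{0}'$ only changes the term with coefficient $a_{j}$, so that $\delta_{2}(j) = |a_{j}| \, \| \eta_{0} - \eta_{0}' \|_{\mathbb{L}^{2}}$. A minimal sketch of this computation (assuming standard normal innovations, for which $\| \eta_{0} - \eta_{0}' \|_{\mathbb{L}^{2}} = \sqrt{2}$; the function name is ours):

```python
import numpy as np

def delta2_linear(a):
    """delta_2(j) for the causal linear process eps_j = sum_i a[i] * eta_{j-i}
    with i.i.d. standard normal innovations: the coupled version eps_j^* differs
    from eps_j by a[j] * (eta_0 - eta'_0), whose L2 norm is |a[j]| * sqrt(2)."""
    return np.abs(np.asarray(a, dtype=float)) * np.sqrt(2.0)

# Geometric coefficients: Delta_2 = sqrt(2) * sum_j 2^{-j} is finite,
# so such a process is 2-strong stable (hence satisfies Hannan's condition).
Delta2 = delta2_linear(0.5 ** np.arange(20)).sum()
```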
\subsection{Conditions in the style of Gordin}
According to Proposition $5$ of Dedecker, Merlevède, Voln\'y \cite{dmv2007weak}, Hannan's condition holds if the error process satisfies the two following conditions:
\begin{eqnarray}
\sum_{k=1}^{\infty} \frac{1}{\sqrt{k}} \left \| \mathbb{E}(\epsilon_{k} | \mathcal{F}_{0}) \right \|_{\mathbb{L}^{2}} < \infty \label{81} \\
\sum_{k=1}^{\infty} \frac{1}{\sqrt{k}} \left \| \epsilon_{-k} - \mathbb{E}(\epsilon_{-k} | \mathcal{F}_{0}) \right \|_{\mathbb{L}^{2}} < \infty. \label{82}
\end{eqnarray}
These conditions are weaker than the well-known conditions of Gordin \cite{gordin1969central}, under which a martingale + coboundary decomposition holds in $\mathbb{L}^{2}$.
An application is given in the next subsection.
\subsection{Weakly dependent coefficients}
Hannan's condition holds if the error process is weakly dependent. In this case, the process $(\epsilon_{i})_{i \in \mathbb{Z}}$ is adapted to the filtration $(\mathcal{F}_{i})$, so Condition~\eqref{82} is automatically satisfied.
Let us recall the definitions of weak dependence coefficients, introduced by Dedecker and Prieur \cite{dedecker_prieur}; for all integer $k \geq 0$:
\[\tilde{\phi}(k) = \tilde{\phi}(\mathcal{F}_{0}, \epsilon_{k}) = \sup_{t \in \mathbb{R}} \left \| \mathbb{P}(\epsilon_{k} \leq t | \mathcal{F}_{0}) - \mathbb{P}(\epsilon_{k} \leq t) \right \|_{\infty},\]
and:
\[\tilde{\alpha}(k) = \tilde{\alpha}(\mathcal{F}_{0}, \epsilon_{k}) = \sup_{t \in \mathbb{R}} \left \| \mathbb{P}(\epsilon_{k} \leq t | \mathcal{F}_{0}) - \mathbb{P}(\epsilon_{k} \leq t) \right \|_{\mathbb{L}^{1}}.\]
If $(\epsilon_{i})_{i \in \mathbb{Z}}$ is $\tilde{\phi}$-dependent and is in $\mathbb{L}^{p}$ with $p \in [2, +\infty[$, then by Hölder's inequality:
\[\left \| \mathbb{E}(\epsilon_{k} | \mathcal{F}_{0}) \right \|_{\mathbb{L}^{2}} \leq \left \| \mathbb{E}(\epsilon_{k} | \mathcal{F}_{0}) \right \|_{\mathbb{L}^{p}} \leq \sup_{Z \in B_{\frac{p}{p-1}}(\mathcal{F}_{0})} \mathbb{E}(Z \epsilon_{k}) \leq 2 \tilde{\phi}(k)^{\frac{p-1}{p}} \left \| \epsilon_{0} \right \|_{\mathbb{L}^{p}},\]
where for all $q \in ]1,2]$, $B_{q}(\mathcal{F}_{0})$ is the set of $\mathcal{F}_{0}$-measurable random variables $Z$ such that $\left \| Z \right \|_{\mathbb{L}^{q}} \leq 1$.
Consequently, if:
\begin{equation}
\sum_{k=1}^{\infty} \frac{1}{\sqrt{k}} \tilde{\phi}(k)^{\frac{p-1}{p}} < \infty,
\end{equation}
then the condition~\eqref{81} holds, and Hannan's condition is satisfied.\\
Now we look at $\tilde{\alpha}$-dependent sequences.
We denote by $Q_{\epsilon}$ the generalized inverse of the tail function $x \mapsto \mathbb{P}(|\epsilon_{0}| > x)$. If $(\epsilon_{i})_{i \in \mathbb{Z}}$ is $\tilde{\alpha}$-dependent and if there exists $r \in ]2, +\infty[$ such that $\mathbb{P}(|\epsilon_{0}| \geq t) \leq t^{-r}$ for all $t > 0$, then, by the Cauchy-Schwarz inequality and Rio's inequality (Theorem $1.1$ in \cite{rio1999theorie}), we get:
\[\left \| \mathbb{E}(\epsilon_{k} | \mathcal{F}_{0}) \right \|_{\mathbb{L}^{2}} = \sup_{Z \in B_{2}(\mathcal{F}_{0})} \mathbb{E}(Z \epsilon_{k}) \leq 2 \left( \int_{0}^{\tilde{\alpha}(k)} Q_{\epsilon_{k}}^{2}(u) du \right)^{\frac{1}{2}}.\]
Moreover, the tail assumption yields $Q_{\epsilon_{k}}(u) \leq u^{-\frac{1}{r}}$, so that:
\[\int_{0}^{\tilde{\alpha}(k)} Q_{\epsilon_{k}}^{2}(u) du \leq \int_{0}^{\tilde{\alpha}(k)} \frac{1}{u^{\frac{2}{r}}} du = \frac{r}{r-2} \, \tilde{\alpha}(k)^{1-\frac{2}{r}}.\]
Hence, if:
\begin{equation}
\sum_{k=1}^{\infty} \frac{\tilde{\alpha}(k)^{\frac{1}{2} - \frac{1}{r}}}{\sqrt{k}} < \infty,
\label{16bis}
\end{equation}
then~\eqref{81} is true, and Hannan's condition is satisfied.\\
Notice that all we have written for $\tilde{\alpha}$-dependent sequences is also true for $\alpha$-mixing processes in the sense of Rosenblatt \cite{rosenblatt2012stationary}.
\section{Tests and Simulations}
We consider the linear regression model~\eqref{-1}, and we assume that Hannan's condition~\eqref{0} as well as the conditions~\eqref{1} to~\eqref{4bis} on the design are satisfied.
We also assume that $\epsilon_{0}$ is $\mathcal{F}_{\infty}$-measurable and that $\mathcal{F}_{-\infty}$ is trivial.
With these conditions, the usual Fisher tests can be modified and adapted to the case where the errors are short-range dependent.
As usual, the null hypothesis $H_{0}$ means that the parameter $\beta$ belongs to a vector space with dimension strictly smaller than $p$, and we denote by $H_{1}$ the alternative hypothesis (meaning that $H_{0}$ is not true, but~\eqref{-1} holds).
In the case of regular design, thanks to Corollary~\ref{52}, the usual Fisher tests of $H_{0}$ versus $H_{1}$ can be corrected by replacing the estimator of $\sigma^{2} = \mathbb{E}(\epsilon_{0}^{2})$ by an estimator of $\sum_{k} \gamma(k)$.
Recall that if the errors are i.i.d. Gaussian random variables, the test statistic is:
\begin{equation}
F = \frac{1}{p-p_{0}} \times \frac{RSS_{0} - RSS} {\hat{\sigma}^{2}_{\epsilon}}.
\label{53}
\end{equation}
In this expression, the integer $p_{0}$ is the dimension of the model under the $H_{0}$-hypothesis, $RSS$ is the sum of the squares of the residuals for the complete model~\eqref{-1} (equal to $\left \| \hat{\epsilon} \right \|_{2}^{2}$), $RSS_{0}$ is the corresponding quantity under $H_{0}$, and $\hat{\sigma}^{2}_{\epsilon}$ is the estimator of the variance of $\epsilon_{0}$ (equal to $\frac{RSS}{n-p}$).
Under $H_{0}$, the quantity $F$ follows a Fisher distribution with parameters $(p-p_{0}, n-p)$.
In the case where the design satisfies the conditions~\eqref{1} to~\eqref{4bis}, if the random variables $(\epsilon_{i})$ are i.i.d. but do not necessarily follow a Gaussian distribution, the test statistic is the same as~\eqref{53} and converges under $H_{0}$ to a $\chi^{2}$-distribution:
\[F \xrightarrow[n \rightarrow \infty]{\mathcal{L}} \frac{\chi^{2} (p-p_{0})}{p-p_{0}}.\]
Now if the error process $(\epsilon_{i})_{i \in \mathbb{Z}}$ is stationary, the test statistic must be corrected as follows:
\begin{equation}
\tilde{F}_{c} = \frac{1}{p-p_{0}} \times \frac{RSS_{0} - RSS} {2 \pi f_{n}^{\ast}(0)},
\label{53bis}
\end{equation}
where $f_{n}^{\ast}$ is defined in~\eqref{19}. Thanks to Corollary~\ref{52}, it converges to a $\chi^{2}$-distribution:
\[\tilde{F}_{c} \xrightarrow[n \rightarrow \infty]{\mathcal{L}} \frac{\chi^{2} (p-p_{0})}{p-p_{0}}.\]
In practice, we shall only estimate a finite number of $\gamma(k)$, say $a_{n}$.
For the simulations, we shall use the graph of the empirical autocovariances of the residuals to choose $a_{n}$, and instead of~\eqref{53bis}, we shall consider the statistic:
\begin{equation}
F_{c} = \frac{1}{p-p_{0}} \times \frac{RSS_{0} - RSS} {\hat{\gamma}_{0} + \sum_{k=1}^{a_{n}} \hat{\gamma}_{k}},
\label{53ter}
\end{equation}
with $\hat{\gamma}_{k}$ defined in~\eqref{18}.
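The corrected statistic~\eqref{53ter} can be computed from the least squares fits of the full and null models. A minimal sketch (in Python with numpy least squares; the function and variable names are ours):

```python
import numpy as np

def corrected_fisher(Y, X_full, X_null, a_n):
    """Corrected Fisher statistic (53ter): the usual variance estimator is
    replaced by hat{gamma}_0 + sum_{k=1}^{a_n} hat{gamma}_k, computed on the
    residuals of the full model."""
    n, p = X_full.shape
    p0 = X_null.shape[1]
    beta_full, *_ = np.linalg.lstsq(X_full, Y, rcond=None)
    beta_null, *_ = np.linalg.lstsq(X_null, Y, rcond=None)
    resid = Y - X_full @ beta_full
    rss = np.sum(resid ** 2)            # RSS of the complete model
    rss0 = np.sum((Y - X_null @ beta_null) ** 2)  # RSS under H_0
    # hat{gamma}_k as in (18), computed on the residuals of the full model
    gam = [resid[: n - k] @ resid[k:] / n for k in range(a_n + 1)]
    return (rss0 - rss) / ((p - p0) * (gam[0] + sum(gam[1:])))
```

Under $H_{0}$, the value returned would then be compared with the quantiles of $\chi^{2}(p-p_{0})/(p-p_{0})$.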
\subsection{Example 1: A non-mixing autoregressive process}
The process $(\epsilon_{1}, \ldots, \epsilon_{n})$ is simulated according to the AR(1) equation:
\[\epsilon_{k+1} = \frac{1}{2}(\epsilon_{k} + \eta_{k+1}),\]
where $\epsilon_{1}$ is uniformly distributed over $[-\frac{1}{2},\frac{1}{2}]$, and $(\eta_{i})_{i \geq 2}$ is a sequence of i.i.d. random variables, independent of $\epsilon_{1}$, such that $\mathbb{P}(\eta_{i} = -\frac{1}{2}) = \mathbb{P}(\eta_{i} = \frac{1}{2}) = \frac{1}{2}$. In this example, $\mathcal{F}_{i} = \sigma(\eta_{k}, k \leq i)$, and the $\sigma$-algebra $\mathcal{F}_{-\infty}$ is trivial.
The transition kernel of the chain $(\epsilon_{i})_{i \geq 1}$ is:
\[K(f)(x) = \frac{1}{2} \left( f \left( \frac{x}{2} + \frac{1}{4} \right) + f \left( \frac{x}{2} - \frac{1}{4} \right) \right),\]
and the uniform distribution on $[-\frac{1}{2},\frac{1}{2}]$ is the unique invariant distribution of $K$. Hence, the chain $(\epsilon_{i})_{i \geq 1}$ is strictly stationary.
Furthermore, it is not $\alpha$-mixing in the sense of Rosenblatt \cite{bradley1985basic}, but it is $\tilde{\phi}$-dependent. Indeed, one can prove that the coefficient $\tilde{\phi}$ of the chain $(\epsilon_{i})_{i \geq 1}$ decreases geometrically \cite{dedecker_prieur}:
\[\tilde{\phi}(k) \leq 2^{-k}.\]
Consequently Hannan's condition is satisfied, and the Fisher tests can be corrected as indicated above.\\
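This non-mixing AR(1) chain is easy to simulate. A minimal sketch (in Python; the generator and function name are ours): the initial value is drawn uniformly on $[-\frac{1}{2},\frac{1}{2}]$ and the innovations take the values $\pm \frac{1}{2}$ with probability $\frac{1}{2}$ each.

```python
import numpy as np

def simulate_ar1(n, rng):
    """Simulate eps_{k+1} = (eps_k + eta_{k+1}) / 2 with eps_1 uniform on
    [-1/2, 1/2] and i.i.d. eta_k = +/- 1/2 with probability 1/2 each."""
    eps = np.empty(n)
    eps[0] = rng.uniform(-0.5, 0.5)
    eta = rng.choice([-0.5, 0.5], size=n)
    for k in range(1, n):
        eps[k] = 0.5 * (eps[k - 1] + eta[k])
    return eps
```

Note that the interval $[-\frac{1}{2},\frac{1}{2}]$ is invariant: if $|\epsilon_{k}| \leq \frac{1}{2}$ then $|\epsilon_{k+1}| \leq \frac{1}{2}$, consistently with the invariant uniform distribution.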
The first model simulated with this error process is the following linear regression model, for all $i$ in $\{1, \ldots, n\}$:
\[Y_{i} = \beta_{0} + \beta_{1} i + 10 \epsilon_{i}.\]
The random variables $\epsilon_{i}$ are multiplied by $10$ to increase the variance. The coefficient $\beta_{0}$ is chosen equal to $3$.
We test the hypothesis $H_{0}$: $\beta_{1} = 0$, against the hypothesis $H_{1}$: $\beta_{1} \neq 0$.
The estimated level of the Fisher test will be studied for different choices of $n$ and of $a_{n}$, the number of covariance terms considered.
Under the hypothesis $H_{0}$, the same Fisher test is carried out $2000$ times. Then we look at the frequency of rejection of the test under $H_{0}$, that is to say the estimated level of the test. Let us specify that we want an estimated level close to $5\%$.\\
$\bullet$ Case $\beta_{1} = 0$ and $a_{n} = 0$ (no correction): \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & $200$ & $400$ & $600$ & $800$ & $1000$ \\
\hline
Estimated level & $0.2745$ & $0.2655$ & $0.2615$ & $0.2845$ & $0.2445$ \\
\hline
\end{tabular} \\
\end{center}
Here, since $a_{n} = 0$, we do not estimate any of the covariance terms. The result is that the estimated levels are too large. This means that the test will reject the null hypothesis too often. \\
The quantity $a_{n}$ may be chosen by analyzing the graph of the empirical autocovariances, Figure~\ref{fig_reg_simple_AR1}, obtained with $n = 600$. For this example, this graph suggests a choice of $a_{n} = 2$ or $3$. \\
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{reg_simple_AR1.png}
\end{center}
\caption{Empirical autocovariances for the first model of Example 1, n = 600.}
\label{fig_reg_simple_AR1}
\end{figure}
$\bullet$ Case $\beta_{1} = 0$, $a_{n} = 2$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & $200$ & $400$ & $600$ & $800$ & $1000$ \\
\hline
Estimated level & $0.0805$ & $0.086$ & $0.0745$ & $0.0675$ & $0.077$ \\
\hline
\end{tabular} \\
\end{center}
As suggested by the graph of the empirical autocovariances, the choice $a_{n} = 2$ gives a better estimated level than $a_{n}=0$. \\
$\bullet$ Case $\beta_{1} = 0$, $a_{n} = 3$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & $200$ & $400$ & $600$ & $800$ & $1000$ \\
\hline
Estimated level & $0.078$ & $0.0725$ & $0.074$ & $0.059$ & $0.0625$ \\
\hline
\end{tabular} \\
\end{center}
Here, we see that the choice $a_{n} = 3$ also works well, and seems even slightly better than $a_{n} = 2$. If one increases the size of the samples $n$ and the number of estimated covariance terms $a_{n}$, the estimated level gets closer to $5\%$. If $n = 5000$ and $a_{n} = 4$, the estimated level is around $0.05$.\\
$\bullet$ Case $\beta_{1} = 0.005$, $a_{n} = 3$: \\
In this example, $H_{0}$ is not satisfied. We choose $\beta_{1}$ equal to $0.005$, and we perform the same tests as above ($N=2000$) to estimate the power of the test.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & $200$ & $400$ & $600$ & $800$ & $1000$ \\
\hline
Estimated power & $0.2255$ & $0.728$ & $0.9945$ & $1$ & $1$ \\
\hline
\end{tabular} \\
\end{center}
As one can see, the estimated power is always greater than $0.05$, as expected.
Still as expected, the estimated power increases with the size of the samples. For $n = 200$, the power of the test is around $0.2255$, and for $n = 800$, the power is around $1$.
As soon as $n = 800$, the test always rejects the $H_{0}$-hypothesis.\\
The second model considered is the following linear regression model, for all $i$ in $\{1, \ldots, n\}$:
\[Y_{i} = \beta_{0} + \beta_{1} i + \beta_{2} i^{2} + 10 \epsilon_{i}.\]
Here, we test the hypothesis $H_{0}$: $\beta_{1} = \beta_{2} = 0$ against $H_{1}$: $\beta_{1} \neq 0$ or $\beta_{2} \neq 0$. The coefficient $\beta_{0}$ is equal to $3$, and we use the same simulation scheme as above. \\
$\bullet$ Case $\beta_{1} = \beta_{2} = 0$ and $a_{n} = 0$ (no correction): \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & $200$ & $400$ & $600$ & $800$ & $1000$ \\
\hline
Estimated level & $0.402$ & $0.378$ & $0.385$ & $0.393$ & $0.376$ \\
\hline
\end{tabular} \\
\end{center}
As for the first simulation, if $a_{n} = 0$ the test will reject the null hypothesis too often.\\
As suggested by the graph of the empirical autocovariances (Figure~\ref{fig_reg_mult_AR1}), the choice $a_{n} = 4$ should give a better result for the estimated level.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{reg_mult_AR1.png}
\end{center}
\caption{Empirical autocovariances for the second model of Example 1, n = 600.}
\label{fig_reg_mult_AR1}
\end{figure}
$\bullet$ Case $\beta_{1} = \beta_{2} = 0$, $a_{n} = 4$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & $200$ & $400$ & $600$ & $800$ & $1000$ \\
\hline
Estimated level & $0.103$ & $0.076$ & $0.069$ & $0.056$ & $0.063$ \\
\hline
\end{tabular} \\
\end{center}
Here, we see that the choice $a_{n} = 4$ works well. For $n=1000$, the estimated level is around $0.06$. If $n = 2000$ and $a_{n} = 4$, the estimated level is around $0.05$.\\
$\bullet$ Case $\beta_{1} = 0.005$, $\beta_{2} = 0$, $a_{n} = 4$: \\
Now, we study the estimated power of the test. The coefficient $\beta_{1}$ is chosen equal to $0.005$ and $\beta_{2}$ is zero.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$n$ & $200$ & $400$ & $600$ & $800$ & $1000$ \\
\hline
Estimated power & $0.2145$ & $0.634$ & $0.9855$ & $1$ & $1$ \\
\hline
\end{tabular} \\
\end{center}
As expected, the estimated power increases with the size of the samples, and it is around $1$ as soon as $n = 800$.\\
The third model that we consider is the following linear regression model, for all $i$ in $\{1, \ldots, n\}$:
\[Y_{i} = \beta_{0} + \beta_{1} \sqrt i + \beta_{2} \log(i) + 10 \epsilon_{i}.\]
We test again the hypothesis $H_{0}$: $\beta_{1} = \beta_{2} = 0$ against $H_{1}$: $\beta_{1} \neq 0$ or $\beta_{2} \neq 0$. The coefficient $\beta_{0}$ is equal to $3$. The conditions of the simulation are the same as above, except for the size of the samples: for this model, $n$ must be larger than before to obtain an estimated level close to $5$\% with the correction. \\
$\bullet$ Case $\beta_{1} = \beta_{2} = 0$ and $a_{n} = 0$ (no correction): \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.4435$ & $0.4415$ & $0.427$ & $0.3925$ & $0.397$ & $0.4075$ \\
\hline
\end{tabular} \\
\end{center}
As for the first and second simulations, if $a_{n} = 0$ the test will reject the null hypothesis too often.\\
As suggested by the graph of the empirical autocovariances (Figure~\ref{fig_reg_mult_2_AR1}), the choice $a_{n} = 4$ should give a better result for the estimated level.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{reg_mult_2_AR1.png}
\end{center}
\caption{Empirical autocovariances for the third model of Example 1, n = 2000.}
\label{fig_reg_mult_2_AR1}
\end{figure}
$\bullet$ Case $\beta_{1} = \beta_{2} = 0$, $a_{n} = 4$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.106$ & $0.1$ & $0.078$ & $0.072$ & $0.077$ & $0.068$ \\
\hline
\end{tabular} \\
\end{center}
For $a_{n} = 4$ and $n = 5000$, the estimated level is around $0.07$. If $n = 10000$, it is around $5$\%. \\
Then, we study the estimated power of the test when $\beta_{1}$ or $\beta_{2}$ is not equal to $0$. \\
$\bullet$ Case $\beta_{1} = 0$, $\beta_{2} = 0.2$, $a_{n} = 4$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated power & $0.2505$ & $0.317$ & $0.4965$ & $0.6005$ & $0.725$ & $0.801$ \\
\hline
\end{tabular} \\
\end{center}
As expected, the estimated power increases with the size of the samples, and it is around $0.8$ as soon as $n = 5000$.\\
\subsection{Example 2: Intermittent maps}
For $\gamma$ in $]0,1[$, we consider the intermittent map $\theta_{\gamma}$ from $[0,1]$ to $[0,1]$, introduced by Liverani, Saussol and Vaienti \cite{liverani1999probabilistic}:
\[\theta_{\gamma}(x) =
\left\{
\begin{array}{r c l}
x(1 + 2^{\gamma} x^{\gamma}) \qquad \text{if} \ x \in [0, 1/2[ \\
2x - 1 \qquad \text{if} \ x \in [1/2, 1].\\
\end{array}
\right.\]
It follows from \cite{liverani1999probabilistic} that there exists a unique absolutely continuous $\theta_{\gamma}$-invariant probability measure $\nu_{\gamma}$, with density $h_{\gamma}$.
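Orbits of the intermittent map are obtained by simple iteration. A minimal sketch (in Python; the function names are ours), where in an actual simulation the starting point would be drawn according to the invariant density $h_{\gamma}$, while here we simply take a point of $]0,1[$:

```python
def theta(x, gamma):
    """One step of the Liverani-Saussol-Vaienti map theta_gamma on [0, 1]:
    x * (1 + 2^gamma * x^gamma) on [0, 1/2[, and 2x - 1 on [1/2, 1]."""
    if x < 0.5:
        return x * (1.0 + (2.0 * x) ** gamma)  # 2^gamma * x^gamma = (2x)^gamma
    return 2.0 * x - 1.0

def orbit(x0, gamma, n):
    """First n iterates theta_gamma(x0), ..., theta_gamma^n(x0)."""
    xs = []
    x = x0
    for _ in range(n):
        x = theta(x, gamma)
        xs.append(x)
    return xs
```

The orbit stays in $[0,1]$: on $[0,\frac{1}{2}[$ the image is smaller than $2x < 1$, and on $[\frac{1}{2},1]$ the map is the doubling branch $2x-1$.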
Let us briefly describe the Markov chain associated with $\theta_{\gamma}$, and its properties. Let first $K_{\gamma}$ be the Perron-Frobenius operator of $\theta_{\gamma}$ with respect to $\nu_{\gamma}$, defined as follows: for any functions $u$, $v$ in $\mathbb{L}^{2}([0,1], \nu_{\gamma})$:
\[\nu_{\gamma}(u \cdot v \circ \theta_{\gamma}) = \nu_{\gamma}(K_{\gamma}(u) \cdot v).\]
The operator $K_{\gamma}$ is a transition kernel, and $\nu_{\gamma}$ is invariant by $K_{\gamma}$. Let now $(\xi_{i})_{i \geq 1}$ be a stationary Markov chain with invariant measure $\nu_{\gamma}$ and transition kernel $K_{\gamma}$. It is well-known that on the probability space ($[0,1], \nu_{\gamma}$), the random vector ($\theta_{\gamma}, \theta_{\gamma}^{2}, \ldots, \theta_{\gamma}^{n}$) is distributed as ($\xi_{n}, \xi_{n-1}, \ldots, \xi_{1}$). Now it is proved in \cite{dedecker2010some} that there exists two positive constants $A, B$ such that:
\[\frac{A}{(n+1)^{\frac{1-\gamma}{\gamma}}} \leq \tilde{\alpha}_{\xi}(n) \leq \frac{B}{(n+1)^{\frac{1-\gamma}{\gamma}}}\]
Moreover, the chain $(\xi_{i})_{i \geq 1}$ is not $\alpha$-mixing in the sense of Rosenblatt \cite{rosenblatt1956central}.\\
In the following simulations, we consider linear regression models where $\epsilon_{i} = \theta_{\gamma}^{i}$. In our context, the coefficient $\gamma$ must belong to $]0,\frac{1}{2}[$. Indeed, if $\gamma$ is smaller than $\frac{1}{2}$, then Condition~\eqref{16bis} is satisfied. Consequently, Hannan's condition holds and we can apply our results. Note that if $\gamma$ is greater than $\frac{1}{2}$, then the chain $(\xi_{i})$ is long-range dependent (see the introduction in \cite{dedecker2015weak}).
Recall that our results apply only in the short-range dependent case, so we shall only consider the case where $\gamma < \frac{1}{2}$.
For the simulations, the coefficient $\gamma$ is chosen equal to $\frac{1}{4}$. Consequently, $\tilde{\alpha}(n)$ is of order $n^{-3}$, which is quite slow. In addition, if $\mathcal{F}_{i} = \sigma(\xi_{k}, k \leq i)$ then $\mathcal{F}_{-\infty}$ is trivial (see for instance \cite{dedecker2010some}). \\
Note that, in this example, the mean of the errors is not equal to $0$, but this is not an issue, because it only modifies the intercept term in our different models.\\
For the first simulation, we consider the following linear regression model, for all $i$ in $\{1, \ldots, n\}$:
\[Y_{i} = \beta_{0} + \beta_{1} i + 10 \epsilon_{i},\]
where the hypothesis $H_{0}$ is: $\beta_{1} = 0$, and the hypothesis $H_{1}$ is: $\beta_{1} \neq 0$. Again the coefficient $\beta_{0}$ is equal to $3$ and the random variables $\epsilon_{i}$ are multiplied by $10$ to increase the variance.
We shall study the estimated level of the test for different choices of $n$ and $a_{n}$, which is the number of covariance terms considered.
With intermittent maps the convergence is slower; the coefficients $\tilde{\alpha}(n)$ do not decrease geometrically. Thereby we consider larger samples ($n = 500$ to $n = 5000$, sometimes $n = 10000$ or $20000$).
Under the hypothesis $H_{0}$, the same Fisher test is carried out $2000$ times. Then we look at the frequency of rejection of the test under $H_{0}$ (i.e. the estimated level of the test). Let us specify that we want an estimated level close to $5\%$.\\
$\bullet$ Case $\beta_{1} = 0$ and $a_{n} = 0$ (no correction): \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.361$ & $0.365$ & $0.3685$ & $0.371$ & $0.3645$ & $0.349$ \\
\hline
\end{tabular} \\
\end{center}
Here, since $a_{n} = 0$, we do not estimate any of the covariance terms. The result is that the estimated levels are too large. This means that the test will reject the null hypothesis too often.\\
The quantities $a_{n}$ may be chosen by analyzing the graph of the empirical autocovariances (see Figure~\ref{fig_syst_dyn}). In the case of intermittent maps, the number $a_{n}$ should be larger than for the previous example. \\
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{syst_dyn.png}
\end{center}
\caption{Empirical autocovariances for the first model of Example 2, n = 2000.}
\label{fig_syst_dyn}
\end{figure}
$\bullet$ Case $\beta_{1} = 0$, $a_{n} = 5$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.101$ & $0.0805$ & $0.0755$ & $0.073$ & $0.0705$ & $0.0805$ \\
\hline
\end{tabular} \\
\end{center}
$\bullet$ Case $\beta_{1} = 0$, $a_{n} = 6$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.086$ & $0.076$ & $0.0705$ & $0.0635$ & $0.066$ & $0.0675$ \\
\hline
\end{tabular} \\
\end{center}
$\bullet$ Case $\beta_{1} = 0$, $a_{n} = 7$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.09$ & $0.072$ & $0.074$ & $0.0585$ & $0.061$ & $0.06$ \\
\hline
\end{tabular} \\
\end{center}
For small samples ($n=500$), taking $a_{n} = 5$ is enough: increasing $a_{n}$ further does not change the estimated level much, which stays around $0.09$--$0.10$.
But for large samples, $a_{n} = 7$ is better. Indeed, with $n = 5000$ and $a_{n} = 7$, the estimated level is around $0.06$, and for $n = 10000$ it is around $0.05$.
We see here that an automatic criterion to choose $a_{n}$ would be useful. \\
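One simple automatic criterion could keep every lag up to the first empirical autocovariance that falls below a noise threshold. The sketch below is one hypothetical such rule; the threshold $\hat{\gamma}_{0}\sqrt{\log n / n}$ is an assumption for illustration, not a rule used in this paper.

```python
import numpy as np

def choose_a_n(residuals):
    """Hypothetical automatic choice of a_n: keep lags until the
    empirical autocovariance first drops below a noise threshold
    of order gamma_hat(0) * sqrt(log(n) / n)."""
    res = np.asarray(residuals, dtype=float)
    n = res.size
    res = res - res.mean()
    gamma0 = np.mean(res ** 2)
    threshold = gamma0 * np.sqrt(np.log(n) / n)
    a_n = 0
    for k in range(1, n // 4):
        gamma_k = np.mean(res[: n - k] * res[k:])
        if abs(gamma_k) <= threshold:
            break
        a_n = k
    return a_n
```

On strongly correlated residuals this rule selects a positive $a_{n}$, while on independent residuals it typically returns $0$.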
We now study the estimated power of the test when $\beta_{1}$ is different from $0$.\\
$\bullet$ Case $\beta_{1} = 0.0005$, $a_{n} = 6$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated power & $0.1195$ & $0.1865$ & $0.663$ & $0.979$ & $1$ & $1$ \\
\hline
\end{tabular} \\
\end{center}
As one can see, the estimated power is always greater than $0.05$. As expected, the estimated power increases with the size of the samples. For $n = 500$, the power of the test is around $0.12$, and for $n = 4000$, the power is around $1$.
As soon as $n \geq 4000$, the test always rejects the $H_{0}$-hypothesis. \\
The second model considered is the following linear regression model, for all $i$ in $\{1, \ldots, n\}$:
\[Y_{i} = \beta_{0} + \beta_{1} i + \beta_{2} i^{2} + 10 \epsilon_{i}.\]
We test here the hypothesis $H_{0}$: $\beta_{1} = \beta_{2} = 0$ against $H_{1}$: $\beta_{1} \neq 0$ or $\beta_{2} \neq 0$.
The conditions of the simulation are the same as above. \\
$\bullet$ Case $\beta_{1} = \beta_{2} = 0$ and $a_{n} = 0$ (no correction): \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.536$ & $0.506$ & $0.5275$ & $0.5165$ & $0.5055$ & $0.4925$ \\
\hline
\end{tabular} \\
\end{center}
As for the first simulation, if $a_{n} = 0$ the test will reject the null hypothesis too often.\\
As suggested by the graph of the estimated autocovariances, the choice $a_{n} = 6$ or $7$ should give a better result for the estimated level.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.5]{syst_dyn_2.png}
\end{center}
\caption{Empirical autocovariances for the second model of Example 2, $n = 2000$.}
\label{fig_syst_dyn_2}
\end{figure}
$\bullet$ Case $\beta_{1} = \beta_{2} = 0$, $a_{n} = 5$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.1265$ & $0.0905$ & $0.078$ & $0.079$ & $0.079$ & $0.085$ \\
\hline
\end{tabular} \\
\end{center}
$\bullet$ Case $\beta_{1} = \beta_{2} = 0$, $a_{n} = 6$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.1065$ & $0.1$ & $0.0795$ & $0.08$ & $0.0705$ & $0.0685$ \\
\hline
\end{tabular} \\
\end{center}
$\bullet$ Case $\beta_{1} = \beta_{2} = 0$, $a_{n} = 7$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated level & $0.112$ & $0.0815$ & $0.071$ & $0.07$ & $0.0725$ & $0.0615$ \\
\hline
\end{tabular} \\
\end{center}
As in the first example, for small samples $a_{n} = 5$ is enough and it is not necessary to increase it. But for large samples, larger values of $a_{n}$ are required: for $n = 5000$ and $a_{n} = 7$, the estimated level is around $0.06$, and for $n = 20000$ and $a_{n} = 9$, we approach the level $0.05$. \\
We then study the estimated power of the test when $\beta_{1}$ or $\beta_{2}$ is different from $0$. \\
$\bullet$ Case $\beta_{1} = 0.0005$, $\beta_{2} = 0$, $a_{n} = 7$: \\
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$n$ & $500$ & $1000$ & $2000$ & $3000$ & $4000$ & $5000$ \\
\hline
Estimated power & $0.13$ & $0.1675$ & $0.5685$ & $0.964$ & $1$ & $1$ \\
\hline
\end{tabular} \\
\end{center}
As expected, the estimated power increases with the size of the samples, and it is around $1$ as soon as $n \geq 4000$.\\
\newpage
\section{Proofs}
\subsection{Proposition~\ref{9ajout}}
\begin{proof}
Let us define:
\[d_{j}(n) = || X_{.,j} ||_{2} = \sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}}.\]
The condition~\eqref{1} is verified if:
\begin{equation}
\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2} \rightarrow \infty.
\label{proof1}
\end{equation}
When $2 \alpha_{j} < -1$, the series $\sum_{i \geq 1} i^{2 \alpha_{j}} L(i)^{2}$ converges, so~\eqref{proof1} fails.
On the other hand, for $2 \alpha_{j} > -1$, thanks to Proposition $2.2.1$ of Pipiras and Taqqu \cite{pipiras2017long}, we have the following equivalence:
\[\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2} \sim \frac{n^{2 \alpha_{j}+1} L(n)^{2}}{2 \alpha_{j}+1},\]
and this quantity diverges as $n$ tends to infinity.
Thus the condition~\eqref{1} is satisfied if $\alpha_{j}$ is strictly greater than $-\frac{1}{2}$. We also immediately check that~\eqref{2} is satisfied. \\
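The Pipiras--Taqqu equivalence used above is easy to check numerically when $L \equiv 1$; the sketch below compares the partial sum with its asymptotic equivalent (a sanity check only, not part of the proof).

```python
import numpy as np

def partial_sum_ratio(alpha, n):
    """Ratio of sum_{i=1}^{n} i^(2 alpha) to its asymptotic
    equivalent n^(2 alpha + 1) / (2 alpha + 1), for 2 alpha > -1
    and slowly varying part L = 1."""
    i = np.arange(1, n + 1, dtype=float)
    s = np.sum(i ** (2 * alpha))
    return s / (n ** (2 * alpha + 1) / (2 * alpha + 1))
```

For instance, for $\alpha = 0.3$ or $\alpha = -0.2$ and $n = 10^{6}$, the ratio is close to $1$, as expected.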
Now let us compute the coefficients $\rho_{j,l}(k)$ and prove that they do not depend on $k$. For $j, l$ belonging to $\{1, \ldots, p\}$:
\[\sum_{m=1}^{n-k} \frac{x_{m,j}x_{m+k,l}}{d_{j}(n)d_{l}(n)} = \frac{\sum_{m=1}^{n-k} m^{\alpha_{j}} L(m) (m+k)^{\alpha_{l}} L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}},\]
and we have:
\begin{multline}
\frac{\sum_{m=1}^{n-k} m^{\alpha_{j}} L(m) (m+k)^{\alpha_{l}} L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
= \frac{\sum_{m=1}^{n-k} (m^{\alpha_{j}} ((m+k)^{\alpha_{l}} - m^{\alpha_{l}})) L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
+ \frac{\sum_{m=1}^{n-k} m^{\alpha_{j}} m^{\alpha_{l}} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}}.
\label{9bis}
\end{multline}
Let us deal with the first term of the right-hand side in~\eqref{9bis}. If $\alpha_{l} \geq 1$, we get:
\begin{multline*}
\frac{\sum_{m=1}^{n-k} (m^{\alpha_{j}} ((m+k)^{\alpha_{l}} - m^{\alpha_{l}})) L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
\leq \frac{\sum_{m=1}^{n-k} (m^{\alpha_{j}} (k \alpha_{l} (m+k)^{\alpha_{l}-1})) L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
\leq \frac{(k \alpha_{l})\sum_{m=1}^{n-k} m^{\alpha_{j}} (m(1+ \frac{k}{m}))^{\alpha_{l}-1} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}},
\end{multline*}
and because $\frac{k}{m} \leq k$ for every $m \geq 1$:
\begin{multline*}
\frac{(k \alpha_{l})\sum_{m=1}^{n-k} m^{\alpha_{j}} (m(1+ \frac{k}{m}))^{\alpha_{l}-1} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
\leq \frac{(k \alpha_{l})\sum_{m=1}^{n-k} m^{\alpha_{j}} m^{\alpha_{l}-1} (1+ k)^{\alpha_{l}-1} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
\leq \frac{(k \alpha_{l}) (1+ k)^{\alpha_{l}-1} \sum_{m=1}^{n} m^{\alpha_{j} + \alpha_{l} - 1} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2\alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}}.
\end{multline*}
Using again the proposition of Pipiras and Taqqu, we have:
\begin{multline*}
\frac{(k \alpha_{l}) (1+ k)^{\alpha_{l}-1} \sum_{m=1}^{n} m^{\alpha_{j} + \alpha_{l} - 1} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
\sim \frac{(k \alpha_{l}) (1+ k)^{\alpha_{l}-1} \frac{n^{\alpha_{j} + \alpha_{l}}}{\alpha_{j}+\alpha_{l}} L(n) L'(n+k)}{\sqrt{\frac{n^{2 \alpha_{j}+1}}{2 \alpha_{j}+1} L(n)^{2}} \sqrt{\frac{n^{2 \alpha_{l}+1}}{2 \alpha_{l}+1} L'(n)^{2}}} \\
\sim \frac{\sqrt{2 \alpha_{j}+1} \sqrt{2 \alpha_{l}+1} (k \alpha_{l}) (1+ k)^{\alpha_{l}-1}}{\alpha_{j}+\alpha_{l}} \frac{1}{n} \frac{L'(n+k)}{L'(n)},
\end{multline*}
and this quantity tends to $0$ as $n$ tends to infinity.
With the same idea, if $0 < \alpha_{l} < 1$ and again for the first term on the right-hand side in~\eqref{9bis}, we have:
\begin{multline*}
\frac{\sum_{m=1}^{n-k} m^{\alpha_{j}} ((m+k)^{\alpha_{l}} - m^{\alpha_{l}}) L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
\leq \frac{\sum_{m=1}^{n-k} (m^{\alpha_{j}} (k \alpha_{l} m^{\alpha_{l}-1})) L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
\leq \frac{(k \alpha_{l}) \sum_{m=1}^{n} m^{\alpha_{j} + \alpha_{l} - 1} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} .
\end{multline*}
If $\alpha_{j}+\alpha_{l} > 0$, we can use the equivalence of Pipiras and Taqqu and show that it converges to $0$:
\begin{eqnarray*}
\frac{(k \alpha_{l}) \sum_{m=1}^{n} m^{\alpha_{j} + \alpha_{l} - 1} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}}
&\sim & \frac{(k \alpha_{l}) \sqrt{2 \alpha_{j}+1} \sqrt{2\alpha_{l}+1}}{\alpha_{j}+\alpha_{l}} \frac{1}{n} \frac{L'(n+k)}{L'(n)}.
\end{eqnarray*}
If $\alpha_{j}+\alpha_{l} < 0$, the quantity converges to $0$, because the numerator is summable while the denominator tends to infinity. Furthermore, if $\alpha_{j}+\alpha_{l} = 0$, the quantity also converges to $0$, since the numerator grows at most logarithmically (up to slowly varying factors) while the denominator is of order $n$ times a slowly varying function.
Finally, if $-\frac{1}{2} < \alpha_{l} < 0$, we have:
\begin{eqnarray*}
\frac{\sum_{m=1}^{n-k} (m^{\alpha_{j}} ((m+k)^{\alpha_{l}} - m^{\alpha_{l}})) L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}}
&\leq & \frac{\sum_{m=1}^{n-k} (m^{\alpha_{j}} \left| (m+k)^{\alpha_{l}} - m^{\alpha_{l}} \right| ) L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
&\leq & \frac{\sum_{m=1}^{n-k} (m^{\alpha_{j}} (k | \alpha_{l} | m^{\alpha_{l}-1})) L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}} \\
&\leq & \frac{(k | \alpha_{l} |) \sum_{m=1}^{n} m^{\alpha_{j}+\alpha_{l}-1} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}},
\end{eqnarray*}
and we get the same results as above.\\
For the second term on the right-hand side in~\eqref{9bis}, we use again the proposition of Pipiras and Taqqu:
\begin{eqnarray*}
\frac{\sum_{m=1}^{n-k} m^{\alpha_{j}+\alpha_{l}} L(m) L'(m+k)}{\sqrt{\sum_{i=1}^{n} i^{2 \alpha_{j}} L(i)^{2}} \sqrt{\sum_{q=1}^{n} q^{2 \alpha_{l}} L'(q)^{2}}}
&\sim & \frac{\frac{(n-k)^{\alpha_{j}+\alpha_{l}+1}}{\alpha_{j}+\alpha_{l}+1} L(n-k) L'(n)}{\sqrt{\frac{n^{2\alpha_{j}+1}}{2 \alpha_{j}+1} L(n)^{2}} \sqrt{\frac{n^{2 \alpha_{l}+1}}{2 \alpha_{l}+1} L'(n)^{2}}} \\
&\sim & \frac{\sqrt{2 \alpha_{j}+1} \sqrt{2\alpha_{l}+1}}{\alpha_{j}+\alpha_{l}+1} \frac{(n-k)^{\alpha_{j}+\alpha_{l}+1}}{n^{\alpha_{j}+1/2} n^{\alpha_{l}+1/2}} \frac{L(n-k)}{L(n)},
\end{eqnarray*}
and this quantity converges to $\frac{\sqrt{2\alpha_{j}+1} \sqrt{2 \alpha_{l}+1}}{\alpha_{j}+\alpha_{l}+1}$.
Thereby the coefficients $\rho_{j,l}(k)$ are constants and equal to $\frac{\sqrt{2\alpha_{j}+1} \sqrt{2\alpha_{l}+1}}{\alpha_{j}+\alpha_{l}+1}$.
\end{proof}
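The value of this limit, and the fact that it does not depend on $k$, can also be checked numerically with the slowly varying functions taken equal to $1$ (a sanity check, not part of the proof).

```python
import numpy as np

def rho_hat(alpha_j, alpha_l, k, n):
    """Normalized cross-product of the columns x_{.,j} and x_{.,l}
    at lag k, with L = L' = 1."""
    m = np.arange(1, n - k + 1, dtype=float)
    num = np.sum(m ** alpha_j * (m + k) ** alpha_l)
    i = np.arange(1, n + 1, dtype=float)
    d_j = np.sqrt(np.sum(i ** (2 * alpha_j)))
    d_l = np.sqrt(np.sum(i ** (2 * alpha_l)))
    return num / (d_j * d_l)

def rho_limit(alpha_j, alpha_l):
    """Limit sqrt(2 a_j + 1) sqrt(2 a_l + 1) / (a_j + a_l + 1)."""
    return (np.sqrt(2 * alpha_j + 1) * np.sqrt(2 * alpha_l + 1)
            / (alpha_j + alpha_l + 1))
```

For large $n$, `rho_hat` is close to `rho_limit` and essentially unchanged when $k$ varies, as the proposition asserts.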
\subsection{Theorem~\ref{50}}
\begin{proof}
The proof of Theorem~\ref{50} is split into two parts. Indeed, notice that:
\[\left \| f_{n}^{\ast}(\lambda) - f(\lambda) \right \|_{\mathbb{L}^{1}} \leq \left \| f_{n}^{\ast}(\lambda) - f_{n}(\lambda) \right \|_{\mathbb{L}^{1}} + \left \| f_{n}(\lambda) - f(\lambda) \right \|_{\mathbb{L}^{1}}.\]
The theorem then follows from Propositions~\ref{200bis} and~\ref{201bis}:
\begin{prop}
Under the assumptions of Theorem~\ref{50}, we have:
\begin{equation}
\lim_{n \rightarrow \infty} \sup_{\lambda \in [-\pi,\pi]} \left \| f_{n}(\lambda) - f(\lambda) \right \|_{\mathbb{L}^{1}} = 0
\label{200}
\end{equation}
\label{200bis}
\end{prop}
\begin{prop}
Under the assumptions of Theorem~\ref{50}, we have:
\begin{equation}
\lim_{n \rightarrow \infty} \sup_{\lambda \in [-\pi,\pi]} \left \| f_{n}^{\ast}(\lambda) - f_{n}(\lambda) \right \|_{\mathbb{L}^{1}} = 0
\label{201}
\end{equation}
\label{201bis}
\end{prop}
\end{proof}
\subsubsection{Proposition~\ref{200bis}}
\begin{proof}
Without loss of generality, $c_{n}$ is chosen such that $2c_{n} \leq n-1$.
Let $m$ be an integer such that $1 \leq 2m \leq 2c_{n} \leq n-1$. For all $i \in \mathbb{Z}$, define:
\begin{equation}
\tilde{\epsilon}_{i,m} = \mathbb{E}(\epsilon_{i} | \mathcal{F}_{i+m}) - \mathbb{E}(\epsilon_{i} | \mathcal{F}_{i-m}).
\label{202}
\end{equation}
Notice that $\mathbb{E}(\tilde{\epsilon}_{i,m}) = 0$. The associated spectral density estimate is defined as follows:
\[\tilde{f}_{n}^{m}(\lambda) = \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left(\frac{|k|}{c_{n}} \right) \hat{\tilde{\gamma}}_{k,m} e^{i k \lambda}, \quad \lambda \in [-\pi,\pi],\]
where:
\[\hat{\tilde{\gamma}}_{k,m} = \frac{1}{n} \sum_{j=1}^{n-|k|} \tilde{\epsilon}_{j,m} \tilde{\epsilon}_{j+|k|,m}, \quad |k| \leq n-1.\]
By the triangle inequality, it follows that:
\begin{eqnarray*}
\left \| f_{n} \left( \lambda \right) - f \left( \lambda \right) \right \|_{\mathbb{L}^{1}}
&\leq & \left \| f_{n}(\lambda) - \tilde{f}_{n}^{m}(\lambda) \right \|_{\mathbb{L}^{1}} + \left \| \tilde{f}_{n}^{m}(\lambda) - \mathbb{E}(\tilde{f}_{n}^{m}(\lambda)) \right \|_{\mathbb{L}^{1}} \\
&& +\: \left | \mathbb{E}(\tilde{f}_{n}^{m}(\lambda)) - \mathbb{E}(f_{n}(\lambda)) \right | + \left \| \mathbb{E}(f_{n}(\lambda)) - f(\lambda) \right \|_{\mathbb{L}^{1}} \\
&\leq & 2\left \| \tilde{f}_{n}^{m}(\lambda) - f_{n}(\lambda) \right \|_{\mathbb{L}^{1}} + \left \| \tilde{f}_{n}^{m}(\lambda) - \mathbb{E}(\tilde{f}_{n}^{m}(\lambda)) \right \|_{\mathbb{L}^{1}} + \left \| \mathbb{E}(f_{n}(\lambda)) - f(\lambda) \right \|_{\mathbb{L}^{1}}
\end{eqnarray*}
because $\left | \mathbb{E}(\tilde{f}_{n}^{m}(\lambda)) - \mathbb{E}(f_{n}(\lambda)) \right | \leq \left \| \tilde{f}_{n}^{m}(\lambda) - f_{n}(\lambda) \right \|_{\mathbb{L}^{1}}$.\\
The proposition then follows from Lemmas~\ref{205bis},~\ref{206bis} and~\ref{207bis}:
\begin{lem}
Under the assumptions of Theorem~\ref{50}, we have:
\begin{equation}
\lim_{n \rightarrow \infty} \sup_{\lambda \in [-\pi,\pi]} \left \| \mathbb{E}(f_{n}(\lambda)) - f(\lambda) \right \|_{\mathbb{L}^{1}} = 0
\label{205}
\end{equation}
\label{205bis}
\end{lem}
\begin{lem}
Under the assumptions of Theorem~\ref{50}, we have:
\begin{equation}
\lim_{m \rightarrow \infty} \limsup_{n \rightarrow \infty} \sup_{\lambda \in [-\pi,\pi]} \left \| \tilde{f}_{n}^{m}(\lambda) - f_{n}(\lambda) \right \|_{\mathbb{L}^{1}} = 0
\label{206}
\end{equation}
\label{206bis}
\end{lem}
\begin{lem}
Under the assumptions of Theorem~\ref{50}, we have:
\begin{equation}
\lim_{m \rightarrow \infty} \limsup_{n \rightarrow \infty} \sup_{\lambda \in [-\pi,\pi]} \left \| \tilde{f}_{n}^{m}(\lambda) - \mathbb{E}(\tilde{f}_{n}^{m}(\lambda)) \right \|_{\mathbb{L}^{1}} = 0
\label{207}
\end{equation}
\label{207bis}
\end{lem}
\end{proof}
\begin{proof}[\textbf{Proof of Lemma~\ref{205bis}}]
By the properties of expectation and by stationarity:
\[\mathbb{E} \left( f_{n}(\lambda) \right) = \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left( \frac{|k|}{c_{n}} \right) \mathbb{E} (\hat{\gamma}_{k}) e^{i k \lambda} = \frac{1}{2\pi} \sum_{|k| \leq n-1} \left( \frac{n-|k|}{n} \right) K \left( \frac{|k|}{c_{n}} \right) \gamma_{k} e^{i k \lambda}.\]
Since $c_{n} \xrightarrow[n \rightarrow \infty]{} \infty$ and $\lim_{u \rightarrow 0} K(u) = 1$, the dominated convergence theorem, together with $\sum_{k} | \gamma_{k} | < + \infty$, shows that~\eqref{205} holds.
\end{proof}
\begin{proof}[\textbf{Proof of Lemma~\ref{206bis}}]
Let $S_{n}$ and $\tilde{S}_{n}^{m}$ be defined as:
\[S_{n}(\lambda) = \sum_{k=1}^{n} \epsilon_{k} e^{i k \lambda}\]
\[\tilde{S}_{n}^{m}(\lambda) = \sum_{k=1}^{n} \tilde{\epsilon}_{k,m} e^{i k \lambda}.\]
Because $(a+b)^{2} \leq 2a^{2} + 2b^{2}$, we have:
\begin{eqnarray*}
\frac{1}{n} \left \| S_{n}(\lambda) - \tilde{S}_{n}^{m}(\lambda) \right \|_{\mathbb{L}^{2}}^{2}
&=& \frac{1}{n} \left \| \sum_{k=1}^{n} \epsilon_{k} e^{i k \lambda} - \sum_{k=1}^{n} \tilde{\epsilon}_{k,m} e^{i k \lambda} \right \|_{\mathbb{L}^{2}}^{2} \\
&=& \frac{1}{n} \left \| \sum_{k=1}^{n} \epsilon_{k} e^{i k \lambda} - \left( \sum_{k=1}^{n} \mathbb{E}(\epsilon_{k} | \mathcal{F}_{k+m}) e^{i k \lambda} - \mathbb{E}(\epsilon_{k} | \mathcal{F}_{k-m}) e^{i k \lambda} \right) \right \|_{\mathbb{L}^{2}}^{2} \\
&=& \frac{1}{n} \left \| \sum_{k=1}^{n} (\epsilon_{k} - \mathbb{E}(\epsilon_{k} | \mathcal{F}_{k+m})) e^{i k \lambda} + \sum_{k=1}^{n} \mathbb{E}(\epsilon_{k} | \mathcal{F}_{k-m}) e^{i k \lambda} \right \|_{\mathbb{L}^{2}}^{2} \\
&\leq & \frac{2}{n} \left \| \sum_{k=1}^{n} (\epsilon_{k} - \mathbb{E}(\epsilon_{k} | \mathcal{F}_{k+m})) e^{i k \lambda} \right \|_{\mathbb{L}^{2}}^{2} + \frac{2}{n} \left \| \sum_{k=1}^{n} \mathbb{E}(\epsilon_{k} | \mathcal{F}_{k-m}) e^{i k \lambda} \right \|_{\mathbb{L}^{2}}^{2}.
\end{eqnarray*}
We get for the first term of the right-hand side:
\begin{eqnarray*}
\frac{1}{n} \left \| \sum_{k=1}^{n} (\epsilon_{k} - \mathbb{E}(\epsilon_{k} | \mathcal{F}_{k+m})) e^{i k \lambda} \right \|_{\mathbb{L}^{2}}^{2}
&=& \frac{1}{n} \left \| \sum_{k=1}^{n} \sum_{j=k+m+1}^{\infty} P_{j}(\epsilon_{k}) e^{i k \lambda} \right \|_{\mathbb{L}^{2}}^{2} \\
&=& \frac{1}{n} \left \| \sum_{j=m+2}^{\infty} \sum_{k=1}^{n} P_{j}(\epsilon_{k}) e^{i k \lambda} \textbf{1}_{\{j \geq k+m+1\}} \right \|_{\mathbb{L}^{2}}^{2} \\
&=& \frac{1}{n} \sum_{j=m+2}^{\infty} \left \| \sum_{k=1}^{n} P_{j}(\epsilon_{k}) e^{i k \lambda} \textbf{1}_{\{k-j \leq -(m+1)\}} \right \|_{\mathbb{L}^{2}}^{2} \\
&\leq & \frac{1}{n} \sum_{j=m+2}^{\infty} \left( \sum_{k=1}^{n} \left \| P_{j}(\epsilon_{k}) \right \|_{\mathbb{L}^{2}} \textbf{1}_{\{k-j \leq -(m+1)\}} \right)^{2}, \\
\end{eqnarray*}
using Pythagoras' theorem and the triangle inequality. It follows that:
\begin{eqnarray}
\frac{1}{n} \sum_{j=m+2}^{\infty} \left( \sum_{k=1}^{n} \left \| P_{j}(\epsilon_{k}) \right \|_{\mathbb{L}^{2}} \textbf{1}_{\{k-j \leq -(m+1)\}} \right)^{2}
&\leq & \frac{1}{n} \sum_{j=m+2}^{\infty} \left( \sum_{k=1}^{n} \left \| P_{0}(\epsilon_{k-j}) \right \|_{\mathbb{L}^{2}} \textbf{1}_{\{k-j \leq -(m+1)\}} \right)^{2} \notag \\
&\leq & \frac{1}{n} \sum_{j=m+2}^{\infty} \left( \sum_{r=-\infty}^{-(m+1)} \left \| P_{0}(\epsilon_{r}) \right \|_{\mathbb{L}^{2}} \textbf{1}_{\{1-j \leq r \leq n-j\}} \right)^{2} \notag \\
&\leq & \frac{1}{n} \sum_{j=m+2}^{\infty} \left( \textbf{1}_{\{1-r \leq j \leq n-r\}} \sum_{r=-\infty}^{-(m+1)} \left \| P_{0}(\epsilon_{r}) \right \|_{\mathbb{L}^{2}} \right)^{2} \notag \\
&\leq & \left( \sum_{r=-\infty}^{-(m+1)} \left \| P_{0}(\epsilon_{r}) \right \|_{\mathbb{L}^{2}} \right)^{2}. \label{210}
\end{eqnarray}
With the same arguments, the second term of the right-hand side satisfies the inequality:
\begin{equation}
\frac{1}{n} \left \| \sum_{k=1}^{n} \mathbb{E}(\epsilon_{k} | \mathcal{F}_{k-m}) e^{i k \lambda} \right \|_{\mathbb{L}^{2}}^{2} \leq \left( \sum_{r=m}^{\infty} \left \| P_{0}(\epsilon_{r}) \right \|_{\mathbb{L}^{2}} \right)^{2}.
\label{211}
\end{equation}
Consequently, combining~\eqref{210} and~\eqref{211}, we obtain that:
\[\sup_{\lambda \in [-\pi,\pi]} \frac{1}{n} \left \| S_{n}(\lambda) - \tilde{S}_{n}^{m}(\lambda) \right \|_{\mathbb{L}^{2}}^{2} \leq 2 \left( \sum_{r=-\infty}^{-(m+1)} \left \| P_{0}(\epsilon_{r}) \right \|_{\mathbb{L}^{2}} \right)^{2} + 2 \left( \sum_{r=m}^{\infty} \left \| P_{0}(\epsilon_{r}) \right \|_{\mathbb{L}^{2}} \right)^{2}.\]
Then, since $\sum_{i=-\infty}^{\infty} \left \| P_{0}(\epsilon_{i}) \right \|_{\mathbb{L}^{2}} < +\infty$, we have this first result:
\begin{equation}
\lim_{m \rightarrow \infty} \limsup_{n \rightarrow \infty} \sup_{\lambda \in [-\pi,\pi]} \frac{1}{n} \left \| S_{n}(\lambda) - \tilde{S}_{n}^{m}(\lambda) \right \|_{\mathbb{L}^{2}}^{2} = 0.\\
\label{212}
\end{equation}
Define now the two periodograms corresponding to the quantities $S_{n}$ and $\tilde{S}_{n}^{m}$:
\[I_{n}(\lambda) = \frac{1}{2 \pi n} \left| S_{n}(\lambda) \right|^{2} = \frac{1}{2\pi} \sum_{k=1-n}^{n-1} \hat{\gamma}_{k} e^{i k \lambda}\]
\[\tilde{I}_{n}^{m}(\lambda) = \frac{1}{2 \pi n} \left| \tilde{S}_{n}^{m}(\lambda) \right|^{2} = \frac{1}{2\pi} \sum_{k=1-n}^{n-1} \hat{\tilde{\gamma}}_{k,m} e^{i k \lambda}.\]
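The second equality in each of these definitions (quadratic form of $S_{n}$ versus Fourier series of the empirical covariances) can be verified numerically; the sketch below computes the periodogram both ways (a sanity check, not part of the proof).

```python
import numpy as np

def periodogram_two_ways(eps, lam):
    """Compute I_n(lambda) as (1/(2 pi n)) |S_n(lambda)|^2 and as
    (1/(2 pi)) sum_{|k| <= n-1} gamma_hat_k e^{i k lambda}."""
    eps = np.asarray(eps, dtype=float)
    n = eps.size
    k = np.arange(1, n + 1)
    s = np.sum(eps * np.exp(1j * k * lam))          # S_n(lambda)
    left = np.abs(s) ** 2 / (2 * np.pi * n)
    right = 0.0 + 0.0j
    for h in range(-(n - 1), n):
        gamma_h = np.sum(eps[: n - abs(h)] * eps[abs(h):]) / n
        right += gamma_h * np.exp(1j * h * lam)
    return left, (right / (2 * np.pi)).real
```

The two values agree up to floating-point error, which confirms the identity used above.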
By the Cauchy--Schwarz inequality and the triangle inequality:
\begin{eqnarray*}
\left \| I_{n}(\lambda) - \tilde{I}_{n}^{m}(\lambda) \right \|_{\mathbb{L}^{1}}
&=& \left \| \frac{1}{2 \pi n} \left| S_{n}(\lambda) \right|^{2} - \frac{1}{2 \pi n} \left| \tilde{S}_{n}^{m}(\lambda) \right|^{2} \right \|_{\mathbb{L}^{1}} \\
&=& \frac{1}{2 \pi n} \left \| \left| S_{n}(\lambda) \right|^{2} - \left| \tilde{S}_{n}^{m}(\lambda) \right|^{2} \right \|_{\mathbb{L}^{1}} \\
&=& \frac{1}{2 \pi n} \left \| \left( \left| S_{n}(\lambda) \right| - \left| \tilde{S}_{n}^{m}(\lambda) \right| \right) \left( \left| S_{n}(\lambda) \right| + \left| \tilde{S}_{n}^{m}(\lambda) \right| \right) \right \|_{\mathbb{L}^{1}} \\
&\leq & \frac{1}{2 \pi n} \left \| \left| S_{n}(\lambda) \right| - \left| \tilde{S}_{n}^{m}(\lambda) \right| \right \|_{\mathbb{L}^{2}} \left \| \left| S_{n}(\lambda) \right| + \left| \tilde{S}_{n}^{m}(\lambda) \right | \right \|_{\mathbb{L}^{2}} \\
&\leq & \frac{1}{2 \pi } \frac{1}{\sqrt{n}} \left \| S_{n}(\lambda) - \tilde{S}_{n}^{m}(\lambda) \right \|_{\mathbb{L}^{2}} \left( \frac{\left \| S_{n}(\lambda) \right \|_{\mathbb{L}^{2}}}{\sqrt{n}} + \frac{ \left \| \tilde{S}_{n}^{m}(\lambda) \right \|_{\mathbb{L}^{2}}}{\sqrt{n}} \right). \\
\end{eqnarray*}
Thus, thanks to~\eqref{212} and the following inequality for $S_{n}$ and $\tilde{S}_{n}^{m}$:
\begin{equation}
\frac{1}{\sqrt{n}} \left \| S_{n}(\lambda) \right \|_{\mathbb{L}^{2}} \leq \sum_{k \in \mathbb{Z}} \left \| P_{0}(\epsilon_{k}) \right \|_{\mathbb{L}^{2}} < \infty,
\label{215}
\end{equation}
we get:
\begin{equation}
\lim_{m \rightarrow \infty} \limsup_{n \rightarrow \infty} \sup_{\lambda \in [-\pi,\pi]} \left \| I_{n}(\lambda) - \tilde{I}_{n}^{m}(\lambda) \right \|_{\mathbb{L}^{1}} = 0.\\
\label{216}
\end{equation}
Then, let $\hat{K}(\cdot)$ be the Fourier transform of $K$:
\begin{eqnarray*}
f_{n}(\lambda) - \tilde{f}_{n}^{m}(\lambda)
&=& \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left( \frac{|k|}{c_{n}} \right) e^{i k \lambda} \left( \hat{\gamma}_{k} - \hat{\tilde{\gamma}}_{k,m} \right) \\
&=& \frac{1}{2\pi} \sum_{|k| \leq n-1} \frac{1}{2\pi} \left( \int_{\mathbb{R}} \hat{K}(u) e^{i u \frac{k}{c_{n}}} du \right) e^{i k \lambda} \left( \hat{\gamma}_{k} - \hat{\tilde{\gamma}}_{k,m} \right) \\
&=& \frac{1}{2\pi} \int_{\mathbb{R}} \hat{K}(u) \frac{1}{2\pi} \sum_{|k| \leq n-1} \left( \hat{\gamma}_{k} - \hat{\tilde{\gamma}}_{k,m} \right) e^{i k (\frac{u}{c_{n}} + \lambda)} du \\
&=& \frac{1}{2\pi} \int_{\mathbb{R}} \hat{K}(u) \left( I_{n} \left( \frac{u}{c_{n}} + \lambda \right) - \tilde{I}_{n}^{m} \left( \frac{u}{c_{n}} + \lambda \right) \right) du, \\
\end{eqnarray*}
using the definition of $I_{n}$ and $\tilde{I}_{n}^{m}$. Hence, by the triangle inequality:
\begin{eqnarray*}
\left \| f_{n}(\lambda) - \tilde{f}_{n}^{m}(\lambda) \right \|_{\mathbb{L}^{1}}
&=& \left \| \frac{1}{2\pi} \int_{\mathbb{R}} \hat{K}(u) \left( I_{n} \left( \frac{u}{c_{n}} + \lambda \right) - \tilde{I}_{n}^{m} \left( \frac{u}{c_{n}} + \lambda \right) \right) du \right \|_{\mathbb{L}^{1}} \\
&\leq & \frac{1}{2\pi} \int_{\mathbb{R}} \left| \hat{K}(u) \right| \left \| \left( I_{n} \left( \frac{u}{c_{n}} + \lambda \right) - \tilde{I}_{n}^{m}\left( \frac{u}{c_{n}} + \lambda \right) \right) \right \|_{\mathbb{L}^{1}} du \\
&\leq & \frac{1}{2\pi} \sup_{\theta} \left \| I_{n}(\theta) - \tilde{I}_{n}^{m}(\theta) \right \|_{\mathbb{L}^{1}} \int_{\mathbb{R}} \left| \hat{K}(u) \right| du. \\
\end{eqnarray*}
Using~\eqref{216} and the fact that $\hat{K}$ is integrable, Lemma~\ref{206bis} is proved.
\end{proof}
\begin{proof}[\textbf{Proof of Lemma~\ref{207bis}}]
Without loss of generality, suppose $\lambda = 0$. We have:
\begin{eqnarray*}
\tilde{f}_{n}^{m}(0)
&=& \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left( \frac{|k|}{c_{n}} \right) \hat{\tilde{\gamma}}_{k,m} \\
&=& \frac{2}{2\pi} \sum_{k=1}^{n-1} K \left( \frac{k}{c_{n}} \right) \hat{\tilde{\gamma}}_{k,m} + \frac{1}{2 \pi} \hat{\tilde{\gamma}}_{0,m} \\
&=& \frac{2}{2\pi} \sum_{k=1}^{n-1} K \left( \frac{k}{c_{n}} \right) \frac{1}{n} \sum_{j=1}^{n-k} \tilde{\epsilon}_{j,m} \tilde{\epsilon}_{j+k,m}+ \frac{1}{2 \pi n} \sum_{j=1}^{n} \tilde{\epsilon}_{j,m}^{2}.
\end{eqnarray*}
By the triangle inequality again and a change of variables, we have:
\begin{eqnarray*}
&\phantom{=}&
\left \| \tilde{f}_{n}^{m}(0) - \mathbb{E} \left( \tilde{f}_{n}^{m}(0) \right) \right \|_{\mathbb{L}^{1}} \\
&=& \left \| \frac{2}{2\pi} \sum_{k=1}^{n-1} K \left( \frac{k}{c_{n}} \right) \frac{1}{n} \sum_{j=1}^{n-k} \left( \tilde{\epsilon}_{j,m} \tilde{\epsilon}_{j+k,m} - \mathbb{E}(\tilde{\epsilon}_{j,m} \tilde{\epsilon}_{j+k,m}) \right) + \frac{1}{2 \pi n} \sum_{j=1}^{n} \left( \tilde{\epsilon}_{j,m}^{2} - \mathbb{E}(\tilde{\epsilon}_{j,m}^{2}) \right) \right \|_{\mathbb{L}^{1}} \\
&\leq & \frac{2}{2\pi} \left \| \sum_{k=1}^{n-1} K \left( \frac{k}{c_{n}} \right) \frac{1}{n} \sum_{j=1}^{n-k} \left( \tilde{\epsilon}_{j,m} \tilde{\epsilon}_{j+k,m} - \mathbb{E}(\tilde{\epsilon}_{j,m} \tilde{\epsilon}_{j+k,m}) \right) \right \|_{\mathbb{L}^{1}} \\
&& +\: \frac{1}{2\pi} \left \| \frac{1}{n} \sum_{i=1}^{n} \tilde{\epsilon}_{i,m}^{2} - \mathbb{E}(\tilde{\epsilon}_{0,m}^{2}) \right \|_{\mathbb{L}^{1}} \\
&\leq & \frac{2}{2\pi} \left \| \frac{1}{n} \sum_{i=2}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-1} K \left( \frac{i-j}{c_{n}} \right) (\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} - \mathbb{E}(\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m})) \right \|_{\mathbb{L}^{1}} \\
&& +\: \frac{1}{2\pi} \left \| \frac{1}{n} \sum_{i=1}^{n} \tilde{\epsilon}_{i,m}^{2} - \mathbb{E}(\tilde{\epsilon}_{0,m}^{2}) \right \|_{\mathbb{L}^{1}} .
\end{eqnarray*}
By the $\mathbb{L}^{1}$-ergodic theorem, it is known that, for fixed $m$:
\[\lim_{n \rightarrow \infty} \left \| \frac{1}{n} \sum_{i=1}^{n} \tilde{\epsilon}_{i,m}^{2} - \mathbb{E}(\tilde{\epsilon}_{0,m}^{2}) \right \|_{\mathbb{L}^{1}} = 0.\]
Consequently, it remains to prove:
\[\lim_{m \rightarrow \infty} \limsup_{n \rightarrow \infty} \left \| \frac{1}{n} \sum_{i=2}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-1} K \left( \frac{i-j}{c_{n}} \right) (\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} - \mathbb{E}(\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m})) \right \|_{\mathbb{L}^{1}} = 0.\]
We know that:
\begin{equation}
\frac{1}{n} \sum_{i=2m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \mathbb{E}(\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m}) = 0.
\label{-100}
\end{equation}
Indeed, \[\mathbb{E}(\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m}) = \mathbb{E} \left( \left( \mathbb{E}(\epsilon_{j} | \mathcal{F}_{j+m}) - \mathbb{E}(\epsilon_{j} | \mathcal{F}_{j-m}) \right) \left( \mathbb{E}(\epsilon_{i} | \mathcal{F}_{i+m}) - \mathbb{E}(\epsilon_{i} | \mathcal{F}_{i-m}) \right) \right).\]
But $\mathbb{E}(\epsilon_{i} | \mathcal{F}_{i+m}) - \mathbb{E}(\epsilon_{i} | \mathcal{F}_{i-m})$ is orthogonal to $\mathbb{L}^{2}(\mathcal{F}_{i-m})$, and $\mathbb{E}(\epsilon_{j} | \mathcal{F}_{j+m}) - \mathbb{E}(\epsilon_{j} | \mathcal{F}_{j-m})$ belongs to $\mathbb{L}^{2}(\mathcal{F}_{i-m})$ if $j+m \leq i-m$. Thus $\mathbb{E}(\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m})$ is equal to zero if $j \leq i-2m$ and~\eqref{-100} is true.
We therefore have:
\begin{eqnarray}
&\phantom{=}&
\left \| \frac{1}{n} \sum_{i=2}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-1} K \left( \frac{i-j}{c_{n}} \right) (\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} - \mathbb{E}(\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m})) \right \|_{\mathbb{L}^{1}} \notag \\
&\leq & \left \| \frac{1}{n} \sum_{i=2}^{n} \sum_{j=(i-2m+1) \vee 1}^{i-1} K \left( \frac{i-j}{c_{n}} \right) (\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} - \mathbb{E}(\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m})) \right \|_{\mathbb{L}^{1}} \notag \\
&& +\: \left \| \frac{1}{n} \sum_{i=2m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}} \notag \\
&\leq & \left \| \frac{1}{n} \sum_{k=1}^{2m-1} \sum_{i=1}^{n-k} K \left( \frac{k}{c_{n}} \right) (\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{i+k,m} - \mathbb{E}(\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{i+k,m})) \right \|_{\mathbb{L}^{1}} \notag \\
&& +\: \left \| \frac{1}{n} \sum_{i=2m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}} \label{217}.
\end{eqnarray}
For the first term of the right-hand side of~\eqref{217}, since the kernel $K$ is bounded by $1$, we have by the triangle inequality and the stationarity of the error process:
\[\left \| \frac{1}{n} \sum_{k=1}^{2m-1} \sum_{i=1}^{n-k} K \left( \frac{k}{c_{n}} \right) (\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{i+k,m} - \mathbb{E}(\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{i+k,m})) \right \|_{\mathbb{L}^{1}} \leq \sum_{k=1}^{2m-1} \left \| \frac{1}{n} \sum_{i=1}^{n-k} (\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{i+k,m} - \mathbb{E}(\tilde{\epsilon}_{0,m} \tilde{\epsilon}_{k,m})) \right \|_{\mathbb{L}^{1}}.\]
Using the $\mathbb{L}^{1}$-ergodic theorem, for all $k$ fixed, we deduce that:
\[\sum_{k=1}^{2m-1} \left \| \frac{1}{n} \sum_{i=1}^{n-k} (\tilde{\epsilon}_{i,m} \tilde{\epsilon}_{i+k,m} - \mathbb{E}(\tilde{\epsilon}_{0,m} \tilde{\epsilon}_{k,m})) \right \|_{\mathbb{L}^{1}} \xrightarrow[n \rightarrow \infty]{} 0.\]
It remains to be shown that:
\[\lim_{m \rightarrow \infty} \limsup_{n \rightarrow \infty} \left \| \frac{1}{n} \sum_{i=2m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}} = 0.\]
We have:
\begin{multline*}
\left \| \frac{1}{n} \sum_{i=2m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}} \\
= \left \| \frac{1}{n} \sum_{i=2m+1}^{2[n/2m]m} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} + \frac{1}{n} \sum_{i=2[n/2m]m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m}\right \|_{\mathbb{L}^{1}},
\end{multline*}
then by the triangle inequality:
\begin{multline*}
\left \| \frac{1}{n} \sum_{i=2m+1}^{2[n/2m]m} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} + \frac{1}{n} \sum_{i=2[n/2m]m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m}\right \|_{\mathbb{L}^{1}} \\
\leq \left \| \frac{1}{n} \sum_{i=2m+1}^{2[n/2m]m} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}} \\
+ \left \| \frac{1}{n} \sum_{i=2[n/2m]m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m}\right \|_{\mathbb{L}^{1}},
\end{multline*}
and using a change of variable:
\begin{multline}
\left \| \frac{1}{n} \sum_{i=2m+1}^{2[n/2m]m} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}} \\
+ \left \| \frac{1}{n} \sum_{i=2[n/2m]m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m}\right \|_{\mathbb{L}^{1}} \\
\leq \sum_{l=1}^{2m} \left \| \frac{1}{n} \sum_{r=1}^{[n/2m]-1} \tilde{\epsilon}_{2rm+l,m} \sum_{j=(2rm+l-2c_{n}) \vee 1}^{2(r-1)m+l} K \left( \frac{2rm+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}} \\
+ \left \| \frac{1}{n} \sum_{i=2[n/2m]m+1}^{n} \tilde{\epsilon}_{i,m} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}} \label{218}.
\end{multline}
For the second term of the right-hand side of~\eqref{218}, by the triangle and Cauchy-Schwarz inequalities, by stationarity, and since $|K| \leq 1$, we get:
\begin{eqnarray}
\left \| \frac{1}{n} \sum_{i=2[n/2m]m+1}^{n} \tilde{\epsilon}_{i,m} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} K \left( \frac{i-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}}
&\leq & \frac{1}{n} \sum_{i=2[n/2m]m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} \left \| \tilde{\epsilon}_{i,m} \tilde{\epsilon}_{j,m} \right \|_{\mathbb{L}^{1}} \notag \\
&\leq & \frac{1}{n} \sum_{i=2[n/2m]m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} \left \| \tilde{\epsilon}_{0,m} \right \|_{\mathbb{L}^{2}}^{2} \notag \\
&\leq & \frac{4}{n} \sum_{i=2[n/2m]m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} \left \| \epsilon_{0} \right \|_{\mathbb{L}^{2}}^{2} \notag \\
&\leq & 16m \frac{c_{n}}{n} \left \| \epsilon_{0} \right \|_{\mathbb{L}^{2}}^{2},
\label{219}
\end{eqnarray}
and~\eqref{219} tends to $0$ as $n$ tends to infinity.
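For completeness, the last inequality in~\eqref{219} is a simple term count: the outer sum contains $n - 2[n/2m]m \leq 2m$ indices $i$, and, for each $i$, the inner sum contains at most $2c_{n}$ indices $j$, so that:
\[\frac{4}{n} \sum_{i=2[n/2m]m+1}^{n} \sum_{j=(i-2c_{n}) \vee 1}^{i-2m} \left \| \epsilon_{0} \right \|_{\mathbb{L}^{2}}^{2} \leq \frac{4}{n} \times 2m \times 2c_{n} \times \left \| \epsilon_{0} \right \|_{\mathbb{L}^{2}}^{2} = 16m \frac{c_{n}}{n} \left \| \epsilon_{0} \right \|_{\mathbb{L}^{2}}^{2}.\]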
Using ideas developed by Dedecker \cite{dedecker1998central} (see the proof of his Theorem $1$), we study the first term of the right-hand side of~\eqref{218} and we shall prove that it is negligible. For fixed $l \in \{1, \ldots, 2m\}$, let $Z$ be defined by:
\begin{equation}
Z(r,n,m) = \frac{1}{n} \sum_{r=1}^{[n/2m]-1} \tilde{\epsilon}_{2rm+l,m} \sum_{j=(2rm+l-2c_{n}) \vee 1}^{2(r-1)m+l} K \left( \frac{2rm+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m}.
\label{220}
\end{equation}
Let $\varphi$ be the function defined by $\varphi'(0) = \varphi(0) = 0$ and $\varphi''(t) = (1 - \left | t \right |) \textbf{1}_{\{|t| <1\}}$, that is, the even function such that, for all $t$ greater than or equal to $0$, $\varphi (t) = \frac{1}{6}(1-t)^{3} \textbf{1}_{\{t<1\}} + \frac{1}{2} t - \frac{1}{6}$.
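This closed form can be checked by direct integration: for $0 \leq t < 1$,
\[\varphi'(t) = \int_{0}^{t} (1-u) \, du = t - \frac{t^{2}}{2} \quad \text{and} \quad \varphi(t) = \frac{t^{2}}{2} - \frac{t^{3}}{6} = \frac{1}{6}(1-t)^{3} + \frac{1}{2} t - \frac{1}{6},\]
while for $t \geq 1$, $\varphi'(t) = \frac{1}{2}$ and $\varphi(t) = \varphi(1) + \frac{1}{2}(t-1) = \frac{1}{2} t - \frac{1}{6}$.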
Now, for all $\epsilon > 0$, since $\varphi$ is even, nondecreasing on $\mathbb{R}_{+}$ and grows linearly at infinity, there exists a constant $C$ (depending on $\epsilon$) such that $|t| \leq C \varphi(t)$ whenever $|t| > \epsilon$. Hence:
\begin{eqnarray*}
\mathbb{E}(\left | Z(r,n,m) \right |)
&=& \mathbb{E}(\left | Z(r,n,m) \right | \textbf{1}_{\{|Z(r,n,m)|>\epsilon \}}) + \mathbb{E}(\left | Z(r,n,m) \right | \textbf{1}_{\{|Z(r,n,m)| \leq \epsilon \}}) \\
&\leq & C \mathbb{E}(\varphi(Z(r,n,m)) \textbf{1}_{\{|Z(r,n,m)|>\epsilon \}}) + \mathbb{E}(\left | Z(r,n,m) \right | \textbf{1}_{\{|Z(r,n,m)| \leq \epsilon \}}) \\
&\leq & C \mathbb{E}(\varphi(Z(r,n,m))) + \epsilon,
\end{eqnarray*}
where the last inequality uses that $\varphi$ is nonnegative.
We conclude the proof using Lemma~\ref{222bis}.
\end{proof}
\begin{lem}
Under the conditions and notation introduced at the end of the previous proof, for all fixed $m$:
\begin{equation}
\lim_{n \rightarrow \infty} \mathbb{E}(\varphi (Z(r,n,m))) = 0.
\label{222}
\end{equation}
\label{222bis}
\end{lem}
\begin{proof}[\textbf{Proof of Lemma~\ref{222bis}}]
To prove~\eqref{222}, the two following lemmas are needed:
\begin{lem}
The following inequality holds:
\[| \varphi(x+h) - \varphi(x) - h \varphi'(x) | \leq \psi(h),\]
where:
\[\psi(h) = |h|^2 \textbf{1}_{\{|h| \leq 1\}} + (2|h| -1)\textbf{1}_{\{|h| > 1\}}.\]
\end{lem}
\begin{proof}
The function $\varphi$ is continuously differentiable on $\mathbb{R}$, with $\sup_{u \in \mathbb{R}} | \varphi'(u) | = \frac{1}{2}$ and $\sup_{u \in \mathbb{R}} | \varphi''(u) | \leq 1$. For $|h| \leq 1$, the Taylor formula gives the upper bound:
\[| \varphi(x+h) - \varphi(x) - h \varphi'(x) | \leq \frac{|h|^{2}}{2} \sup_{u \in \mathbb{R}} | \varphi''(u) | \leq |h|^{2};\]
for $|h| > 1$, the triangle inequality gives:
\[\left | \varphi(x+h) - \varphi(x) - h \varphi'(x) \right | \leq \left | \varphi(x+h) - \varphi(x) \right | + \left | h \right | \left | \varphi'(x) \right | \leq 2 \left | h \right | \sup_{u \in \mathbb{R}} \left | \varphi'(u) \right | = \left | h \right | \leq 2|h| - 1.\]
The proof is complete.
\end{proof}
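For later use, note that $\psi$ is even, continuous and convex on $\mathbb{R}$:
\[\psi(h) = h^{2} \textbf{1}_{\{|h| \leq 1\}} + (2|h|-1) \textbf{1}_{\{|h| > 1\}},\]
so that $\psi$ and its derivative ($2h$ for $|h| \leq 1$, $\pm 2$ for $|h| > 1$) match at $|h| = 1$; this convexity is what allows the repeated applications of Jensen's inequality below.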
\begin{lem}
For all real $x$ in $\mathbb{R}$, we have:
\[|x| (1 \wedge |x|) \leq \psi(x) \leq 2|x|(1 \wedge |x|).\]
\label{lem626}
\end{lem}
The proof of Lemma~\ref{lem626}, being elementary, is left to the reader.\\
So we get:
\begin{multline*}
\mathbb{E}(\varphi (Z(r,n,m))) = \sum_{i=1}^{\left[ n/2m \right] -1} \mathbb{E} \Bigg( \Bigg[ \varphi \Bigg( \frac{1}{n} \sum_{q=1}^{i} \tilde{\epsilon}_{2qm+l,m} \sum_{j=(2qm+l-2c_{n}) \vee 1}^{2(q-1)m+l} K \Bigg( \frac{2qm+l-j}{c_{n}} \Bigg) \tilde{\epsilon}_{j,m} \Bigg) \\
- \varphi \Bigg( \frac{1}{n} \sum_{q=1}^{i-1} \tilde{\epsilon}_{2qm+l,m} \sum_{j=(2qm+l-2c_{n}) \vee 1}^{2(q-1)m+l} K \Bigg( \frac{2qm+l-j}{c_{n}} \Bigg) \tilde{\epsilon}_{j,m} \Bigg) \Bigg] \Bigg) \\
\leq \sum_{i=1}^{\left[ n/2m \right] -1} \Bigg| \mathbb{E} \Bigg( \varphi \Bigg( \frac{1}{n} \sum_{q=1}^{i} \tilde{\epsilon}_{2qm+l,m} \sum_{j=(2qm+l-2c_{n}) \vee 1}^{2(q-1)m+l} K \Bigg( \frac{2qm+l-j}{c_{n}} \Bigg) \tilde{\epsilon}_{j,m} \Bigg) \\
- \varphi \Bigg( \frac{1}{n} \sum_{q=1}^{i-1} \tilde{\epsilon}_{2qm+l,m} \sum_{j=(2qm+l-2c_{n}) \vee 1}^{2(q-1)m+l} K \Bigg( \frac{2qm+l-j}{c_{n}} \Bigg) \tilde{\epsilon}_{j,m} \Bigg) \Bigg) \Bigg|.
\end{multline*}
Then, applying the Taylor-type bound $| \varphi(x+h) - \varphi(x) - h \varphi'(x) | \leq \psi(h)$ of the first lemma above with:
\begin{eqnarray*}
x &=& \frac{1}{n} \sum_{q=1}^{i-1} \tilde{\epsilon}_{2qm+l,m} \sum_{j=(2qm+l-2c_{n}) \vee 1}^{2(q-1)m+l} K \left( \frac{2qm+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \\
A(i,m) &=& \frac{1}{n} \tilde{\epsilon}_{2im+l,m} \sum_{j=(2im+l-2c_{n}) \vee 1}^{2(i-1)m+l} K \left( \frac{2im+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \\
x+A(i,m) &=& \frac{1}{n} \sum_{q=1}^{i} \tilde{\epsilon}_{2qm+l,m} \sum_{j=(2qm+l-2c_{n}) \vee 1}^{2(q-1)m+l} K \left( \frac{2qm+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m},
\end{eqnarray*}
we have:
\begin{eqnarray*}
\mathbb{E}(\varphi (Z(r,n,m)))
&\leq & \sum_{i=1}^{[n/2m]-1} \Bigg| \mathbb{E} \Bigg( \varphi' \left( \frac{1}{n} \sum_{q=1}^{i-1} \tilde{\epsilon}_{2qm+l,m} \sum_{j=(2qm+l-2c_{n}) \vee 1}^{2(q-1)m+l} K \left( \frac{2qm+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \right) \\
&& \qquad \times\: \frac{1}{n} \tilde{\epsilon}_{2im+l,m} \sum_{j=(2im+l-2c_{n}) \vee 1}^{2(i-1)m+l} K \left( \frac{2im+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} + \psi(A(i,m)) \Bigg) \Bigg|.
\end{eqnarray*}
Then, by the triangle inequality, we obtain:
\begin{eqnarray*}
\mathbb{E}(\varphi (Z(r,n,m)))
&\leq & \sum_{i=1}^{[n/2m]-1} \Bigg| \mathbb{E} \Bigg( \varphi' \left( \frac{1}{n} \sum_{q=1}^{i-1} \tilde{\epsilon}_{2qm+l,m} \sum_{j=(2qm+l-2c_{n}) \vee 1}^{2(q-1)m+l} K \left( \frac{2qm+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \right) \\
&& \quad \times\: \frac{1}{n} \tilde{\epsilon}_{2im+l,m} \sum_{j=(2im+l-2c_{n}) \vee 1}^{2(i-1)m+l} K \left( \frac{2im+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \Bigg) \Bigg| \\
&& \qquad + \sum_{i=1}^{[n/2m]-1} \Bigg| \mathbb{E} \left( \left| A(i,m) \right|^{2} \textbf{1}_{\{|A(i,m)| \leq 1\}} + \left( 2 \left| A(i,m) \right| -1 \right) \textbf{1}_{\{|A(i,m)|>1\}} \right) \Bigg| \\
&\leq & \sum_{i=1}^{[n/2m]-1} \Bigg| \mathbb{E} \Bigg( \varphi' \left( \frac{1}{n} \sum_{q=1}^{i-1} \tilde{\epsilon}_{2qm+l,m} \sum_{j=(2qm+l-2c_{n}) \vee 1}^{2(q-1)m+l} K \left( \frac{2qm+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \right) \\
&& \quad \times\: \frac{1}{n} \tilde{\epsilon}_{2im+l,m} \sum_{j=(2im+l-2c_{n}) \vee 1}^{2(i-1)m+l} K \left( \frac{2im+l-j}{c_{n}} \right) \tilde{\epsilon}_{j,m} \Bigg) \Bigg| \\
&& \qquad + \sum_{i=1}^{[n/2m]-1} \mathbb{E} \left( \left| A(i,m) \right|^{2} \textbf{1}_{\{|A(i,m)| \leq 1\}} + \left( 2 \left| A(i,m) \right| -1 \right) \textbf{1}_{\{|A(i,m)|>1\}} \right).
\end{eqnarray*}
By definition, $(\tilde{\epsilon}_{i,m})_{i \in \mathbb{Z}}$ satisfies:
\[\mathbb{E}(\tilde{\epsilon}_{2im+l,m} | \mathcal{F}_{2im+l-m}) = 0.\]
Moreover, the random variables $\tilde{\epsilon}_{j,m}$ with $j \leq 2(i-1)m+l$ are $\mathcal{F}_{2im+l-m}$-measurable, so the conditional expectation of the first term given $\mathcal{F}_{2im+l-m}$ vanishes. Hence:
\begin{multline*}
\mathbb{E}(\varphi (Z(r,n,m))) \leq \sum_{i=1}^{[n/2m]-1} \mathbb{E} \left( | A(i,m) |^{2} \textbf{1}_{\{|A(i,m)| \leq 1\}} + (2|A(i,m)| -1) \textbf{1}_{\{|A(i,m)|>1\}} \right) \\
= \sum_{i=1}^{[n/2m]-1} \mathbb{E}( \psi ( A(i,m) ) ).
\end{multline*}
For this term, put:
\[B(i,j,m,l) = \frac{[(2(i-1)m+l) - ((2im+l-2c_{n}) \vee 1)+1]}{n} K \left( \frac{2im+l-j}{c_{n}} \right) \tilde{\epsilon}_{2im+l,m} \tilde{\epsilon}_{j,m}.\]
Using the convexity of $\psi$ and Lemma 3 of Dedecker \cite{dedecker1998central}, we have that:
\begin{multline*}
\mathbb{E}(\psi(A(i,m))) \leq \frac{1}{[(2(i-1)m+l) - ((2im+l-2c_{n}) \vee 1)+1]} \sum_{j=(2im+l-2c_{n}) \vee 1}^{2(i-1)m+l} \mathbb{E} \left( \psi \left( B(i,j,m,l) \right) \right).
\end{multline*}
Then:
\begin{multline*}
\frac{1}{[(2(i-1)m+l) - ((2im+l-2c_{n}) \vee 1)+1]} \sum_{j=(2im+l-2c_{n}) \vee 1}^{2(i-1)m+l} \mathbb{E} \left( \psi \left( B(i,j,m,l) \right) \right) \\
\leq \frac{2}{[(2(i-1)m+l) - ((2im+l-2c_{n}) \vee 1)+1]} \sum_{j=(2im+l-2c_{n}) \vee 1}^{2(i-1)m+l} \mathbb{E} \left( \frac{2c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \left( 1 \wedge \frac{2c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \right) \right),
\end{multline*}
and:
\begin{multline*}
\frac{2}{[(2(i-1)m+l) - ((2im+l-2c_{n}) \vee 1)+1]} \sum_{j=(2im+l-2c_{n}) \vee 1}^{2(i-1)m+l} \mathbb{E} \left( \frac{2c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \left( 1 \wedge \frac{2c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \right) \right) \\
\leq \frac{8c_{n}}{n} \mathbb{E} \left( | \tilde{\epsilon}_{0,m} |^{2} \left( 1 \wedge \frac{c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \right) \right).
\end{multline*}
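The last step uses the elementary bound $1 \wedge 2y \leq 2 (1 \wedge y)$, valid for all $y \geq 0$, which gives:
\[2 \, \mathbb{E} \left( \frac{2c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \left( 1 \wedge \frac{2c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \right) \right) \leq \frac{4c_{n}}{n} \, \mathbb{E} \left( | \tilde{\epsilon}_{0,m} |^{2} \times 2 \left( 1 \wedge \frac{c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \right) \right) = \frac{8c_{n}}{n} \, \mathbb{E} \left( | \tilde{\epsilon}_{0,m} |^{2} \left( 1 \wedge \frac{c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \right) \right).\]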
Thus we can conclude provided that, for fixed $m$:
\begin{equation}
\lim_{n \rightarrow \infty} c_{n} \mathbb{E} \left( | \tilde{\epsilon}_{0,m} |^{2} \left( 1 \wedge \frac{c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \right) \right) = 0.
\label{223}
\end{equation}
To prove~\eqref{223}, notice that:
\begin{eqnarray}
c_{n} \mathbb{E} \left( | \tilde{\epsilon}_{0,m} |^{2} \left( 1 \wedge \frac{c_{n}}{n} | \tilde{\epsilon}_{0,m} |^{2} \right) \right)
&\leq & 4c_{n} \mathbb{E} \left( | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{m}) |^{2} \left( 1 \wedge \frac{c_{n}}{n} | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{m}) |^{2} \right) \right) \notag \\
&& +\: 4c_{n} \mathbb{E} \left( | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{m}) |^{2} \left( 1 \wedge \frac{c_{n}}{n} | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m}) |^{2} \right) \right) \notag \\
&& \quad + 4c_{n} \mathbb{E} \left( | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m}) |^{2} \left( 1 \wedge \frac{c_{n}}{n} | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{m}) |^{2} \right) \right)\notag \\
&& \qquad + 4c_{n} \mathbb{E} \left( | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m}) |^{2} \left( 1 \wedge \frac{c_{n}}{n} | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m}) |^{2} \right) \right).
\label{224}
\end{eqnarray}
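Inequality~\eqref{224} follows from $\tilde{\epsilon}_{0,m} = \mathbb{E}(\epsilon_{0}|\mathcal{F}_{m}) - \mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m})$: writing $a = \mathbb{E}(\epsilon_{0}|\mathcal{F}_{m})$ and $b = \mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m})$, and using $|\tilde{\epsilon}_{0,m}|^{2} \leq 2a^{2} + 2b^{2}$, $1 \wedge (x+y) \leq (1 \wedge x) + (1 \wedge y)$ and $1 \wedge 2y \leq 2(1 \wedge y)$, we obtain:
\[|\tilde{\epsilon}_{0,m}|^{2} \left( 1 \wedge \frac{c_{n}}{n} |\tilde{\epsilon}_{0,m}|^{2} \right) \leq 2(a^{2}+b^{2}) \left[ \left( 1 \wedge \frac{2c_{n}}{n} a^{2} \right) + \left( 1 \wedge \frac{2c_{n}}{n} b^{2} \right) \right] \leq 4(a^{2}+b^{2}) \left[ \left( 1 \wedge \frac{c_{n}}{n} a^{2} \right) + \left( 1 \wedge \frac{c_{n}}{n} b^{2} \right) \right],\]
and expanding the product gives exactly the four terms of~\eqref{224}.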
For the first term and for the last term, we use the convexity of $\psi$:
\begin{eqnarray}
c_{n} \mathbb{E} \left( | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{m}) |^{2} \left( 1 \wedge \frac{c_{n}}{n} | \mathbb{E}(\epsilon_{0}|\mathcal{F}_{m}) |^{2} \right) \right)
&\leq & n \mathbb{E} \left( \psi \left( \mathbb{E} \left( \frac{c_{n}}{n} |\epsilon_{0}|^{2}|\mathcal{F}_{m} \right) \right) \right) \notag \\
&\leq & n \mathbb{E} \left( \mathbb{E} \left( \psi \left(\frac{c_{n}}{n} |\epsilon_{0}|^{2} \right)|\mathcal{F}_{m} \right) \right) \notag \\
&\leq & n \mathbb{E} \left( \psi \left( \frac{c_{n}}{n} |\epsilon_{0}|^{2} \right) \right) \notag \\
&\leq & 2c_{n} \mathbb{E} \left( |\epsilon_{0}|^{2} \left( 1 \wedge \frac{c_{n}}{n} |\epsilon_{0}|^{2} \right) \right). \label{225}
\end{eqnarray}
With the same idea, for the last term, we show that:
\begin{equation}
c_{n} \mathbb{E} \left( |\mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m})|^{2} \left( 1 \wedge \frac{c_{n}}{n} |\mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m})|^{2} \right) \right) \leq 2c_{n} \mathbb{E} \left( |\epsilon_{0}|^{2} \left( 1 \wedge \frac{c_{n}}{n} |\epsilon_{0}|^{2} \right) \right).
\label{226}
\end{equation}
For the second term, with the convexity of $\psi$, we have:
\begin{eqnarray}
n \mathbb{E} \left( \frac{c_{n}}{n} |\mathbb{E}(\epsilon_{0}|\mathcal{F}_{m})|^{2} \left( 1 \wedge \frac{c_{n}}{n} |\mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m})|^{2} \right) \right)
&\leq & n \mathbb{E} \left( \frac{c_{n}}{n} \mathbb{E}(|\epsilon_{0}|^{2}|\mathcal{F}_{m}) \left( 1 \wedge \frac{c_{n}}{n} \mathbb{E}(|\epsilon_{0}|^{2}|\mathcal{F}_{-m}) \right) \right) \notag \\
&\leq & n \mathbb{E} \left( \psi \left( \mathbb{E} \left( \frac{c_{n}}{n}|\epsilon_{0}|^{2} | \mathcal{F}_{-m} \right) \right) \right) \notag \\
&\leq & 2c_{n} \mathbb{E} \left( |\epsilon_{0}|^{2} \left( 1 \wedge \frac{c_{n}}{n} |\epsilon_{0}|^{2} \right) \right). \label{227}
\end{eqnarray}
Since $g : x \mapsto 1 \wedge x$ is a concave function on $\mathbb{R}_{+}^{\ast}$ and $\psi$ is a convex function, for the third term, we obtain by Jensen's inequality for conditional expectations that:
\begin{eqnarray}
c_{n} \mathbb{E} \left( |\mathbb{E}(\epsilon_{0}|\mathcal{F}_{-m})|^{2} \left( 1 \wedge \frac{c_{n}}{n} |\mathbb{E}(\epsilon_{0}|\mathcal{F}_{m})|^{2} \right) \right)
&\leq & n \mathbb{E} \left(\frac{c_{n}}{n} \mathbb{E}(|\epsilon_{0}|^{2}|\mathcal{F}_{-m}) \mathbb{E} \left( g \left( \frac{c_{n}}{n} \mathbb{E}(|\epsilon_{0}|^{2} | \mathcal{F}_{m}) \right) | \mathcal{F}_{-m} \right) \right) \notag \\
&\leq & n \mathbb{E} \left( \frac{c_{n}}{n} \mathbb{E}(|\epsilon_{0}|^{2}|\mathcal{F}_{-m}) g \left( \mathbb{E} \left( \frac{c_{n}}{n} \mathbb{E}(|\epsilon_{0}|^{2} | \mathcal{F}_{m}) | \mathcal{F}_{-m} \right) \right) \right) \notag \\
&\leq & n \mathbb{E} \left( \frac{c_{n}}{n} \mathbb{E}(|\epsilon_{0}|^{2}|\mathcal{F}_{-m}) \left( 1 \wedge \frac{c_{n}}{n} \mathbb{E}(|\epsilon_{0}|^{2} | \mathcal{F}_{-m}) \right) \right) \notag \\
&\leq & n \mathbb{E} \left( \psi \left( \frac{c_{n}}{n} \mathbb{E}(|\epsilon_{0}|^{2}|\mathcal{F}_{-m}) \right) \right) \notag \\
&\leq & 2c_{n} \mathbb{E} \left( |\epsilon_{0}|^{2} \left( 1 \wedge \frac{c_{n}}{n} |\epsilon_{0}|^{2} \right) \right). \label{228}
\end{eqnarray}
Using~\eqref{224} to~\eqref{228}, we deduce that \eqref{223} is verified as soon as~\eqref{48} is true.
\end{proof}
\subsubsection{Proposition~\ref{201bis}}
\begin{proof}[Proof]
Recall that:
\[f_{n}(\lambda) = \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left( \frac{|k|}{c_{n}} \right) \hat{\gamma}_{k} e^{i k \lambda},\]
where:
\[\hat{\gamma}_{k} = \frac{1}{n} \sum_{j=1}^{n-|k|} \epsilon_{j} \epsilon_{j+|k|} = \frac{1}{n} \sum_{j=1}^{n-|k|} \left( Y_{j} - \sum_{l=1}^{p} x_{j,l} \beta_{l} \right) \left( Y_{j+|k|} - \sum_{l=1}^{p} x_{j+|k|,l} \beta_{l} \right), \ \quad \ 0 \leq |k| \leq (n-1),\]
and:
\[f_{n}^{\ast}(\lambda) = \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left( \frac{|k|}{c_{n}} \right) \hat{\gamma}_{k}^{\ast} e^{i k \lambda},\]
where:
\[\hat{\gamma}_{k}^{\ast} = \frac{1}{n} \sum_{j=1}^{n-|k|} \hat{\epsilon}_{j} \hat{\epsilon}_{j+|k|} = \frac{1}{n} \sum_{j=1}^{n-|k|} \left( Y_{j} - \sum_{l=1}^{p} x_{j,l} \hat{\beta}_{l} \right) \left( Y_{j+|k|} - \sum_{l=1}^{p} x_{j+|k|,l} \hat{\beta}_{l} \right), \ \quad \ 0 \leq |k| \leq (n-1).\]
Thus we have:
\begin{eqnarray}
\left \| f_{n}^{\ast}(\lambda) - f_{n}(\lambda) \right \|_{\mathbb{L}^{1}}
&=& \left \| \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left( \frac{|k|}{c_{n}} \right) \hat{\gamma}_{k}^{\ast} e^{i k \lambda} - \frac{1}{2\pi} \sum_{|k| \leq n-1} K \left( \frac{|k|}{c_{n}} \right) \hat{\gamma}_{k} e^{i k \lambda} \right \|_{\mathbb{L}^{1}} \notag \\
&=& \left \| \frac{1}{2\pi} \sum_{|k| \leq 2c_{n}} K \left( \frac{|k|}{c_{n}} \right) \left[ \hat{\gamma}_{k}^{\ast} - \hat{\gamma}_{k} \right] e^{i k \lambda}\right \|_{\mathbb{L}^{1}} \notag \\
&\leq & \frac{1}{2\pi} \sum_{|k| \leq 2c_{n}} \left \| \hat{\gamma}_{k}^{\ast} - \hat{\gamma}_{k} \right \|_{\mathbb{L}^{1}}. \label{250}
\end{eqnarray}
Since $\frac{c_{n}}{n}$ tends to $0$ when $n$ tends to infinity, it remains to prove that:
\begin{equation}
\sup_{|k| \leq 2c_{n}} \left \| \hat{\gamma}_{k}^{\ast} - \hat{\gamma}_{k} \right \|_{\mathbb{L}^{1}} = \mathcal{O} \left( \frac{1}{n} \right).
\label{251}
\end{equation}
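Indeed, combining~\eqref{250} with~\eqref{251} gives, for some constant $C > 0$:
\[\left \| f_{n}^{\ast}(\lambda) - f_{n}(\lambda) \right \|_{\mathbb{L}^{1}} \leq \frac{4c_{n}+1}{2\pi} \sup_{|k| \leq 2c_{n}} \left \| \hat{\gamma}_{k}^{\ast} - \hat{\gamma}_{k} \right \|_{\mathbb{L}^{1}} \leq C \, \frac{c_{n}}{n} \xrightarrow[n \rightarrow \infty]{} 0.\]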
\begin{lem}
The following inequality is verified:
\begin{eqnarray}
\left \| \hat{\gamma}_{k}^{\ast} - \hat{\gamma}_{k} \right \|_{\mathbb{L}^{1}}
&=& \Bigg \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( Y_{j} - \sum_{l=1}^{p} x_{j,l} \hat{\beta}_{l} \right) \left( Y_{j+|k|} - \sum_{l=1}^{p} x_{j+|k|,l} \hat{\beta}_{l} \right) \notag \\
&& -\: \frac{1}{n} \sum_{j=1}^{n-|k|} \left( Y_{j} - \sum_{l=1}^{p} x_{j,l} \beta_{l} \right) \left( Y_{j+|k|} - \sum_{l=1}^{p} x_{j+|k|,l} \beta_{l} \right) \Bigg \|_{\mathbb{L}^{1}} \notag \\
&\leq & \frac{1}{2n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \sum_{j=1}^{n-|k|} x_{j,l}^{2} \right \|_{\mathbb{L}^{1}} + \frac{1}{2n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \left( \beta_{l'} - \hat{\beta}_{l'} \right)^{2} \sum_{j=1}^{n-|k|} x_{j+|k|,l'}^{2} \right \|_{\mathbb{L}^{1}} \notag \\
&& +\: \frac{1}{n} \sum_{l=1}^{p} \left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right \|_{\mathbb{L}^{1}} + \frac{1}{n} \sum_{l=1}^{p} \left \| \sum_{j=1}^{n-|k|} \epsilon_{j+|k|} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \right \|_{\mathbb{L}^{1}}. \label{252}
\end{eqnarray}
\label{6.2.7}
\end{lem}
The proof of this lemma will be given in Section~6.3.\\
It remains to bound these four terms. For the first term of the right-hand side, for all $l$, $l'$ fixed and for all $k$, we have:
\[\left \| \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \sum_{j=1}^{n-|k|} x_{j,l}^{2} \right \|_{\mathbb{L}^{1}} \leq \left \| \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \sum_{j=1}^{n} x_{j,l}^{2} \right \|_{\mathbb{L}^{1}},\]
and:
\[\left \| \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \sum_{j=1}^{n} x_{j,l}^{2} \right \|_{\mathbb{L}^{1}} = \left \| d_{l}(n)^{2} \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \right \|_{\mathbb{L}^{1}} = d_{l}(n)^{2} \mathbb{E} \left( \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \right).\]
In \cite{hannan73clt}, Hannan proved a central limit theorem~\eqref{15} together with the convergence of the second-order moments~\eqref{15bis}. Consequently, we have:
\[\left \| d_{l}(n)^{2} \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \right \|_{\mathbb{L}^{1}} = \mathcal{O}(1),\]
hence:
\[\sup_{|k| \leq 2c_{n}} \left( \left \| \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \sum_{j=1}^{n-|k|} x_{j,l}^{2} \right \|_{\mathbb{L}^{1}} \right) \leq d_{l}(n)^{2} \mathbb{E} \left( \left( \hat{\beta}_{l} - \beta_{l} \right)^{2} \right) = \mathcal{O}(1).\]
So we can conclude:
\[\sup_{|k| \leq 2c_{n}} \left( \frac{1}{2n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \sum_{j=1}^{n-|k|} x_{j,l}^{2} \right \|_{\mathbb{L}^{1}} \right) = \mathcal{O} \left( \frac{1}{n} \right).\]
For the second term, the same arguments are used, because $\sum_{j=1}^{n-|k|} x_{j+|k|,l}^{2} \leq \sum_{j=1}^{n} x_{j,l}^{2}$. Hence:
\[\sup_{|k| \leq 2c_{n}} \left( \frac{1}{2n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \left( \beta_{l'} - \hat{\beta}_{l'} \right)^{2} \sum_{j=1}^{n-|k|} x_{j+|k|,l'}^{2} \right \|_{\mathbb{L}^{1}} \right) = \mathcal{O} \left( \frac{1}{n} \right).\]
For the third term, for all $l$ fixed, by the Cauchy-Schwarz inequality, we get:
\[\left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right \|_{\mathbb{L}^{1}} \leq \left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \right \|_{\mathbb{L}^{2}} \left \| \beta_{l} - \hat{\beta}_{l} \right \|_{\mathbb{L}^{2}}.\]
Then, we have:
\begin{eqnarray*}
\left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \right \|_{\mathbb{L}^{2}}^{2}
&=& \sum_{j=1}^{n-|k|} \sum_{i=1}^{n-|k|} \gamma_{i-j} x_{i+|k|,l} x_{j+|k|,l} \\
&=& \sum_{i=1}^{n-|k|} \sum_{j=i}^{n-|k|} \gamma_{j-i} x_{i+|k|,l} x_{j+|k|,l} + \sum_{i=1}^{n-|k|} \sum_{j=1}^{i-1} \gamma_{j-i} x_{i+|k|,l} x_{j+|k|,l}.
\end{eqnarray*}
For the first term of the right-hand side, with the change of variable $r=j-i$, it follows that:
\begin{eqnarray*}
\sum_{i=1}^{n-|k|} \sum_{j=i}^{n-|k|} \gamma_{j-i} x_{i+|k|,l} x_{j+|k|,l}
&=& \sum_{i=1}^{n-|k|} \sum_{r=0}^{n-|k|-i} \gamma_{r} x_{i+|k|,l} x_{i+|k|+r,l} \\
&\leq & \sum_{i=1}^{n-|k|} \sum_{r=0}^{n-|k|-i} | \gamma_{r} | | x_{i+|k|,l} | |x_{i+|k|+r,l} | \\
&\leq & \sum_{i=1}^{n-|k|} \sum_{r=0}^{n-|k|-i} | \gamma_{r} | ( x_{i+|k|,l}^{2} + x_{i+|k|+r,l}^{2}) \\
&\leq & \sum_{i=1}^{n-|k|} \sum_{r=0}^{n-|k|-i} | \gamma_{r} | x_{i+|k|,l}^{2} + \sum_{i=1}^{n-|k|} \sum_{r=0}^{n-|k|-i} | \gamma_{r} | x_{i+|k|+r,l}^{2}.
\end{eqnarray*}
Since $r \leq n-|k|-i$, we have $i \leq n-|k|-r$, and we obtain:
\begin{multline*}
\sum_{i=1}^{n-|k|} \sum_{r=0}^{n-|k|-i} | \gamma_{r} | x_{i+|k|,l}^{2} + \sum_{i=1}^{n-|k|} \sum_{r=0}^{n-|k|-i} | \gamma_{r} | x_{i+|k|+r,l}^{2} \\
\leq \sum_{i=1}^{n-|k|} x_{i+|k|,l}^{2} \sum_{r=0}^{n-|k|-i} | \gamma_{r} | + \sum_{r=0}^{n-|k|} | \gamma_{r} | \sum_{i=1}^{n-|k|-r} x_{i+|k|+r,l}^{2} \\
\leq \sum_{i=1}^{n-|k|} x_{i+|k|,l}^{2} \sum_{r \geq 0} | \gamma_{r} | + \sum_{r \geq 0} | \gamma_{r} | \sum_{i=1}^{n-|k|-r} x_{i+|k|+r,l}^{2}.
\end{multline*}
Since $\sum_{k} | \gamma_{k} | < \infty$, there exists a constant $M$ such that $\sum_{r \geq 0} | \gamma_{r} | \leq M$, and since, for every $r$, $\sum_{i=1}^{n-|k|-r} x_{i+|k|+r,l}^{2} \leq \sum_{i=1}^{n} x_{i,l}^{2}$:
\begin{eqnarray*}
\sum_{i=1}^{n-|k|} x_{i+|k|,l}^{2} \sum_{r \geq 0} | \gamma_{r} | + \sum_{r \geq 0} | \gamma_{r} | \sum_{i=1}^{n-|k|-r} x_{i+|k|+r,l}^{2}
&\leq & M \left( \sum_{i=1}^{n} x_{i,l}^{2} + \sum_{i=1}^{n} x_{i,l}^{2} \right) \\
&\leq & M' \sum_{i=1}^{n} x_{i,l}^{2}.
\end{eqnarray*}
With the same idea, for the second term of the right-hand side, we have:
\[\left| \sum_{i=1}^{n-|k|} \sum_{j=1}^{i-1} \gamma_{j-i} x_{i+|k|,l} x_{j+|k|,l} \right| \leq M' \sum_{j=1}^{n} x_{j,l}^{2},\]
thus:
\[\sup_{|k| \leq 2c_{n}} \left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \right \|_{\mathbb{L}^{2}}^{2} \leq 2 M' \sum_{j=1}^{n} x_{j,l}^{2} = M'' d_{l}(n)^{2}.\]
In conclusion:
\begin{eqnarray*}
\left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right \|_{\mathbb{L}^{1}}
&\leq & \left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \right \|_{\mathbb{L}^{2}} \left \| \beta_{l} - \hat{\beta}_{l} \right \|_{\mathbb{L}^{2}} \\
&\leq & C d_{l}(n) \sqrt{ \mathbb{E} \left( (\beta_{l} - \hat{\beta}_{l})^{2} \right)} \\
&\leq & C \sqrt{ d_{l}(n)^{2} \mathbb{E} \left( (\beta_{l} - \hat{\beta}_{l})^{2} \right)} = \mathcal{O}(1),
\end{eqnarray*}
hence:
\[\sup_{|k| \leq 2c_{n}} \left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right \|_{\mathbb{L}^{1}} = \mathcal{O}(1),\]
and therefore:
\[\sup_{|k| \leq 2c_{n}} \left( \frac{1}{n} \sum_{l=1}^{p} \left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right \|_{\mathbb{L}^{1}} \right) = \mathcal{O} \left( \frac{1}{n} \right).\]
The same idea is used for the fourth term of the right-hand side of~\eqref{252}. Thus~\eqref{251} is verified and consequently~\eqref{201} is true.
\end{proof}
\subsection{Proof of Lemma~\ref{6.2.7}}
We start by expanding the term $Y_{j}$, writing $Y_{j} - \sum_{l=1}^{p} x_{j,l} \hat{\beta}_{l} = \sum_{l=1}^{p} x_{j,l} ( \beta_{l} - \hat{\beta}_{l} ) + \epsilon_{j}$:
\begin{multline*}
\Big \| \hat{\gamma}_{k}^{\ast} - \hat{\gamma}_{k} \Big \|_{\mathbb{L}^{1}}
= \Bigg \| \frac{1}{n} \sum_{j=1}^{n-|k|} \Bigg( Y_{j} - \sum_{l=1}^{p} x_{j,l} \hat{\beta}_{l} \Bigg) \Bigg( Y_{j+|k|} - \sum_{l=1}^{p} x_{j+|k|,l} \hat{\beta}_{l} \Bigg) \\
- \frac{1}{n} \sum_{j=1}^{n-|k|} \Bigg( Y_{j} - \sum_{l=1}^{p} x_{j,l} \beta_{l} \Bigg) \Bigg( Y_{j+|k|} - \sum_{l=1}^{p} x_{j+|k|,l} \beta_{l} \Bigg) \Bigg \|_{\mathbb{L}^{1}} \\
= \Bigg \| \frac{1}{n} \sum_{j=1}^{n-|k|} \Bigg( \sum_{l=1}^{p} x_{j,l} \Big( \beta_{l} - \hat{\beta_{l}} \Big) + \epsilon_{j} \Bigg) \Bigg( \sum_{l=1}^{p} x_{j+|k|,l} \Big( \beta_{l} - \hat{\beta}_{l} \Big) + \epsilon_{j+|k|} \Bigg) \\
- \frac{1}{n} \sum_{j=1}^{n-|k|} \Bigg( Y_{j} - \sum_{l=1}^{p} x_{j,l} \beta_{l} \Bigg) \Bigg( Y_{j+|k|} - \sum_{l=1}^{p} x_{j+|k|,l} \beta_{l} \Bigg) \Bigg \|_{\mathbb{L}^{1}}.
\end{multline*}
Since $\epsilon_{j}$ is equal to $Y_{j} - \sum_{l=1}^{p} x_{j,l} \beta_{l}$, the terms $\epsilon_{j} \epsilon_{j+|k|}$ cancel and we have:
\begin{multline*}
\Bigg \| \frac{1}{n} \sum_{j=1}^{n-|k|} \Bigg( \sum_{l=1}^{p} x_{j,l} \Big( \beta_{l} - \hat{\beta_{l}} \Big) + \epsilon_{j} \Bigg) \Bigg( \sum_{l=1}^{p} x_{j+|k|,l} \Big( \beta_{l} - \hat{\beta}_{l} \Big) + \epsilon_{j+|k|} \Bigg) \\
- \frac{1}{n} \sum_{j=1}^{n-|k|} \Bigg( Y_{j} - \sum_{l=1}^{p} x_{j,l} \beta_{l} \Bigg) \Bigg( Y_{j+|k|} - \sum_{l=1}^{p} x_{j+|k|,l} \beta_{l} \Bigg) \Bigg \|_{\mathbb{L}^{1}} \\
= \Bigg \| \frac{1}{n} \sum_{j=1}^{n-|k|} \Bigg( \sum_{l=1}^{p} x_{j,l} \Big( \beta_{l} - \hat{\beta_{l}} \Big) \sum_{l=1}^{p} x_{j+|k|,l} \Big( \beta_{l} - \hat{\beta}_{l} \Big) \\
+ \epsilon_{j} \sum_{l=1}^{p} x_{j+|k|,l} \Big( \beta_{l} - \hat{\beta}_{l} \Big) + \sum_{l=1}^{p} x_{j,l} \Big( \beta_{l} - \hat{\beta_{l}} \Big) \epsilon_{j+|k|} \Bigg) \Bigg \|_{\mathbb{L}^{1}}.
\end{multline*}
Using the triangle inequality, we obtain:
\begin{multline*}
\Bigg \| \frac{1}{n} \sum_{j=1}^{n-|k|} \Bigg( \sum_{l=1}^{p} x_{j,l} \Big( \beta_{l} - \hat{\beta_{l}} \Big) \sum_{l=1}^{p} x_{j+|k|,l} \Big( \beta_{l} - \hat{\beta}_{l} \Big) \\
+ \epsilon_{j} \sum_{l=1}^{p} x_{j+|k|,l} \Big( \beta_{l} - \hat{\beta}_{l} \Big) + \sum_{l=1}^{p} x_{j,l} \Big( \beta_{l} - \hat{\beta_{l}} \Big) \epsilon_{j+|k|} \Bigg) \Bigg \|_{\mathbb{L}^{1}} \\
\leq \left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \sum_{l=1}^{p} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \sum_{l=1}^{p} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right) \right \|_{\mathbb{L}^{1}} + \left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \epsilon_{j} \sum_{l=1}^{p} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right) \right \|_{\mathbb{L}^{1}} \\
+ \left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \sum_{l=1}^{p} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \epsilon_{j+|k|} \right) \right \|_{\mathbb{L}^{1}},
\end{multline*}
then we interchange the sums over $j$ and $l$:
\begin{eqnarray*}
&\phantom{=}&
\left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \sum_{l=1}^{p} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \sum_{l=1}^{p} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right) \right \|_{\mathbb{L}^{1}} \\
&& +\: \left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \epsilon_{j} \sum_{l=1}^{p} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right) \right \|_{\mathbb{L}^{1}} + \left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \sum_{l=1}^{p} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \epsilon_{j+|k|} \right) \right \|_{\mathbb{L}^{1}} \\
&\leq & \left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \sum_{l=1}^{p} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \sum_{l=1}^{p} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right) \right \|_{\mathbb{L}^{1}} \\
&& +\: \left \| \frac{1}{n} \sum_{l=1}^{p} \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right \|_{\mathbb{L}^{1}} + \left \| \frac{1}{n} \sum_{l=1}^{p} \sum_{j=1}^{n-|k|} \epsilon_{j+|k|} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \right \|_{\mathbb{L}^{1}},
\end{eqnarray*}
and using again the triangle inequality:
\begin{eqnarray*}
&\phantom{=}&
\left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \sum_{l=1}^{p} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \sum_{l=1}^{p} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right) \right \|_{\mathbb{L}^{1}} \\
&& +\: \left \| \frac{1}{n} \sum_{l=1}^{p} \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right \|_{\mathbb{L}^{1}} + \left \| \frac{1}{n} \sum_{l=1}^{p} \sum_{j=1}^{n-|k|} \epsilon_{j+|k|} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \right \|_{\mathbb{L}^{1}} \\
&\leq & \left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \sum_{l=1}^{p} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \sum_{l'=1}^{p} x_{j+|k|,l'} \left( \beta_{l'} - \hat{\beta}_{l'} \right) \right) \right \|_{\mathbb{L}^{1}} \\
&& +\: \frac{1}{n} \sum_{l=1}^{p} \left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right \|_{\mathbb{L}^{1}} + \frac{1}{n} \sum_{l=1}^{p} \left \| \sum_{j=1}^{n-|k|} \epsilon_{j+|k|} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \right \|_{\mathbb{L}^{1}}.
\end{eqnarray*}
For the first term of the right-hand side, we have:
\begin{multline*}
\left \| \frac{1}{n} \sum_{j=1}^{n-|k|} \left( \sum_{l=1}^{p} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \sum_{l'=1}^{p} x_{j+|k|,l'} \left( \beta_{l'} - \hat{\beta}_{l'} \right) \right) \right \|_{\mathbb{L}^{1}} \\
= \left \| \frac{1}{n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \sum_{j=1}^{n-|k|} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) x_{j+|k|,l'} \left( \beta_{l'} - \hat{\beta}_{l'} \right) \right \|_{\mathbb{L}^{1}},
\end{multline*}
then, by the triangle inequality:
\begin{multline*}
\left \| \frac{1}{n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \sum_{j=1}^{n-|k|} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) x_{j+|k|,l'} \left( \beta_{l'} - \hat{\beta}_{l'} \right) \right \|_{\mathbb{L}^{1}} \\
\leq \frac{1}{n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \sum_{j=1}^{n-|k|} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) x_{j+|k|,l'} \left( \beta_{l'} - \hat{\beta}_{l'} \right) \right \|_{\mathbb{L}^{1}}.
\end{multline*}
Since $ab \leq \frac{1}{2} a^{2} + \frac{1}{2} b^{2}$, we get:
\begin{multline*}
\frac{1}{n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \sum_{j=1}^{n-|k|} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) x_{j+|k|,l'} \left( \beta_{l'} - \hat{\beta}_{l'} \right) \right \|_{\mathbb{L}^{1}} \\
\leq \frac{1}{n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \frac{1}{2} \sum_{j=1}^{n-|k|} \left( x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \right)^{2} + \frac{1}{2} \sum_{j=1}^{n-|k|} \left( x_{j+|k|,l'} \left( \beta_{l'} - \hat{\beta}_{l'} \right) \right)^{2} \right \|_{\mathbb{L}^{1}},
\end{multline*}
and by the triangle inequality:
\begin{multline*}
\frac{1}{n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \frac{1}{2} \sum_{j=1}^{n-|k|} \left( x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \right)^{2} + \frac{1}{2} \sum_{j=1}^{n-|k|} \left( x_{j+|k|,l'} \left( \beta_{l'} - \hat{\beta}_{l'} \right) \right)^{2} \right \|_{\mathbb{L}^{1}} \\
\leq \frac{1}{2n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \sum_{j=1}^{n-|k|} x_{j,l}^{2} \right \|_{\mathbb{L}^{1}} + \frac{1}{2n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \left( \beta_{l'} - \hat{\beta}_{l'} \right)^{2} \sum_{j=1}^{n-|k|} x_{j+|k|,l'}^{2} \right \|_{\mathbb{L}^{1}}.
\end{multline*}
In conclusion, we have:
\begin{eqnarray*}
\left \| \hat{\gamma}_{k}^{\ast} - \hat{\gamma}_{k} \right \|_{\mathbb{L}^{1}}
&\leq & \frac{1}{2n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \left( \beta_{l} - \hat{\beta_{l}} \right)^{2} \sum_{j=1}^{n-|k|} x_{j,l}^{2} \right \|_{\mathbb{L}^{1}} + \frac{1}{2n} \sum_{l=1}^{p} \sum_{l'=1}^{p} \left \| \left( \beta_{l'} - \hat{\beta}_{l'} \right)^{2} \sum_{j=1}^{n-|k|} x_{j+|k|,l'}^{2} \right \|_{\mathbb{L}^{1}} \\
&& +\: \frac{1}{n} \sum_{l=1}^{p} \left \| \sum_{j=1}^{n-|k|} \epsilon_{j} x_{j+|k|,l} \left( \beta_{l} - \hat{\beta}_{l} \right) \right \|_{\mathbb{L}^{1}} + \frac{1}{n} \sum_{l=1}^{p} \left \| \sum_{j=1}^{n-|k|} \epsilon_{j+|k|} x_{j,l} \left( \beta_{l} - \hat{\beta_{l}} \right) \right \|_{\mathbb{L}^{1}}.
\end{eqnarray*}
\newpage
\bibliographystyle{abbrv}
Over the past years, the microblogging platform Twitter has become one of the most popular social networks on the Web. Users can build a network of follower connections to other Twitter users, which means that they can subscribe to content posted by their \textit{followees} \cite{Myers2014,kwak2010}. Twitter was also the first social platform that adopted the concept of \textit{hashtags}, as suggested by Chris Messina\footnote{\url{https://twitter.com/chrismessina/status/223115412}}.
Hashtags are freely-chosen keywords starting with the hash character ``\#'' to annotate, categorize and contextualize Twitter posts (i.e., tweets) \cite{romero2011,Huang2010}. The advantage of hashtags is that anyone with an interest in a hashtag can track it and search for it \cite{small2011hashtag}, thus receiving content posted by somebody outside of their own Twitter network. For example, users can retrieve tweets created during the European football championship by searching for the hashtag \textit{\#euro2016}, even if they do not have a social link to the tweet producers. Meanwhile, many social platforms, such as Instagram and Facebook, have adopted hashtags as well.
\para{Problem.} Unsurprisingly, the widespread acceptance of hashtags has sparked a lot of research in the field of \textit{hashtag recommendations} (see Section \ref{sec:relatedwork} for a selection of approaches) to support users in assigning the most descriptive hashtags to their posts. Existing methods typically utilize collaborative, content and topic features of tweets to recommend hashtags to users. Undoubtedly, these features play an important role in recommending hashtags that best describe a tweet. In this paper, however, we are especially interested in predicting which hashtags a user will likely apply in a newly created tweet given previous hashtag assignments.
The main problem we want to address is whether we can identify \textit{temporal usage patterns} that influence if a Twitter user will likely utilize a certain hashtag in a tweet, given the hashtags she and/or her followees have been using in the past. Our goal is to describe such temporal usage patterns using a model from human memory theory and to design a hashtag recommendation algorithm based on that. To the best of our knowledge, so far, few studies (e.g., \cite{harvey2015long}) have investigated the way temporal effects can be exploited in the hashtag recommendation process.
\para{Approach and methods.} We propose a cognitive-inspired hashtag recommendation algorithm we call BLL$_{I,S}${} that is based on temporal usage patterns of hashtags derived from empirical evidence. In essence, these patterns reflect how a person's own hashtags as well as hashtags from the social network are utilized and reused. In our approach, we utilize the Base-Level Learning (BLL) equation from the cognitive architecture ACT-R \cite{anderson2004integrated,anderson_reflections_1991} to model temporal usage of hashtags. The BLL equation accounts for the time-dependent decay of item exposure in human memory. It quantifies the usefulness of a piece of information (e.g., a hashtag) based on how frequently and how recently it was used by a user in the past and models this time-dependent decay by means of a power-law distribution. Thus, BLL$_{I,S}${} takes into consideration the frequency and recency of hashtags used by a user and her followees in the past.
We presented the BLL equation in our previous work as a model to recommend tags in social bookmarking systems such as BibSonomy and CiteULike \cite{www_bll, Kowald2016a}. In the present work, we build upon these results by adopting the BLL equation to model the effect of time on the reuse of individual and social hashtags to build our hashtag recommendation algorithm. We demonstrate the efficacy of our approach in two empirical social networks crawled from Twitter. The first social network, termed \textit{CompSci}{} dataset, is built upon the tweets of a sample of Twitter users, who have been identified as computer scientists in previous related work \cite{hadgu2014identifying}, and their followees. The second network, termed \textit{Random}{} dataset, is built upon the tweets of a set of randomly chosen Twitter users and their followees. We experiment with these datasets to investigate the performance of our hashtag recommendation approach in two settings: (i) tweets of a domain-specific Twitter network, and (ii) tweets of a random network of Twitter users.
\para{Contributions and findings.} The main contributions of our work are two-fold. Firstly, our paper shows that time has a large effect on individual as well as social hashtag reuse in Twitter. Specifically, we observe a time-dependent decay of individual and social hashtag reuse that follows a power-law distribution. This finding paves the way for our idea to utilize the BLL equation as a predictive model to recommend hashtags for new tweets. Thus, our second contribution is that we design, develop and evaluate a personalized hashtag recommendation algorithm based on the BLL equation that outperforms current state-of-the-art approaches.
We implement the BLL equation in two variants, where the first one (i.e., BLL$_{I,S}${}) predicts the hashtags of a user solely based on past hashtag usage, and the second one (i.e., BLL$_{I,S,C}${}) combines BLL$_{I,S}${} with a content-based tweet analysis to also incorporate the text of the currently proposed tweet of a user. We evaluate our approach using standard evaluation protocols and metrics, and we find that our approach provides significantly higher prediction accuracy and ranking estimates than current state-of-the-art hashtag recommendation algorithms in both scenarios. We attribute this to the fact that our approach, in contrast to other related methods, mimics the way humans use and adapt hashtags by building upon insights from human memory theory (i.e., the BLL equation).
\para{Structure of this paper.} In Section \ref{sec:datasets}, we continue by describing the crawling procedure of our two Twitter datasets and analyzing hashtag usage types in these datasets. Then, in Section \ref{sec:analysis}, we study temporal usage patterns of individual and social hashtag reuse. In Section \ref{sec:approach}, we describe two variants of our approach (i.e., without and with the current tweet). This is followed in Section \ref{sec:evaluation} by our evaluation methodology and experimental results. Finally, we discuss related work in the field in Section \ref{sec:relatedwork} and we give a summary of our findings as well as our future plans in Section \ref{sec:conclusion}.
\section{Datasets} \label{sec:datasets}
In this section, we describe the data collection procedure and the two datasets we use for our study. Additionally, we investigate individual as well as social hashtag reuse patterns in our datasets as a prerequisite for our hashtag recommendation approach.
\para{Crawling strategy and dataset statistics.} In order to address our research goals, we crawl two datasets using the Search API of Twitter\footnote{\url{https://dev.twitter.com/rest/public/search}}. The final statistics of these datasets are illustrated in Table \ref{tab:datasets}.
The first one (i.e., \textit{CompSci}{} dataset) consists of researchers from the field of computer science and their followees, while the second one (i.e., \textit{Random}{} dataset) consists of random people and their followees. Our idea is to test our hashtag recommendation approach in two different network settings: (i) a domain-specific one, in our case the domain of computer scientists, and (ii) a more general one consisting of random Twitter users. Our crawling strategy for both datasets comprises the following four steps:
\subpara{(a) Crawl seed users.} We start by identifying and crawling a list of seed users $U_S$ for each dataset. In the case of the \textit{CompSci}{} dataset, we take the users who were identified as computer scientists in the work of \cite{hadgu2014identifying}. In the case of the \textit{Random}{} dataset, we used the Streaming API of Twitter\footnote{\url{https://dev.twitter.com/streaming/overview}} in October 2015 to obtain a stream of tweets and extracted the user-ids to build our list of random seed users. From both user lists, we remove all users with more than 180 followees, which results in $|U_S|$ = 2,551 seed users for the \textit{CompSci}{} dataset and $|U_S|$ = 3,466 seed users for the \textit{Random}{} dataset. The threshold of a maximum of 180 followees is chosen because the Twitter Search API only allows 180 requests per 15 minutes, which allows us to crawl the tweets of all followees of a seed user within this reasonable time window.
\subpara{(b) Crawl followees.} Next, we use these follower relationships to crawl the followees $F$ of the seed users in order to create a directed user network for analyzing the social influence on hashtag reuse. Given these seed users, the average number of followees per seed user is $|F| / |U_S|$ = 94 in the case of the \textit{CompSci}{} dataset and 72 in the case of the \textit{Random}{} dataset. Following this notation, the set of followees of user $u$ is denoted as $F_u$ in the remainder of this paper. Overall, our crawling procedure gives us $|U|$ = 91,776 total users for the \textit{CompSci}{} dataset and $|U|$ = 127,112 total users for the \textit{Random}{} dataset.
\begin{table}[t!]
\small
\setlength{\tabcolsep}{2.3pt}
\centering
\begin{tabular}{l||cccccc}
\specialrule{.2em}{.1em}{.1em}
Dataset & $|U_S|$ & $|F|$ & $|U|$ & $|T|$ & $|HT|$ & $|HTAS|$ \\\hline
\textit{CompSci}{} & 2,551 & 241,225 & 91,776 & 5,649,359 & 1,081,403 & 9,161,842 \\\hline
\textit{Random}{} & 3,466 & 252,219 & 127,112 & 8,157,702 & 1,507,773 & 13,628,750 \\
\specialrule{.2em}{.1em}{.1em}
\end{tabular}
\caption{Statistics of our \textit{CompSci}{} and \textit{Random}{} Twitter datasets. Here, $|U_S|$ is the number of seed users, $|F|$ is the number of followees of these seed users, $|U|$ is the number of total users, $|T|$ is the number of Tweets, $|HT|$ is the number of distinct hashtags and $|HTAS|$ is the number of hashtag assignments.}
\label{tab:datasets}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{barplot}
\caption{Analysis of hashtag usage types in our two datasets. For each hashtag assignment, we study whether the corresponding hashtag has been used by the same user before in time (``individual''), by some of the users she follows (``social''), by both (``individual/social''), by anyone else in the dataset (``network'') or neither of them (``external''). We find that between 66\% and 81\% of hashtag assignments in both datasets can be explained by individual or social hashtag usage (i.e., the sum of ``individual'', ``social'' and ``individual/social'').
\vspace{-3mm}}
\label{fig:intro}
\end{figure}
\begin{figure*}[t!]
\centering
\captionsetup[subfigure]{justification=centering}
\subfloat[][Individual hashtag reuse\\\textit{CompSci}{} dataset ($R^2$ = .883)]{
\includegraphics[width=0.24\textwidth]{researcher_individual_recency_year}
}
\subfloat[][Individual hashtag reuse\\\textit{Random}{} dataset ($R^2$ = .894)]{
\includegraphics[width=0.24\textwidth]{general_individual_recency_year}
}
\subfloat[][Social hashtag reuse\\\textit{CompSci}{} dataset ($R^2$ = .689)]{
\includegraphics[width=0.24\textwidth]{researcher_social_recency_year}
}
\subfloat[][Social hashtag reuse\\\textit{Random}{} dataset ($R^2$ = .771)]{
\includegraphics[width=0.24\textwidth]{general_social_recency_year}
}
\caption{The effect of time on individual and social hashtag reuse for the \textit{CompSci}{} and \textit{Random}{} datasets (plots are in log-log scale). Plots (a) and (b) show that the more recently a hashtag $ht$ was used by a user $u$, the higher its individual reuse count (i.e., people tend to reuse hashtags that have been used very recently by their own). Plots (c) and (d) show that the more recently a user $u$ was exposed to a hashtag $ht$, which was used by her followees $F_u$, the higher its social reuse count (i.e., people tend to reuse hashtags that have been used recently in the social network). Additionally, we report the $R^2$ estimates for the linear fits of the data. We find that temporal effects play an important role in individual and social hashtag reuse in both datasets.
\vspace{-3mm}}
\label{fig:analysis}
\end{figure*}
\subpara{(c) Crawl tweets.} In the third step, we crawl the 200 most recent tweets of all users and remove the tweets in which no hashtags are used. The threshold of a maximum of 200 most recent tweets is set because of another restriction of the Twitter Search API, which only allows 200 tweets to be retrieved per request. This crawling procedure results in $|T|$ = 5,649,359 tweets for the \textit{CompSci}{} dataset with an average number of tweets per user $|T| / |U|$ = 61, and $|T|$ = 8,157,702 tweets for the \textit{Random}{} dataset with $|T| / |U|$ = 64. Our crawled tweets cover a time range from 2007 to 2015.
\subpara{(d) Extract hashtags.} Finally, we extract the hashtags of the tweets by searching for all words that start with a ``\#'' character. This results in $|HTAS|$ = 9,161,842 hashtag assignments for $|HT|$ = 1,081,403 distinct hashtags in the \textit{CompSci}{} network and $|HTAS|$ = 13,628,750 for $|HT|$ = 1,507,773 in the \textit{Random}{} network. Thus, in both datasets, each distinct hashtag is used approximately 9 times on average and each user uses approximately 100 hashtag assignments in her tweets on average. Examples for popular hashtags are \textit{\#bigdata}, \textit{\#iot} and \textit{\#ux} in case of the \textit{CompSci}{} dataset, and \textit{\#shahbag}, \textit{\#ff} and \textit{\#art} in case of the \textit{Random}{} dataset.
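As a minimal illustration of step (d), hashtags can be extracted with a simple pattern match. The regular expression below is an assumption on our side (the paper only states that all words starting with ``\#'' are collected); the actual crawling code may tokenize tweets differently:

```python
import re

# Illustrative sketch of step (d): collect all words starting with "#".
# The exact tokenization used for the datasets is an assumption here.
HASHTAG_RE = re.compile(r"#(\w+)")

def extract_hashtags(tweet_text):
    """Return the lower-cased hashtags found in a single tweet."""
    return [ht.lower() for ht in HASHTAG_RE.findall(tweet_text)]

def hashtag_assignments(tweets):
    """Flatten a list of tweet texts into (tweet_index, hashtag) pairs."""
    return [(i, ht) for i, text in enumerate(tweets)
            for ht in extract_hashtags(text)]
```

Counting the distinct hashtags and the flattened pairs then yields $|HT|$ and $|HTAS|$, respectively.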
\para{Analysis of hashtag usage types.} In our datasets, we analyze hashtag assignments as well as hashtag reuse practices with the aim of identifying the different types of hashtag usages as a prerequisite for our recommendation approach. Specifically, for each hashtag assignment, we study whether the corresponding hashtag has either been used by the same user before (``individual''), by some of her followees (``social''), by both (``individual/social''), by anyone else in the dataset (``network'') or by neither of them (``external'').
The results of this study are shown in Figure \ref{fig:intro}. We find that 66\% of hashtag assignments in the \textit{CompSci}{} dataset and 81\% in the \textit{Random}{} dataset can be explained by individual or social hashtag reuse. This finding further corroborates our choice to utilize these two types of influences (i.e., individual and social) to create our model. In contrast to these large numbers, the 6\% to 8\% of hashtags in the ``network'' category is relatively small. Interestingly, the amount of ``external'' hashtags is twice as high in the \textit{CompSci}{} dataset (i.e., 26\%) as in the \textit{Random}{} one (i.e., 13\%). Thus, in our datasets, computer scientists tend to use more hashtags that have not been previously introduced in the network than random Twitter users do. Because of this, we expect the recommendation accuracy to be generally lower in the \textit{CompSci}{} dataset than in the \textit{Random}{} one, which we evaluate in Section \ref{sec:evaluation}. Summing up, both individual and social hashtags have an impact on users' choice of hashtags for a new tweet.
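The hashtag usage types of Figure \ref{fig:intro} can be computed with a straightforward lookup over earlier hashtag assignments. The following Python sketch assumes simple in-memory data structures (a usage history per hashtag and a followee set per user); these representations are illustrative and not the ones used in our implementation:

```python
def classify_assignment(ht, user, ts, history, followees):
    """Classify one hashtag assignment into the categories of Figure 2.

    history:   dict hashtag -> list of (user, timestamp) usages.
    followees: dict user -> set of followee user-ids.
    """
    earlier = [(u, t) for (u, t) in history.get(ht, []) if t < ts]
    individual = any(u == user for u, _ in earlier)
    social = any(u in followees.get(user, set()) for u, _ in earlier)
    if individual and social:
        return "individual/social"
    if individual:
        return "individual"
    if social:
        return "social"
    if earlier:                      # used before by somebody else
        return "network"
    return "external"                # never used in the dataset before
```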
\section{Temporal Effects on Hashtag\\Reuse in Twitter} \label{sec:analysis}
In this section, we study to what extent temporal effects play a role in the reuse of individual and social hashtags in our two datasets (i.e., \textit{CompSci}{} and \textit{Random}{}). Specifically, we analyze the recency of hashtag assignments (i.e., the time since the last hashtag usage/exposure), as well as whether this time-dependent decay follows a power-law or an exponential distribution.
\para{Temporal effects on individual hashtag reuse.} The effect of time on individual hashtag reuse is visualized in the plots (a) and (b) of Figure \ref{fig:analysis}. To put the x-scale of these plots onto a meaningful range, we set the threshold for the maximum hashtag reuse recency to one year (i.e., 8,760 hours). The plots show the individual hashtag reuse count plotted over the reuse recency of a hashtag $ht$ by a user $u$ in hours. Hence, for each hashtag assignment of a hashtag $ht$ by user $u$, we take the time since the last usage of $ht$ by $u$ (i.e., the reuse recency) and pool together all hashtag assignments with the same recency value (i.e., the same time difference in hours). The individual reuse count for this recency value is then given by the size of the set of these hashtag assignments.
The two plots show similar results for both datasets and indicate that the more recently a hashtag $ht$ was used by a user $u$ in the past, the higher its individual reuse count is. Interestingly, there is a clear peak after 24 hours in both datasets, which further indicates that users typically use the same set of hashtags in this time span and thus tend to tweet about similar topics on a daily basis. Furthermore, we also observe high $R^2$ values of nearly .9 for the linear fits in the log-log scaled plots, which indicates that the data is well described by a power function. This is also suggested by the power-law-based model of the BLL equation \cite{anderson_reflections_1991,anderson2004integrated}. In contrast, the linear fits in log-linear scaled plots only provide $R^2$ values of approximately .7, where high values would speak in favor of an exponential function.
\para{Temporal effects on social hashtag reuse.} Plots (c) and (d) of Figure \ref{fig:analysis} show the effect of time on the social hashtag reuse for the \textit{CompSci}{} and \textit{Random}{} datasets. These plots are created similarly as plots (a) and (b) but this time, we plot the social hashtag reuse count over the reuse recency of a hashtag $ht$ by the followees $F_u$ of user $u$. Hence, for each hashtag assignment of $ht$ by $u$, we take the most recent usage timestamp of $ht$ by $F_u$. The difference between this timestamp and the timestamp of the currently analyzed hashtag assignment indicates the time since the last social exposure of $ht$ to $u$. Again, we set the threshold for the maximum hashtag reuse recency to one year (i.e., 8,760 hours).
In these plots, we observe similar results for the two datasets since, in both cases, the more recently a user was exposed to a hashtag, the higher its social reuse count is. Furthermore, there is again (i) a clear peak after 24 hours, and (ii) the $R^2$ values for the linear fits in the log-log scaled plots (i.e., $\approx$ .7) are larger than in the log-linear scaled plots (i.e., $\approx$ .4), which speaks in favor of a power function. We now study whether this is really the case.
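The reuse counts plotted in Figure \ref{fig:analysis} can be obtained by a single pass over the chronologically sorted hashtag assignments. Below is a sketch for the individual case (plots (a) and (b)); the social case works analogously, using the followees' most recent usage timestamp instead of the user's own:

```python
from collections import Counter

def individual_reuse_counts(assignments, max_hours=8760):
    """Pool hashtag assignments by individual reuse recency in hours.

    assignments: (user, hashtag, timestamp_in_hours) tuples, sorted by time.
    Returns a Counter mapping recency -> number of assignments.
    """
    last_use = {}                      # (user, hashtag) -> previous timestamp
    counts = Counter()
    for user, ht, ts in assignments:
        key = (user, ht)
        if key in last_use:
            recency = ts - last_use[key]
            if recency <= max_hours:   # cut off at one year, as in Figure 3
                counts[recency] += 1
        last_use[key] = ts
    return counts
```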
\para{Power-law vs. exponential time-dependent decay.} The question whether a power or an exponential function is better suited to model the time-dependent decay of hashtag reuse is of interest especially for the design of our hashtag recommendation approach since both types of functions have been used in the area of time-aware recommender systems. While the BLL equation suggests the use of a power function to model the decay of item exposure in human memory \cite{anderson_reflections_1991}, related hashtag recommender approaches, such as the one proposed in \cite{harvey2015long}, use an exponential function for this purpose. As already mentioned, the visual inspection of Figure \ref{fig:analysis} and the $R^2$ values of the linear fits favor a power function. However, \cite{clauset2009power} has shown that this least squares-based method can lead to misinterpretations and thus, a likelihood ratio-based test is suggested.
We use the Python implementation \cite{alstott2014powerlaw} of the method described in \cite{clauset2009power} to validate if a power function produces a better fit than an exponential one. The results of this test are shown in Table \ref{tab:analysis}. The main value of interest here is the log-likelihood ratio $R$ between the two functions. As we see, $R > 0$ in all four cases with $p < .001$. This means that the power function indeed provides a better fit than the exponential function for explaining temporal effects on individual and social hashtag reuse. We also provide the $x_{min}$ and $\alpha$ values of the fits. In this respect, the $\alpha$ slopes can be used to set the $d$ parameter of the BLL equation (i.e., 1.7 in the individual case and 1.25 in the social case, see Section \ref{sec:approach}). Interestingly, these values are much higher than the suggested value of BLL's $d$ parameter, which is .5 \cite{anderson2004integrated}. We believe that this is the case because tweeting is more strongly influenced by temporal interest drifts than other applications studied in the ACT-R community (e.g., \cite{anderson_reflections_1991}).
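For reference, the core of this model comparison can be reproduced without the \texttt{powerlaw} package: both candidate distributions are fit to the tail $x \geq x_{min}$ by maximum likelihood and their log-likelihoods are compared. The sketch below follows the continuous-data estimators of \cite{clauset2009power} but omits the significance computation (Vuong's test) that the package additionally performs:

```python
import math

def loglikelihood_ratio(data, xmin=1.0):
    """Compare power-law vs. exponential fits on the tail x >= xmin.

    Returns (R, alpha): R > 0 favors the power law; alpha is the MLE
    power-law exponent (cf. Table 2). Significance testing is omitted.
    """
    tail = [x for x in data if x >= xmin]
    n = len(tail)
    s = sum(math.log(x / xmin) for x in tail)
    alpha = 1.0 + n / s                           # power-law MLE exponent
    ll_pl = n * math.log((alpha - 1.0) / xmin) - alpha * s
    lam = n / sum(x - xmin for x in tail)         # shifted-exponential MLE rate
    ll_exp = n * math.log(lam) - lam * sum(x - xmin for x in tail)
    return ll_pl - ll_exp, alpha
```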
\vspace{-1mm}
\findingbox{Temporal effects have an important influence on both individual as well as social hashtag reuse: people tend to reuse hashtags that were used very recently by their own and/or by their Twitter followees. Furthermore, a power function is better suited to model this time-dependent decay than an exponential one. This suggests that the BLL equation from the cognitive architecture ACT-R should be a suitable model for designing our time-dependent hashtag recommendation algorithm.}
\begin{table}[t!]
\small
\setlength{\tabcolsep}{7.2pt}
\centering
\begin{tabular}{l||l|cc}
\specialrule{.2em}{.1em}{.1em}
Dataset & Parameter & Individual ht reuse & Social ht reuse \\\hline
\multirow{3}{*}{\centering{\textit{CompSci}{}}}
& x$_{min}$ & 141 & 1 \\
& $\alpha$ & 1.699 & 1.242 \\
& $R$ & \textbf{188} & \textbf{164} \\\hline
\multirow{3}{*}{\centering{\textit{Random}{}}}
& x$_{min}$ & 141 & 1 \\
& $\alpha$ & 1.723 & 1.269 \\
& $R$ & \textbf{235} & \textbf{294} \\
\specialrule{.2em}{.1em}{.1em}
\end{tabular}
\caption{Power-law vs. exponential time-dependent decay. We see that a power function provides a better fit than an exponential function ($R > 0$) for explaining temporal effects on individual and social hashtag reuse in our two datasets ($p < .001$).
\vspace{-5mm}}
\label{tab:analysis}
\end{table}

\section{A Cognitive-Inspired Hashtag\\Recommendation Approach} \label{sec:approach}
In the previous section, we have shown that temporal effects are important factors when users reuse individual and social hashtags. In this section, we use these insights as a basis to design our hashtag recommendation approach illustrated in Figure \ref{fig:approach}. Thus, we distinguish between hashtag recommendations without (\textit{Scenario 1}{}) and with (\textit{Scenario 2}{}) incorporating the current tweet $t$.
Whereas the first variant of our approach solely uses the past hashtags of a user $u$ and/or her followees F$_u$, the second variant also utilizes the text of the current tweet $t$. Hence, these two scenarios also differ in their possible use cases since the first one aims to foresee the topics a specific user will tweet about based on the predicted hashtags, whereas the second one aims to support a user in finding the most descriptive hashtags for a new tweet text \cite{Godin2013}.
For reasons of reproducibility, we implement and evaluate our approach by extending our open-source tag recommender benchmarking framework \textit{TagRec}. The source code and framework are freely accessible for scientific purposes on the Web\footnote{\url{https://github.com/learning-layers/TagRec}}.
\subsection{Scenario 1: Hashtag rec. w/o current tweet}
For the first variant of our approach, we ignore the content of the current tweet $t$ and solely utilize past hashtag usages. As already stated, we use the BLL equation from the cognitive architecture ACT-R \cite{anderson2004integrated,anderson_reflections_1991} for this task. We opt for a cognitive-inspired approach because research on the underlying mechanisms of social tagging has shown that the way users choose tags for annotating resources (e.g., Web links) strongly corresponds to processes in human memory and its cognitive structures \cite{cress2013collective,seitlinger2015verbatim}. The BLL equation quantifies the general usefulness of a piece of information (e.g., a word or hashtag) by considering how frequently and recently it was used by a user in the past. Formally, it is given by:
\begin{equation}\label{eq:bll}
B_i = \ln\Big(\sum\limits_{j = 1}^{n}{t_{j}^{-d}}\Big)
\end{equation}
where $B_i$ is the base-level activation of a memory unit $i$ and $n$ is the frequency of $i$'s occurrences in the past (i.e., how often $i$ was used by $u$). Furthermore, $t_j$ states the recency (i.e., the time since the $j$th occurrence of $i$) and the exponent $d$ accounts for the power-law form of the time-dependent decay. As visualized in \textit{Scenario 1}{} of Figure \ref{fig:approach}, we adopt the BLL equation for (i) modeling the reuse of individual hashtags (BLL$_{I}$), (ii) modeling the reuse of social hashtags (BLL$_{S}$), and (iii) combining the former two into a hybrid recommendation approach (BLL$_{I,S}$).
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{approach}
\caption{Schematic illustration of our cognitive-inspired approach for hashtag recommendations. We implement our approach in two scenarios (i.e., without and with incorporating the content of the current tweet). In \textit{Scenario 1}{}, we use the BLL equation to realize (i) the individual BLL$_I$ algorithm, (ii) the social BLL$_S$ algorithm, and (iii) the hybrid BLL$_{I,S}${} algorithm, which combines both. In \textit{Scenario 2}{}, we use TF-IDF to identify similar tweets for a currently proposed tweet $t$ and identify the hashtags of the most similar ones. We combine this content-based tweet analysis with our BLL$_{I,S}${} method to provide personalized and content-aware hashtag recommendations in the form of our hybrid BLL$_{I,S,C}${} approach.
\vspace{-3mm}}
\label{fig:approach}
\end{figure}
\vspace{2mm} \noindent \textbf{Modeling individual hashtag reuse.}
In order to model the reuse of individual hashtags, we define the individual base-level activation $B_I(ht, u)$ of a hashtag $ht$ for a user $u$ as follows:
\begin{equation}
B_I(ht, u) = \ln\Big(\sum\limits_{j = 1}^{n}{(TS_{ref} - TS_{ht,u,j})^{-d_I}}\Big)
\end{equation}
where $n$ denotes the number of times $ht$ was used by $u$ in the past (i.e., $|HTAS_{ht,u}|$) and the term $TS_{ref} - TS_{ht,u,j}$ states the recency of the $j$th usage of $ht$ by $u$. In this respect, $TS_{ref}$ is the reference timestamp (i.e., when recommendations should be calculated) and $TS_{ht,u,j}$ is the timestamp when $ht$ was used by $u$ for the $j$th time. Based on the results of our analysis presented in Table \ref{tab:analysis}, we set the individual time-dependent decay factor $d_I$ to 1.7.
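The individual base-level activation translates directly into Python. This sketch assumes timestamps measured in hours that lie strictly before $TS_{ref}$; a production implementation would additionally guard against zero recencies:

```python
import math

def bll_individual(usage_timestamps, ts_ref, d_i=1.7):
    """Individual base-level activation B_I(ht, u).

    usage_timestamps: times (e.g., in hours) at which user u used hashtag
    ht; all must lie strictly before the reference timestamp ts_ref.
    d_i is the individual decay factor derived from Table 2.
    """
    return math.log(sum((ts_ref - ts) ** -d_i for ts in usage_timestamps))
```

The social activation $B_S(ht, u)$ is obtained with the same function by passing the followees' usage timestamps of $ht$ and the social decay factor $d_S$ = 1.25.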
\vspace{2mm} \noindent \textbf{Modeling social hashtag reuse.}
We model the reuse of social hashtags in a similar way but instead of analyzing how frequently and recently a hashtag $ht$ was used by user $u$, we analyze how frequently and recently $ht$ was used by the set of followees $F_u$ of $u$. Thus, we formulate the social base-level activation $B_S(ht, u)$ of $ht$ for $u$ as follows:
\begin{equation}
B_S(ht, u) = \ln\Big(\sum\limits_{j = 1}^{m}{(TS_{ref} - TS_{ht,F_u,j})^{-d_S}}\Big)
\end{equation}
where $m$ is the number of times $ht$ was used by $F_u$ before the reference timestamp $TS_{ref}$ (i.e., $|HTAS_{ht,F_u}|$). The term $TS_{ref} - TS_{ht,F_u,j}$ states the recency of the $j$th exposure of $ht$ to $u$ caused by $F_u$, where $TS_{ht,F_u,j}$ is the timestamp when $ht$ was used by $F_u$ for the $j$th time. As when modeling the individual hashtag reuse, we set the social time-dependent decay factor $d_S$ based on the results of our analysis in Table \ref{tab:analysis} (i.e., to 1.25).
\para{Combining individual and social hashtag reuse.}
Having formalized both individual and social hashtag reuse, we combine the two components in the form of a hybrid approach using a linear combination \cite{jaschke2008tag}. In order to be able to add the individual and social base-level activations $B_I(ht, u)$ and $B_S(ht, u)$, we have to map these values onto a common range of 0 to 1 such that they sum to 1. Therefore, we apply the softmax functions $\sigma(B_I(ht,u))$ and $\sigma(B_S(ht,u))$ as proposed in \cite{mcauley2013hidden,www_bll}:
\begin{equation} \label{eq:sm}
\sigma(B_I(ht,u)) = \frac{\exp(B_I(ht, u))}{\sum\limits_{ht' \in HT_{u}}{\exp(B_I(ht', u))}}
\end{equation}
where $HT_u$ is the set of distinct hashtags used by $u$. For $B_S(ht, u)$, the softmax function $\sigma(B_S(ht,u))$ can be calculated in the same way but on the basis of $HT_{F_u}$ (i.e., the set of hashtags used by $u$'s followees $F_u$).
Taken together, the combined base-level activation $B_{I,S}$ for our BLL$_{I,S}$ approach is given by:
\begin{equation} \label{eq:hybrid}
B_{I,S}(ht,u) = \beta \underbrace{\sigma(B_I(ht, u))}_{BLL_I} + (1 - \beta) \underbrace{\sigma(B_S(ht, u))}_{BLL_S}
\end{equation}
where the $\beta$ parameter can be used to give weights to the two components. Based on experimentation, we set $\beta$ to .5 to weigh the individual and social influences equally. As indicated in Equation \ref{eq:hybrid} and Figure \ref{fig:approach}, we can also calculate predictions either solely based on the individual hashtag reuse, referred to as BLL$_I$, or the social hashtag reuse, referred to as BLL$_S$.
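Equations \ref{eq:sm} and \ref{eq:hybrid} translate into a few lines of Python. The dictionary-based interface is illustrative; hashtags missing from one component simply receive a score of 0 there:

```python
import math

def softmax_scores(activations):
    """Softmax normalization of Equation 4 over a dict ht -> activation."""
    exp = {ht: math.exp(b) for ht, b in activations.items()}
    total = sum(exp.values())
    return {ht: v / total for ht, v in exp.items()}

def bll_is(b_individual, b_social, beta=0.5):
    """Hybrid BLL_{I,S} scores of Equation 5.

    b_individual: dict ht -> B_I(ht, u) over HT_u.
    b_social:     dict ht -> B_S(ht, u) over HT_{F_u}.
    """
    sig_i = softmax_scores(b_individual) if b_individual else {}
    sig_s = softmax_scores(b_social) if b_social else {}
    return {ht: beta * sig_i.get(ht, 0.0) + (1 - beta) * sig_s.get(ht, 0.0)
            for ht in set(sig_i) | set(sig_s)}
```

Setting $\beta = 1$ or $\beta = 0$ yields the pure BLL$_I$ and BLL$_S$ variants, respectively.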
\subsection{Scenario 2: Hashtag rec. w/ current tweet} \label{sec:hashtagrec}
As shown in \textit{Scenario 2}{} of Figure \ref{fig:approach}, the second variant of our approach aims to provide hashtag suggestions while also incorporating the content of the currently proposed tweet $t$. Thus, we build on the unpersonalized method proposed by \cite{zangerle2011recommending} to find hashtags of similar tweets and combine this method with our BLL$_{I,S}${} approach to generate personalized and content-aware recommendations.
\para{Content-based tweet analysis.} We analyze the content of tweets in order to find similar tweets for a target tweet $t$ and to extract the hashtags of these similar ones. Therefore, we incorporate the term frequency-inverse document frequency (TF-IDF) statistic, which identifies the importance of a term for a document in a collection of documents. TF-IDF can be further used to calculate the similarity between two documents $d$ and $\overline{d}$ by summing up the TF-IDF statistics of $d$'s terms in $\overline{d}$. When applying this statistic to Twitter, we treat tweets as documents and calculate the similarity between the target tweet $t$ and a candidate tweet $\overline{t}$ as follows:
\begin{equation}
sim(t, \overline{t}) = \sum\limits_{c \in C_t}{n_{c,\overline{t}} \times \log(\frac{|T|}{|\{t': c \in t'\}|})}
\end{equation}
where $C_t$ are the terms in the text of target tweet $t$, $n_{c, \overline{t}}$ is the number of times $c \in C_t$ occurs in the candidate tweet $\overline{t}$, $|T|$ is the number of tweets in the dataset and $|\{t': c \in t'\}|$ is the number of times $c$ occurs in any tweet $t' \in T$. The first factor of this equation reflects the term frequency $TF$, whereas the second factor reflects the inverse document frequency $IDF$ \cite{zangerle2011recommending}.
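In practice we delegate this computation to the Solr index, so the code below is only a self-contained reference sketch of Equation 6; for simplicity, repeated terms in the target tweet are counted once:

```python
import math

def tweet_similarity(target_terms, candidate_terms, corpus):
    """TF-IDF similarity sim(t, t_bar) between two tweets as term lists.

    corpus: list of term lists, one per tweet, providing the document
    frequencies |{t': c in t'}| and the collection size |T|.
    """
    sim = 0.0
    for c in set(target_terms):                  # C_t
        tf = candidate_terms.count(c)            # n_{c, t_bar}
        df = sum(1 for t in corpus if c in t)    # document frequency of c
        if tf and df:
            sim += tf * math.log(len(corpus) / df)
    return sim
```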
Based on these similarity values, we identify the most similar tweets $S_t$ for $t$ and extract the hashtags used in these tweets (i.e., $HT_{S_t}$). For each hashtag $ht \in HT_{S_t}$, we assign a content-based score $CB(ht, t)$, which is the highest similarity value within the most similar tweets $S_t$ in which $ht$ occurs. We implement this method using the Lucene-based full-text search engine Apache Solr 4.7.10\footnote{\url{http://lucene.apache.org/solr/}}. Based on Solr's software documentation and our own experimentation, we set the minimum term frequency $tf$ to 2 and the minimum document frequency $df$ to 5.
\para{Combining personalized and content-aware hashtag rec.} We combine our personalized BLL$_{I,S}${} approach with this content-based analysis (C) in order to generate personalized hashtag recommendations (see Figure \ref{fig:approach}). Again, we achieve this via a linear combination of both approaches. Taken together, the top-$k$ recommended hashtags $\widetilde{HT}_{u,t}$ for user $u$ and tweet $t$ are given by:
\begin{equation}
\begin{split}
\widetilde{HT}_{u,t} = \argmax_{ht \in \overline{HT}_{u,t}}^{k}(\lambda \underbrace{B_{I,S}(ht, u)}_{BLL_{I,S}} + (1 - \lambda) \underbrace{\sigma(CB(ht, t))}_{C})
\end{split}
\end{equation}
where $\overline{HT}_{u,t}$ is the set of candidate hashtags for $u$ and $t$ (i.e., $HT_u \cup HT_{F_u} \cup HT_{S_t}$). The $\lambda$ parameter is used to give weights to the personalized and content-aware components. To that end, we set $\lambda$ to .3 based on experimentation. Please note that the content-based score $CB(ht, t)$ has to be normalized using the softmax function (see Equation \ref{eq:sm}), whereas $B_{I,S}(ht, u)$ is already normalized (see Equation \ref{eq:hybrid}). This finally constitutes our personalized hashtag recommendation algorithm termed BLL$_{I,S,C}${}.

\section{Evaluation} \label{sec:evaluation}
In this section, we present the evaluation of our approach. This includes the methodology used as well as the results in terms of recommendation accuracy and ranking for our two scenarios.
\subsection{Methodology}
The methodology of our evaluation is given by the evaluation protocol, evaluation metrics and baseline algorithms used.
\para{Evaluation protocol.} In order to split our datasets into training and test sets, we use an established leave-one-out evaluation protocol from research on information retrieval and recommender systems \cite{jaschke2008tag}. For each seed user in our datasets (see Section \ref{sec:datasets}) with at least two tweets (i.e., 2,020 users in the \textit{CompSci}{} dataset and 2,679 users in the \textit{Random}{} dataset), we determine her most recent tweet and put it (and its hashtags) into the test set. The remaining tweets are then put into the training set. This protocol ensures not only that the hashtags of at least one tweet per user are available for training but also that the chronological order of the data is preserved (i.e., future hashtags are predicted based on usage patterns of past ones). We use these sets in two evaluation scenarios:
\subpara{\textit{Scenario 1}{}.} In the first scenario, we ignore the content of the currently proposed tweet (i.e., the one in the test set) and solely provide hashtag predictions based on the current user-id. Thus, in \textit{Scenario 1}{}, we are able to evaluate all test set tweets.
\subpara{\textit{Scenario 2}{}.} In the second scenario, we also incorporate the content of the current tweet. In this setting, we only evaluate the test set entries, which do not include retweets (i.e., 954 test set tweets in the \textit{CompSci}{} dataset and 1,504 test set tweets in the \textit{Random}{} dataset). The reason for excluding the retweets from the test set in \textit{Scenario 2}{} is that searching for similar tweets in the training set would result in identical tweets with identical hashtags, which would heavily bias our evaluation (see also \cite{zangerle2011recommending}).
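The leave-one-out split described above can be sketched as follows; the per-user tweet logs are hypothetical, standing in for the crawled datasets of Section \ref{sec:datasets}.

```python
from datetime import datetime

# Hypothetical per-user tweet logs: (timestamp, hashtags, is_retweet)
logs = {
    "alice": [(datetime(2016, 1, 5), ["#ml"], False),
              (datetime(2016, 3, 1), ["#nlp"], False)],
    "bob":   [(datetime(2016, 2, 2), ["#go"], False)],   # only one tweet
}

def leave_one_out(logs, exclude_retweets=False):
    train, test = {}, {}
    for user, tweets in logs.items():
        ordered = sorted(tweets, key=lambda t: t[0])  # chronological order
        if len(ordered) < 2:          # keep all tweets for training
            train[user] = ordered
            continue
        # the most recent tweet is held out for testing
        train[user], held_out = ordered[:-1], ordered[-1]
        if exclude_retweets and held_out[2]:
            continue                  # Scenario 2 skips retweet test entries
        test[user] = held_out
    return train, test

train, test = leave_one_out(logs)
```

Only users with at least two tweets contribute a test entry, and future hashtags are always predicted from strictly earlier usage.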
\para{Evaluation metrics.} To finally quantify the quality of the algorithms, for each test set entry, we compare the top-$10$ hashtags an algorithm predicts for the given user $u$ and tweet $t$ (i.e., $\widetilde{HT}_{u,t}$) with the set of relevant hashtags actually used by $u$ in $t$.
This comparison is done using various evaluation metrics known from the field of recommender systems. Specifically, we report Precision (P) and Recall (R) for $k$ = 1 to 10 predicted hashtags by means of Precision/Recall plots, and F1-score (F1@5) for $k$ = 5 predicted hashtags. We set $k$ = 5 for the F1-score since F1@5 was also used as the main evaluation metric in the well-known ECML PKDD 2009 discovery challenge\footnote{\url{http://www.kde.cs.uni-kassel.de/ws/dc09/evaluation}.}. Additionally, we report the ranking-dependent metrics Mean Reciprocal Rank (MRR@10), Mean Average Precision (MAP@10) and Normalized Discounted Cumulative Gain (nDCG@10) for $k$ = 10 predicted hashtags \cite{jarvelinMetrics}.
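These metrics can be computed per test-set entry as sketched below; the functions are plain-Python versions of the standard definitions, and the recommended and relevant hashtags are made up for illustration.

```python
import math

def precision_recall(recommended, relevant, k):
    hits = sum(ht in relevant for ht in recommended[:k])
    return hits / k, hits / len(relevant)

def f1_at_k(recommended, relevant, k=5):
    p, r = precision_recall(recommended, relevant, k)
    return 2 * p * r / (p + r) if p + r else 0.0

def mrr_at_k(recommended, relevant, k=10):
    # reciprocal rank of the first relevant hashtag
    for rank, ht in enumerate(recommended[:k], 1):
        if ht in relevant:
            return 1.0 / rank
    return 0.0

def map_at_k(recommended, relevant, k=10):
    hits, total = 0, 0.0
    for rank, ht in enumerate(recommended[:k], 1):
        if ht in relevant:
            hits += 1
            total += hits / rank
    return total / min(len(relevant), k)

def ndcg_at_k(recommended, relevant, k=10):
    dcg = sum(1 / math.log2(r + 1)
              for r, ht in enumerate(recommended[:k], 1) if ht in relevant)
    idcg = sum(1 / math.log2(r + 1) for r in range(1, min(len(relevant), k) + 1))
    return dcg / idcg

rec = ["#ml", "#nlp", "#go", "#ai", "#py"]   # predicted hashtags (hypothetical)
rel = {"#nlp", "#py"}                        # hashtags actually used
```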
\para{Baseline algorithms.} We compare our approach to a rich set of 9 state-of-the-art hashtag recommendation algorithms:
\begin{table*}[t!]
\small
\setlength{\tabcolsep}{6.0pt}
\centering
\begin{tabular}{l||l|lll|lll|llll|lll}
\specialrule{.2em}{.1em}{.1em}
& & \multicolumn{10}{c|}{\textit{Scenario 1}{}:} & \multicolumn{3}{c}{\textit{Scenario 2}{}:} \\
& & \multicolumn{10}{c|}{Hashtag rec. w/o current tweet} & \multicolumn{3}{c}{Hashtag rec. w/ current tweet} \\
Dataset & Metric & MP$_I$ & MR$_I$ & BLL$_I$ & MP$_S$ & MR$_S$ & BLL$_S$ & MP & FR & CF & BLL$_{I,S}${} & SR & TCI & BLL$_{I,S,C}${} \\\hline
\multirow{4}{*}{\centering{\textit{CompSci}{}}}
& F1@5 & .086 & .098 & \textbf{.101} & .022 & .076 & \textbf{.118} & .006 & .083 & .099 & \textbf{.153$^{***}$} & .139 & .182 & \textbf{.200$^{*}$} \\
& MRR@10 & .136 & .188 & \textbf{.193} & .032 & .122 & \textbf{.187} & .007 & .130 & .163 & \textbf{.268$^{***}$} & .264 & .334 & \textbf{.395$^{***}$} \\
& MAP@10 & .143 & .195 & \textbf{.202} & .033 & .128 & \textbf{.205} & .007 & .136 & .169 & \textbf{.285$^{***}$} & .283 & .354 & \textbf{.417$^{***}$} \\
& nDCG@10 & .175 & .218 & \textbf{.225} & .046 & .154 & \textbf{.235} & .012 & .169 & .196 & \textbf{.324$^{***}$} & .299 & .385 & \textbf{.446$^{**}$} \\\hline
\multirow{4}{*}{\centering{\textit{Random}{}}}
& F1@5 & .160 & .169 & \textbf{.175} & .072 & .103 & \textbf{.138} & .012 & .159 & .165 & \textbf{.208$^{***}$} & .181 & .243 & \textbf{.261$^{*}$} \\
& MRR@10 & .261 & .300 & \textbf{.314} & .109 & .159 & \textbf{.220} & .023 & .260 & .278 & \textbf{.361$^{***}$} & .341 & .436 & \textbf{.489$^{**}$} \\
& MAP@10 & .279 & .315 & \textbf{.335} & .116 & .171 & \textbf{.240} & .024 & .279 & .296 & \textbf{.389$^{***}$} & .374 & .472 & \textbf{.530$^{**}$} \\
& nDCG@10 & .323 & .352 & \textbf{.370} & .144 & .205 & \textbf{.280} & .035 & .324 & .333 & \textbf{.434$^{***}$} & .388 & .507 & \textbf{.562$^{**}$} \\
\specialrule{.2em}{.1em}{.1em}
\end{tabular}
\caption{Recommender accuracy results of our two evaluation scenarios. In \textit{Scenario 1}{}, we compare approaches that ignore the current tweet content, while in \textit{Scenario 2}{}, we compare algorithms that also incorporate the current tweet. We observe that (i) BLL$_I$ outperforms MP$_I$ and MR$_I$, (ii) BLL$_S$ outperforms MP$_S$ and MR$_S$, (iii) BLL$_{I,S}$ outperforms MP, FR and CF, and (iv) BLL$_{I,S,C}${} outperforms SR and TCI. Based on a t-test, the symbols $^{*}$ ($\alpha$ = .1), $^{**}$ ($\alpha$ = .01) and $^{***}$ ($\alpha$ = .001) indicate statistically significant differences between BLL$_{I,S}${} and CF in \textit{Scenario 1}{}, and between BLL$_{I,S,C}${} and TCI in \textit{Scenario 2}{}.
\vspace{-3mm}}
\label{tab:results}
\end{table*}
\subpara{MP$_I$.} The \textit{Most Popular Individual Hashtags} algorithm ranks the hashtags based on the frequency in the hashtag assignments of current user $u$. MP$_I$ is also referred to as Most Popular Tags by User (MP$_u$) in tag recommendation literature \cite{jaschke2008tag}.
\subpara{MR$_I$.} \textit{Most Recent Individual Hashtags} is a time-dependent variant of MP$_I$. MR$_I$ suggests the $k$ most recently used hashtags of current user $u$ \cite{campos2014time}. Our BLL$_I$ approach can be seen as an integrated combination of MP$_I$ and MR$_I$ based on human memory theory.
\subpara{MP$_S$.} The \textit{Most Popular Social Hashtags} algorithm is the social correspondent to the individual MP$_I$ approach \cite{jaschke2008tag}. Thus, MP$_S$ does not rank the hashtags based on the frequency in the hashtag assignments of user $u$ but based on the frequency in the hashtag assignments of user $u$'s set of followees $F_u$.
\subpara{MR$_S$.} \textit{Most Recent Social Hashtags} is the time-dependent equivalent to MP$_S$. MR$_S$ sorts the hashtag assignments of $u$'s followees $F_u$ by time and recommends the $k$ most recent ones. Our BLL$_S$ algorithm is a cognitive-inspired integration of MP$_S$ and MR$_S$.
\subpara{MP.} The unpersonalized \textit{Most Popular Hashtags} approach returns the same set of hashtags for any user. These hashtags are ranked by their overall frequency in the dataset \cite{jaschke2008tag}.
\subpara{FR.} \textit{FolkRank} is an adaptation of Google's PageRank approach used to rank the entities in folksonomy graphs and has become one of the most successful tag recommender methods \cite{hotho2006folkrank}.
We use the standard FR implementation provided by the University of Kassel\footnote{\url{http://www.kde.cs.uni-kassel.de/code}} with its suggested default parameters. More specifically, the weight of the preference vector $d$ is set to .7 and the maximum number of iterations $l$ is set to 10 \cite{jaschke2008tag}.
\subpara{CF.} \textit{User-based Collaborative Filtering} is a well-known algorithm used in many variants of modern recommender systems and was adapted by \cite{marinho2008collaborative} for use in tag-based settings. We apply the same idea for the task of recommending hashtags and thus, first identify the $k$ most similar users (i.e., the nearest neighbors) for current user $u$ by means of the cosine similarity measure and then suggest the hashtags used by these neighbors. For our experiments, we use a neighborhood size $k$ of 20 users (see also \cite{gemmell2009improving}).
\subpara{SR.} \textit{SimilarityRank} is an unpersonalized hashtag recommendation algorithm, which utilizes the content of the currently proposed tweet $t$ \cite{zangerle2011recommending}. Similarly to our BLL$_{I,S,C}${} approach, this is achieved using TF-IDF to determine content-based similarity scores between tweets (see Section \ref{sec:hashtagrec}). These scores are used to recommend the $k$ hashtags that occur in $t$'s most similar tweets.
\subpara{TCI.} \textit{TemporalCombInt} is one of the most recent approaches for personalized hashtag recommendations and also one of the very few approaches that accounts for the effect of time on hashtag usage \cite{harvey2015long} (see also Section \ref{sec:relatedwork}). TCI builds on a linear combination of SR and CF and incorporates temporal effects by considering the time-dependent relevance of a hashtag with respect to the recommendation date. This is done by categorizing the hashtags into ``organizational'' and ``conversational'' hashtags, and modeling the decay of temporal relevance using an exponential function. By fitting this model to our crawled data, we set the two main parameters of the algorithm, $\eta_l$ and $\eta_h$, to .1 and .2, respectively.
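To make the comparison with the frequency-based (MP$_I$) and recency-based (MR$_I$) baselines concrete, the following is a minimal sketch of BLL-style scoring; the usage history and the ACT-R default decay $d = .5$ are illustrative assumptions, not the parameters fitted in the paper.

```python
import math

def bll_score(timestamps, now, d=0.5):
    # Base-Level Learning activation: ln( sum_j (now - t_j)^(-d) ),
    # combining how often (the sum) and how recently (the power-law
    # decay) a hashtag was used. d = 0.5 is the common ACT-R default.
    return math.log(sum((now - t) ** -d for t in timestamps))

# Hypothetical usage history of one user (times in hours)
history = {"#ml": [1.0, 2.0, 3.0],   # used often, but long ago
           "#nlp": [99.5]}           # used once, very recently
now = 100.0

by_bll = sorted(history, key=lambda ht: -bll_score(history[ht], now))
by_frequency = sorted(history, key=lambda ht: -len(history[ht]))  # MP_I-style
```

Here the purely frequency-based ranking puts \#ml first, while the BLL score favors the recently used \#nlp, illustrating the integrated frequency/recency trade-off.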
\subsection{Results and Discussion} \label{sec:results}
In Section \ref{sec:analysis}, we found that time is an important factor for hashtag reuse. Because of this, we expect our time-dependent and cognitive-inspired approach to provide reasonable results compared to other algorithms. The accuracy estimates for our two evaluation scenarios are shown in Table \ref{tab:results} and Figure \ref{fig:results}.
\begin{figure*}[t!]
\centering
\captionsetup[subfigure]{justification=centering}
\subfloat[][\textit{Scenario 1}{}: Hashtag rec. w/o current tweet\\\textit{CompSci}{} dataset]{
\includegraphics[width=0.24\textwidth]{twitter_res_pred_precrec}
}
\subfloat[][\textit{Scenario 1}{}: Hashtag rec. w/o current tweet\\\textit{Random}{} dataset]{
\includegraphics[width=0.24\textwidth]{twitter_gen_pred_precrec}
}
\subfloat[][\textit{Scenario 2}{}: Hashtag rec. w/ current tweet\\\textit{CompSci}{} dataset]{
\includegraphics[width=0.24\textwidth]{twitter_res_rec_precrec}
}
\subfloat[][\textit{Scenario 2}{}: Hashtag rec. w/ current tweet\\\textit{Random}{} dataset]{
\includegraphics[width=0.24\textwidth]{twitter_gen_rec_precrec}
}
\caption{Precision / Recall plots of our two evaluation scenarios showing the accuracy of BLL$_I$, BLL$_S$, CF, BLL$_{I,S}${}, SR, TCI and BLL$_{I,S,C}${} for $k$ = 1 - 10 recommended hashtags. Again, BLL$_{I,S}${} provides the best results in \textit{Scenario 1}{} and BLL$_{I,S,C}${} in \textit{Scenario 2}{}.
\vspace{-3mm}}
\label{fig:results}
\end{figure*}
\para{\textit{Scenario 1}{}: Hashtag rec. w/o current tweet.} In our first evaluation scenario, we validate approaches that predict future hashtags without incorporating the content of the currently proposed tweet. Here, we identify three main results:
\subpara{(a) BLL$_I$ $>$ MP$_I$, MR$_I$.} When predicting individual hashtag reuse, we compare our BLL$_I$ approach to the frequency-based MP$_I$ and the recency-based MR$_I$ algorithms. The results clearly reflect the importance of the time component since MR$_I$ and BLL$_I$ provide higher prediction accuracy and ranking estimates than MP$_I$ for all evaluation metrics across both datasets. Apart from that, we observe that BLL$_I$ outperforms MR$_I$, which speaks in favor of the cognitive-inspired combination of hashtag frequency and recency by means of the BLL equation.
\subpara{(b) BLL$_S$ $>$ MP$_S$, MR$_S$.} Concerning the prediction of social hashtag reuse, we compare our BLL$_S$ approach to the frequency-based MP$_S$ and the recency-based MR$_S$ methods. Similar to the case of individual hashtag reuse, MR$_S$ and our BLL-based method provide higher accuracy estimates than the solely frequency-based one, but interestingly, this time the differences between these methods are much larger. This indicates that the time information is especially important in a social setting. To some extent, we expected this behavior since typically only the most recent tweets of the followees are shown on a user's Twitter timeline. Again, the combination of hashtag frequency and recency by means of the BLL equation provides the best results.
\subpara{(c) BLL$_{I,S}${} $>$ MP, FR, CF.} Finally, we compare our hybrid BLL$_{I,S}${} approach to the unpersonalized MP algorithm, the well-known FR method from tag recommender research and classic user-based CF. The first observation that becomes apparent is the poor performance of the unpersonalized MP baseline, which underpins the importance of personalized methods for hashtag recommendation.
Additionally, and more importantly, our hybrid BLL$_{I,S}${} approach does not only improve its BLL$_I$ and BLL$_S$ components but also provides significantly higher accuracy and ranking estimates than FR and CF. This shows that BLL$_{I,S}${} is capable of providing reasonable hashtag recommendations solely based on temporal usage patterns of past hashtag assignments.
\para{\textit{Scenario 2}{}: Hashtag rec. w/ current tweet.} In the second scenario, we evaluate hashtag recommendation methods that also incorporate the content of the current tweet. This includes the unpersonalized SR approach, the time-dependent TCI algorithm and our BLL$_{I,S,C}${} approach. Our two main results are:
\subpara{(a) TCI, BLL$_{I,S,C}${} $>$ SR.} The first main result of our second evaluation scenario is that both time-dependent methods TCI and BLL$_{I,S,C}${} outperform the unpersonalized SR approach. To some extent, we expected this result since both TCI and BLL$_{I,S,C}${} extend the TF-IDF-based tweet content analysis of SR with personalization techniques via CF (TCI) or the BLL equation (BLL$_{I,S,C}${}).
\subpara{(b) BLL$_{I,S,C}${} $>$ TCI.} The second main result of \textit{Scenario 2}{} is that BLL$_{I,S,C}${} provides significantly higher accuracy estimates than TCI. This is due to three main differences between these methods: (i) instead of using hashtags of similar users by means of CF for adding personalization, we incorporate not only individual hashtags of the current user but also social hashtags of the current user's followees, (ii) instead of applying the effect of time on a global hashtag level, we model the time-dependent decay on an individual and social level, and (iii) instead of modeling this time-dependent decay using an exponential function, we use a power function by means of the BLL equation.
\para{\textit{CompSci}{} dataset vs. \textit{Random}{} dataset.} Another interesting finding we observe is that all algorithms provide better results for the \textit{Random}{} dataset than for the \textit{CompSci}{} dataset. In our case, this indicates that the task of predicting hashtags in the domain-specific network of computer scientists is harder than in the network of random users. If we look back at Figure \ref{fig:intro}, this makes sense since the amount of ``external'' hashtags is twice as high in the \textit{CompSci}{} dataset (i.e., 26\%) as in the \textit{Random}{} one (i.e., 13\%).
\vspace{-1mm}
\findingbox{The BLL equation, which accounts for temporal effects of item exposure in human memory, provides a suitable model for personalized hashtag recommendations. This is validated in two evaluation scenarios (i.e., without and with incorporating the content of the current tweet), in which our cognitive-inspired approach outperforms several state-of-the-art hashtag recommendation algorithms in terms of prediction accuracy.}
\section{Related Work} \label{sec:relatedwork}
Over the past years, tagging has emerged as an important feature of the social Web, which enables users to collaboratively organize and find content \cite{Korner2010}. Two types of tags have been established: (i) social tags as used in systems like BibSonomy and CiteULike, and (ii) hashtags as used in systems like Twitter and Instagram. Whereas social tags are mainly used to index resources for later retrieval, hashtags have a more conversational nature and are used to filter and direct content to certain streams of information \cite{Huang2010}.
One of the most prominent approaches in the field of tag recommendations is the FolkRank algorithm \cite{hotho2006folkrank,jaschke2007tag,jaschke2008tag}. FolkRank is an extension of the well-known Google PageRank approach to rank the entities in folksonomies (i.e., users, resources and tags). Other important tag recommendation methods are based on Collaborative Filtering \cite{marinho2008collaborative,gemmell2009improving}, Latent Dirichlet Allocation \cite{krestel2009latent,krestel2012personalized} or Tensor Factorization \cite{rendle2010pairwise,rendle2009learning}. Recent observations in the field of social tagging state the importance of the time component for the individual tagging behavior of users. In this respect, \cite{zhang2012integrating,yin2011exploiting,yin2011temporal} propose time-dependent tag recommender approaches, which model the tagging variation over time using exponential functions. In our previous work \cite{www_bll,Kowald2016a}, we presented a more theory-driven approach, where we use the BLL equation coming from the cognitive architecture ACT-R \cite{anderson_reflections_1991,anderson2004integrated} to model the power-law of time-dependent decay. We evaluated our approach in detail and compared it to other state-of-the-art methods in \cite{Kowald2015}. In the present work, we build upon our results and incorporate the BLL equation to study the effect of time on hashtag reuse to design our hashtag recommendation approach.
There is already a large body of research that focuses on the recommendation of hashtags in Twitter. One illustrative example is the work presented in \cite{Godin2013}, in which hashtag recommendations are provided by categorizing tweets into general topics using LDA. The approach then recommends the hashtags that best fit the topics of a new tweet. The authors evaluate their approach using a qualitative study, in which they ask participants whether the recommended hashtags describe the topics of a tweet and could be used to semantically enrich it. In 80\% of the cases, they are able to provide a suitable hashtag from a selection of five possibilities. Other similar approaches that use topic models for hashtag recommendations are presented in \cite{She2014,wang2014tag,xu2015personalized,efron2010hashtag}. In \cite{jeon2014hashtag}, a related algorithm based on a hashtag classification scheme is proposed.
The most notable work in the context of hashtag recommendations is probably the content-based SR approach presented in \cite{zangerle2011recommending} and \cite{zangerle2013impact}. The authors use the TF-IDF statistic to calculate similarities between tweets and identify suitable hashtags based on these similarity scores. They show that SR improves Recall and Precision by around 35\% compared to a popularity-based approach. Our BLL$_{I,S,C}${} approach uses the same statistic to integrate the content of a user's currently proposed tweet. In \cite{kywe2012recommending}, a personalized extension of SR is presented, in which the authors combine it with user-based CF. Apart from that, a content-based hashtag recommendation algorithm for hyper-linked tweets is proposed in \cite{sedhai2014hashtag}.
Related research has studied temporal effects on hashtag usage, for instance in the context of popular hashtags in Twitter \cite{lin2012study,lehmann2012dynamical,tsur2012,ma2012will}. For example, in \cite{ma2012will}, the authors aim to predict if a specific hashtag will be popular on the next day. By formulating this task as a classification problem, they find that both content features (e.g., the topic of the hashtag) and context features (e.g., the users who used the hashtags) are effective features for popularity prediction. A similar approach is presented in \cite{yang2011patterns}, in which the authors uncover the temporal dynamics of online content (e.g., tweets) by formulating a time series clustering problem. One of the very few examples of a time-aware hashtag recommendation approach is the recently proposed algorithm described in \cite{harvey2015long}. The authors extend the content-based SR approach \cite{zangerle2011recommending} with a personalization technique by means of CF and further consider the temporal relevance of hashtags. To account for this temporal relevance, they divide the hashtags into two categories: ``organizational'' ones, which are used over a long period of time and ``conversational'' ones, which are used only during a short time span (e.g., for a specific event).
In contrast to our proposed algorithm, which relies on the BLL equation, their approach considers the effect of time on a global hashtag level of the whole Twitter network and not on an individual and social level of a specific user. Furthermore, we use a power function rather than an exponential one to model the time-dependent decay based on our empirical findings.

\section{Conclusion and Future Work} \label{sec:conclusion}
In this paper, we presented a cognitive-inspired approach for hashtag recommendations in Twitter. Our approach utilizes the BLL equation from the cognitive architecture ACT-R to account for temporal effects on individual hashtag reuse (i.e., reusing own hashtags) and social hashtag reuse (i.e., reusing hashtags that have previously been used by a followee). Our analysis of hashtag usage types in two empirical networks (i.e., \textit{CompSci}{} and \textit{Random}{} datasets) crawled from Twitter reveals that between 66\% and 81\% of hashtag assignments can be explained by past individual and social hashtag usage. By analyzing the timestamps of these hashtag assignments, we find that temporal effects play an important role for both individual and social reuse of hashtags and that a power function provides a better fit to model this time-dependent decay than an exponential function.
Thus, the more recently a hashtag was used by a user or her followees, the higher the probability that this user will use the same hashtag again later in time. Based on these findings, we utilized the Base-Level Learning (BLL) equation of the cognitive architecture ACT-R, which accounts for the time-dependent decay of item exposure in human memory, to develop BLL$_{I,S}${} and BLL$_{I,S,C}${}, two algorithms for recommending hashtags. Whereas BLL$_{I,S}${} aims to recommend hashtags without incorporating the current tweet (\textit{Scenario 1}{}), BLL$_{I,S,C}${} also utilizes the content of the current tweet using the TF-IDF statistic (\textit{Scenario 2}{}). We compared both algorithms to state-of-the-art hashtag recommendation algorithms and found that our cognitive-inspired approaches outperform these algorithms in terms of prediction accuracy and ranking.
One limitation of this work is that we model the reuse of social hashtags solely by analyzing how frequently and recently a hashtag was used by a user's followees, neglecting by whom the hashtag was used. Thus, for future work, we plan to extend our approach with the social status of the followee (e.g., via the reputation of the user by means of the number of followers). In this respect, we will also utilize the social connection strength between a user and her followee (e.g., by the number of mentions or retweets).
With respect to the hashtag assignments that cannot be explained by hashtag reuse (i.e., 26\% in the \textit{CompSci}{} dataset and 13\% in the \textit{Random}{} dataset), we want to utilize an external knowledge base to also account for these hashtag assignments. We will achieve this by suggesting hashtags of currently trending topics or events. Finally, we also plan to verify our findings in larger Twitter data samples than the ones used in this paper as well as in other online social networks that feature hashtags, such as Instagram and Facebook.
In summary, our work contributes to the rich line of research on improving the use of hashtags in social networks. We hope that future work will be attracted by our insights into how temporal effects on hashtag usage can be modeled using models from human memory theory, such as the BLL equation.
\para{Acknowledgments.} The authors would like to thank Matthias Traub and Dieter Theiler for valuable inputs. This work is funded by the Know-Center and the EU project AFEL (GA: 687916).
\small
\bibliographystyle{abbrv}
\section{Introduction} \label{s:intro}
The highly nonlinear behavior of fluids is an endless source of fascination and challenges
to our understanding. Much mathematical analysis can only deal with models of smooth,
rather quiescent flows.
Yet some of the most powerful and dramatic fluid phenomena are associated with singular flows.
What happens when waves crash against a seawall?
How do whitecaps form on windblown waves? How do droplets shatter and become spray?
Our understanding of such phenomena is very primitive. We consider here
the question of singularity formation for one of the simplest fluid models,
Euler's equations for potential flow of an ideal fluid.
With velocity field $v=\nabla\phi$, pressure $p$, and constant density $\rho=1$,
occupying a region $\Omega_t\subset\mathbb{R}^d$ with smooth boundary at time $t$,
these equations take the following form: For each time $t$,
\begin{align}
\Delta\phi = 0 &
\quad \mbox{in $\Omega_t$},
\label{1.laplace}\\
\phi_t + \frac12|\nabla\phi|^2 + p = 0&
\quad \mbox{in $\Omega_t$},
\label{1.bernoulli}\\
p = 0 &
\quad \mbox{on $\D\Omega_t$}.
\label{1.pzero}
\end{align}
The {\em kinematic condition}, stating that the fluid domain $\Omega_t$ is transported by the velocity,
supplements these equations.
The effects of gravity and surface tension are neglected.
At the small scales involved in singularity formation,
it is generally appreciated that the effect of gravity ought to be negligible.
It is true that surface tension is physically important on small scales,
but we focus this study on the mathematical issues that arise when it is neglected.
It is our purpose here to extend our previous work \cite{LPmajda}
reviewing research relevant to the issue of whether and how local singularities
can form in solutions of this system, and offer additional numerical evidence
that suggests a new scenario for formation of a local singularity.
In the sequel, particularly when considering bounded domains $\Omega_t$
we refer to \eqref{1.laplace}--\eqref{1.pzero} simply as the {\em ideal droplet equations}.
\section{Background---scenarios for singularities}
Mathematical analysis of the initial-value problem for the governing equations
\eqref{1.laplace}--\eqref{1.pzero} with free boundary is subtle and difficult.
The problem can be treated, however, by the methods that S. Wu developed in the 1990s for
water waves with gravity. For smooth enough initial data in smooth bounded domains,
the works \cite{Lindblad,CoutShko2007,CoutShko2010} establish short-time
existence for smooth solutions of the incompressible Euler equations with
pressureless free boundary in zero gravity, including the case of nonzero vorticity.
In this section we briefly review work related to a number of scenarios for
the possible breakdown of smooth solutions and development of singularities in solutions.
Bounds that constrain local singularity formation have been provided in
recent work of Kinsey and Wu~\cite{KinseyWu2018} and Wu~\cite{wu2015blow}.
In the latter work it is also shown that certain kinds of corners in the free surface
can persist for short time if they are present in the initial data.
{\em Splash singularities.}
A simple way that fluids can develop a singularity is by collision of distinct droplets.
A related but more complicated scenario is that different parts of the surface of a
connected fluid domain may collide, while the interface remains smooth up to the time of collision.
The existence of such {\em splash singularities}
was proved by Castro et al.~\cite{Castro2012,Castro2013}.
{\em Droplet splitting.}
One can imagine that a single dumbbell-shaped droplet provided with a strongly bipolar
initial velocity should break into pieces. There are many physical studies of
this behavior that take into account surface tension and/or viscosity. We are not aware
of any study of the problem in the absence of these effects, however, and it may be that surface tension
is necessary after all for pinch-off to occur. One idea for approaching the splitting problem
could involve finding a {\em least-action} path of fluid configurations that deform one droplet into two.
Smooth incompressible potential flows in a fixed domain were shown by Brenier \cite[Theorem 2.4]{Brenier99}
to truly minimize action for sufficiently short time.
These flows correspond to volume-preserving paths of diffeomorphisms that minimize
distance according to a relaxed version of Arnold's variational characterization of
geodesic paths in the diffeomorphism group \cite{Arnold66}.
It was recently proved, however, that free-boundary flows with zero gravity and
surface tension are critical paths for action, but {\em never minimize it}
except for piecewise-rigid motions~\cite[Corollary~5.6]{LPS}.
{\em Flip-through and jet formation.} The breaking of gravity waves against a vertical wall can be thought of
as a kind of splash singularity, by reflecting the fluid motion through the plane of the wall.
In work of Cooker and Peregrine \cite{cooker1990computations,cooker1992violent}
2D numerical computations show that wave impacts that trap a bubble of `air' are less violent than
waves that only get {\em close} to breaking at the wall.
Strong forces and very large accelerations can be produced as a sheet of water ``flips through''
the trough and generates a powerful upward jet of fluid.
For discussion of the flip-through phenomenon and related experiments
see \cite{Peregrine2003,Bredmose2010,wang2018}.
In a series of papers including
\cite{Longuet1972,Longuet1976,Longuet1980,Longuet1983},
Longuet-Higgins described a jet-formation phenomenon that appears to be associated with
flip-through and some other situations where the
na\"ive expectation is that local singularities might form.
In particular, Longuet-Higgins described ``Dirichlet hyperboloid'' exact solutions,
extending a family of time-dependent ellipsoidal solutions found by Dirichlet~\cite{dirichlet1860}
in relation to a long line of investigations on ellipsoidal self-gravitating fluid bodies.
As Longuet-Higgins mentioned, Fritz John \cite{john1953} had found related flows with parabolic free boundary.
Longuet-Higgins compared Dirichlet hyperboloid solutions with experiments on breaking waves and bubbles
in \cite{Longuet1983}. All the flows he observed remain smooth.
There are Dirichlet hyperboloid solutions that become singular in finite time, but what happens is that the
pressure and velocity blow up everywhere while the fluid interface remains smooth.
By taking a large-scale limit, in \cite{Longuet1980} Longuet-Higgins described
time-dependent solutions that have corners for all times.
{\em Self-similar approach to cones or corners.}
The tendency to form jets with smooth tips may make it unlikely that
smooth free boundaries develop local singularities in many typical flows.
We might expect that a local singularity may appear in a borderline situation,
e.g., between strong flip-through and bubble-trapping splash singularity.
Experimental and numerical evidence of such a singularity,
for 3D incompressible flows with viscosity and surface tension,
was provided by D.~Lathrop's group in the 1990s~\cite{lathrop1998,zeff2000singularity}.
These authors demonstrated a self-similar collapse of the fluid interface to one with a conical singularity,
followed by the dramatic emergence of a very high and thin self-similar jet.
No rigorous mathematical analysis of this problem has yet appeared,
as far as we are aware.
{\em Ballistic interfaces.} In the papers
\cite{KZjfm2014,KZdok2016}, Karabut and Zhuravleva
described several analytical solutions of the free-boundary problem
\eqref{1.laplace}--\eqref{1.pzero}
for which fluid particles on the free surface move with {\em zero acceleration}, i.e., they move ballistically.
Very recently, Zubarev and Karabut \cite{ZK2018} and Zhuravleva et al.~\cite{zhuravleva2020new}
have described examples of this type of flow capable of developing local singularities
from a smooth interface.
These solutions are derived using the complex Hopf equation by imposing a particular relation between
pressure and velocity at the free boundary.
Zakharov~\cite{Zak20} has provided an interesting independent perspective on solutions of this type.
At present it seems doubtful that singularities of the kind found in these works
can emerge in smooth flows in bounded domains. For acceleration-free interfaces,
both the pressure and its gradient vanish at the free boundary. In addition,
the pressure-velocity relations imposed in \cite{ZK2018,zhuravleva2020new}
imply that the pressure $p<0$ inside the fluid domain.
However, it is necessary that $p>0$ inside the fluid for any nontrivial smooth solution of
\eqref{1.laplace}--\eqref{1.pzero} in a bounded domain.
This follows from the fact that $-\Delta p = \tfrac12\Delta|\nabla\phi|^2 = |\nabla^2\phi|^2\ge0$.
Then Hopf's lemma implies that
\begin{equation}\label{e:taylor}
\frac{\D p}{\D n} <0 \quad\mbox{on $\D\Omega_t$}.
\end{equation}
In the present context this says that the {\em Taylor sign condition} for linear well-posedness holds.
More generally this condition requires the outward normal acceleration of the
interface to exceed the acceleration due to gravity, see \cite{taylor50,bhl93}.
It was recognized to be key to nonlinear well-posedness theory by Wu \cite{Wu97,Wu99}.
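As a quick sanity check (ours, not part of the original argument), the superharmonicity of the pressure behind \eqref{e:taylor} can be verified symbolically: for any harmonic $\phi$ one has $\Delta|\nabla\phi|^2=2|\nabla^2\phi|^2\ge0$, so the Bernoulli pressure satisfies $-\Delta p\ge0$ and Hopf's lemma applies.

```python
# Sanity check (not from the paper): for any harmonic phi,
# Delta |grad phi|^2 = 2 |Hess phi|_F^2 >= 0.  With the Bernoulli relation
# p = -phi_t - |grad phi|^2 / 2 and Delta phi = 0, this gives -Delta p >= 0,
# so p is superharmonic and Hopf's lemma applies at the boundary.
import sympy as sp

x, y = sp.symbols('x y', real=True)
# a sample harmonic potential; any other harmonic function works the same way
phi = x**3 - 3*x*y**2 + sp.exp(x)*sp.cos(y)
assert sp.simplify(sp.diff(phi, x, 2) + sp.diff(phi, y, 2)) == 0  # harmonic

grad2 = sp.diff(phi, x)**2 + sp.diff(phi, y)**2
lap_grad2 = sp.diff(grad2, x, 2) + sp.diff(grad2, y, 2)
hess2 = sum(sp.diff(phi, a, b)**2 for a in (x, y) for b in (x, y))
assert sp.expand(lap_grad2 - 2*hess2) == 0
```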
{\em Plan of the paper.}
In sections~\ref{s:leastaction}--\ref{s:dirichlet} below, we aim to describe
the explicit Dirichlet ellipsoid and hyperboloid solutions of the ideal
droplet equations \eqref{1.laplace}--\eqref{1.pzero}, with a focus on their significance
for the droplet splitting and jet formation scenarios mentioned above.
These solutions exist in an historical context that is interesting to review,
involving Hamilton's least action principle, kinematically constrained geodesic flow,
and a nontrivial symmetry exhibited by self-gravitating bodies
that was made explicit by Dedekind when preparing Dirichlet's work for posthumous publication.
In section~\ref{s:ZK}, we summarize how local singularities on ballistic interfaces
were derived in~\cite{ZK2018} for purely horizontal surface motions
and in \cite{zhuravleva2020new} for cavity collapse scenarios.
Then in the last section below, we extend our computations from~\cite{LPmajda}
to provide additional evidence for a scenario involving unstable corner formation.
We make use of a conformal mapping formulation of the governing equations
closely related to one described by A.~I.~Dyachenko in \cite{dyachenko2001dok}
and used by S.~A.~Dyachenko in \cite{dyachenko2019jfm} to compute bounded ideal droplet solutions
with and without surface tension.
We find evidence for the existence of a two-parameter family of self-similar smooth flows that may
emerge from an infinite perfect wedge-shaped domain with power-law initial velocity,
by computing a time-reversed flow that develops from a smooth bounded approximation to the wedge,
together with a scaling argument. The problem of rigorously demonstrating the existence
of such solutions (or showing that some other instabilities must occur on scales invisible to our numerics)
appears to pose a difficult challenge for mathematical analysis.
\section{Least action principle with free boundary and self-interaction energy}
\label{s:leastaction}
We begin our study by using Hamilton's least action principle,
and a variant of the standard Helmholtz decomposition of vector fields,
to provide a simple derivation of the governing equations
for smooth ideal fluid flows with pressureless free boundary and self-interaction energy.
We recall V. I. Arnold's classic use of least action to formally characterize solutions
of the Euler equations for incompressible flows in a fixed domain in terms of geodesic
paths of diffeomorphisms.
Let $\Omega_t\subset\mathbb{R}^d$ denote the domain occupied by the fluid at time $t$, and let $X$
denote the Lagrangian flow map, defined on the space-time domain $Q=\cup_t \Omega_t\times\{t\}$
so that
\begin{equation}\label{e:Xflow}
\dot X (z,t) = v(X(z,t),t), \qquad X(z,0)=z\in\Omega_0
\end{equation}
for all $t$ in some interval $[0,\tbar]$.
Here the velocity field $v$ is presumed to be sufficiently smooth up to the boundary.
The associated density field $\rho$ with given constant initial density $\rho_0$ is given by
\begin{equation}
\rho(x,t) = \rho_0\det\left(\frac{\D X}{\D z}(z,t)\right)\inv, \qquad x=X(z,t)\in\Omega_t.
\end{equation}
We let $\calA = \calK-\calV$ denote the Lagrangian action associated with the flow,
where
\begin{align}
&\calK = \frac12\int_0^\tbar\int_{\Omega_t}\rho(x,t)|v(x,t)|^2\,dx\,dt
= \frac12 \int_0^\tbar\int_{\Omega_0}\rho_0 |\dot X(z,t)|^2\,dz\,dt,
\label{d:calK}
\\&
\calV = \frac12 \int_0^\tbar \int_{\Omega_t^2} \Phi(x,x')\rho(x,t)\rho(x',t)\,dx\,dx'\,dt
\label{d:calV}
\end{align}
respectively denote kinetic energy and self-interaction energy with symmetric kernel $\Phi(x,x')$, given for the Newtonian gravitational potential in particular by
\[
\Phi(x,x')= - \frac{G}{|x-x'|}.
\]
For any family $\varepsilon\to X_\varepsilon$ of flow maps depending smoothly on a variational parameter $\varepsilon$,
one finds that the variation $\delta X = (\D X/\D \varepsilon)|_{\varepsilon=0}$ induces a density variation
$\delta\rho$ satisfying
\[
-\frac{\delta\rho}\rho = \nabla\cdot \tilde v, \qquad \tilde v(x,t) = \delta X(z,t),
\]
so naturally $\nabla\cdot\tilde v=0$ for variations that leave the density invariant.
We proceed to compute the variation of the action at a density-preserving flow for
density-preserving variations.
Firstly, requiring that the variation $\delta X$ \emph{vanishes at the endpoints $t=0$ and $\tbar$},
we find
\begin{align*}
\delta \calK &=
\int_0^\tbar\int_{\Omega_0}\rho_0
\dot X(z,t)\cdot\delta\dot X(z,t)\,dz\,dt
= - \int_0^\tbar\int_{\Omega_0}\rho_0
\ddot X(z,t)\cdot\delta X(z,t)\,dz\,dt
\\&=
- \int_0^\tbar\int_{\Omega_t}\rho_0(\D_t v+v\cdot\nabla v)\cdot\tilde v \,dx\,dt ,
\\
\delta \calV &=
-\int_0^\tbar\int_{\Omega_t}\rho_0 f(x,t)\cdot \tilde v(x,t) \,dx\,dt ,
\end{align*}
where $f(x,t)$ is the (specific) self-interaction force field, given by
\[
f = -\nabla \varphi, \qquad \varphi(x,t) = \int_{\Omega_t} \rho_0\Phi(x,x')\,dx'.
\]
Now a flow $X$ is critical for the action $\calA$ if the variation
\[
\delta \calA = \delta \calK-\delta \calV =
-\int_0^\tbar\int_{\Omega_t}
\rho_0(\D_t v+v\cdot\nabla v -f)\cdot\tilde v\,dx\,dt = 0,
\]
for all virtual displacements $\tilde v$ for which $\nabla\cdot \tilde v=0$ in $\Omega_t$
and which vanish at $t=0$ and $\tbar$.
At this point we note that any $L^2$ vector field $u$ on $\Omega_t$ has a unique
$L^2$-orthogonal decomposition
\begin{equation}
u = w + \nabla p, \qquad\mbox{with}\quad \nabla\cdot w=0 \mbox{\ \ in $\Omega_t$}, \quad p = 0 \mbox{\ \ on $\D\Omega_t$},
\end{equation}
obtained by solving $\Delta p=\nabla\cdot u$ for $p$ in the Sobolev space $H^1_0(\Omega_t)$.
(This is a variant of the standard Helmholtz decomposition, see \cite[p.~215]{DL}.)
By choosing $u=f-(\D_tv+v\cdot\nabla v)$, we infer that for a density-preserving critical path,
the velocity field should satisfy the Euler equations
\begin{equation}\label{e:euler}
\D_t v + v\cdot\nabla v + \nabla p = f, \quad \nabla\cdot v=0 \mbox{\ \ in $\Omega_t$},
\end{equation}
with the condition
\begin{equation}\label{pzero}
\quad p = 0 \mbox{\ \ on $\D\Omega_t$}
\end{equation}
on the free boundary, along with the \emph{kinematic} condition that $\Omega_t=X(\Omega_0,t)$.
It will be useful below to note that in terms of the deformation gradient $F = {\D X}/{\D z}$,
Euler's equations \eqref{e:euler} in Lagrangian coordinates take the form
\begin{equation}\label{e:eulerL}
F^T \ddot X + \nabla\tilde p +\nabla\tilde\varphi=0, \qquad \det F = 1,
\end{equation}
where $\tilde p(z,t)=p(X(z,t),t)$ and $\tilde \varphi(z,t)=\varphi(X(z,t),t)$
respectively represent the pressure and force potential in Lagrangian coordinates,
since by the chain rule, e.g.,
\[
\frac{\D \tilde p}{\D z}= \frac{\D p}{\D x}\frac{\D X}{\D z}
\qquad\mbox{so}\quad
\nabla \tilde p = F^T\nabla p.
\]
\section{Self-gravitation and Dirichlet's symmetry}
\label{s:gravity}
In an effort to understand the shape of the earth and other celestial bodies,
many prominent investigators, starting with Isaac Newton, have studied the shape of a
rotating body of fluid with self-gravitation.
Much historical information on this topic can be found in the book of Chandrasekhar~\cite{chandra}.
In particular, Dirichlet, in a posthumously published paper edited by Dedekind,
was the first to develop equations for \emph{time-dependent} motions
that preserve ellipsoidal shape \cite{dirichlet1860}.
Of the numerous interesting developments following Dirichlet's work, we mention only a few.
From Dirichlet's equations, Dedekind explicitly deduced a surprising symmetry,
and used it to find ellipsoids with nontrivial internal flows,
conjugate to the rigidly rotating fluid ellipsoids discovered earlier by Jacobi.
In a remarkable paper, Riemann subsequently showed that all rotating, shape-preserving ellipsoids
fall into three simple classes, and initiated a study of their stability by energy criteria~\cite{riemann}.
The reason we bring up this subject is to describe how Dirichlet's ellipsoidal motions
can be characterized through a finite-dimensional least-action principle,
and to thereby provide a simple derivation of Dedekind's symmetry.
The first descriptions of a reduced least-action principle for Dirichlet ellipsoids
appeared only a few years after Riemann's work, in papers by Padova~\cite{Padova1871} and Lipschitz~\cite{Lipschitz1874}; see the excellent review by Borisov et al.~\cite{borisov2009}. In the absence of gravitation,
critical paths of action correspond to constant-speed geodesic motion
on a determinant-constrained surface in the space of matrices describing the deformation,
as noted by O. M. Lavrenteva \cite{Lav80}.
We proceed to details. Following Dirichlet, we seek motions for which the domain
$\Omega_t\subset \mathbb{R}^3$ is ellipsoidal, with time-dependent semi-axes $a_j(t)$, $j=1,2,3$,
having a constant product $a_1a_2a_3$.
We require the Lagrangian map $z\mapsto X(z,t)$ to be linear, taking the convenient form
\begin{equation}
X_i(z,t) = \sum_{j=1}^3 P_{ij}(t) \frac{z_j}{a_j(0)}, \qquad z\in\Omega_0,
\end{equation}
or in more succinct matrix-vector form,
\begin{equation}\label{d:Xz}
X(z,t) = P(t)\Lambda_0\inv z, \quad \Lambda_t = \diag\{a_j(t):j=1,2,3\}.
\end{equation}
We presume the initial ellipsoid is $\Omega_0 = \Lambda_0 B_1$, where $B_1$ is the unit ball.
That is, $z\in\Omega_0$ if and only if $z=\Lambda_0 y$ with $y\in B_1$.
Up to a rotation of coordinates, $X(z,t)$ should lie in $\Lambda_t B_1$.
Thus there should exist orthogonal matrices $R(t)$ and $S(t)$ such that
\begin{equation}\label{d:Xy}
X(z,t) = R(t)\Lambda_t S(t)^Ty, \quad\mbox{with $y=\Lambda_0\inv z$.}
\end{equation}
The modern eye will recognize that this provides the \emph{singular value decomposition}
\[
P = R\Lambda S^T, \quad RR^T=I=SS^T,
\]
with the semi-axes $a_j$ being the singular values of $P$.
The matrix $P(t)$ should satisfy
\begin{equation}\label{e:P0}
P(0)=\Lambda_0 \quad\mbox{ and }\quad \det P(t)=a_1a_2a_3= \mbox{ constant.}
\end{equation}
The deformation gradient will be a function of time alone, taking the form
\begin{equation}\label{d:Ft}
F(t) = \frac{\D X}{\D z}(z,t) = P(t)\Lambda_0\inv.
\end{equation}
Substituting into Euler's equations written in Lagrangian coordinates,
we require
\begin{equation}\label{e:EulerF}
F^T\ddot F z + \nabla \tilde p +\nabla\tilde\varphi=0,
\end{equation}
where $\tilde p$ here is pressure divided by mass density.
It is a remarkable fact, due to Gauss and Rodrigues (see \cite{chandra,Fitzpatrick}) that the self-gravitation potential is quadratic in the spatial variables, taking the following form.
With respect to the coordinates $\hat x=R^Tx$ taken along the principal axes of the ellipsoid,
\begin{equation}
\varphi(x,t) = - G\rho_0\pi\, a_1a_2a_3
\left(\alpha_0-\sum_{i=1}^3 \alpha_i \hat x_i^2\right),
\end{equation}
\begin{equation}\label{d:alphas}
\alpha_0 = \int_0^\infty \frac{du}{\Delta}, \qquad
\alpha_i = -\frac{1}{a_i}\frac{\D\alpha_0}{\D a_i}= \int_0^\infty \frac{du}{\Delta(a_i^2+u)}, \qquad
\Delta^2 = \prod_{i=1}^3 (a_i^2+u).
\end{equation}
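As a numerical aside (our addition), the index integrals in \eqref{d:alphas} are easy to check by quadrature. Integrating $\frac{d}{du}(1/\Delta)$ from $0$ to $\infty$ gives the classical identity $\sum_i\alpha_i = 2/(a_1a_2a_3)$, and the derivative relation $\alpha_i=-(1/a_i)\,\partial\alpha_0/\partial a_i$ can be confirmed by finite differences:

```python
# Numerical check (ours) of the index integrals alpha_0, alpha_i:
# sum_i alpha_i = 2/(a1*a2*a3), and alpha_i = -(1/a_i) d(alpha_0)/d(a_i).
import math
from scipy.integrate import quad

def Delta(u, a):
    return math.sqrt((a[0]**2 + u) * (a[1]**2 + u) * (a[2]**2 + u))

def alpha0(a):
    return quad(lambda u: 1.0 / Delta(u, a), 0.0, math.inf)[0]

def alpha(i, a):
    return quad(lambda u: 1.0 / (Delta(u, a) * (a[i]**2 + u)), 0.0, math.inf)[0]

a = [1.0, 1.5, 2.0]
s = sum(alpha(i, a) for i in range(3))
assert abs(s - 2.0 / (a[0] * a[1] * a[2])) < 1e-7

h = 1e-4   # finite-difference check of the derivative relation
for i in range(3):
    ap, am = a.copy(), a.copy()
    ap[i] += h; am[i] -= h
    d0 = (alpha0(ap) - alpha0(am)) / (2 * h)
    assert abs(alpha(i, a) + d0 / a[i]) < 1e-3
```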
In Lagrangian variables, using \eqref{d:Xy} and noting $R^TX = \Lambda S^T\Lambda_0\inv z$ we may then write
\begin{equation} \label{d:tildephi}
\tilde\varphi(z,t) = -G\rho_0\pi (\alpha_0+z^TQz)\det\Lambda
\end{equation}
where
\begin{equation}
Q = \Lambda_0\inv S \Lambda \frac{\D\alpha_0}{\D\Lambda}
S^T \Lambda_0\inv ,
\quad \frac{\D\alpha_0}{\D\Lambda} = \diag\left\{\frac{\D\alpha_0}{\D a_i}\right\}.
\end{equation}
Then the Lagrangian potential gradient is linear in $z$, with
\begin{equation}
\nabla\tilde\varphi(z,t) = -2G\rho_0\pi(\det\Lambda)\, Qz
\end{equation}
In light of \eqref{e:EulerF}, the pressure must be quadratic in the spatial variables. In order to vanish on the ellipsoid boundary, it must therefore be that for some scalar function $\beta(t)$,
\begin{equation}
\tilde p(z,t) = \frac12 \beta(t)(1-|\Lambda_0\inv z|^2)
\quad\mbox{and}\quad
\nabla\tilde p(z,t) = -\beta(t)\Lambda_0^{-2}z.
\end{equation}
Substituting the above expressions directly into \eqref{e:EulerF}, with $\gamma_0=2\pi G\rho_0 $ we find Dirichlet's result in the following form.
\begin{lemma}
The linear Lagrangian map in \eqref{d:Xz} provides a solution to the Euler equations if
$P(t)$ satisfies
\begin{align}\label{e:Ptt}
P^T\ddot P = \beta(t)I + (\gamma_0\det\Lambda) S\Lambda
\frac{\D\alpha_0}{\D\Lambda} S^T ,
\end{align}
along with the conditions in \eqref{e:P0}.
\end{lemma}
Next we want to show how \eqref{e:Ptt} arises from reduced least action,
and derive Dedekind's symmetry. Using the fact that
$ 3\int_{B_1} y_i^2\,dy = \frac{4\pi}5 $, the kinetic energy in \eqref{d:calK} is
reduced to an expression in terms of $\dot P$ via
\begin{equation}\label{e:K(P)}
\calK(P) =
\frac{2\pi}{15} (\rho_0\det \Lambda_0)
\int_0^\tbar
\tr(\dot P^T\dot P)
\,dt
\end{equation}
The gravitational potential energy is reduced to an expression in terms of $P$ via
\begin{equation}
\calV(P) = -\frac12 G
(\rho_0\det\Lambda_0)^2
\int_0^\tbar\int_{B_1^2}
\frac1{|P(y-y')|}\,dy\,dy'\,dt
\end{equation}
Note that $\tr(\dot P^T\dot P)=\tr(\dot P\dot P^T)$,
and since the singular value decomposition of $P^T$ is $S\Lambda R^T$,
orthogonal changes of variables in the last integral yield
\begin{equation}\label{d:U(P)}
\int_{B_1^2} \frac1{|P(y-y')|}\,dy\,dy'
=\int_{B_1^2} \frac1{|\Lambda(y-y')|}\,dy\,dy'
=\int_{B_1^2} \frac1{|P^T(y-y')|}\,dy\,dy'
\end{equation}
By consequence we infer
\begin{lemma}
The reduced action $\calA(P)=\calK(P)-\calV(P)$ of every matrix path $P$ satisfies
\begin{equation}
\calA(P) = \calA(P^T).
\end{equation}
\end{lemma}
Since $P$ is a smooth function of $P^T$ and vice versa,
the chain rule implies that $P^T$ is a critical path for the (determinant-constrained)
action if and only if $P$ is.
This is \emph{Dedekind's symmetry}, which he used to discover that Jacobi's rigidly rotating ellipsoids correspond to ellipsoids with steady internal flows.
Lastly, we wish to indicate how the evolution equation \eqref{e:Ptt} arises by least action from the reduced action, due to the orthogonal invariance of the reduced potential energy.
\begin{lemma}\label{l:dUdP}
Given any $C^1$ function $U:\mathbb{R}^{m\times n}\to \mathbb{R}$ invariant with respect to both right and left multiplication by orthogonal matrices,
its derivative at a matrix $P$ can be expressed in terms of the
singular value decomposition $P=R\Lambda S^T$, $\Lambda=\diag\{a_i\}$,
in the form
\[
\frac{\D U}{\D P} = R \frac{\D U}{\D\Lambda} S^T,
\]
where
\[
\frac{\D U}{\D P} = \left( \frac{\D }{\D P_{ij}}U(P) \right)
\quad\mbox{and}\quad
\frac{\D U}{\D \Lambda} =
\diag\left\{ \frac{\D U}{\D P_{ii}}(\Lambda)\right\}.
\]
\end{lemma}
\begin{proof}
By density we may assume the singular values $a_i$ of $P$ are distinct.
Then for any perturbation direction $\tilde P$ there is a $C^1$-smooth singular value decomposition
\[
P+\varepsilon \tilde P = R(\varepsilon)\Lambda(\varepsilon)S(\varepsilon)^T
\]
for $|\varepsilon|$ small enough. Letting $'$ denote the derivative in $\varepsilon$,
evaluated at $\varepsilon=0$, we note
\[
\Lambda' = R^T\tilde P S - R^TR'\Lambda - \Lambda S'^TS
\]
Since $\tr(AB)=\tr(BA)$ for any square matrices $A$ and $B$,
we then find by invariance that
\begin{align*}
&
\tr\left(\frac{\D U}{\D P}^T \tilde P \right)
= \left. \frac{d}{d\varepsilon} U(P+\varepsilon\tilde P) \right|_{\varepsilon=0}
= \left. \frac{d}{d\varepsilon} U(\Lambda(\varepsilon)) \right|_{\varepsilon=0}
=
\tr\left(
\frac{\D U}{\D\Lambda}
\Lambda'
\right)
\\& =
\tr\left(
S\frac{\D U}{\D\Lambda}
R^T\tilde P
\right)
-
\tr\left(
R^TR'
\Lambda
\frac{\D U}{\D\Lambda}
\right)
-
\tr\left(
\frac{\D U}{\D\Lambda}
\Lambda
S'^TS
\right)
=
\tr\left(
S \frac{\D U}{\D\Lambda}
R^T\tilde P
\right) .
\end{align*}
The last equality holds because $R^TR'$ and $S'^TS$ are skew while
$\Lambda\frac{\D U}{\D\Lambda}= \frac{\D U}{\D\Lambda}\Lambda$
is symmetric.
\end{proof}
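A quick numerical illustration of the lemma (ours, not the authors'): for the orthogonally invariant function $U(P)=\log|\det P|=\sum_i\log a_i$, the exact gradient is $P^{-T}$, which should agree with $R\,\diag\{1/a_i\}\,S^T$ computed from the singular value decomposition.

```python
# Numerical illustration (ours) of the lemma: for the orthogonally invariant
# function U(P) = log|det P| = sum_i log a_i, the exact gradient of U is
# P^{-T}, and it should coincide with R diag{1/a_i} S^T from the SVD of P.
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))

R, a, St = np.linalg.svd(P)            # P = R @ np.diag(a) @ St
grad_svd = R @ np.diag(1.0 / a) @ St   # R (dU/dLambda) S^T
grad_exact = np.linalg.inv(P).T        # gradient of log|det P|

assert np.allclose(grad_svd, grad_exact)
```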
The reduced gravitational potential energy takes a classic expression \cite[p.~700]{Lamb} in terms
of the singular values $a_j$ of $P$, using
the function $\alpha_0=\alpha_0(\Lambda)$ from \eqref{d:alphas}, as
\begin{align*}
\calV(P) &=
\frac12 \rho_0 \int_0^\tbar
\int_{R\Omega_t}\varphi(R\hat x,t)\,d\hat x\,dt
=-\frac3{10} GM^2
\int_0^\tbar \alpha_0(\Lambda(t)) \,dt ,\qquad M = \rho_0\frac{4\pi}3a_1a_2a_3.
\end{align*}
Therefore the quantity in \eqref{d:U(P)} can be expressed in the form
\begin{equation}
\int_{B_1^2} \frac1{|P(y-y')|}\,dy\,dy' =
\frac{(4\pi)^2}{15} U(P),
\qquad\mbox{where}\quad
U(P)=U(\Lambda) = \alpha_0(\Lambda).
\end{equation}
Incorporating the constraint $\log\det P(t)=$ const
yields the augmented reduced action
\[
\tilde\calA = \frac{4\pi}{15}\rho_0\det\Lambda_0
\int_0^\tbar \left(\frac12\tr(\dot P^T\dot P)
+\gamma_0(\det\Lambda_0)U(P)+\beta(t)\log\det P\right)\,dt
\]
Applying Lemma~\ref{l:dUdP} after noting $\det P =\det\Lambda$,
we find that the criticality condition $\delta \tilde\calA=0$
subject to the constraint $\det P(t)=$ const corresponds to
the equation
\begin{equation}
\ddot P
=R\left(\beta(t)\Lambda\inv+(\gamma_0\det\Lambda)\frac{\D\alpha_0}{\D\Lambda}\right)S^T,
\end{equation}
which is equivalent to \eqref{e:Ptt}.
(It is curious that Lemma~\ref{l:dUdP} provides Abel's formula
for the derivative of $\log\det P$.)
There is a considerable body of modern literature studying the Hamiltonian structure of the reduced dynamics; we refer to
Borisov et al.~\cite{borisov2009}, Morrison et al.~\cite{morrison2009},
and Lewis~\cite{lewis2013} for further discussion and references.
\section{Dirichlet ellipsoids and hyperboloids}
\label{s:dirichlet}
Next we specialize the discussion to review properties of
a family of simple exact solutions to the zero-gravity water wave equations
with pressureless free boundaries given by conics.
In particular we pay attention to the possible singular features of such flows,
focussing on 2D and the development of fluid jets.
We remark also upon a geodesic interpretation that proved useful in our
study \cite{LPS} that was motivated by a droplet splitting scenario.
The flows that we study here are all simple straining flows.
The ellipsoids are special cases of solutions found by
Dirichlet \cite{dirichlet1860}, and hyperboloids
were found by Longuet-Higgins \cite{Longuet1972}.
\subsection{Geodesic curves of conics}
We now describe some potential flows with conic free surface in any dimension $d\ge2$.
The Lagrangian flow map associated to the velocity field $v=\nabla\phi$ will satisfy
\begin{equation}\label{e:Xdot}
\dot X(z,t) = \nabla\phi(X(z,t),t),
\qquad
X(z,0)= z,
\end{equation}
for all $z\in \Omega_0\subset\mathbb{R}^d$ and all $t$.
All our flows here will correspond to quadratic potentials of the form
\begin{equation}\label{d:phi0}
\phi(x,t) = \frac12 \sum_{j=1}^d \alpha_j(t)x_j^2 -\lambda(t),
\qquad \mbox{with}\quad \Delta\phi=\sum_{j=1}^d \alpha_j(t)=0,
\end{equation}
so that the components of the Lagrangian map evolve in a purely dilational way according to
\begin{equation}\label{e:dotXj}
\dot X_j = \alpha_j(t) X_j\,, \quad j=1,\ldots,d .
\end{equation}
Fixing some $\sigma_0\in\mathbb{R}$ and some choice of signs $\sigma_j=\pm 1$ for $j=1,\ldots,d$,
the fluid will be taken to occupy a domain of the form
\begin{equation}\label{d:Omegat}
\Omega_t=\{x\in\mathbb{R}^d: S(x,a(t))<\sigma_0\},
\end{equation}
where we define
\begin{equation}\label{d:Sxa}
S(x,a) = \sum_{j=1}^d \sigma_j \frac{x_j^2}{a_j^2} \,,
\qquad
a=(a_1,\ldots,a_d)\in \mathbb{R}^d_+\,.
\end{equation}
The kinematic condition that the boundary flows with the fluid requires that for $z\in\D\Omega_0$,
\[
0 = \frac12 \frac{d}{dt}S(X,a)
= \sum_{j=1}^d \sigma_j\frac{X_j^2}{a_j^2}\left(\alpha_j - \frac{\dot a_j}{a_j}\right)
\]
Leaving degenerate cases aside, it suffices to suppose that
\begin{equation}\label{e:dotaj}
\dot a_j = \alpha_j a_j \,,\quad j=1,\ldots,d.
\end{equation}
Due to the incompressibility constraint in \eqref{d:phi0} it follows that the product
\begin{equation}\label{ed:c}
a_1\cdots a_d=r^d
\end{equation}
remains constant in time.
We recall the simple proof of the following result from \cite{LPmajda} (with a slight change of notation)
that provides a geodesic interpretation for solutions of the kind considered here.
\begin{proposition}\label{p:Edrop} Given a constant $r>0$,
let $a(t)=(a_1(t),\ldots,a_d(t))$ be any constant-speed geodesic on
the surface determined by the relation \eqref{ed:c}
in the space $\mathbb{R}_+^d$ with metric of signature $(\sigma_1,\ldots,\sigma_d)$
(possibly indefinite).
Then this determines an ideal potential flow with $\Omega_t$ as in \eqref{d:Omegat},
pressure given by
\begin{equation}\label{d:pressure}
p(x,t) = \frac{\beta(t)}2(\sigma_0- S(x,a)),
\qquad
\beta(t) = \frac{\sum_j \dot a_j^2/a_j^2}{\sum_j \sigma_j/ a_j^2 } ,
\end{equation}
and potential $\phi$ given by \eqref{d:phi0} with $\alpha_j=\dot a_j/a_j$
and $\dot\lambda = \frac12\beta\sigma_0$.
\end{proposition}
\begin{proof} The path $t\mapsto a(t)$
is a geodesic on the surface defined by \eqref{ed:c} with constant squared speed
$\sum_j \sigma_j\dot a_j^2$
if and only if the acceleration $\ddot a$ is parallel to the surface normal.
Here this means that there is some scalar $\beta=\beta(t)$ such that
\begin{equation}\label{e:ddotaj}
\ddot a_j = \frac{\beta\sigma_j}{a_j}, \qquad j=1,\ldots,d.
\end{equation}
The reason is that such a geodesic is a critical
path for the augmented action
\[
\tilde\calA = \int_0^T \sum_j \left(\frac12 \sigma_j\dot a_j^2 + \beta(t)\log \frac{a_j}r\right)\,dt.
\]
The value of $\beta(t)$ must be as stated in \eqref{d:pressure} since we require
\[
0 = \frac{d^2}{dt^2} \sum_{j}\log a_j = \sum_j \frac{a_j\ddot a_j - \dot a_j^2}{a_j^2}.
\]
Define $\phi$ by \eqref{d:phi0} with $\alpha_j$ and $\dot\lambda$ as stated in the Proposition.
Because $ \dot\alpha_j+\alpha_j^2= \ddot a_j/a_j=\beta\sigma_j/a_j^2$,
the pressure from the Bernoulli equation \eqref{1.bernoulli} must satisfy
\begin{align*}
p &= -\phi_t-\frac12|\nabla\phi|^2 = \dot\lambda -\frac12\sum_j (\dot\alpha_j+\alpha_j^2)x_j^2
= \frac\beta2(\sigma_0- S(x,a)).
\end{align*}
Thus $p=0$ on $\D\Omega_t$, and the ideal droplet equations all hold.
\end{proof}
Under the present conventions, we note that
the Taylor sign condition \eqref{e:taylor} holds exactly when
$p>0$ in $\Omega_t$, and this occurs exactly when
$\beta>0$ in \eqref{d:pressure}.
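For readers who wish to experiment, the constrained-geodesic characterization is easy to integrate numerically. The following sketch (our own, with arbitrary initial data) integrates $\ddot a_j=\beta\sigma_j/a_j$ with $\beta$ as in \eqref{d:pressure} for an ellipsoidal droplet ($\sigma_j=1$, $d=3$), and checks that the volume product \eqref{ed:c} and the squared speed $\sum_j\sigma_j\dot a_j^2$ are conserved along the path:

```python
# Minimal sketch (ours, with made-up initial data): integrate the
# constrained-geodesic ODE  a_j'' = beta * sigma_j / a_j  for an ellipsoidal
# droplet (all sigma_j = +1, d = 3) and check that the volume product
# a_1 a_2 a_3 and the squared speed sum_j sigma_j (a_j')^2 are conserved.
import numpy as np
from scipy.integrate import solve_ivp

sigma = np.array([1.0, 1.0, 1.0])

def rhs(t, y):
    a, adot = y[:3], y[3:]
    beta = np.sum(adot**2 / a**2) / np.sum(sigma / a**2)
    return np.concatenate([adot, beta * sigma / a])

a0 = np.array([1.0, 1.0, 1.0])
adot0 = np.array([0.3, -0.1, -0.2])    # trace-free rates: sum adot0/a0 = 0
sol = solve_ivp(rhs, (0.0, 5.0), np.concatenate([a0, adot0]),
                rtol=1e-10, atol=1e-12)

a, adot = sol.y[:3, -1], sol.y[3:, -1]
assert abs(np.prod(a) - np.prod(a0)) < 1e-6                  # volume constant
assert abs(np.sum(sigma*adot**2) - np.sum(sigma*adot0**2)) < 1e-6  # speed
```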
\subsection{Ellipsoidal droplets}\label{ss:ellipse}
The fluid domains $\Omega_t$ always remain bounded and ellipsoidal
in case $\sigma_j=1$ for all $j=0,1,\ldots,d$.
These Dirichlet ellipsoids played an important role in the study of action-infimization
for free boundary droplet flows carried out in \cite{LPS},
particularly the ones corresponding to length-minimizing paths.
The solution remains smooth globally for $t\in\mathbb{R}$, since the vector $a(t)$ of semi-axis lengths
moves at a constant (Euclidean) speed $c=|\dot a|$ on the surface \eqref{ed:c}
and cannot reach any singular point in finite time.
The pressure $p>0$ in $\Omega_t$
because $\beta>0$ in \eqref{d:pressure}, so the Taylor sign condition holds,
consistent with well-known results on well-posedness for water wave dynamics
\cite{Wu97,Wu99,Lindblad,CoutShko2007}.
Each velocity component $\dot a_j$ is increasing,
because it turns out that $\ddot a_j=\beta\sigma_j/a_j>0$ for all $j$.
The speed $c$ bounds $|\dot a_j|$ for all $j$ as well.
As $t\to+\infty$, necessarily some component $a_j\to\infty$,
and as $t\to-\infty$, some component $a_k\to\infty$, since
$\sum \dot a_j/a_j=0$.
\subsection{Ellipsoidal voids} The fluid can be considered to occupy the domain
\emph{exterior} to the ellipsoids above by taking $\sigma_j=-1$ for all $j$.
In this case, the pressure $p<0$ in $\Omega_t$ because $\beta<0$ in \eqref{d:pressure}.
The Taylor sign condition fails by consequence, and
we can expect this `bubble' flow to be highly unstable.
\subsection{Hyperbolas in 2D}
For the case when the signs of $\sigma_j$ can differ,
the planar case $d=2$ admits the simplest and most complete description.
We set $\sigma_0=\sigma_1=-1 = -\sigma_2$, so that the domain $\Omega_t$ corresponds to
\begin{equation}
\frac{x_1^2}{a_1^2} > 1 + \frac{x_2^2}{a_2^2}.
\end{equation}
The equations of motion derive solely from
incompressibility and geodesic speed constraints:
\begin{equation}
a_1a_2 = r^2,\qquad -\dot a_1^2 + \dot a_2^2 = \hat s \in\mathbb{R}.
\label{e:a1a2}
\end{equation}
Eliminating $\dot a_2$ we find $\dot a_1^2 (a_2^2-a_1^2) = \hat s a_1^2$,
whence
with $\tau=\pm\sqrt{|\hat s|}$ we have
\begin{equation}
\dot a_1
= \frac{\tau}{|\tan^2\theta-1|^{1/2}}\,,
\qquad \tan\theta = \frac{a_2}{a_1} = \frac{r^2}{a_1^2}\,.
\label{ev:a1dot}
\end{equation}
Here $\theta=\theta(t)$ is the angle that the hyperbola's asymptote makes with the $x_1$ axis.
The pressure from \eqref{d:pressure} has the same sign as $\beta$,
which is given here by
\[
\beta = \frac{ a_2^2 \dot a_1^2+a_1^2\dot a_2^2}{a_1^2-a_2^2} = \frac{2\dot a_2^2}{1-\tan^2\theta}.
\]
The pressure is positive and the Taylor sign condition \eqref{e:taylor} holds
when $0<\theta<\pi/4$ ($a_1>a_2$),
and pressure is negative and the Taylor sign condition violated
when $\pi/4<\theta<\pi/2$ ($a_1<a_2$).
{\it Singularities.}
No solution exists globally for $t\in\mathbb{R}$. The solution becomes singular in
finite time when $a_1-a_2$ reaches zero, which means that the
asymptotic angle $\theta$ reaches $\pi/4$.
If initially $\theta<\pi/4$ and $\dot a_1<0$
the solution becomes singular as $t$ increases,
but exists globally for $t<0$ with $a_1\to\infty$ as $t\to-\infty$.
The same happens if $\theta>\pi/4$ and $\dot a_1>0$.
The reverse happens if $\theta<\pi/4$ and $\dot a_1>0$,
or if $\theta>\pi/4$ and $\dot a_1<0$---the solution exists globally
for $t>0$ with $a_1\to\infty$ as $t\to+\infty$.
In all cases, the {\em free surface shape remains smooth} approaching a singular time.
If the Taylor sign condition holds and $t$ increases approaching
singularity, the angle between the asymptotes widens and approaches $90^{\circ}$.
The pressure and fluid velocity blow up {\em everywhere},
since $\alpha_1=\dot a_1/a_1$ blows up.
Of course, the domain is unbounded and the energy is infinite, so
it is unclear whether this is relevant for any finite energy flow.
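The finite-time blow-up is easy to reproduce numerically (our own sketch, with illustrative initial data): integrating $\ddot a_j=\beta\sigma_j/a_j$ with $\sigma=(-1,+1)$ from a state with $\theta<\pi/4$ and $\dot a_1<0$, the gap $a_1-a_2$ closes in finite time while the axis speed $|\dot a_1|$ grows:

```python
# Sketch (ours) of the 2D hyperbola blow-up: with sigma = (-1, +1),
# theta < pi/4 and a_1' < 0, the gap a_1 - a_2 closes in finite time
# while |a_1'| grows; we stop shortly before a_1 = a_2 via an event.
import numpy as np
from scipy.integrate import solve_ivp

sigma = np.array([-1.0, 1.0])

def rhs(t, y):
    a, adot = y[:2], y[2:]
    beta = np.sum(adot**2 / a**2) / np.sum(sigma / a**2)
    return np.concatenate([adot, beta * sigma / a])

def near_singular(t, y):          # stop shortly before a_1 = a_2
    return (y[0] - y[1]) - 0.05
near_singular.terminal = True

y0 = np.array([2.0, 0.5, -0.5, 0.125])   # a1*a2 = 1 and sum a_j'/a_j = 0
sol = solve_ivp(rhs, (0.0, 50.0), y0, events=near_singular,
                rtol=1e-10, atol=1e-12)

assert sol.status == 1                 # the gap closes in finite time
assert abs(sol.y[2, -1]) > abs(y0[2])  # |a_1'| has grown on approach
```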
{\it Corners.} No free-surface singularity occurs in any solution we have discussed so far.
As Longuet-Higgins \cite{Longuet1980} pointed out, one obtains a simple flow
containing a corner for all time, in a limit obtained by ``zooming out.''
Here it corresponds to taking $\sigma_0=0$, so that for example $\Omega_t$ corresponds
to the sector of the plane where
\[
\frac{x_1}{a_1(t)}>\frac{|x_2|}{a_2(t)}
\]
The same equations \eqref{e:a1a2} and
\eqref{ev:a1dot} govern the evolution of the sector opening angle.
As above, the Taylor sign condition holds if the corner angle
$2\theta$ is less than $90^\circ$ and is violated if it is greater
than $90^\circ$. Blowup occurs in the same ways as before.
The condition $2\theta<90^\circ$ is consistent
with the theory for water waves with persistent corners
developed by Kinsey and Wu \cite{KinseyWu2018} and Wu~\cite{wu2015blow},
since corners with angles less than $90^\circ$ have the
finite ``energy'' defined in \cite{KinseyWu2018} necessary to apply their theory.
\section{Locally singular ballistic interfaces}
\label{s:ZK}
Recently, Zubarev and Karabut \cite{ZK2018} and Zhuravleva et al.~\cite{zhuravleva2020new}
have described rather explicit examples of ideal fluid flows on unbounded fluid domains
that are capable of developing local singularities on the free surface.
These examples provide solutions of the ideal droplet equations
\eqref{1.laplace}--\eqref{1.pzero}
that are derived from particular holomorphic solutions of the complex Hopf equation
or inviscid Burgers equation
\begin{equation}\label{e:hopf}
V_t + V V_z = 0 \quad \mbox{for $z\in\Omega_t$.}
\end{equation}
Here $z=x+iy\in\Omega_t\subset\C$ corresponds to Eulerian variables in the fluid domain.
A solution of \eqref{e:hopf} corresponds to a solution of \eqref{1.laplace}--\eqref{1.pzero} via
\begin{equation}\label{d:ZK1}
V= u-iv=\phi_x-i\phi_y \,,
\end{equation}
{\em provided} (i) $V$ is holomorphic in $z$ on $\Omega_t$, (ii) the pressure-velocity relation
\begin{equation}\label{e:zkp}
p = - v^2 \quad\mbox{in $\Omega_t$}
\end{equation}
holds, and (iii) the pressure vanishes on $\D\Omega_t$, i.e., \eqref{1.pzero} holds.
This last condition means that fluid particles on the boundary move purely horizontally,
and indeed the boundary must satisfy
\begin{equation}\label{e:imU}
\im V = 0 \quad \mbox{on $\D\Omega_t$.}
\end{equation}
As one can verify by straightforward computation,
the real and imaginary parts of the Hopf equation yield Euler's equations,
noting $p_x = -2vu_y$, $p_y=-2vv_y$.
The characteristic curves $Z(t)$ for the Hopf equation are straight lines that satisfy
\[
\frac{d Z}{d t} = V(Z(t),t), \qquad \frac{d}{dt} V(Z(t),t)= 0 .
\]
When $v\ne0$, these curves are not fluid particle paths.
However, {\em on the free surface where $p=0$ they are particle paths}.
Consequently, particle paths on the surface evolve
{\em in straight lines, horizontally at constant speed}.
In \cite{ZK2018}, the authors find solutions
by solving implicitly characteristic equations in the form
\begin{equation}\label{e:zF}
z = Vt + F(V).
\end{equation}
Here $F(V)\to0$ as $V\to\infty$ for the values of $V$ relevant to the solution,
and $F$ should be chosen to avoid singularities when $z$ is in the fluid domain.
The case $F(V)=1/(V+i)$ is the simplest one that provides local singularities.
In this case one can use the horizontal velocity $u$ to parametrize
the free surface via
\begin{equation}\label{e:zgplot}
z = tu + \frac{1}{u+i} = tu+\frac{u}{u^2+1} - \frac{i}{u^2+1}, \quad u\in\mathbb{R}.
\end{equation}
We plot this surface for $t=-4,-3,-2,-1$ in Fig.~\ref{f:ZGplot}.
The surface is a smooth graph $y=\gamma(x,t)$ for $t<-1$,
since $dx/du<0$ for all $u$.
A cusp develops at $t=-1$, having $y\sim-1+|x|^{2/3}$.
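The graph property and the cusp time are easy to confirm numerically: from \eqref{e:zgplot}, $dx/du = t + (1-u^2)/(1+u^2)^2$, whose maximum over $u$ is $t+1$, attained at $u=0$. A short Python check (our own illustration):

```python
import numpy as np

# Interface parametrization z = t u + 1/(u + i) from eq. (e:zgplot):
def interface(u, t):
    x = t*u + u/(u**2 + 1)
    y = -1.0/(u**2 + 1)
    return x, y

# dx/du = t + (1 - u^2)/(1 + u^2)^2 has maximum t + 1 at u = 0, so the
# surface is a smooth graph y = gamma(x, t) exactly for t < -1.
dxdu = lambda u, t: t + (1 - u**2)/(1 + u**2)**2
u = np.linspace(-5, 5, 10001)
assert dxdu(u, -2.0).max() < 0        # still a graph at t = -2
assert dxdu(0.0, -1.0) == 0.0         # parametrization degenerates at t = -1

# Local cusp form at t = -1: x ~ -u^3, y ~ -1 + u^2, i.e. y + 1 ~ |x|^(2/3).
x, y = interface(1e-3, -1.0)
assert abs((y + 1) - abs(x)**(2/3)) < 1e-8
```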
\begin{figure}
\includegraphics[width=3.5in]
{Figsyau/ZGplot.pdf}
\put(-126,2){\large $x$}\put(-247,93){\large $y$}
\put(-52,96){\vector(-1,0){30}}
\put(-195,96){\vector(1,0){30}}
\caption{The interface in \eqref{e:zgplot} for $t=-4,-3,-2,-1$ (from bottom to top)}
\label{f:ZGplot}
\end{figure}
Very recently, Zhuravleva et al.~\cite{zhuravleva2020new} have described a different family
of solutions of the ideal droplet equations that describe unbounded flows surrounding a
collapsing cavity. They use holomorphic solutions to the complex Hopf equation \eqref{e:hopf}
to determine fluid velocity in a different way, namely
\begin{equation}\label{d:ZK2}
u-iv = \frac1V,
\end{equation}
and impose a different pressure-velocity relation, namely
\begin{equation}\label{zkp2}
p = \frac12 \log(u^2+v^2) -\frac{u^2+v^2}2 + \frac12 \,.
\end{equation}
On the fluid boundary in this case, vanishing pressure necessitates the condition
\begin{equation}
|V|=1 \quad\mbox{for $z\in\D\Omega_t$.}
\end{equation}
Then on the free surface, one finds ballistic particle paths that coincide with
characteristics according to the relations
\begin{equation}\label{e:Vinv}
z = (u+iv)t + z_0 = Vt + G(V) \,.
\end{equation}
The fluid interface can be determined parametrically by using \eqref{e:Vinv} with the relation
$V=e^{i\theta}$
on the fluid boundary. Corresponding to the choice
\begin{equation}\label{e:Fcavity}
G(V) = \frac{4a V}{1-b^4V^4}, \quad a=-0.2,\quad b=1.2,
\end{equation}
the authors in \cite{zhuravleva2020new} show that the cavity collapses to a splash singularity, as shown in
the left panel of Fig.~\ref{f:cavity},
where the interface is shown at times $t=-3,-2,-1.03$ as in \cite{zhuravleva2020new}.
In the right panel, we take $a=0.2$ instead and plot at the times $t=-11,-8,-5,-2$.
The figure indicates that a local singularity forms at a time $t\approx -5$ and loses physical meaning
after a self-intersection appears.
Indeed, a local singularity must appear at the time $t=-G'(1)\approx -5.01176$
when the boundary parametrization degenerates.
(For sufficiently large negative times, $\partial z/\partial V\ne0$ for $|V|=1$ and
injectivity of the map $V\mapsto z$ for $|V|<1$ follows by classical criteria,
see section~\ref{ss:criteria} below.)
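The quoted degeneration time is simple to reproduce: by the quotient rule, $G'(V)=4a(1+3b^4V^4)/(1-b^4V^4)^2$, and $\partial z/\partial V = t+G'(V)$ first vanishes on $|V|=1$ at $V=1$. A short Python check (our own):

```python
# G(V) = 4 a V / (1 - b^4 V^4) as in eq. (e:Fcavity), with a = 0.2, b = 1.2.
a, b = 0.2, 1.2

def Gp(V):
    # Quotient rule: G'(V) = 4 a (1 + 3 b^4 V^4) / (1 - b^4 V^4)^2
    return 4*a*(1 + 3*b**4*V**4) / (1 - b**4*V**4)**2

t_sing = -Gp(1.0)
assert abs(t_sing - (-5.01176)) < 1e-4

# Cross-check G' by a centered finite difference of G itself:
G = lambda V: 4*a*V / (1 - b**4*V**4)
h = 1e-6
assert abs((G(1+h) - G(1-h))/(2*h) - Gp(1.0)) < 1e-5
```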
\begin{figure}
\includegraphics[width=3.0in]
{Figsyau/cavity-splash-timem3m2m1_03.pdf}
\includegraphics[width=3.0in]
{Figsyau/cavity-cusp-timesm11m8m5m2.pdf}
\caption{Collapsing cavity with splash and local singularities. \newline
(Left: $a=-0.2$, $t=-3,-2,-1.03$. Right: $a=0.2$, $t=-11,-8,-5,-2$.) }
\label{f:cavity}
\end{figure}
For all of the singular solutions found in \cite{ZK2018} and \cite{zhuravleva2020new},
the fluid particles on the free surface experience zero acceleration. Indeed, the gradient
of the pressure vanishes at the free surface
in each of the respective cases \eqref{e:zkp} and \eqref{zkp2}.
By consequence we have
\begin{equation}
\frac{\D p}{\D n} =0 \quad\mbox{on $\D\Omega_t$},
\end{equation}
so the strict Taylor sign condition \eqref{e:taylor} does not hold.
Necessarily, $p<0$ inside the fluid domain, in fact, for both cases \eqref{e:zkp} and \eqref{zkp2}.
While the solutions in \cite{ZK2018} are certainly interesting,
it therefore seems difficult to imagine how they
might approximate solutions of \eqref{1.laplace}--\eqref{1.pzero} in bounded domains,
since for the latter, $-p$ is always subharmonic due to \eqref{1.bernoulli},
so $p>0$ in the fluid domain.
\section{Numerical evidence for 2D local singularities}
\label{s:numerics}
In this section our goal is to study the possible development of local singularities
in smooth ideal potential flows through the use of several numerical illustrations and experiments.
Initially we expected that with zero gravity and surface tension, corners in
the free surface would form rather easily, as the fluid `tries to move ballistically'
except for the pressure term that maintains incompressibility.
As illustrated in the first examples below, however, our experience is consistent
with the observations and remarks of Longuet-Higgins \cite{Longuet1983},
who used three-dimensional Dirichlet hyperboloids to
explain jets in several kinds of fluid experiments,
and argued that such hyperboloidal jets may be characteristic of
other types of unsteady free-surface flows.
As we illustrated in \cite{LPmajda}, it is not difficult to find and compute flows
that exhibit a splash singularity, with interface that self-intersects at some positive time.
By varying parameters, we attempted to find flows with local singularities forming as
self-intersection points merge together.
But instead we found a tendency for strongly curved interfaces to be unstable through
the formation of small-scale (presumably hyperbolic) jets.
In \cite{LPmajda}, this led us to consider the expedient of exploiting the time-reversal
symmetry of the Euler equations. We computed solutions \emph{expanding away}
from a corner. Starting with a sequence of smooth approximations to a nonsmooth
fluid domain, our computations suggested convergence to a smooth interface
with bounded curvature at positive time.
In the last subsection below, we extend these computations using
equations \eqref{e:QU} instead of \eqref{e:ZF}, and with different initial data.
The results are consistent with the previous ones in \cite{LPmajda}, and are suggestive of
a self-similar scaling hypothesis for a two-parameter family of smooth solutions
starting from an interface formed by an infinite wedge with power-law initial velocity.
We also provide a heuristic explanation of the scaling exponents that are observed here
and were first seen in \cite{LPmajda}.
\subsection{Conformal formulations and a pseudospectral scheme}
We perform our computations using a filtered pseudospectral discretization
of the equations of motion in a conformal formulation.
A well-known advantage of this approach is that the Dirichlet-to-Neumann map
for the fluid domain is replaced by that for the reference domain, which is easier to compute.
For the case that we study here, we will take the reference domain to be the unit disk $\bbD\subset\C$.
With this choice we can make use of the M\"obius automorphisms of $\bbD$ to concentrate grid points
in some zone of high curvature. An analogous transformation for periodic water waves was
described in \cite{Lushnikov2017}. This method is convenient, but is limited in its capability
to resolve fine-scale flow features, as compared to more flexible
boundary integral methods with adaptive grid refinement, say.
{\it Formulations.}
We refer to the appendix for a detailed derivation of the two conformal formulations
that we make use of. Briefly, we let $z=x+iy$ denote
complexified Eulerian coordinates in the fluid domain $\Omega_t\subset\C$.
This domain is assumed to be parametrized by a conformal map
$w\mapsto \mb Z(w,t)$, $w\in\bbD$.
The boundary $\D\Omega_t$ is then parametrized by $\theta\in\bbT=\mathbb{R}/2\pi\bbZ$
via
\[
z = Z(\theta,t) := {\mb Z}(e^{i\theta} ,t) ,
\quad \theta\in\bbT.
\]
Since $Z = X+iY$ provides the boundary values of a holomorphic function
in $\bbD$, the real part determines the imaginary part by the
Hilbert transform. With the expansion
\begin{equation}\label{e:Zk}
Z = \sum_{k\in\bbZ} \hat Z_k(t) e^{ik\theta}, \quad \hat Z_k = \hat X_k+i\hat Y_k,
\end{equation}
we have (presuming $\hat Y_0(t)=0$ for convenience)
\[
\mbox{$Y=HX$,\quad meaning}\quad
\hat Y_k(t) = (-i\sgn k)\hat X_k(t).
\]
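In discrete form, the relation $Y=HX$ is applied mode by mode with the FFT. The following Python sketch (our own illustration; the computations reported below were done in julia) fixes the convention:

```python
import numpy as np

def hilbert_circle(X):
    """Y = H X with Fourier symbol -i sgn(k); the k = 0 mode is set to
    zero, matching the normalization hat{Y}_0 = 0 used in the text."""
    N = len(X)
    k = np.fft.fftfreq(N, d=1.0/N)            # integer wavenumbers
    Yhat = -1j*np.sign(k)*np.fft.fft(X)
    return np.real(np.fft.ifft(Yhat))

N = 64
theta = 2*np.pi*np.arange(N)/N
# X + iY is the boundary value of a function holomorphic in the disk:
# for X = cos(m theta) this gives Y = sin(m theta).
for m in (1, 3, 7):
    assert np.allclose(hilbert_circle(np.cos(m*theta)), np.sin(m*theta),
                       atol=1e-12)
```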
The first conformal formulation involves
$\bm Z(w,t)$, the conformal parametrization of the fluid domain,
and $\bm F(w,t)$, the complex velocity potential.
Under the simplest conditions
that uniquely fix the fluid parametrization, which are
\begin{equation}
\frac{d}{dt} \bm Z(0,t) = 0, \qquad \frac{d}{dt}\arg \bm Z_w(0,t) = 0,
\end{equation}
the evolution equations for these quantities take the following form:
\begin{align}
\label{e:ZF}
\bm Z_t &= \bm Z_w \bm G,
\qquad
\bm F_t = \bm F_w \bm G - \bm R,
\end{align}
where the traces $G, R$ of the holomorphic functions $\bm G, \bm R$ are respectively given by
\begin{align}
\label{d:GR}
G &= w(I+iH) \re\left(\frac Un\right),
\qquad R = (I+iH)\left(\frac12{|U|^2} \right).
\end{align}
Here surface pressure and body forces have been taken as zero.
In these expressions, $U$ and $n$ are the traces of the (anti-holomorphic)
velocity ${\bm U}$ and (unnormalized) normal vector $\bm n$, given by
\begin{equation}
\bar{\bm U} = \frac{\bm F_w}{\bm Z_w}, \qquad \bm n = w\bm Z_w.
\end{equation}
We make use of a second conformal formulation in order to study dynamics in
a very large domain approximating an infinite wedge.
The holomorphic function
\begin{equation}
\bm Q = \frac 1{\bm Z_{w}},
\qquad
\end{equation}
evolves together with $\bar{\bm U}$ according to the equations
\begin{align}
\label{e:QU}
\bm Q_t &= \bm Q_w \bm G - \bm Q \bm G_w \,,
\qquad
\bar{\bm U}_t = \bar{\bm U}_w \bm G - \bm Q \bm R_w \,,
\end{align}
with the traces of $\bm G$ and $\bm R$ given as in \eqref{d:GR}.
Essentially this same formulation was described by A.~I.~Dyachenko in \cite{dyachenko2001dok}
and was used recently by S.~A.~Dyachenko \cite{dyachenko2019jfm}
to compute bounded ideal droplet solutions with and without surface tension.
In each of the two formulations, we compute by evolving just the real parts
of the traces and determining the imaginary parts using the Hilbert transform.
To recover the boundary parametrization $Z$ from the second formulation
in a nonsingular way for large domains not encircling $0$,
it is sometimes convenient to write
\begin{equation}
\label{d:S}
\bm S = \frac 1{\bm Z}
\end{equation}
(or some other analytic function of $1/\bm Z$)
and evolve $\bm S$ (actually the real part of its trace) along with \eqref{e:QU} according to
\begin{equation}
\label{e:S}
\bm S_t = \bm S_w \bm G \,.
\end{equation}
When $1/\bm Q$ is not singular, we recover $Z$ by integrating with respect to $w$
using the fast Fourier transform as indicated below.
{\it Verification of \eqref{e:QU}.} For completeness we derive \eqref{e:QU} from \eqref{e:ZF}.
Since $\bar{\bm U}=\bm Q\bm F_w$, we get
\[
\bm Q_w = -\bm Q^2 \bm Z_{ww}\,,
\qquad \bar{\bm U}_w = \bm Q \bm F_{ww} + \bm Q_w \bm F_w\,,
\]
\[
\bm Z_{wt} = \bm Z_{ww} \bm G + \bm Z_w \bm G_w\,,
\qquad \bm F_{wt} = \bm F_{ww} \bm G + \bm F_w \bm G_w-\bm R_w\,.
\]
Then it follows
$
\bm Q_t = -\bm Q^2 \bm Z_{wt} = \bm Q_w \bm G - \bm Q \bm G_w
$
and
$\bar{\bm U}_t = \bm Q_t \bm F_w + \bm Q \bm F_{wt}$, so
\begin{align*}
\bar{\bm U}_t &= (\bm Q_w \bm G- \bm Q \bm G_w)\bm F_w + \bm Q( \bm F_{ww} \bm G + \bm F_w \bm G_w - \bm R_w)
= \bar{\bm U}_w \bm G - \bm Q \bm R_w \,.
\end{align*}
{\it Discretization.}
We use a straightforward pseudospectral scheme to
discretize the equations in space, using grid points $\theta_j=jh$,
$j=1,\ldots,N$, $h=2\pi/N$. For the system \eqref{e:ZF},
we first express the equations in real form in terms of the operator
$\D_\theta = iw\D_w$, and then filter all derivatives by
replacing $\D_\theta$ with Fourier symbol $ik$
by $\calD_\rho$ with Fourier symbol
\[
\hat\calD_\rho(k) = ik\,\rho(hk), \qquad \rho(\xi)=
\exp\bigl(-10\,|\xi/\pi|^{15}\bigr).
\]
This filter is similar to that used in \cite{HouLi07}.
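In FFT terms, the filtered derivative multiplies the $k$-th Fourier coefficient by $ik\,\rho(hk)$. A Python sketch of this operation (our own illustration; we write the filter with $|\xi|$ so that it is even in the wavenumber):

```python
import numpy as np

def filtered_deriv(f, p=15, c=10.0):
    """Spectral theta-derivative with high-order exponential filter:
    hat{D_rho f}(k) = i k rho(h k) hat{f}(k),  rho(xi) = exp(-c |xi/pi|^p)."""
    N = len(f)
    h = 2*np.pi/N
    k = np.fft.fftfreq(N, d=1.0/N)
    rho = np.exp(-c*np.abs(h*k/np.pi)**p)
    return np.real(np.fft.ifft(1j*k*rho*np.fft.fft(f)))

N = 256
theta = 2*np.pi*np.arange(N)/N
# Low modes pass through the filter essentially unchanged:
err = np.max(np.abs(filtered_deriv(np.sin(3*theta)) - 3*np.cos(3*theta)))
assert err < 1e-10
# The highest retained modes are strongly damped: rho(pi) = exp(-10).
assert np.exp(-10.0) < 1e-4
```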
We use a standard ODE solver in the julia OrdinaryDiffEq package
for time integration, with tolerance set to $10^{-9}$ or smaller.
For system \eqref{e:QU} we convert real parts to complex analytic form
by the discrete Hilbert transform, e.g., representing $Q(\theta_j,t)$,
$j=1,\ldots,N$ by
\begin{equation}
Q_j(t) = \sum_{k=0}^{N/2-1} \hat Q_k(t) e^{ik\th_j} \,,
\end{equation}
then compute filtered derivatives by using the fast Fourier transform to evaluate
\begin{equation}
(\calD_\rho Q)_j = \sum_{k=1}^{N/2-1} ik\rho(hk) \hat Q_k(t) e^{i(k-1)\th_j} .
\end{equation}
We sometimes found it useful for numerical stability to additionally filter
the solution after each time step.
We recover the interface position when $1/Q$ is nonsingular using the formula
\begin{equation}
Z_j(t) = \sum_{k=1}^{N/2-1} \frac{\hat c_{k-1}(t)}k e^{ik\th_j} \,,
\end{equation}
assuming $\bm Z(0,t)=0$,
where the coefficients $\hat c_k(t)$ are the discrete Fourier coefficients of $1/Q$.
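The integration step can be tested on a toy map where $Z$ is known in closed form (our own example, not from the paper's test suite): for $Z(w)=w+0.1\,w^2$ one has $1/Q=Z_w=1+0.2\,w$, and the displayed formula reproduces $Z$ on the grid to machine precision.

```python
import numpy as np

N = 64
theta = 2*np.pi*np.arange(N)/N
w = np.exp(1j*theta)

# Toy conformal map Z(w) = w + 0.1 w^2, so 1/Q = Z_w = 1 + 0.2 w, Z(0) = 0.
Zexact = w + 0.1*w**2
Q = 1.0/(1 + 0.2*w)

# Discrete Fourier coefficients hat{c}_k of 1/Q (analytic: only k >= 0 matter):
c = np.fft.fft(1.0/Q)/N
# Z_j = sum_{k=1}^{N/2-1} (hat{c}_{k-1}/k) e^{i k theta_j}:
k = np.arange(1, N//2)
Z = (c[k-1]/k) @ np.exp(1j*np.outer(k, theta))
assert np.allclose(Z, Zexact, atol=1e-12)
```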
{\it Accuracy check.}
We checked the accuracy of the numerical scheme for a Dirichlet ellipse
as described in section~\ref{ss:ellipse} above,
with initial data for \eqref{e:ZF} given by
\begin{equation}
Z(\th,0) = e^{i\th}, \qquad F(\th,0)= e^{2i\th}.
\end{equation}
This corresponds to $a_1(0)=a_2(0)=1$ in \eqref{d:Omegat} and
$\alpha_1(0)=1$, $\lambda(0)=0$ in \eqref{d:phi0}.
To check how closely the solution conforms to an ellipse, we use
an explicit conformal map from ellipse to disk given by
\begin{equation}\label{e:Nehari}
z=x+iy \mapsto w=\calW_q(z) := \sqrt{k(q)}\sn\left(\frac{2K}\pi \sin\inv z;q\right),
\quad q = \left(\frac{a-b}{a+b}\right)^2\,.
\end{equation}
Here $\sn$ is the Jacobi elliptic function with parameters $q$, $k(q)$, and $K=K(q)$,
with notation as in \cite[p. 296]{Nehari}.
To evaluate this function, we ported the Matlab routine ELLIPJI by I.
Moiseev~\cite{elliptic} to julia.
For each system \eqref{e:ZF} and \eqref{e:QU}, we tabulate in Table~\ref{t:errs}
the maximum pointwise error $E=E_{ZF}$ or $E_{QU}$ respectively, given by
\[
E = \max_j |\calW_q(Z_j) - e^{i\th_j}|
\]
at time $t=0.25$, assuming the value $a=\frac1b\approx 1.278$ in \eqref{e:Nehari} is
given by $\re Z_j(t)$ with $j=0$ from the computed solution.
We specified a tolerance of $10^{-12}$ to the ODE solver for these computations.
\begin{table}
\begin{tabular}{|l | l | l|}
\hline
N & $E_{ZF}$ & $E_{QU}$
\\ [0.5ex] \hline
64 & 7.945e-4 & 9.475e-4
\\ \hline
128 & 8.078e-6 & 8.315e-6
\\ \hline
256 & 8.576e-9 & 3.175e-9
\\ \hline
512 & 1.342e-12 & 2.057e-13
\\ \hline
1024 & 1.898e-11 & 3.096e-14
\\ \hline
\end{tabular}
\medskip
\caption{Maximum-norm position errors for elliptical test case at $t=0.25$}
\label{t:errs}
\end{table}
\subsection{Examples with developing jets}
\subsubsection{Initial velocity with five-fold symmetry}
In our first example we take the initial shape as a circle,
with five-fold symmetric initial velocity, corresponding to
\[
\bm Z(w,0) = w, \quad \bm F(w,0) = -0.15 w^5,
\quad \bm Q(w,0)=1, \quad \bar{\bm U}(w,0)= -0.75 w^4.
\]
We computed the solution from \eqref{e:QU} with $N=2^{14}$ grid points and plot
the solution along with a quiver plot of velocity,
at time $t=0.3$ in Fig.~\ref{fig5fold}.
\begin{figure}
\includegraphics[width=4.5in]
{Figsyau/Euler-5gon-T0_3-N16384.pdf}
\vspace{-0.8cm}
\caption{Interface from five-fold symmetric initial velocity}
\label{fig5fold}
\end{figure}
The interface shows the development of regions of high curvature
that may be incipient jets or corners. The arrows, which are plotted
at consecutive grid points, indicate that the uniformly spaced grid
on the parametrizing circle is being stretched severely as the jets develop.
\subsubsection{Initial velocity with single mode}
To study whether the protuberances that develop in the previous example
might develop into corners, in \cite{LPmajda} we considered an initial velocity
that produces a single tip. This allows us to use a M\"obius transformation
to concentrate grid points in the single region of high curvature
and resolve the computation for a longer time.
Thus we solve equations \eqref{e:ZF} with initial data satisfying
\begin{equation}\label{e:nosedata}
{\mb Z}(w,0) = \zeta_r(w):= \frac{w+r}{1+rw}\,,
\qquad \re F(w,0) =
\left(\frac{\re \mb Z(w,0)+1}2\right)^5\,.
\end{equation}
Corresponding to compressing the grid by the factor
\begin{equation}\label{d:compress}
c = \left( \frac{1+r}{1-r}\right)^2 = 250 ,
\end{equation}
we take $r\approx0.881$. Figure~\ref{fig1mod}, taken from \cite{LPmajda},
shows the interface computed at time $t=0.6$ with $N=1024$ points,
compared with a hyperbola of the form
\begin{equation}
\frac{(x-x_0)^2}{a^2}-\frac{y^2}{b^2}=1,
\quad a=0.532,\ \ b=0.199, \ \ x_0=2.398.
\end{equation}
This hyperbola was found using the polyfit function in julia by fitting 150
values of $Y^2$ to a quadratic function of $X$.
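The fit itself is elementary least squares; here is a numpy version on synthetic hyperbola data (our own sketch -- the computation in the text used julia's polyfit). Writing $y^2=(b/a)^2(x-x_0)^2-b^2$, the three polynomial coefficients determine $a$, $b$, $x_0$.

```python
import numpy as np

# For one branch of (x - x0)^2/a^2 - y^2/b^2 = 1 one has
#   y^2 = (b/a)^2 x^2 - 2 (b/a)^2 x0 x + ((b/a)^2 x0^2 - b^2).
a, b, x0 = 0.532, 0.199, 2.398          # target values from the text
x = np.linspace(x0 + 1.05*a, x0 + 3*a, 150)
y2 = b**2*((x - x0)**2/a**2 - 1)

p2, p1, p0 = np.polyfit(x, y2, 2)       # y^2 ~ p2 x^2 + p1 x + p0
x0_fit = -p1/(2*p2)
b_fit = np.sqrt(p2*x0_fit**2 - p0)
a_fit = b_fit/np.sqrt(p2)
assert np.allclose([a_fit, b_fit, x0_fit], [a, b, x0], atol=1e-8)
```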
\begin{figure}
\includegraphics[height=3.5in]
{Figsyau/fithyperbola-t0_6.pdf}
\caption{Single-mode initial velocity with fit to hyperbola}
\label{fig1mod}
\end{figure}
The excellent fit of the hyperbola to the ``Pinocchio-like nose''
developing from the fluid domain suggests that no singularity
will ever form as time increases. Rather the nose should grow
without bound, with decreasing angle between the asymptotes
of the hyperbola, like Longuet-Higgins' exact Dirichlet hyperbola solutions
that we described in section~\ref{s:dirichlet}.
\subsection{A scenario for corner formation}
\label{ss:corners}
\subsubsection{Initial data.} As discussed earlier, we seek to approximate a smooth flow expanding
away from a sharp corner. To do so, we will specify an initial interface that is a
smooth approximation to a wedge-shaped domain $\Omega_\Theta$
with opening angle $\Theta\in(0,\pi)$ (as measured outside the fluid domain).
The wedge domain can be parametrized by the unit disk $\bbD$ by composing
the map defined by
\begin{equation}
\zeta_\Theta(w):= w^\pow, \qquad \pow = 2-\frac{\Theta}{\pi}\in(1,2),
\end{equation}
that takes the right half plane $\Re w>0$ onto $\Omega_\Theta$, with a map
\begin{equation}
\zeta_+(w) = C_+\left(-1 + \frac{2}{1-w}\right), \quad C_+>0,
\end{equation}
that takes the unit disk onto the right half plane.
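One can check numerically that the composition $\zeta_\Theta\circ\zeta_+$ carries the unit circle onto the two wedge rays $\arg z=\pm\frac\pi2\pow$ (our own Python sketch; the value $C_+=1$ here is arbitrary):

```python
import numpy as np

Theta = np.pi/3                  # 60-degree wedge
nu = 2 - Theta/np.pi             # = 5/3
Cp = 1.0

zeta_plus  = lambda w: Cp*(-1 + 2/(1 - w))   # disk -> right half plane
zeta_Theta = lambda w: w**nu                 # half plane -> wedge (principal branch)

theta = np.linspace(0.1, 2*np.pi - 0.1, 200) # avoid w = 1 (maps to infinity)
w = np.exp(1j*theta)
zp = zeta_plus(w)
# The unit circle goes to the imaginary axis:
# zeta_plus(e^{i theta}) = i Cp cot(theta/2).
assert np.max(np.abs(zp.real)) < 1e-10
# ... and then to the two wedge rays arg z = +/- nu*pi/2:
z = zeta_Theta(zp)
assert np.allclose(np.abs(np.angle(z)), nu*np.pi/2, atol=1e-10)
```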
To fashion a smooth, bounded approximation to this infinite, singular domain,
we use a map that takes the unit disk slightly inside itself,
creating a small `dimple' (with an approximately Gaussian shape)
near the point $w=-1$, according to the prescription
\[
\zeta_d(w ) =
w \exp\left( -(I+iH)
\left(\varepsilon_1\cos^{\pow_1}\frac\vartheta2+\varepsilon_2\cos^{\pow_2}\vartheta\right) \right ),
\quad \vartheta = \arg(-w),
\]
where we take $\varepsilon_1=0.1$, $\pow_1= 81$, $\varepsilon_2=10^{-5}$, $\pow_2=20\pow_1$.
Finally, the initial interface at time $t_0=1$ is determined by the composition
\begin{equation}
\bm Z(w,t_0) = \zeta_\Theta\circ \zeta_+\circ \zeta_d\circ \zeta_r(w),
\end{equation}
where the first map $\zeta_r$ is the M\"obius automorphism of the disk $\bbD$
from \eqref{e:nosedata} and is used to concentrate points on one side of the circle,
and we take $C_+=2/\varepsilon_1$ which results in $\bm Z(-1,t_0)\approx 1$.
For the wedge domain, a holomorphic velocity potential
$f_\Theta(z) = z^{\alpha}$ determines a velocity field $(u,v)$ via
\begin{equation}\label{e:zalpha}
u-iv = {f'_\Theta(z)} = \alpha z^{\alpha-1} \,.
\end{equation}
We use this velocity formula to determine initial data for the parametrized domain via
\[
\bar{\bm U}(w,t_0) = f'_\Theta\circ\bm Z(w,t_0).
\]
On the wedge boundary where $z\in\D\Omega_\Theta$, we have
$\arg z = \frac\pi2 \pow$ and $\arg(u+iv)=(1-\alpha)\arg z$.
If we make the choice (as was always done in \cite{LPmajda})
\begin{equation} \label{choice:a}
\alpha=\frac1\pow \,,
\end{equation}
then
$\alpha\arg z = \frac\pi2$ and the velocity is normal to the wedge boundary.
Below we exhibit examples both with and without this choice,
corresponding to the two cases $\alpha \pow = 1$ and $\alpha \pow = \frac34$.
\begin{figure}
\phantom{h}\hspace{-0.3cm}
\includegraphics
[height=3.0in]
{Figsyau/ic-dimple-N2p15small.pdf}
\qquad
\includegraphics
[height=3.0in]
{Figsyau/ic-dimple-N2p15large.pdf}
\caption{The interface $Z(\th,t_0)$ for $\pow=1$, $N=2^{15}$ }
\label{figdimp}
\end{figure}
{\em Parameters.}
For all the computations reported here, we discretize $\theta\in[0,2\pi)$ using $N=2^{15}$ points.
We take the grid compression ratio in \eqref{d:compress} to be
$c=20000$ in the case $\alpha \pow=1$,
and $c=4000$ in the case $\alpha \pow=\frac34$, and determine $r$ accordingly.
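Solving \eqref{d:compress} for $r$ gives $r=(\sqrt c-1)/(\sqrt c+1)$, and the boundary stretching of the M\"obius map confirms the compression ratio; a short Python check (our own):

```python
import numpy as np

def r_from_c(c):
    s = np.sqrt(c)
    return (s - 1)/(s + 1)

# zeta_r(w) = (w + r)/(1 + r w) has derivative (1 - r^2)/(1 + r w)^2.
# The ratio |zeta_r'(-1)| / |zeta_r'(1)| of image spacings at the two ends
# of the diameter equals c; grid points concentrate near zeta_r(1) = 1.
def dzeta_r(w, r):
    return (1 - r**2)/(1 + r*w)**2

for c in (250.0, 4000.0, 20000.0):
    r = r_from_c(c)
    ratio = abs(dzeta_r(-1.0, r))/abs(dzeta_r(1.0, r))
    assert abs(ratio/c - 1) < 1e-12

assert abs(r_from_c(250.0) - 0.881) < 1e-3   # the value quoted earlier
```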
In Fig.~\ref{figdimp} we illustrate the effect of the regularizing map $\zeta_d$
by taking $\pow=1$ and plotting $Z(\th,t_0)=\bm Z(e^{i\th},t_0)$.
The left panel indicates there is a large region around $0$ where the interface is
very close to flat, aside from a roughly Gaussian-shaped depression.
The right panel shows how the behavior of the interface near infinity is regularized
by the factor in $\zeta_d$ with $\varepsilon_2$, yielding $\bm Z(1,t_0)\approx 4/(\varepsilon_1\varepsilon_2)=4\times10^6$.
In Fig.~\ref{f:IC60deg} we plot $-X$ vs $Y$ and corresponding velocity
for the initial interface in the case $\Theta=60^\circ$, $\pow=\frac53$, $\alpha \pow=\frac34$.
In this orientation, `water' is below `air' in the zoomed-in left panel.
The arrows indicate the initial velocity in the case $\alpha \pow=\frac34$,
but we use the same initial interface also in the case $\alpha \pow=1$.
\begin{figure}
\phantom{h}\hspace{-0.3cm}
\includegraphics
[width=3.0in]
{Figsyau/wedge-ap075-angle60deg-time1zoom.pdf}
\includegraphics
[width=3.0in]
{Figsyau/wedge-ap075-angle60deg-time1.pdf}
\caption{Initial interface for $\Theta=60^\circ$
and velocity for $\alpha =\frac9{20}$ in `water' orientation}
\label{f:IC60deg}
\end{figure}
\subsubsection{Time evolution.} The computations reported here are carried out using
the (less singular) variables $\bm Q=1/\bm Z_w$, $\bm V=\bar{\bm U}$ and
$\bm S=\bm Z^{-1/\nu}$ satisfying equations \eqref{e:QU} and \eqref{e:S}.
The results are generally consistent with those reported in
\cite{LPmajda} which were performed using equations \eqref{e:ZF} on somewhat less singular domains.
E.g., Fig.~\ref{f:IC60deg} indicates that at the initial time $t_0=1$,
$\bm Z(1,t_0)\approx 10^{10}$ whereas this was $\approx 10^3$ for the case
$\Theta=90^\circ$ considered in \cite{LPmajda}.
\newcommand{\cwidth}{4.5in}
\begin{figure}[ht]
\includegraphics[width=\cwidth]
{Figsyau/sol-tinc200-angle60deg.pdf}
\put(-161,7){\large $y$}
\put(-320,121){\large $-x$}
\caption{Interface for $t=10$ and $200n$ for $1\le n\le5$, with $\Theta=60^\circ$, $\alpha =\frac35$. }
\label{f:60deg}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=\cwidth]
{Figsyau/sol-tinc200-angle60deg-alphap0_75.pdf}
\put(-161,7){\large $y$}
\put(-320,121){\large $-x$}
\caption{Interface for $t=10$ and $200n$ for $1\le n\le5$, with $\Theta=60^\circ$, $\alpha =\frac9{20}$. }
\label{f:60deg075}
\end{figure}
As shown in Fig.~\ref{f:60deg} (for $\alpha \pow=1$)
and in Fig.~\ref{f:60deg075} (for $\alpha \pow = 0.75$),
the interface expands away from the origin and the curvature decreases by a large factor.
Upon rescaling with small $\varepsilon$ as described above, this is consistent
with the possible development of an interface with an initially tiny radius of curvature
into one with radius of curvature hundreds or thousands of times larger.
\subsubsection{Evidence for self-similarity.}
We emphasize that the Euler equations here are invariant under scaling time and space by the same factor.
Thus, although we take $\bm Z(-1,1)\approx 1$ in our computations, this corresponds to
another solution with $\bm Z(-1,\varepsilon)\approx \varepsilon$ for any $\varepsilon>0$.
The numerical results above and in \cite{LPmajda} lead us to expect
a power-law scaling in time with $\bm Z(-1,t)\sim ct^\beta$ for some
$c,\beta>0$.
In Figs.~\ref{f:scaledZinv}
and \ref{f:scaledZinv075}
we plot the {\em reciprocals} of
time-scaled interfaces in the water orientation,
for the choice (explained below)
\begin{equation} \label{e:betaalpha}
\beta = \frac 1{2-\alpha}\,.
\end{equation}
For $\alpha=\frac1\pow=\frac35$ this yields $\beta = \frac57$,
and for $\alpha = \frac{3}{4\pow}$ we get $\beta = \frac{20}{31}.$
Precisely, we plot
\begin{equation}\label{e:scaleZinv}
\frac{-i t^\beta}{Z(\theta,t)}
\end{equation}
at a sequence of times $t=t_n$ for $n=1,2,\ldots, 10$
($t_n = 500n$ for $\alpha \pow=1$, $t_n=1000n$ for $\alpha \pow=\frac34$).
These plots allow one to visualize the entire inverted fluid domain,
with the origin of the plot corresponding to the fluid far field.
\begin{figure}
\includegraphics[width=\cwidth]
{Figsyau/scaled-Zinv-time500to5000inset.pdf}
\caption{Scaled inverse interfaces at $t= 500n$, $1\le n\le10$, $\Theta=60^\circ$,
$\alpha =\frac35$}
\label{f:scaledZinv}
\end{figure}
\begin{figure}
\includegraphics[width=\cwidth]
{Figsyau/scaled-Zinv-alp0_75-t1000to10000inset.pdf}
\caption{Scaled inverse interfaces at $t= 1000n$, $1\le n\le10$, $\Theta=60^\circ$,
$\alpha =\frac9{20}$}
\label{f:scaledZinv075}
\end{figure}
The plots demonstrate a very tight collapse, with the zoomed-in view
suggesting convergence to an invariant limit shape.
By the scaling invariance argument mentioned above,
this result suggests the possible existence of a
self-similar solution starting at time $t=0$ from the exact wedge
$\Omega_\Theta$ with initial power-law potential $f_\Theta(z)=z^\alpha$.
\subsubsection{Scaling argument}
We can provide a two-step heuristic argument
to explain the scaling exponent $\beta$ in \eqref{e:betaalpha}.
Motivated by the results above, it is natural to seek a self-similar solution
to the governing equations \eqref{1.laplace}--\eqref{1.pzero} by scaling
the space variables and potential according to
\begin{equation}
(x,y) = (t^\beta \tilde x, t^\beta\tilde y), \qquad
\phi(x,y,t) = t^\gamma\tilde\phi(\tilde x,\tilde y)\,.
\end{equation}
Simple substitution shows that the linear terms in the
Bernoulli equation \eqref{1.bernoulli} can balance
with the nonlinear terms on the boundary only if
\begin{equation} \label{e:gbrel}
\gamma = 2\beta -1\,.
\end{equation}
This is the first step. In the second step, writing
$\tilde\phi = \re\tilde f(\tilde z)$ with $\tilde z=\tilde x+i\tilde y$,
we find that the initial data requirement in \eqref{e:zalpha}
as $t\downarrow0$ imposes the limit relation
\begin{equation}\label{e:ulimit}
u - iv = t^{\gamma - \beta}\tilde f'(t^{-\beta} z) \to \alpha z^{\alpha-1}
\end{equation}
when taking $t\downarrow0$ with $z=x+iy$ fixed.
Thus we can expect that
\[
\tilde f'(\tilde z) \sim \alpha \tilde z^{\alpha-1}
\quad\mbox{as $\tilde z=t^{-\beta}z\to\infty$},
\]
which entails the relation
\begin{equation}\label{e:gabrel}
\gamma = \alpha\beta .
\end{equation}
Putting \eqref{e:gabrel} together with \eqref{e:gbrel} yields \eqref{e:betaalpha}.
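The elimination of $\gamma$ and the two exponent values quoted above can be checked with exact rational arithmetic (our own Python sketch):

```python
from fractions import Fraction

def beta_from_alpha(alpha):
    # gamma = 2*beta - 1 and gamma = alpha*beta  =>  beta = 1/(2 - alpha)
    return Fraction(1, 1)/(2 - alpha)

for alpha, beta in [(Fraction(3, 5), Fraction(5, 7)),      # alpha*nu = 1 case
                    (Fraction(9, 20), Fraction(20, 31))]:  # alpha*nu = 3/4 case
    b = beta_from_alpha(alpha)
    assert b == beta
    # consistency of both exponent relations:
    gamma = alpha*b
    assert gamma == 2*b - 1
```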
\section{Discussion}
Our experience in computing protuberances that emerge from the main body
of fluid is consistent with the observations and suggestions of
Longuet-Higgins, who pointed out that such protrusions may often
take the form of Dirichlet hyperboloids.
Such hyperboloidal jets, with ever-narrowing ``noses'', persist and
remain smooth forever, despite lacking any regularizing
effect from surface tension, viscosity, or gravity.
Thus ideal droplets with smooth boundary do not seem to readily
form local singularities, consistent with the analytical constraints
provided in the work of Kinsey and Wu~\cite{KinseyWu2018} and Wu~\cite{wu2015blow}.
However, our computations presented in section \ref{ss:corners}
strongly suggest that corner formation may occur in an unstable manner
for specially prepared initial data in bounded domains.
After time-reversal, it appears that a smooth interface may emerge from
initial data containing a corner.
``Zooming in,'' it appears plausible that a corner can form in
an asymptotically self-similar way, with a rather general exterior angle
and power-law velocity profile approaching the corner.
Thus we conjecture that a two-parameter family of
self-similar solutions of the ideal droplet equations may exist
in infinite, asymptotically wedge-shaped domains.
Determining whether such solutions do indeed exist appears to be a difficult
challenge for analysis.
We point out that if such solutions exist with $x$-intercept proportional
to $t^\beta$, then the acceleration of the interface at this point
would blow up strongly, like $t^{\beta-2}$.
\section*{Acknowledgements}
The authors are grateful to Sergey Gavriluk for historical references
regarding ellipsoidal solutions.
This material is based upon work supported by
the National Science Foundation under
grants DMS 1812673 and DMS 2106988 (JGL)
and grants DMS 1812609 and DMS 2106534 (RLP).
\section{Introduction}\label{s:intro}
This paper provides a new approach to the investigation of \textit{Cauchy data spaces} under continuous or smooth variation of the underlying operators over a fixed manifold with boundary. These spaces consist of the normal traces at the boundary, up to order $d-1$, of elements of the kernel of elliptic differential operators of order $d\ge 1$; they can be obtained as images of certain pseudo-differential projections over the boundary, called \textit{\Calderon\ projections}. The concept of the \textit{\Calderon\ projection} originated from
\textsc{\Calderon}'s observation in \cite{Cal63}.
Previous approaches to the variational problem were based either on purely functional-analytic, symplectic and topological arguments or on geometric and holomorphic analysis. For the first type of approach we refer to \cite{BoFu98} that dealt with symmetric operators admitting self-adjoint Fredholm extensions and a certain unique continuation property (UCP). The variation was restricted to compact perturbations. By those assumptions, the authors achieved the continuous variation of the Cauchy data spaces in symplectic quotient Hilbert spaces, namely as Lagrangian subspaces.
The second type of approach is based on investigating spectral projec\-tions and exploiting the pseudo-differential calculus, via canonical and explicit constructions of Poisson operators and the \Calderon\ projection. See, e.g., the classical \cite{Ni95,Ni97} for Dirac type operators, based on the invertible double via gluing of \cite{BoWo93}, or our \cite{BoLe:2009,BoLeZh08,BCLZ} for arbitrary elliptic differential operators of first order with UCP, based on the ideas of general invertible doubles via a system of boundary value problems in \cite{Himpel-Kirk-Lesch:2004}.
Our present approach is a hybrid, changing repeatedly between the calculus of closed subspaces of the graph-theoretical approach and the geometric analysis of the \Calderon\ projections of the pseudo-differential approach. In that way we obtain the wanted generalization to linear elliptic differential operators of order $d\ge 1$ with weakened UCP requirements and, as a bonus, a much shorter path to the quoted results.
\subsection{Structure of the paper}
This paper consists of three sections. In this Section \ref{s:intro}, we explain the structure of the paper and state our main result.
In Section \ref{s:common-knowledge}, we fix the notations. The main topics are Sobolev spaces and domains of elliptic differential operators on manifolds with boundary; Green's forms; Cauchy data spaces; the homogenized Cauchy trace operator; the classical properties of the \Calderon\ projection; and \textsc{Neubauer}'s classical $\cap$ and $+$ \textit{arithmetic} of pairs of families of closed subspaces in Banach space \cite{Ne68}.
The proof of Theorem \ref{t:main} is in Section \ref{s:proof}. In Section \ref{ss:proof-for-s-ge-d-half}, assuming $s\ge \frac{d}{2}$, we obtain first the continuous variation of the solution spaces in the Sobolev space of order $s +\tfrac d2 $. By the continuity and surjectivity of the adjusted trace operator, that yields a continuous variation of the Cauchy data spaces in the Sobolev space of order $s$ over the boundary, and, furthermore, the continuous variation of the family of $L^2$-orthogonalized
Calder{\'o}n projections in the operator norm of these Sobolev spaces.
This part of our results has been announced in \cite[Proof of Proposition 4.5.2, first part]{BoZh14}.
In Section \ref{ss:s<halfd}, we use the results of Section \ref{ss:proof-for-s-ge-d-half} to prove our theorem for $s<\frac{d}{2}$ by duality and the interpolation properties of spaces and operators in Sobolev scales. In the following Appendix, we show that the assumption about the constant dimension of the spaces of inner solutions
in Theorem \ref{t:main} can be weakened a little by a finer analysis of the preceding arguments.
\subsection{Our main result}
\begin{notation}\label{n:basic-notations}
Let $B$ be a topological space and $\mathscr{M}$ a compact smooth Riemannian manifold with boundary $\Si$. Let $\left(A_b\right)_{b\in B}$ be a family of linear elliptic differential operators of order $d\ge 1$, acting between sections of complex finite-dimensional Hermitian vector bundles $E,F$ over $\mathscr{M}$.
Let $\rho^d$ denote the {\em Cauchy trace operator}, mapping sections of $E$ over $\mathscr{M}$ to $d$-tuples of jets over $\Si$ in normal direction (these jets can be {\em adjusted}, i.e., homogenized to sections of the bundle $E'^d:=(E|_\Si)^d$, for details see Proposition \ref{p:trace}). Let
$$Z_{+,0}(A_b)\ :=\ \{u \in H^d(\mathscr{M}; E) \mid A_bu = 0 \tand \rho^d u=0\}$$
denote the space of all {\em inner solutions}. It is the finite-dimensional kernel of the {\em closed minimal realization} associated with $A_{b}$. Correspondingly, $Z_{-,0}(A_b) := Z_{+,0}(A^t_b)$
denotes the kernel of the closed minimal realization associated with the formal adjoint $A^t_{b}$.
\end{notation}
For the interesting case of Cauchy data spaces and $L^2$-orthogonalized (and so uniquely determined) \Calderon\ projections (see Section \ref{ss:weak-traces}), we shall prove
\begin{theorem}[Main result]\label{t:main}
Assume that
\begin{enumerate}[(i)]\label{e:ucp-assumption}
\item for $s\ge \frac{d}{2}$\/, the two families of bounded extensions
\[
\bigl(A_{b, s+\frac{d}{2}}\colon
H^{s+\frac{d}{2}}(\mathscr{M};E) \too H^{s-\frac{d}{2}}(\mathscr{M};F)\bigr)_{b\in B}
\]
and
\[\bigl(A_{b, s+\frac{d}{2}}^t\colon H^{s+\frac{d}{2}}(\mathscr{M};F) \too H^{s-\frac{d}{2}}(\mathscr{M};E)\bigr)_{b\in B}
\]
are continuous in the respective operator norms $\norm{\cdot}_{s+\frac{d}{2},s-\frac{d}{2}}$\/, and that the family of adjusted Green's forms (of Equation \eqref{e:J-adjusted}) $\bigl(\tilde J^t_{b,s} \colon H^s(\Si;F'^d) \to H^s(\Si;E'^d)\bigr)_{b\in B}$
is continuous in the operator norm $\norm{\cdot}_{s,s}$\/;
\item $\dim Z_{+,0}(A_b) \tand \dim Z_{-,0}(A_b)$ do not depend on $b\in B$.
\end{enumerate}
Then for any $s\in \RR$, the family of $L^2$-orthogonalized Calder{\'o}n projections $\bigl(C^{\ort}_s(A_b)\bigr)_{b\in B}$ is continuous in the operator norm of the corresponding Sobolev space $H^s(\Si;E'^d)$.
\end{theorem}
\begin{remark}\label{r:continuity-assumptions}
(a) Assumption (i) can be weakened by demanding continuous variation only
for $s\geq \frac{d}{2}$ and $s+\frac{d}{2}\in\NN$.
\par
(b) Let $\mathscr{M}$ be covered by coordinate charts $(U,\varphi)\in \mathscr{A}$ for an atlas $\mathscr{A}$ with local trivializations $E|_{U},F|_{U}$. If, for every $(U,\varphi)\in \mathscr{A}$, all partial derivatives (of any order) of the coefficients of the operators $A_b$ are continuous and uniformly bounded on $U\times B$ (cf. \cite[Section 1]{Atiyah-Singer:1971}), then Assumption (i) follows.
\par
(c) In the literature on families of elliptic operators over manifolds with boundary, the strict weak inner unique continuation property is commonly assumed (that is, $Z_{\pm,0}(A_b)=\{0\}$). Relaxing that assumption to Assumption (ii), i.e., constant dimensions of the spaces of inner solutions, was suggested in \cite{Himpel-Kirk-Lesch:2004}.
To prove the continuity of $\bigl(\ker A_{b,s+\frac{d}{2}}\bigr)_{b\in B}$ for $s\geq \frac{d}{2}$ (see Proposition \ref{p:kernel-cont-for-s-ge-dhalf}), we assume that the spaces $Z_{-,0}(A_b)$ are of finite constant dimension. Actually, the two statements are equivalent by Lemma \ref{l:closed-continuous}a. Once we have obtained the continuity of $\bigl(\ker A_{b,s+\frac{d}{2}}\bigr)_{b\in B}$, the assumption that the spaces $Z_{+,0}(A_b)$ are of finite constant dimension is equivalent to our conclusion
that the family of the images of the corresponding \Calderon\ projections is continuous (see Proposition \ref{p:Cauchy-traces-varying}).
In some special examples, Assumption (ii) can be weakened; see the Appendix.
\par
(d) In \eqref{e:cauchy-data-space-basic} we define the \textit{Cauchy data space} $\Lambda_{-\frac{d}{2}}(A_b) \< H^{-\frac{d}{2}}(\Si;E'^d)$ as the space of the homogenized Cauchy traces of the weak solutions $u$ of $A_bu=0$ and also in \eqref{e:cauchy-data-space-strong} the Cauchy data spaces $\Lambda_s(A_b)$
for $s\ge\frac{d}{2}$. According to Theorem \ref{t:Se69} and Corollary \ref{c:generalization-of-calderon-in-seeley}, these spaces are precisely the images of the corresponding \Calderon\ projections.
Clearly, the continuity of a family of projections of a Banach space in operator norm implies the continuity of their images in the gap topology, see also \cite[Section I.4.6]{Ka95}. Hence one can read Theorem \ref{t:main} as the claim of a continuous variation of the Cauchy data spaces depending on the parameter $b$ for each of these Sobolev orders $s$ --- under the assumption of constant dimensions of the spaces of inner solutions.
\par
(e) One of the most fundamental examples is the continuous variation of the Riemannian metric on a fixed smooth manifold, i.e., that in local coordinates all derivatives of the component functions of the metric vary continuously. Then the induced Laplace operators vary continuously in the sense of (b). The weak inner unique continuation property holds for Laplace operators. So both Assumptions (i) and (ii) are satisfied, and as a consequence the corresponding Cauchy data spaces vary continuously.
\par
(f) For applications of our results we refer to the spectral flow formulae for operator families with varying maximal domains as in \cite{BoZh14};
and, consequently, to the possibility of determining the precise number of negative eigenvalues in stability analysis of an essentially positive differential operator $A$ (appearing in the descriptions of, e.g., reaction-diffusion, wave propagation and scattering systems) by calculating the spectral flow of $\bigl((1-b)A+bA_+\bigr)_{b\in [0,1]}$, where $A_+$ is a suitably chosen strictly positive differential operator, or more advanced expressions, cf. \cite{Latushkin-et-al:2018,BoZhu:2004,Zhu:2006} in the tradition of \textsc{Bott}'s Sturm type theorems \cite{Bo56}.
\end{remark}
\section{Main tools and notations of elliptic operators on manifolds with boundary} \label{s:common-knowledge}
Before proving the theorem, we fix our notation and recall the most basic concepts and tools. We begin with a single operator.
\subsection{Our data}\label{ss:our-data}
\begin{enumerate}
\item $\mathscr{M}$ is a smooth compact Riemannian manifold of dimension $n$ with boundary $\partial \mathscr{M}=:\Si$.
\item $E,F\to \mathscr{M}$ are Hermitian vector bundles of fiber dimension $m$ with metric connections $\nabla^E$, $\nabla^F$. As in Notation \ref{n:basic-notations}, we set $E':=E|_\Si$ and $F':=F|_\Si$\/.
\item $\Ci(\mathscr{M};E)$ denotes the space of smooth sections of $E$;
$\mathscr{M}^\circ$ denotes the interior of $\mathscr{M}$, $\Ci_c(\mathscr{M}^\circ; E)$ denotes the space of smooth sections of $E$ with compact support in $\mathscr{M}^\circ$.
\item $A\colon \Ci(\mathscr{M};E)\to\Ci(\mathscr{M};F)$ is an elliptic differential operator of order $d$.
\item $A_0\colon \Ci_c(\mathscr{M}^\circ; E)\to\Ci_c(\mathscr{M}^\circ;F)$, where $A_0=A|_{\Ci_c(\mathscr{M}^\circ; E)}$.
\item $A_0^t\colon \Ci_c(\mathscr{M}^\circ; F)\to\Ci_c(\mathscr{M}^\circ;E)$, where $A^t$ denotes the formal adjoint of $A$.
\item $A_{\mmin}:= \overline{A_0}$, $A_{\mmin}^t:= \overline{A_0^t}$, where we consider
$A_0\colon \Dd(A_0)\to L^2(\mathscr{M};F)$
as an unbounded densely defined operator from $L^2(\mathscr{M};E)$ to $L^2(\mathscr{M};F)$, and denote its closure by $\overline{A_0}$, see Section \ref{ss:sobolev}, in particular Proposition \ref{p:minimal-domain} below. We write $A_{\mmax}:=(A_0^t)^{\ast}$\/, i.e.,
\[
\mathcal{D}(A_{\mmax})=\{u\in L^2(\mathscr{M};E)\mid Au\in L^2(\mathscr{M};F) \text{ in the distribution sense}\},
\]
where $\mathcal{D}(\cdot)$ denotes the domain of an operator.
\end{enumerate}
Note that $A_{\mmin},A_{\mmax}$ are the closed \textit{minimal} and \textit{maximal extensions} of $A_0$. For a section $u\in \mathcal{D}(A_{\mmax})$,
the \textit{intermediate derivatives} $D^\a u$ (with $|\a|\le d$) need not exist as sections on $\mathscr{M}$, even though $Au$ does in the distribution sense, see \cite[Section 4.1, p. 61]{Grubb:2009}.
\subsection{The Sobolev scale and special relations for elliptic operators}\label{ss:sobolev}
For real $s$, we recall the definition of the Sobolev scale $H^s(\mathbf{M};\mathbf{E})$ for a complete smooth Riemannian manifold $\mathbf{M}$ without boundary. Then, for a compact manifold $\mathscr{M}$ with smooth boundary, the Sobolev scale is induced for non-negative $s$ by embedding and restriction.
We follow mostly \textsc{\Calderon} \cite[Section 3.1]{Cal76}, as reproduced and elaborated in \textsc{Frey} \cite[Chapters 0 and 1]{Frey2005On}, supplemented by \textsc{Lions} and \textsc{Magenes} \cite[Sections 1.7 and 1.9]{LM72} and \textsc{Tr{\`e}ves} \cite[Section III.2]{Treves:1}. We replace the regular subsets of $\RR^n$ in the classical literature by a smooth compact manifold with boundary embedded in a complete manifold without boundary. So, without restricting the general validity of our results, we assume, as we may, that
\begin{itemize}
\item our compact Riemannian manifold $(\mathscr{M},g)$ \textit{with} boundary is embedded in a (metrically) complete smooth Riemannian manifold $( \mathbf{M}, \mathbf g)$ of the same dimension $n$ \textit{without} boundary,
\item our bundles $E,F$ are extended to smooth Hermitian vector bundles $ \mathbf E, \mathbf F$ over $ \mathbf{M}$,
\item the elliptic differential operator $A$ is defined on $\mathscr{M}\cup \mathscr{N}$ where $\mathscr{N}$ denotes a collar neighbourhood of $\Si$ in $\mathbf M\setminus \mathscr{M}^{\circ}$.
\end{itemize}
\paragraph{Sobolev scale on complete manifolds without boundary.} First we recall the concept of the Sobolev scale for functions. The immediate generalization to sections of Hermitian bundles then follows.
On $ \mathbf{M}$ with Riemannian metric $ \mathbf g$, let $|\dop \Vol|$ denote the volume density derived from the metric.
Recall the \textit{Hodge--Laplace operator}
\begin{equation*}
\Delta_0^{ \mathbf{M}}\ :=\ \dop^t\dop\colon C_c^{\infty}( \mathbf{M})\too C_c^{\infty}( \mathbf{M}),
\end{equation*}
acting on functions,
where $\dop^t$ denotes the formal adjoint of the exterior differential
$\dop\colon C^{\infty}( \mathbf{M})\to C^{\infty}( \mathbf{M};\Lambda^1( \mathbf M))$.
The operator $-\Delta_0^{ \mathbf{M}}$ is equal to the \textit{Laplace--Beltrami operator} on the Riemannian manifold $(\mathbf{M}, \mathbf g)$.
Let $L^2(\mathbf{M})$ denote the completion of $C_{c}^{\infty}( \mathbf{M})$ with respect to the norm induced by the $L^2$-inner product
\[
(u,v)_{L^2(\mathbf{M})}:=\int_{\mathbf{M}}u\bar{v}\,|\dop \Vol|,
\]
where $\bar{v}$ denotes the complex conjugate of $v$.
Since $\mathbf{M}$ is complete with respect to $\mathbf g$, $\Delta_0^{\mathbf{M}}$ is essentially self-adjoint (see \cite[Theorem 3]{Cordes:1972} or \cite[Section 3(A)]{Chernoff:1973}). So the closure of $\Delta_0^{ \mathbf{M}}$, $\Delta^{ \mathbf{M}}$, is a non-negative self-adjoint operator.
It gives rise to the \textit{Sobolev spaces} on $ \mathbf{M}$
\begin{equation}\label{e:sobolev-functions}
H^s( \mathbf{M})\ :=\ \mathcal{D}((\Delta^{ \mathbf{M}})^{s/2}),\ \ s\geq0,
\end{equation}
equipped with the graph norm.
By \cite[Theorem 1.1.2]{LM72}, we regain,
for $ \mathbf M=\RR^n$ and $s\in\NN\cup\{0\}$
the usual Hilbert space
\begin{equation*}
H^s( \mathbf{M})\ =\ \{u\in L^2( \mathbf M)\mid D^\alpha u \in L^2( \mathbf M) \text{ for }
|\alpha|\le s\},
\end{equation*}
where the partial differentiation $D^\alpha$ with multi-index $\alpha$ is applied in the distribution sense and the scalar product and norm are defined by
\[
\langle u,v\rangle_s\ :=\ \sum_{|\alpha|\le s} (D^\alpha u,D^\alpha v)_{L^2(\mathbf{M})}\
\tand\ \norm{u}_s\ :=\ \sqrt{\scalar{u}{u}_s}\,.
\]
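For orientation, in the model case $ \mathbf M=\RR^n$ this scale admits the familiar Fourier description: with $\widehat{u}$ denoting the Fourier transform of $u$, one has the equivalent characterization
\[
H^s(\RR^n)\ =\ \bigl\{u\in L^2(\RR^n)\ \big|\ (1+|\xi|^2)^{\frac{s}{2}}\,\widehat{u}\in L^2(\RR^n)\bigr\},\qquad
\norm{u}_s^2\ \simeq\ \int_{\RR^n}(1+|\xi|^2)^{s}\,|\widehat{u}(\xi)|^2\,d\xi,
\]
which makes sense for all real $s$.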
For $s>0$, we define the space $H^{-s}(\mathbf{M})$ of distributions to be the so-called $L^2$-\textit{dual} of $H^s( \mathbf{M})$, i.e.,
\begin{equation}\label{e:L2-anti-dual}
H^{-s}( \mathbf{M}):=\{u\in\mathscr{D}'( \mathbf{M}) \mid \exists_c\forall_{v\in H^s(\mathbf{M})} \abs{u(\bar{v})} = \abs{\langle v,u\rangle_{s,-s}} \leq c \|v\|_{H^s(\mathbf{M} )}\},
\end{equation}
where $\langle v,u\rangle_{s,-s}:=\overline{u(\bar{v})}$ for a distribution $u\in\mathscr{D}'( \mathbf{M})$ acting on a test function $v$. Hence for
$u\in L^2(\mathbf{M})$ we have $\langle v,u\rangle_{s,-s}=
(v,u)_{L^2(\mathbf{M})}$, as nicely explained in \cite[Section 8.2]{Grubb:2009} and \cite[Section 1.1]{Gilkey:1995}.
The above constructions can be generalized to sections of any bundle $ \mathbf E\rightarrow \mathbf{M}$ carrying an Hermitian structure $\mathbf{M}\ni p\mapsto \langle.,.\rangle|_{ \mathbf E_p}$ and an Hermitian connection.
Let
\begin{align*}
\nabla^{ \mathbf E}&\colon C^{\infty}( \mathbf{M}; \mathbf E)\longrightarrow C^{\infty}( \mathbf{M};T^* \mathbf{M}\otimes \mathbf E)\ \tand\\
\nabla^{ \mathbf F}&\colon C^{\infty}( \mathbf{M}; \mathbf F)\longrightarrow C^{\infty}( \mathbf{M};T^* \mathbf{M}\otimes \mathbf F)
\end{align*}
be Hermitian connections, i.e., connections that are compatible with the Hermitian metrics on $ \mathbf E$ and $ \mathbf F$ respectively. To define Sobolev spaces of sections in vector bundles, one replaces the Laplacian $\dop^t\dop$ in the previous definition \eqref{e:sobolev-functions} by the \textit{Bochner--Laplacian}s $(\nabla^{ \mathbf E})^t\nabla^{ \mathbf E}$ and $(\nabla^{ \mathbf F})^t\nabla^{ \mathbf F}$.
\paragraph{Sobolev scale on compact smooth manifolds with boundary.} For functions, the corresponding Sobolev space on the compact submanifold $\mathscr{M}$ with boundary $\Si$ is defined as the quotient
\begin{equation*}
H^s(\mathscr{M})\ :=\ H^s( \mathbf{M})/\left\{u\in H^s( \mathbf{M})\big|\ u|_{\mathscr{M}}=0\right\},\quad s\ge 0.
\end{equation*}
In other words, $H^s(\mathscr{M})$ coincides algebraically with the space of restrictions to $\mathscr{M}^{\circ}$ of the elements of $H^s(\mathbf{M})$. The norm of $H^s(\mathscr{M})$ is given by the quotient norm, that is,
\[
\norm{u}_{H^s(\mathscr{M})}\ =\ \inf\bigl\{\norm{U}_{H^s(\mathbf{M})}\ \big|\ U\in H^s(\mathbf{M}),\ U=u \text{ a.e. on } \mathscr{M}^{\circ}\bigr\}.
\]
In our smooth case, the definition coincides with the interpolation $H^s(\mathscr{M}) = [H^m(\mathscr{M}) , H^0(\mathscr{M})]_{\theta}$,
$(1-\theta)m=s$, $m$ integer, $0\leq\theta\leq1$.
See \cite[Theorems 1.9.1 and 1.9.2]{LM72}. For $s\ge 0$, an important subspace is the function space
$H_0^s(\mathscr{M}):=\overline{C_c^{\infty}(\mathscr{M}^{\circ})}^{\|.\|_{H^s(\mathscr{M})}}$.
More generally and quite similarly, we can define Sobolev spaces of sections in vector bundles such as $H^s(\mathscr{M};E)$ and
\begin{equation}\label{e:H_0}
H_0^s(\mathscr{M};E)\ :=\ \overline{C_c^{\infty}(\mathscr{M}^{\circ};E|_{\mathscr{M}^{\circ}})}^{\|\cdot\|_{H^s(\mathscr{M};E)}} \ \text{for $s\in\RR$, $s\ge 0$}.
\end{equation}
\paragraph{Sobolev scale on closed manifolds.}
For any Hermitian vector bundle $G$ over the closed manifold $\Sigma$,
we can define the Sobolev spaces $H^s(\Sigma;G)$ for all $s\in \RR$ as in \cite[Section 8.2]{Grubb:2009} or \cite[Section 1.3]{Gilkey:1995}.
Note that $C^{\infty}(\Sigma;G)$ is dense in $H^s(\Sigma;G)$ for all $s\in\RR$.
Then the $L^2$-scalar product for smooth sections can be extended to a \textit{perfect pairing} between $H^s(\Sigma;G)$ and $H^{-s}(\Sigma;G)$ for all $s\in\RR$.
That is, from \cite[Lemma 1.3.5(e)]{Gilkey:1995}, the pairing $(f,h)_{L^2(\Si;G)}$ extends continuously to a perfect pairing
\[
H^s(\Sigma;G)\times H^{-s}(\Sigma;G)\too \CC,
\]
which we denoted by $\langle \cdot,\cdot\rangle_{s,-s}$ in \eqref{e:L2-anti-dual}.
\begin{remark}\label{r:perfect-pairing}
Let $H, K$ be Hilbert spaces. A bounded sesquilinear form $\F\colon H\times K\to\CC$ is called a \textit{perfect pairing} if it induces on each of $H, K$ an isomorphism to the dual of the other. More precisely, we obtain
the induced conjugate linear map from $K$ to the space of bounded linear functionals on $H$ by
\[
v\mapsto \F(\cdot,v)\ \ \text{for $v\in K$,}
\]
the induced linear map from $H$ to the space of bounded conjugate linear functionals on $K$ by
\[
u\mapsto \F(u,\cdot)\ \ \text{for $u\in H$.}
\]
Both are isomorphisms; moreover, the norms are given by
\begin{equation*}
\norm{v}_K\ =\ \sup_{0\ne u\in H}\frac{\abs{\F(u,v)}}{\norm{u}_H}\ \tand\
\norm{u}_H\ =\ \sup_{0\ne v\in K}\frac{\abs{\F(u,v)}}{\norm{v}_K}\,.
\end{equation*}
\end{remark}
\begin{notation}
Let $X,Y$ be normed spaces. We denote the normed algebra of bounded linear operators from $X$ to $Y$ by $\Bb(X,Y)$; for $X=Y$, we write $\Bb(X):=\Bb(X,X)$.
We use the shorthand $\norm{\cdot}_s$ for the norm in $H^s(\cdot;\cdot)$, $s\in\RR$, and $\norm{\cdot}_{r,s}$ for the operator norm in $\Bb(H^r(\cdot;\cdot),H^s(\cdot;\cdot))$, $r,s\in\RR$.
\end{notation}
\paragraph{Special relations for elliptic operators.}
We fix the notation, in particular the sign conventions.
Let
$d_g(\cdot,\cdot)$ be the \textit{distance function}; (locally) it is the arc length of the minimizing geodesic.
In a collar neighbourhood $V$ of
$\Si$ in $\mathscr{M}$, the function
\[
V\ni p\mapsto x_1(p)\ :=\ d_g(p,\Sigma)
\]
\]
is smooth and defines the \textit{inward unit normal field} $\nu:=\grad x_1$ and \textit{inward unit co-normal field} $\nu^\flat:=\dop x_1$.
Let $T^*\mathscr{M}$ denote the \textit{cotangent vector bundle} of $\mathscr{M}$, $S(\mathscr{M})$ the \textit{unit sphere bundle} in $T^*\mathscr{M}$ (relative to the Riemannian metric $g$), and $\pi \colon S(\mathscr{M})\rightarrow \mathscr{M}$ the projection. Then associated with any linear differential operator $A$ of order $d$ there is a vector bundle homomorphism
$$\sigma_d(A)\colon \pi^*E\rightarrow \pi^*F\/,$$
which is called the \textit{principal symbol} of $A$. In terms of local coordinates, $\sigma_d(A)$ is obtained from $A$ by replacing $\partial/\partial x_j$ by $\mathrm{i}\xi_j$ in the highest order terms of $A$ (here $\xi_j$ is the $j$th coordinate in the cotangent bundle). We call $A$ \textit{elliptic} if $\sigma_d(A)$ is an isomorphism.
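As a standard illustration of these conventions, consider the Hodge--Laplace operator $\dop^t\dop$ of Section \ref{ss:sobolev} acting on functions. In local coordinates its highest order part is $-g^{jk}\partial_j\partial_k$\/, so
\[
\sigma_2(\dop^t\dop)(\xi)\ =\ -g^{jk}(\mathrm{i}\xi_j)(\mathrm{i}\xi_k)\ =\ g^{jk}\xi_j\xi_k\ =\ |\xi|_g^2\,,
\]
which is invertible (indeed positive) for $\xi\ne 0$; hence $\dop^t\dop$ is elliptic.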
For elliptic operators there is an important relation (the \textit{G{\aa}rding inequality}) between the graph norm, originating from the basic $L^2$ Hilbert space, and the corresponding Sobolev norm. More precisely, we recall from \cite[Proposition 1.1.1]{Frey2005On}
\begin{proposition}\label{p:minimal-domain}
Assume that $A$ is an elliptic operator of order $d$. Then
\begin{enumerate}[(a)]
\item The graph norm of $A$ restricted to $\Ci_c(\mathscr{M}^\circ;E)$ is equivalent to the Sobolev norm $\norm{\cdot}_{H^d(\mathscr{M};E)}$\/.
\item In particular, $\mathcal{D}(A_{\mmin})=H^d_0(\mathscr{M};E)$ and $\mathcal{D}(A^t_{\mmin})=H^d_0(\mathscr{M};F)$.
\item $H^d(\mathscr{M};E)\< \mathcal{D}(A_{\mmax})$ is dense.
\end{enumerate}
\end{proposition}
\subsection{Green's formula, traces of Sobolev spaces over the boundary, and weak traces for elliptic operators }\label{ss:traces}
Let $j\in \NN\cup\{0\}$.
Let $\gamma^j \colon C^{\infty}(\mathscr{M};E)\rightarrow C^{\infty}(\Sigma;E')$ denote the trace map $\gamma^j u:= (\nabla_{\nu}^E)^ju|_{\Sigma}$ yielding the $j$th jet in normal direction. Set
\begin{equation}\label{e:rho-d}
\rho^d\ :=\ \left(\gamma^0,...,\gamma^{d-1}\right)\colon C^{\infty}(\mathscr{M};E)\too C^{\infty}({\Si};E'^d).
\end{equation}
Analogously, $\nabla^F$ gives rise to trace maps $\gamma^j \colon C^{\infty}(\mathscr{M};F)\rightarrow C^{\infty}(\Sigma;F')$. The corresponding maps for $F$ will also be denoted by $\gamma^j$ and $\rho^d$.
We recall \textit{Green's Formula}, e.g., from \textsc{Seeley} \cite[Equation 7]{See66}, \textsc{Tr\`eves} \cite[Equation III.5.41]{Treves:1}, \textsc{Grubb} \cite[Proposition 11.3]{Grubb:2009}, or \textsc{Frey} \cite[Proposition 1.1.2]{Frey2005On}, with a description of the operator $J$ in the error term:
\begin{proposition}[Green's Formula for differential operators of order $d\ge 1$]\label{Green's formula}
Let $A \colon C^{\infty}(\mathscr{M};E)\too C^{\infty}(\mathscr{M};F)$ be a linear differential operator of order $d$. Then
there exists a (uniquely determined) differential operator
\begin{equation*}
J \colon C^{\infty}(\Sigma;E'^d)\too C^{\infty}(\Sigma;F'^d),
\end{equation*}
such that for all $u\in C^{\infty}(\mathscr{M};E), v\in C^{\infty}(\mathscr{M};F)$ we have
\begin{equation}\label{e:Green}
( Au,v)_{L^2(\mathscr{M};F)}- (u,A^tv)_{L^2(\mathscr{M};E)}\ =\ (J\rho^du,\rho^dv)_{L^2(\Sigma;F'^d)}.
\end{equation}
$J$ is a matrix of differential operators $J_{kj}$ of order $d-1-k-j$, $0\leq k,j\leq d-1$, and $J_{kj}=0$ if $k+j>d-1$ ($J$ is upper skew-triangular). Moreover, for $j=d-1-k$ we have explicitly given homomorphisms
\begin{equation}\label{e:greens-form-explicit}
J_{k,d-1-k}\ =\ \mathrm{i}^d (-1)^{d-1-k}\sigma_d(A)(\nu^\flat).
\end{equation}
\end{proposition}
\begin{remark}\label{r:greens-form}
(a) For $d=1,2,3$, we visualize the structure of the matrix $J$,
\[
\begin{pmatrix} J^{[0]}_{00}\end{pmatrix},\
\begin{pmatrix} J^{[1]}_{00}& J^{[0]}_{01}\\
J^{[0]}_{10} & 0\end{pmatrix},\
\begin{pmatrix} J^{[2]}_{00}& J^{[1]}_{01}& J^{[0]}_{02}\\
J^{[1]}_{10} & J^{[0]}_{11} & 0\\
J^{[0]}_{20} & 0 & 0\end{pmatrix},\ \text{etc.},
\]
where the order of the differential operator in each entry is marked by a superscript $[\langle order\rangle]$.
\newline
(b) From the explicit form of the skew diagonal elements of
$J$ in \eqref{e:greens-form-explicit}, we get that $J$
is invertible for any elliptic operator $A$.
\newline
(c) If $J$ is invertible, Green's Formula \eqref{e:Green} extends to $(u,v)\in \mathcal{D}(A_{\mmax})\times H^d(\mathscr{M};F)$, where the right-hand side is interpreted as the $L^2$-dual pairing
$$\oplus_{j=0}^{d-1}H^{-d+j+\frac{1}{2}}(\Sigma;F')\ \times\ \oplus_{j=0}^{d-1}H^{d-j-\frac{1}{2}}(\Sigma;F')\ \too\ \CC.$$
\end{remark}
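For instance, for $d=1$ we have $\rho^1=\gamma^0$ and $J=J^{[0]}_{00}=\mathrm{i}\,\sigma_1(A)(\nu^\flat)$ by \eqref{e:greens-form-explicit}, so Green's Formula \eqref{e:Green} reduces to the familiar identity
\[
(Au,v)_{L^2(\mathscr{M};F)}-(u,A^tv)_{L^2(\mathscr{M};E)}\ =\ \bigl(\mathrm{i}\,\sigma_1(A)(\nu^\flat)\gamma^0u,\,\gamma^0v\bigr)_{L^2(\Sigma;F')}
\]
for first order operators, e.g., operators of Dirac type.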
With \cite[Theorem 1.1.4]{Frey2005On}, we obtain a slight reformulation, sharpening, and generalization of the classical \textit{Sobolev Trace Theorem} (see also \cite[Section 9.1]{Grubb:2009} and \cite[Lemma 16.1]{Tar07}):
\begin{proposition}[Sobolev Trace Theorem]\label{p:trace}
\begin{enumerate}[(1)]
\item We have continuous trace maps $\rho^d$ (obtained by continuous extension):
\begin{eqnarray*}
(a) &&\rho^d\colon H^{d+s}(\mathscr{M};E)\too \displaystyle\oplus_{j=0}^{d-1}H^{d+s-j-\frac{1}{2}}(\Sigma;E') \text{ for $s>-\frac{1}{2}$}\, ,\\
(b) &&\rho^d\colon\mathcal{D}(A_{\mmax})\too \displaystyle\oplus_{j=0}^{d-1}H^{-j-\frac{1}{2}}(\Sigma;E').
\end{eqnarray*}
Moreover, the map (a) is surjective and has a continuous right-inverse $\eta^d$.
\item If $u\in\mathcal{D}(A_{\mmax})$, then $u\in H^d(\mathscr{M};E)$ if and only if
$$\rho^du\ \in\ H^{d-\frac{1}{2}}(\Sigma;E')\oplus\cdots\oplus H^{\frac{1}{2}}(\Sigma;E').$$
\item For $\rho^d$ on $\mathcal{D}(A_{\mmax})$ we have
$
\ker\/\rho^d\ = \ H_0^d(\mathscr{M};E).
$
\end{enumerate}
\end{proposition}
\begin{remark}\label{r:cauchy-trace-map}
(a) Following \textsc{Grubb} \cite[Section 9.1]{Grubb:2009}, we call the preceding scale of operators $\rho^d$ with domains in different Sobolev spaces by one name: the \textit{Cauchy trace operator} associated with the order $d$.
\newline
(b) It is well known that the trace operators do not extend to the whole $L^2(\mathscr{M};E)$. For the special case of the half-space in $\RR^n$, it is shown in \cite[Remark 9.4]{Grubb:2009} that the 0-trace map $\gamma^0$ makes sense on $H^s(\RR^n_+)$ if and only if $s > \12$. The Cauchy trace operator $\rho^d$ extends, however, to $\mathcal{D}(A_{\mmax})$, though in a way that depends on the choice of the elliptic operator $A$ but with $\ker\rho^d=H^d_0(\mathscr{M};E)$ independent of $A$. That makes the claims 1b and 2 of the preceding proposition particularly interesting.
\newline
(c) Claim 3 admits replacing the abstract definition of $H_0^d(\mathscr{M};E)$ in \eqref{e:H_0} by
a concrete check of the Cauchy boundary data of a given section.
The inclusion $H_0^d(\mathscr{M};E)\<\ker\bigl(\rho^d|_{\mathcal{D}(A_{\mmax})}\bigr)$ is obvious. It remains to show the converse inclusion. Via local charts and claim 2, this reduces to the Euclidean case (see \cite[Theorem 9.6]{Grubb:2009}).
\end{remark}
\paragraph{Homogenization of Sobolev orders.}
Following \textsc{\Calderon}\ \cite[Section 4.1, p. 76]{Cal76} and using the notation of \cite[p. 26]{Frey2005On}, we introduce a \textit{homogenized} (\textit{adjusted}) Cauchy trace operator $\wt\rho^d$.
We set $\Delta^{E'}:=(\nabla^{E'})^t\/\nabla^{E'}$, where $\nabla^{E'}$ denotes the
restriction of $\nabla^{E}$
to $\Sigma$. Since $\Delta^{E'}+1$ is a positive symmetric elliptic differential operator of second order on the closed manifold $\Sigma$, it possesses a discrete spectral resolution (e.g., \cite[Section 1.6]{Gilkey:1995}). Then $\Phi :=(\Delta^{E'}+1)^{1/2}$ is a pseudo-differential operator of order 1 which induces an isomorphism of Hilbert spaces
\[
\Phi_{(s)} \colon H^s(\Si;E')\ \too\ H^{s-1}(\Si;E') \text{ for all $s\in\RR$}
\]
and, in fact, generates the Sobolev scale $H^s(\Sigma;E')$, see the Sobolev scale of an unbounded operator in \cite[Section 2.A]{BrLe01}.
In order to achieve that all boundary data are of the same Sobolev order, we introduce the matrix
\begin{equation*}
\Phi_d\quad :=\quad\left(
\begin{array}{cccc}
\Phi^{\frac{d-1}{2}} & 0 & \cdots & 0 \\
0 & \Phi^{\frac{d-3}{2}} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \Phi^{\frac{-d+1}{2}}\\
\end{array}
\right).
\end{equation*}
We set $\wt{\rho}^d:=\Phi_d\circ\rho^d$, $\wt{\eta}^d:=\eta^d\circ\Phi_d^{-1}$. So, we obtain a condensed and adjusted Trace Theorem as a corollary to Proposition \ref{p:trace}:
\begin{corollary}[Homogenized trace map]\label{c:tilde-rho}
We have continuous trace maps $\wt\rho^d$ (obtained by continuous extension):
\begin{enumerate}[(a)]
\item $\wt\rho^d\colon H^{s+\frac{d}{2}}(\mathscr{M};E)\ \too\ H^s(\Sigma;E'^d)$ for $s>\frac{d}{2}-\frac{1}{2}$,
\item $\wt\rho^d\colon \mathcal{D}(A_{\max})\ \too\ H^{-\frac{d}{2}}(\Sigma;E'^d)$.
\end{enumerate}
Furthermore, the map (a) is surjective and has a continuous right-inverse, $\wt\eta^d$. \end{corollary}
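To illustrate the homogenization in the simplest non-trivial case $d=2$: here
\[
\Phi_2\ =\ \begin{pmatrix}\Phi^{\frac12}&0\\ 0&\Phi^{-\frac12}\end{pmatrix},
\qquad
\wt\rho^{\,2}u\ =\ \bigl(\Phi^{\frac12}\gamma^0u,\ \Phi^{-\frac12}\gamma^1u\bigr).
\]
For $u\in H^{s+1}(\mathscr{M};E)$, Proposition \ref{p:trace} gives $\gamma^0u\in H^{s+\frac12}(\Sigma;E')$ and $\gamma^1u\in H^{s-\frac12}(\Sigma;E')$; since $\Phi^{\pm\frac12}$ shifts the Sobolev order by $\mp\frac12$, both components land in $H^s(\Sigma;E')$, in accordance with Corollary \ref{c:tilde-rho}(a).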
\begin{remark}
(a) We have
$\mathcal{D}(A_{\min})= \ker\bigl(\wt\rho^d\colon \mathcal{D}(A_{\max})\to H^{-\frac{d}{2}}(\Sigma;E'^d)\bigr).$
\newline
(b) Analogous constructions for the bundle $F$ lead to $\Delta^{F'}\colon C^{\infty}(\Sigma;F')\to C^{\infty}(\Sigma;F')$. Whenever this causes no ambiguity we will denote the corresponding matrices,
$\Phi^{F'}$ and $\Phi_d^{F'}$, again by $\Phi$ and $\Phi_d$, respectively.
\end{remark}
We can replace the boundary operator $J$ of Green's Formula (Proposition \ref{Green's formula}) by its adjusted version
\begin{equation}\label{e:J-adjusted}
\wt{J}\ :=\ (\Phi_d^{F'})^{-1}\circ J\circ (\Phi_d^{E'})^{-1}.
\end{equation}
It follows that all components of $\wt{J}$,
\[
\wt{J}_{ij}=\Phi^{\frac{2i+1-d}{2}}J_{ij}\Phi^{\frac{2j+1-d}{2}}
\]
are pseudo-differential operators of order $i+j+(1-d)+(d-1)-i-j=0$.
So for any $s\in\RR$, \begin{equation*}
\wt J\colon\ H^s(\Si,E'^d)\ \too\ H^s(\Si,F'^d).
\end{equation*}
$\wt J$ is upper skew-triangular, with invertible entries on the skew diagonal when $A$ is elliptic.
For later use we give the adjusted version of Green's Formula.
It is valid for arbitrary linear differential operators of order $d\ge 1$ with smooth coefficients acting between sections of smooth Hermitian vector bundles $E$ and $F$ over a smooth compact Riemannian manifold $\mathscr{M}$ with boundary $\Si$:
With the preceding notations of $\wt J$ for the adjusted Green boundary operator of \eqref{e:J-adjusted} and $\wt\rho^d$ for the adjusted Cauchy trace operators of Corollary \ref{c:tilde-rho}a, we have
\begin{equation}\label{e:green-adjusted}
\ (Au,v)_{L^2(\mathscr{M};F)}-(u,A^tv)_{L^2(\mathscr{M};E)}\ =\ (\wt J\wt \rho^du,\wt \rho^dv)_{L^2(\Sigma;F'^d)}
\end{equation}
for $s\geq\frac{d}{2}$ and all $u\in H^{s+\frac{d}{2}}(\mathscr{M};E)$, $v\in H^{s+\frac{d}{2}}(\mathscr{M};F)$. For elliptic differential operators, \eqref{e:green-adjusted} remains valid
for $u\in \mathcal{D}(A_{\mmax})$, $v\in H^d(\mathscr{M};F)$ in the following extended version
\begin{equation*}
(Au,v)_{L^2(\mathscr{M};F)}- (u,A^tv)_{L^2(\mathscr{M};E)}\ =\ \lla\wt J\wt \rho^du,\wt \rho^dv\rra_{-\frac{d}{2},\frac{d}{2}}\,,
\end{equation*}
where $\wt \rho^du\in H^{-\frac{d}{2}}(\Si;E'^d)$ and $\wt \rho^dv\in H^{\frac{d}{2}}(\Si;F'^d)$.
\subsection{Properties of the \Calderon\ projection}\label{ss:weak-traces}
We give our version of some classical, but perhaps not widely known, results concerning the \Calderon\ projection.
In his famous note \cite{Cal63} of 1963, \textsc{A. \Calderon} introduced the concept of a pseudo-differential projection onto the Cauchy data of solutions of a system of homogeneous elliptic differential equations over smooth compact manifolds with boundary, later called the \Calderon\ projection, see below Theorem \ref{t:Se69} and Corollary \ref{c:generalization-of-calderon-in-seeley}.
While the \textit{Cauchy trace operator} was introduced for \textit{sections} in Equation \eqref{e:rho-d} and Corollary \ref{c:tilde-rho}, we define the \textit{Cauchy data spaces} for elliptic differential operators.
We recall
\begin{definition}[Cauchy Data Spaces]\label{d:cauchy-data-spaces}
Let $A, d, \mathscr{M}, \Si, E, F$ be given as in Section \ref{ss:our-data}.
Based on the homogenized Cauchy trace operators in Corollary \ref{c:tilde-rho}, we define the Cauchy data spaces as follows:
\newline
(a)
\begin{equation}\label{e:cauchy-data-space-basic}
\Lambda_{-\frac{d}{2}}(A)\ :=\ \{h\in H^{-\frac{d}{2}}(\Sigma;E'^d)\mid \exists u \in \ker A_{\max} \text{ with }\wt{\rho}^du=h\},
\end{equation}
i.e., the space of boundary values of weak solutions to $A$ (= sections in the maximal domain of $A$ that are annihilated by $A$ in the distribution sense).
\newline
(b)
For $s\ge \frac{d}{2}$\/,
\begin{equation}\label{e:cauchy-data-space-strong}
\Lambda_{s}(A):=\{h\in H^s(\Sigma;E'^d)\mid \exists u \in H^{s+\frac{d}{2}}(\mathscr{M};E),
Au=0 \text{ with }\wt{\rho}^du=h\}.
\end{equation}
\end{definition}
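As a guiding example (sketched here only informally), take $A=\Delta$ the Laplacian on the closed unit disk in $\RR^2$, so $d=2$ and $\Sigma=S^1$. A harmonic function with boundary value $f=\sum_{n}a_ne^{\mathrm{i}n\theta}$ is
\[
u(r,\theta)\ =\ \sum_{n}a_n\,r^{|n|}e^{\mathrm{i}n\theta},
\]
and its trace in the inward normal direction is $\gamma^1u=-\partial_ru|_{r=1}=-\sum_n|n|\,a_ne^{\mathrm{i}n\theta}$. Thus, before homogenization, the Cauchy data space is the graph of minus the Dirichlet-to-Neumann operator, a first order pseudo-differential operator with principal symbol $|\xi|$.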
\begin{remark}
(a) For later use, we rewrite the Cauchy data spaces:
\begin{gather}
\Lambda_{-\frac{d}{2}}(A)\ = \ \wt{\rho}^d(\ker A_{\mmax}), \quad \tand \notag\\
\Lambda_{s}(A)= \wt{\rho}^d(\ker A_{s+\frac{d}{2}})= \wt\rho^d\bigl(\ker A_{\mmax}\cap H^{s+\frac{d}{2}}(\mathscr{M};E)\bigr)\ \text{ for $s\ge \frac{d}{2}$\/,} \label{e:cauchy-data-spaces-s-ge=d/2}
\end{gather}
where
$A_{s+\frac{d}{2}} \colon H^{s+\frac{d}{2}}(\mathscr{M};E)\to H^{s-\frac{d}{2}}(\mathscr{M};F)$ is the extension of $A$.
In fact,
for $u\in H^{s+\frac{d}{2}}(\mathscr{M};E)$, $v\in \Ci_c(\mathscr{M}^\circ;F)$,
we have
\[\bigl(A_{s+\frac{d}{2}}u,v\bigr)_{L^2(\mathscr{M};F)}\ =\ (u,A^t_0v)_{L^2(\mathscr{M};E)},\] so
$A_{\mmax}|_{H^{s+\frac{d}{2}}(\mathscr{M};E)}=A_{s+\frac{d}{2}}$, thus \eqref{e:cauchy-data-spaces-s-ge=d/2} follows.
\newline
(b)
For a subspace $V$ of $L^2(\Si;F'^d)$, we set \[V^{\bot {L^2}}\ :=\ \{g\in L^2(\Si;F'^d)\mid (f,g)_{L^2(\Si;F'^d)}=0 \ \text{for all $f\in V$}\}.\]
Let $\wt{J}$ be the adjusted Green's form of \eqref{e:J-adjusted}.
Now we recall an important relationship between the Cauchy data spaces of $A$ and $A^t$:
\begin{equation}\label{e:cauchy-data-space-transposed}
\Lambda_{\frac{d}{2}}(A^t)\ =\ \left(\wt{J}\Lambda_{\frac{d}{2}}(A)\right)^{\bot {L^2}}\cap\, H^{\frac{d}{2}}(\Si;F'^d).
\end{equation}
It was proved in \cite[Proposition 2.1.1]{Frey2005On} and will be used in the proof of our Corollary
\ref{c:l2-orthogonality-of-calderon-in-seeley}.
\end{remark}
There is a vast literature on pseudo-differential operators and the symbolic calculus. We shall only draw on the general knowledge regarding pseudo-differential operators over \textit{closed} manifolds.
Let $k\in \RR$. Let $\Psi_k(\Si;G_1,G_2)$ denote the space of all $k$th order pseudo-differential operators mapping sections of a smooth vector bundle $G_1$
to sections of a smooth vector bundle $G_2$ over the same closed manifold $\Si$; for $P\in \Psi_k(\Si;G_1,G_2)$, let $\sigma_k(P)$ denote the principal symbol of $P$.
\begin{proposition}\label{p:pseudo-differential-property}(cf. \cite[Lemmas 1.34 and 1.35]{Gilkey:1995})\\
(a) For $P\in \Psi_k(\Si;G_1,G_2)$, there is a unique formal adjoint of $P$, denoted by $P^t$, such that
\[
(Pf,h)_{L^2(\Sigma;G_2)}=(f,P^th)_{L^2(\Sigma;G_1)}\ \ \text {for all $f\in C^{\infty}(\Sigma;G_1),h\in C^{\infty}(\Sigma;G_2)$},
\]
and $P^t\in \Psi_k(\Si;G_2,G_1)$, $\sigma_k(P^t)=\sigma_k(P)^*$.\\
(b) If $Q\in \Psi_k(\Si;G_1,G_2), P\in \Psi_e(\Si;G_2,G_3)$, then $PQ\in \Psi_{k+e}(\Si;G_1,G_3)$ and
$\sigma_{k+e}(PQ)=\sigma_k(P)\sigma_e(Q)$.\\
(c) Continuity property with respect to Sobolev spaces:
Each $P\in \Psi_k(\Si;G_1,G_2)$ extends to a continuous linear map from
$H^{s+k}(\Si;G_1)$ to $H^s(\Si;G_2)$ for all real $s$.
\end{proposition}
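In the simplest model case -- Fourier multipliers on a discretized circle -- the adjoint and composition rules of Proposition \ref{p:pseudo-differential-property}a,b can be checked directly: the adjoint multiplier has the complex-conjugate symbol, and multipliers compose by pointwise product of symbols, mirroring $\sigma_k(P^t)=\sigma_k(P)^*$ and $\sigma_{k+e}(PQ)=\sigma_k(P)\sigma_e(Q)$. The following toy numerical sketch (pure Python, discrete Fourier analysis only; it illustrates the symbol rules, with no claim about the genuine pseudo-differential calculus) makes this concrete:

```python
import cmath

N = 8  # number of sample points on the discretized circle

def dft(f):
    return [sum(f[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(F):
    return [sum(F[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def multiplier(sigma, f):
    """Apply the Fourier multiplier with symbol sigma (a list over frequencies)."""
    F = dft(f)
    return idft([sigma[k] * F[k] for k in range(N)])

def inner(f, h):  # discrete L^2 inner product
    return sum(f[n] * h[n].conjugate() for n in range(N))

sigma_p = [complex(k, 1.0) for k in range(N)]     # an arbitrary symbol for P
sigma_q = [complex(1.0, -k) for k in range(N)]    # an arbitrary symbol for Q
f = [complex(n % 3, n) for n in range(N)]
h = [complex(1, -n) for n in range(N)]

# Adjoint rule: P^t is the multiplier with the conjugate symbol.
Pf = multiplier(sigma_p, f)
Pth = multiplier([s.conjugate() for s in sigma_p], h)
assert abs(inner(Pf, h) - inner(f, Pth)) < 1e-6

# Composition rule: symbols multiply pointwise.
PQf = multiplier(sigma_p, multiplier(sigma_q, f))
prod = multiplier([sigma_p[k] * sigma_q[k] for k in range(N)], f)
assert all(abs(PQf[n] - prod[n]) < 1e-6 for n in range(N))
```

For multipliers the symbol calculus is exact; for genuine pseudo-differential operators the analogous identities hold only on the level of principal symbols, modulo lower-order terms.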
With the preceding notation we recall the classical result on \Calderon\ projections.
\begin{theorem}[A. Calder\'{o}n 1963; R. T. Seeley 1966, 1969] \label{t:Se69}
Let $A$ be an elliptic differential operator of order $d$ over a smooth compact Riemannian manifold $\mathscr{M}$ with boundary $\Si$, acting between sections of Hermitian vector bundles $E,F$ over $\mathscr{M}$.
Then there exists a (in general, not uniquely determined) zeroth order classical pseudo-differential operator, called the {\em Calder\'{o}n projection} of $A$, $C(A)=C_\infty(A)\colon \Ci(\Sigma;E'^d)\to C^{\infty}(\Sigma;E'^d)$, such that
$C(A)$ is idempotent, i.e., $C(A)^2=C(A)$ and
\begin{equation}\label{e:range-of-calderon}
\image C_{-\frac{d}{2}}(A) \ = \ \wt\rho^d(\ker A_{\mmax})\/.
\end{equation}
Here, for every $s\in\RR$, we denote the extended projection by $C_s(A)\colon H^s(\Sigma;E'^d)$ $\to H^s(\Sigma;E'^d)$\/.
\end{theorem}
\begin{remark}\label{r:calderon-sources}
A detailed construction of the Calder\'{o}n projection and a careful proof of \eqref{e:range-of-calderon}
can be found in \cite[Section 2.3]{Frey2005On}, originally from
\textsc{Seeley}'s \cite{Palais-Seeley:1965,See66,Seeley:1968} and \textsc{Calder\'{o}n}'s \cite{Cal76} and inspired by \textsc{H{\"o}rmander}'s \cite{Ho66}; for the properties see also \cite[Section 11.1]{Grubb:2009} and, in the special case $d=1$, \cite[Chapters 12-13]{BoWo93} and \cite[Section 5]{BoLeZh08}.
\end{remark}
\begin{remark}\label{r:regular-wellposed-boundary}
The Calder\'{o}n projections are very useful in the treatment of boundary value problems for elliptic differential operators. We recall some relevant definitions and properties directly from \cite{Frey2005On}.
\par
(a) Assume that $P\in \Psi_0(\Si;E'^d,E'^d)$ is idempotent. To consider $P$ as a
boundary condition we associate with it the \textit{realisation}
$A_P\colon \mathcal{D}(A_{P})\to L^2(\mathscr{M};F)$ with
\[
\mathcal{D}(A_{P}):=\{u\in H^d(\mathscr{M};E)\mid P\wt{\rho}^du=0\}.
\]
We define the \textit{weak domain} by
\[
\mathcal{D}(A_{\mmax,P}):=\{u\in \mathcal{D}(A_{\mmax})\mid P\wt{\rho}^du=0\}.
\]
\par
(b) Following \cite[Definition 1.2.5]{Frey2005On},
we call $P$ a \textit{regular} boundary condition if
\[
\mathcal{D}(A_P)=\mathcal{D}(A_{\mmax,P}).
\]
$P$ is called \textit{well-posed} if it is regular and $\image A_P$ has finite codimension.
From \cite[Proposition 2.1.2]{Frey2005On} we recall equivalent conditions:
\begin{itemize}\label{i:boundary-condition-Fredholm}
\item $P$ is a regular boundary condition if and only if $A_P \colon \mathcal{D}(A_P)\to L^2(\mathscr{M};F)$ is left-Fredholm, i.e., $\dim\ker A_P< \infty$ and $\image A_P$ closed.
\item The boundary condition $P$ is well-posed if and only if $A_P\colon\mathcal{D}(A_P)\to L^2(\mathscr{M};F)$ is Fredholm.
\end{itemize}
\par(c) With these notations we obtain from \textsc{Seeley}'s achievements in \cite{Seeley:1968}, reproduced and worked out in \cite[Theorem 2.1.4(ii)]{Frey2005On}, the following \textit{operational conditions} on the principal symbols of boundary pseudo-differential operators:
Regularity, respectively well-posedness, holds if and only if for all $q\in \Sigma$ and $\xi\in T^*_q\Sigma$ with $\xi\neq 0$
\[
\sigma_0(P)(q,\xi)\colon \image \sigma_0\bigl(C(A)\bigr)(q,\xi)\too E'^d|_{q} \text{ injective},
\]
respectively,
\[
\sigma_0(P)(q,\xi)\colon \image \sigma_0\bigl(C(A)\bigr)(q,\xi)\too \image \sigma_0(P)(q,\xi) \text{ invertible.}
\]
In particular,
since $\sigma_0(\Id)=\Id$ and
\[
\sigma_0\bigl(C(A)\bigr)(q,\xi)\colon \image \sigma_0\bigl(C(A)\bigr)(q,\xi) \too \image \sigma_0\bigl(C(A)\bigr)(q,\xi)
\]
is just the identity,
$\Id$ is a regular boundary condition and $C(A)$ is a well-posed boundary condition.
\par
(d)
For regular $P$ we have a lifting jack for regularity, also called \textit{higher regularity} (see \cite[Theorems 2.2.1 and 2.2.3]{Frey2005On}):
Let $s\in \NN \cup\{0\}$.
Assume that $u\in L^2(\mathscr{M};E)$ satisfies
\[Au\in H^s(\mathscr{M};F),\ \ \ \ P\wt \rho^du \in H^{s+\frac{d}{2}}(\Sigma;E'^d).\]
Then $u\in H^{s+d}(\mathscr{M};E)$.
When $P$ is also well-posed, then this regularity argument holds for all real $s\geq0$.
\par
(e) For later use we give the following simple description of $\image C_s(A)$ for all
$s\in \RR$\, :
\begin{equation}\label{e:calderon2cauchy-data-spaces-any}
\image C_s(A)\
=\ \image C_t(A)\, \cap\, H^s(\Si;E'^d)\ \text{ for any real $t\leq s$.}
\end{equation}
Since $s\ge t$ implies $H^s(\Si;E'^d)\subset H^t(\Si;E'^d)$, we have $\image C_s(A) =
\image\left(C_t(A)|_{H^s(\Si;E'^d)}\right)$.
Hence the inclusion $\<$ of (\ref{e:calderon2cauchy-data-spaces-any}) is trivial.
For the opposite inclusion we exploit that $C(A)$ is a zeroth order pseudo-differential \textit{idempotent}: so, for any $C_t(A)f\in H^s(\Si;E'^d)$, we have
\[C_t(A)f\ =\ C^2_t(A)f\ =\ C_s(A)C_t(A)f.\]
In particular, for all
$s\ge -\frac{d}{2}$\,:
\begin{equation}\label{e:calderon2cauchy-data-spaces}
\image C_s(A)\
=\ \image C_{-\frac{d}{2}}(A)\, \cap\, H^s(\Si;E'^d)\ =\ \Lambda_{-\frac{d}{2}}(A)\, \cap\, H^s(\Si;E'^d).
\end{equation}
As a side result, we obtain for all $s\in\RR$
\begin{equation}\label{e:chain-of-cds}
\image C_s(A)\ =\ \overline{\image C_\infty(A)}^{\|\cdot\|_{H^s(\Sigma;E'^d)}}=\ \overline{\image C_t(A)}^{\|\cdot\|_{H^s(\Sigma;E'^d)}}\ \text{for any real $t\geq s$}.
\end{equation}
\end{remark}
According to the preceding Remark \ref{r:regular-wellposed-boundary}e, the image of $C_s(A)$,
$s\in\RR$, does not depend on the choice of \Calderon\ projection $C(A)$ in the \Calderon --Seeley Theorem \ref{t:Se69}. Moreover, we can prove the following generalization of Equation \eqref{e:range-of-calderon}.
\begin{corollary}\label{c:generalization-of-calderon-in-seeley}
Claim \eqref{e:range-of-calderon} of Theorem \ref{t:Se69} holds also for Sobolev orders $s\ge \frac{d}{2}$, yielding
\begin{equation}\label{e:range-of-calderon-all}
\image(C_s(A)) = \wt\rho^d\bigl(\ker A_{\mmax}\cap H^{s+\frac{d}{2}}(\mathscr{M};E)\bigr)\ \text{ for $s\ge \frac{d}{2}$}.
\end{equation}
\end{corollary}
\begin{proof}
By the Sobolev Trace Theorem (cf. Corollary \ref{c:tilde-rho}) and Equations \eqref{e:range-of-calderon} and \eqref{e:calderon2cauchy-data-spaces}, we obtain
\[
\image C_s(A)\ \>\ \wt\rho^d\bigl(\ker A_{\mmax}\cap H^{s+\frac{d}{2}}(\mathscr{M};E)\bigr) \text{ for $s> \frac{d}{2}-\frac{1}{2}$}\,,
\]
i.e., the inclusion $\>$ of \eqref{e:range-of-calderon-all}, actually for a wider range of $s$ than claimed.
\par
Now we turn to the proof of the inclusion $\<$ for $s\geq \frac{d}{2}$.
First by the preceding Remark \ref{r:regular-wellposed-boundary}c, $C(A)$ is a well-posed boundary condition. Let $s\geq \frac{d}{2}$.
If $f\in \image C_s(A)\fequal{\eqref{e:calderon2cauchy-data-spaces}}\Lambda_{-\frac{d}{2}}(A)\, \cap\, H^s(\Si;E'^d)$, then there is a $u\in \ker A_{\mmax}$, such that $f=\wt\rho^d u$, so $C(A)\wt\rho^d u=\wt\rho^d u \in H^s(\Si;E'^d)$. By the higher regularity for well-posed boundary conditions of the preceding Remark \ref{r:regular-wellposed-boundary}d,
we have $u\in H^{s+\frac{d}{2}}(\mathscr{M};E)$, so $f\in \wt\rho^d\bigl(\ker A_{\mmax}\cap H^{s+\frac{d}{2}}(\mathscr{M};E)\bigr)$.
Thus we get \eqref{e:range-of-calderon-all}.\qed
\end{proof}
\begin{remark}
In Definition \ref{d:cauchy-data-spaces}, we defined the Cauchy data spaces for $s=-\frac{d}{2}$ and $s\ge \frac{d}{2}$\/.
By Theorem \ref{t:Se69} and Corollary \ref{c:generalization-of-calderon-in-seeley}, these spaces coincide with the images of the extensions $C_s(A)$ of the Calder{\'o}n projection for those $s$. Moreover, by \eqref{e:chain-of-cds} and Corollary \ref{c:generalization-of-calderon-in-seeley}, the images of $C_s(A)$ for all $s\in\RR$ are uniquely determined by the Cauchy data spaces of the elliptic operator.
That motivates us to define the Cauchy data spaces $\Lambda_s(A)$ for all $s\in\RR$ as the images of $C_s(A)$, yielding a \textit{chain of Cauchy data spaces}
\begin{equation}\label{e:cauchy-data-spaces}
\Lambda_s(A)\ :=\ \image C_s(A) \ \text{for all $s\in\RR$}.
\end{equation}
\end{remark}
By \eqref{e:cauchy-data-space-transposed} and Corollary \ref{c:generalization-of-calderon-in-seeley},
we immediately have
\begin{equation}\label{e:range-of-calderon-transposed}
\image C_{\frac{d}{2}}(A^t) \ =\ \left(\wt{J}\bigl(\image C_{\frac{d}{2}}(A)\bigr)\right)^{\bot {L^2}}\cap\, H^{\frac{d}{2}}(\Si;F'^d).
\end{equation}
Now we can prove the following generalization of Equation \eqref{e:range-of-calderon-transposed}.
The preceding corollary and the following one will be used in Section \ref{s:proof} in the proof of our Main Theorem (Theorem \ref{t:main}).
\begin{corollary}\label{c:l2-orthogonality-of-calderon-in-seeley}
There is an $L^2$-orthogonal decomposition into complementary closed subspaces
\begin{equation}\label{e:orthogonal-decomposition}
\image C_s(A)\ \oplus^{\bot {L^2}}\ \wt{J}^t\left(\image C_s(A^t)\right) \ =\ H^s(\Si;E'^d) \text{ for $s\ge 0$},
\end{equation}
where $\wt J$ is defined as in (\ref{e:J-adjusted}).
\end{corollary}
Our proof of Corollary \ref{c:l2-orthogonality-of-calderon-in-seeley} below
needs the (uniquely determined) \textit{$L^2$-orthogonalized \Calderon\ projection} which we are going to introduce now -- and use extensively later in our Section \ref{s:proof}.
\begin{lemma}\label{l:calderon-ort}
Let $C:=C(A)$ be a \Calderon\ projection as introduced in Theorem \ref{t:Se69}, i.e., a zeroth order classical pseudo-differential operator,
\[
C(A)\colon C^{\infty}(\Sigma;E'^d)\too C^{\infty}(\Sigma;E'^d)
\]
with $C^2=C$ and $\image C_{-\frac{d}{2}}(A)=\wt{\rho}^d(\ker A_{\mmax})$.
Then there exists a unique zeroth order classical pseudo-differential operator $C^{\ort}=C^{\ort}(A)\colon C^{\infty}(\Sigma;E'^d)\to C^{\infty}(\Sigma;E'^d)$ with
\begin{equation}\label{e:orthogonal-calderonprojector-property}
(C^{\ort})^2=C^{\ort},\ \ CC^{\ort}=C^{\ort}, \ \ C^{\ort}C=C,
\end{equation}
with self-adjoint $L^2$-extension $C^{\ort}_0(A)$ on $H^0(\Si;E'^d)$ and with
\begin{equation}\label{e:invariant-L2orthogonal-Calderon}
\image C^{\ort}_s(A)= \image C_s(A)\ \ \ \text{for all $s\in\RR$}.
\end{equation}
\end{lemma}
\begin{proof}[of the lemma]
Since $CC^t+(\Id-C^t)(\Id-C)$ is a formally self-adjoint elliptic pseudo-differential operator with trivial kernel, it is invertible.
As in \cite[Lemma 12.8]{BoWo93}, we define $C^{\ort}$ by
\[
C^{\ort}:=CC^t\left(CC^t+(\Id-C^t)(\Id-C)\right)^{-1},
\]
and infer that it
is still a classical pseudo-differential operator of order $0$. Moreover, it is symmetric
\begin{equation}\label{e:orthogonal-calderonprojector-symmetry}
(C^{\ort}f,h)_{L^2(\Sigma;E'^d)}\ =\ (f,C^{\ort}h)_{L^2(\Sigma;E'^d)},\ \ \text{for all $f,h\in C^{\infty}(\Sigma;E'^d)$}
\end{equation}
which implies (over the closed manifold $\Si$) that its $L^2$-extension is self-adjoint. The symmetry property \eqref{e:orthogonal-calderonprojector-symmetry} and
the algebraic equalities of \eqref{e:orthogonal-calderonprojector-property} follow as in loc.\ cit.\ by calculation; then the invariance of the range in \eqref{e:invariant-L2orthogonal-Calderon}
and the uniqueness of $C^{\ort}$ follow.\qed
\end{proof}
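In finite dimensions the formula of Lemma \ref{l:calderon-ort} can be checked by direct matrix computation. The following sketch (pure Python; a toy $2\times 2$ non-orthogonal idempotent stands in for the \Calderon\ projection, so all matrices here are purely illustrative) verifies the identities \eqref{e:orthogonal-calderonprojector-property} and the symmetry of $C^{\ort}$:

```python
# Toy check of C_ort = C C^t (C C^t + (I - C^t)(I - C))^{-1}
# for a concrete non-orthogonal 2x2 idempotent C (illustrative only).

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def transpose(a):
    return [[a[j][i] for j in range(2)] for i in range(2)]

def inv(a):  # inverse of an invertible 2x2 matrix
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

I = [[1.0, 0.0], [0.0, 1.0]]
C = [[1.0, 1.0], [0.0, 0.0]]          # idempotent: C^2 = C, but C != C^t
Ct = transpose(C)
S = add(mul(C, Ct), mul(sub(I, Ct), sub(I, C)))   # C C^t + (I-C^t)(I-C)
C_ort = mul(mul(C, Ct), inv(S))

assert mul(C, C) == C                  # C is idempotent
assert mul(C_ort, C_ort) == C_ort      # (C_ort)^2 = C_ort
assert mul(C, C_ort) == C_ort          # C C_ort = C_ort
assert mul(C_ort, C) == C              # C_ort C = C
assert C_ort == transpose(C_ort)       # C_ort is symmetric
```

Here $C^{\ort}$ comes out as the orthogonal projection onto $\image C$ (the span of $(1,0)^t$), exactly as the lemma predicts.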
\begin{proof}[of Corollary \ref{c:l2-orthogonality-of-calderon-in-seeley}]
For $s\geq\frac{d}{2}$\/, take $f=\wt{\rho}^du\in \image C_s(A)$ and $g=\wt{\rho}^dv\in \image C_s(A^t)$ with $u\in\ker A_{\mmax}\cap H^{s+\frac{d}{2}}(\mathscr{M};E)$, $v\in\ker A^t_{\mmax}\cap H^{s+\frac{d}{2}}(\mathscr{M};F)$; we obtain
\begin{eqnarray*}
(f,\wt{J}^tg)_{L^2(\Si;E'^d)}\ &=&\ (\wt{J}\wt{\rho}^du,\wt{\rho}^dv)_{L^2(\Si;E'^d)}\\ &\fequal{\eqref{e:green-adjusted}}&
(Au,v)_{L^2(\mathscr{M};F)} - (u,A^tv)_{L^2(\mathscr{M};E)}=0-0=0.
\end{eqnarray*}
That proves the $L^2$-orthogonality in \eqref{e:orthogonal-decomposition}.
\par
Now let $f\in H^s(\Si;E'^d)$ for $s=\frac{d}{2}$.
To prove \[f \in \image C_{\frac{d}{2}}(A)+ \wt{J}^t\bigl(\image C_{\frac{d}{2}}(A^t)\bigr),\] i.e., the claimed decomposition in \eqref{e:orthogonal-decomposition} for $s=\frac{d}{2}$, we rewrite
\begin{equation}\label{e:orthogonal-decomposition-d/2}
f\ =\ C^{\ort}_{\frac{d}{2}}(A)f+f-C^{\ort}_{\frac{d}{2}}(A)f.
\end{equation}
We shall show that $f-C^{\ort}_{\frac{d}{2}}(A)f\in \wt{J}^t\bigl(\image C_{\frac{d}{2}}(A^t)\bigr)$. First we observe that
\begin{equation}\label{e:step1}
f-C^{\ort}_{\frac{d}{2}}(A)f\in \bigl(\image C_{\frac{d}{2}}(A)\bigr)^{\bot {L^2}}\cap H^{\frac{d}{2}}(\Si;E'^d).
\end{equation}
In fact, $h'\in\image C_{\frac{d}{2}}(A)=\image C^{\ort}_{\frac{d}{2}}(A)$ implies that $h'= C^{\ort}_{\frac{d}{2}}(A)h$ for some $h\in H^{\frac{d}{2}}(\Si;E'^d)$. Then we have
\begin{multline*}
\bigl(f-C^{\ort}_{\frac{d}{2}}(A)f,h'\bigr)_{L^2(\Si;E'^d)}\ =\
\bigl(f-C^{\ort}_{\frac{d}{2}}(A)f,C^{\ort}_{\frac{d}{2}}(A)h\bigr)_{L^2(\Si;E'^d)}\\
=\ \bigl(C^{\ort}_{\frac{d}{2}}(A)(f-C^{\ort}_{\frac{d}{2}}(A)f),h\bigr)_{L^2(\Si;E'^d)}\ =\ (0,h)_{L^2(\Si;E'^d)}\ =\ 0.\end{multline*}
Next, from the fact that $\wt{J}$ is an invertible zeroth order
pseudo-differential operator, we obtain for any $h\in H^{\frac{d}{2}}(\Si;E'^d)$,
\begin{eqnarray*}
0 \ &\fequal{\eqref{e:step1}}&\ \bigl(f-C^{\ort}_{\frac{d}{2}}(A)f, C^{\ort}_{\frac{d}{2}}(A)h\bigr)_{L^2(\Si;E'^d)} \\
&=&\ \bigl(f-C^{\ort}_{\frac{d}{2}}(A)f, \wt J^{-1} \wt J C^{\ort}_{\frac{d}{2}}(A)h\bigr)_{L^2(\Si;E'^d)} \\
&=&\ \bigl((\wt J^t)^{-1}(f-C^{\ort}_{\frac{d}{2}}(A)f), \wt J C^{\ort}_{\frac{d}{2}}(A)h\bigr)_{L^2(\Si;F'^d)},
\end{eqnarray*}
so
\begin{equation*}\label{e:step2-3}
(\wt J^t)^{-1}\bigl(f-C^{\ort}_{\frac{d}{2}}(A)f\bigr)\in \left(\wt{J}\bigl(\image C_{\frac{d}{2}}(A)\bigr)\right)^{\bot {L^2}}\!\cap H^{\frac{d}{2}}(\Si;F'^d)
\fequal{\eqref{e:range-of-calderon-transposed}} \image C_{\frac{d}{2}}(A^t),
\end{equation*}
and we are done for $s=\frac{d}{2}$.
For $s>\frac{d}{2}$, the $L^2$-complement in $H^s(\Si;E'^d)$ of \eqref{e:orthogonal-decomposition} follows from the preceding result for $s=\frac{d}{2}$ and the facts that $C(A),C(A^t)$ and $C^{\ort}$ are pseudo-differential projections of order zero and $\wt{J}$ is an invertible
pseudo-differential operator of order zero.
Finally,
\eqref{e:orthogonal-decomposition} holds for $0\leq s< \frac{d}{2}$, since $H^{\frac{d}{2}}(\Si;E'^d)$ is dense in $H^s(\Si;E'^d)$ and $H^s(\Si;E'^d)\subset L^2(\Si;E'^d)$.
\qed\end{proof}
\begin{remark}\label{r:symplectic-form}
(a) In the proof of Corollary \ref{c:l2-orthogonality-of-calderon-in-seeley}, we got that
\[\ker C^{\ort}_{s}(A)=\image \bigl(\Id -C^{\ort}_{s}(A)\bigr)=\wt{J}^t\bigl(\image C_{s}(A^t)\bigr)=\wt{J}^t\bigl(\Lambda_s(A^t)\bigr)\ \text{ for $s\ge 0$},\] which will be used in Section \ref{ss:proof-for-s-ge-d-half}.
\newline
(b) For a symmetric elliptic differential operator $A$ and $s\geq 0$, $\wt J^t$ defines a (strong) symplectic form on $H^s(\Si;E'^d)$ (in particular on $L^2(\Si;E'^d)$) with the Cauchy data space as a Lagrangian subspace, according to \eqref{e:orthogonal-decomposition}.
\end{remark}
\subsection{\textsc{Neubauer}'s arithmetic of families of closed linear subspaces in Banach space}
We recall a functional-analytic fact from our \cite[Appendix A.3]{BoZh14}
regarding the continuity of families of closed subspaces in a Banach space. We restate it in the following lemma that is based on \textsc{Neubauer}'s elementary, but deeply original \cite{Ne68}. We impose the gap topology on the space of closed linear subspaces of a given Banach space.
We recall the concept of the gap between subspaces and the quantity $\gamma$ (``angular distance'') that is useful in our estimates.
\begin{definition}\label{d:gap}(cf. \cite[Sections IV.2.1 and IV.4.1]{Ka95})
Let $X$ be a Banach space.
\newline
(a) Denote by $S_M$ the unit sphere of $M$ for any closed linear subspace $M$ of $X$. For any two closed linear subspaces $M,N$ of $X$, we set
\begin{eqnarray*}
\delta(M,N)&:=&\left\{
\begin{array}{ll}
\sup_{u\in S_M}\dist(u,N), & \hbox{if $M\neq \{0\}$,} \\
0, & \hbox{if $M=\{0\}$.}
\end{array}
\right.\\
\hat{\delta}(M,N)&:=& \max\{\delta(M,N),\delta(N,M)\}.
\end{eqnarray*}
$\hat{\delta}(M,N)$ is called the \textit{gap} between $M$ and $N$.
\newline
We set
\begin{eqnarray}\label{e:mimimal-gap}
\gamma(M,N)&:=&\left\{
\begin{array}{ll}
\inf_{u\in M \setminus N}\frac{\dist(u,N)}{\dist(u,M\cap N)}, & \hbox{if $M\not\subset N$,} \\
1, & \hbox{if $M\subset N$.}
\end{array}
\right.
\end{eqnarray}
(b) We say that a sequence $(M_n)_{n=1,2,\dots}$ of closed linear subspaces \textit{converges} to $M$ if $\hat{\delta}(M_n,M)\rightarrow 0$ for $n\rightarrow \infty$. We write $M_n\to M$. Correspondingly, a mapping $M$ from a topological space $B$ to the space of closed subspaces is called \textit{continuous} at $b_0\in B$ if $M_b\to M_{b_0}$ for $b\to b_0$.
\end{definition}
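For two lines through the origin in $\mathbb{R}^2$ meeting at angle $\theta$, the quantities of Definition \ref{d:gap} can be computed explicitly: $\delta(M,N)=\hat{\delta}(M,N)=\sin\theta$, and since $M\cap N=\{0\}$ one also gets $\gamma(M,N)=\sin\theta$. A minimal numerical sketch (pure Python; the finite-dimensional picture only illustrates the definitions):

```python
import math

def dist_to_line(u, d):
    """Distance from the point u to the line R*d in R^2 (d a unit vector)."""
    t = u[0] * d[0] + u[1] * d[1]            # orthogonal projection coefficient
    return math.hypot(u[0] - t * d[0], u[1] - t * d[1])

def delta(dM, dN):
    """delta(M, N) for lines M = R*dM, N = R*dN (unit direction vectors):
    sup over the unit sphere of M (here just {dM, -dM}) of the distance to N."""
    return max(dist_to_line(dM, dN), dist_to_line((-dM[0], -dM[1]), dN))

theta = math.pi / 6
dM = (1.0, 0.0)
dN = (math.cos(theta), math.sin(theta))

d1 = delta(dM, dN)
gap = max(d1, delta(dN, dM))                 # the gap \hat\delta(M, N)

assert abs(d1 - math.sin(theta)) < 1e-12
assert abs(gap - 0.5) < 1e-12                # sin(pi/6) = 1/2
# Since M and N meet only at 0, dist(u, M cap N) = 1 on the unit sphere of M,
# so gamma(M, N) = delta(M, N) = sin(theta) here.
```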
\begin{remark}\label{r:gap}
Denote the set of all closed operators from $X$ to $Y$ by $\mathcal{C}(X,Y)$. If $A_1,A_2\in \mathcal{C}(X,Y)$, their graphs $\Graph(A_1):=\{(x,A_1x)\in X\times Y \mid x\in \Dd(A_1)\}$, $\Graph(A_2)$ are closed linear subspaces in the product Banach space $X\times Y$. We use the gap $\hat{\delta}(\Graph(A_1),\Graph(A_2))$ to measure the ``distance'' between $A_1$ and $A_2$.
Obviously, for $A',A\in \mathcal{B}(X,Y)$, we have (cf. \cite[Theorem IV.2.14]{Ka95})
\begin{equation*
\hat{\delta}(\Graph(A'),\Graph(A))\leq \|A'-A\|.
\end{equation*}
\end{remark}
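For $1\times 1$ operators the graphs are lines in $\mathbb{R}^2$, and the estimate $\hat{\delta}(\Graph(A'),\Graph(A))\leq \|A'-A\|$ can be checked numerically (a toy sketch in pure Python; in this instance the bound is strict):

```python
import math

def gap_of_graphs(a, a_prime):
    """Gap between the graphs of x -> a*x and x -> a_prime*x in R^2.
    Each graph is the line spanned by the unit vector (1, slope)/|(1, slope)|."""
    def delta(s, t):  # delta(Graph(s), Graph(t)) for slopes s, t
        u = (1.0 / math.hypot(1.0, s), s / math.hypot(1.0, s))
        d = (1.0 / math.hypot(1.0, t), t / math.hypot(1.0, t))
        proj = u[0] * d[0] + u[1] * d[1]
        return math.hypot(u[0] - proj * d[0], u[1] - proj * d[1])
    return max(delta(a, a_prime), delta(a_prime, a))

a, a_prime = 0.0, 1.0
# The gap is bounded by the operator norm |a' - a| = 1 ...
assert gap_of_graphs(a, a_prime) <= abs(a_prime - a) + 1e-12
# ... and here equals sin(pi/4) = 1/sqrt(2), the angle between the two graphs.
assert abs(gap_of_graphs(a, a_prime) - 1.0 / math.sqrt(2.0)) < 1e-12
```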
\begin{lemma}\label{l:gamma-closed-positive}(cf. \cite[Theorem IV.4.2]{Ka95})
Let X be a Banach space and let $M,N$ be closed subspaces of $X$.
In order that $M+N$ be closed, it is necessary and sufficient that $\gamma(M,N)>0$.
\end{lemma}
\begin{lemma}\label{l:closed-continuous}(cf. \cite[Proposition A.3.13 and Corollary A.3.14]{BoZh14})
Let $X$ be a Banach space and let $\left(M_b\right)_{b\in B}, \left(N_b\right)_{b\in B}$ be two families of closed subspaces of $X$, where $B$ is a parameter space. Assume that $M_{b_0}+N_{b_0}$ is closed for some $b_0\in B$, and $(M_b)_{b\in B}$, $(N_b)_{b\in B}$ are both continuous at $b_0$\,.
\newline
(a) Then $(M_b\cap N_b)_{b\in B}$ is continuous at $b_0$ if and only if $(M_b+N_b)_{b\in B}$ is continuous at $b_0$.
\newline
(b) Assume furthermore that for $b\in B$,
$\dim (M_b\cap N_b)\equiv$ constant $<+\infty$ or $\dim X/(M_b+N_b)\equiv$ constant $<+\infty$. Then the families $\left(M_b\cap N_b\right)_{b\in B}$ and $\left(M_b+N_b\right)_{b\in B}$ are both continuous at $b_0$.
\end{lemma}
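The constant-dimension hypothesis in Lemma \ref{l:closed-continuous}b cannot be dropped: already in $\mathbb{R}^2$, two continuously varying lines can have an intersection that jumps. A toy sketch (pure Python; illustrative only):

```python
import math

def gap_of_lines(dM, dN):
    """Gap between the lines spanned by unit vectors dM, dN in R^2."""
    def delta(u, d):
        proj = u[0] * d[0] + u[1] * d[1]
        return math.hypot(u[0] - proj * d[0], u[1] - proj * d[1])
    return max(delta(dM, dN), delta(dN, dM))

def intersection_dim(b):
    """M_b = span{(1,0)}, N_b = span{(1,b)} (normalized): dim(M_b cap N_b)."""
    return 1 if b == 0.0 else 0

# N_b depends continuously on b in the gap topology ...
for b in (0.1, 0.01, 0.001):
    n = math.hypot(1.0, b)
    assert gap_of_lines((1.0, 0.0), (1.0 / n, b / n)) < b
# ... but dim(M_b cap N_b) jumps at b = 0, so (M_b cap N_b) is not continuous
# there: the gap between M_0 cap N_0 = M_0 and M_b cap N_b = {0} is 1 for b != 0.
assert intersection_dim(0.0) == 1 and intersection_dim(0.001) == 0
```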
\begin{remark}\label{r:closes-subspaces-sum}
For better understanding the proof of the preceding lemma in \cite{BoZh14}, note that:
according to \cite[Corollary A.3.12b]{BoZh14} and \cite[Theorem IV.4.2]{Ka95}, if $M_{b_0}+N_{b_0}$ is closed and $(M_b)_{b\in B}$, $(N_b)_{b\in B}$ and $(M_b\cap N_b)_{b\in B}$ are all continuous at $b_0$, then $M_b+N_b$ is closed in a whole neighbourhood of $b_0$ in $B$.
\end{remark}
\section{Proof of our main theorem}\label{s:proof}
We shall divide the proof of Theorem \ref{t:main} into two cases, both under the assumption of
constant dimensions of the spaces of inner solutions $Z_{+,0}(A_b)= \ker A_{b,\mmin}$ and $Z_{-,0}(A_b)= \ker A^t_{b,\mmin}$\/:
\begin{enumerate}
\item[(1)] In Section \ref{ss:proof-for-s-ge-d-half} we deal with the case $s\geq \frac{d}{2}$ in three steps. (i) In Proposition \ref{p:kernel-cont-for-s-ge-dhalf}, we obtain that $\bigl(\ker A_{b,s+\frac{d}{2}}\bigr)_{b\in B}$ is continuous in $H^{s+\tfrac d2}(\mathscr{M};E)$.
(ii) Since the Cauchy trace map is bounded and surjective for $s\geq \frac{d}{2}$\/, we can deduce the main achievement of this Subsection, Proposition \ref{p:Cauchy-traces-varying}, and obtain that
the family $\bigl(\wt\rho^d(\ker A_{b,s+\frac{d}{2}})\bigr)_{b\in B}$ is continuous in $H^{s}(\Si;E'^d)$. That means $\bigl(\image C^{\ort}_s(A_b)\bigr)_{b\in B}$ is continuous in $H^{s}(\Si;E'^d)$.
(iii) So, according to Corollary \ref{c:sufficient-condition-families-of-projections}, we can conclude that the corresponding family $\bigl(C^{\ort}_s(A_b)\colon H^s(\Si;E'^d)\hookleftarrow\bigr)_{b\in B}$ of \Calderon\ projections is continuous in the operator norm for all $s\geq \frac{d}{2}$. That proves our Main Theorem for such $s$.
\item[(2)] We use the results of case (1) (i.e., $s\geq \frac{d}{2}$) to show that for $s<\frac{d}{2}$ the family $\bigl(C^{\ort}_s(A_b)\colon H^s(\Si;E'^d)\hookleftarrow\bigr)_{b\in B}$ of \Calderon\ projections is continuous in the operator norm by duality and interpolation property of spaces and operators in Sobolev scales.
That is the content of Section \ref{ss:s<halfd}.
\end{enumerate}
\begin{remark}\label{r:L2-orthogonalization-calderon}
We emphasize that all the \Calderon\ projections in this section are assumed to be $L^2$-orthogonalized, that is, with Lemma \ref{l:calderon-ort},
\[C=C^{\ort},\ \ \ \ \ C_s(A)=C^{\ort}_s(A) \ \text{for $s\in\RR$}.\]
\end{remark}
\subsection{Proof of our main theorem for $s\ge \frac{d}{2}$}
\label{ss:proof-for-s-ge-d-half}
We recall some of the technical ingredients and results obtained previously in our \cite[Proposition 4.5.2]{BoZh14}.
\begin{ass}\label{a:continuous-family-for-s-ge-dhalf}
Let $s\ge \frac{d}{2}$\/. We assume that the family
\[
\bigl(A_{b,s+\frac{d}{2}} \colon H^{s+\tfrac d2}(\mathscr{M};E) \to H^{s-\tfrac d2}(\mathscr{M};F)\bigr)_{b\in B}
\]
is a continuous family in the operator norm $\norm{\cdot}_{s+\frac{d}{2},s- \frac{d}{2}}$\/.
\end{ass}
For dealing with the case $s\ge \frac{d}{2}$, we introduce the following notation.
\begin{notation
Based on Remark \ref{r:regular-wellposed-boundary}a, for $s\geq\frac{d}{2}$, we denote by $A_{s+\frac{d}{2},P}$ the operators
\[
A_{s+\frac{d}{2},P}\colon\{u\in H^{s+\frac{d}{2}}(\mathscr{M};E)\mid P\wt \rho^du=0\}\too H^{s-\tfrac d2}(\mathscr{M};F),
\]
for any boundary condition $P\colon C^{\infty}(\Si;E'^d)\to C^{\infty}(\Si;E'^d)$.
We write shorthand $A_P:=A_{d,P}$\/.
\end{notation}
For any elliptic operator $A$ over a smooth compact manifold with boundary, recall $A_{\mmin}\colon H^d_0(\mathscr{M};E)\to L^2(\mathscr{M};F)$. It is well known, and was emphasised above in Notation \ref{n:basic-notations}, that $\ker A_{\mmin}$ consists only of smooth sections and is finite-dimensional. The first claim follows from interior regularity for elliptic operators (e.g., \cite[Theorem 5.11.1]{Taylor96}); for the second, one can use the interior elliptic estimate to prove that $A_{\mmin}$ is left-Fredholm, i.e., $\dim \ker A_{\mmin}< +\infty$ and $\image A_{\mmin}$ is closed (e.g., \cite[Propositions 1.1.1 and A.1.4]{Frey2005On}). Later we shall use the following slight generalization:
\begin{lemma}\label{l:s-Amin-A-semifredholm}
For $s\geq \frac{d}{2}$,
$\ker A_{s+\frac{d}{2},\Id}=\ker A_{\mmin}$ is finite-dimensional and consists of smooth sections.
\end{lemma}
\begin{proof}
We only need to prove the equality.
By Proposition \ref{p:trace}(3), $\Dd(A_{\Id})=H^d_0(\mathscr{M};E)$, so $A_{\Id}=A_{\mmin}$.
As just emphasized, we have
$\ker A_{\mmin}\subset \{u\in C^{\infty}(\mathscr{M};E)\mid Au=0 \tand \wt\rho^du=0\}$.
Obviously we have for $s\geq \frac{d}{2}$
\[\{u\in C^{\infty}(\mathscr{M};E)\mid Au=0 \tand \wt\rho^du=0\}\subset\ker A_{s+\frac{d}{2},\Id}\subset\ker A_{\mmin}.\]
So we get the equality.
\qed
\end{proof}
In the following lemma we prove that $\image A_{s+\tfrac d2}$ is closed in $H^{s-\tfrac d2}(\mathscr{M};F)$ and get information about the quotient space
$H^{s-\tfrac d2}(\mathscr{M};F)/ \image A_{s+\tfrac d2}$.
\begin{lemma}\label{l:strongdecomposition}
For $s\ge \frac{d}{2}$, there is an $L^2$-orthogonal decomposition of complementary closed subspaces
\begin{equation}\label{e:s>d/2-L^2-decomposition}
H^{s-\tfrac d2}(\mathscr{M};F)\ =\ \image A_{s+\tfrac d2}\oplus^{\bot {L^2}}\ker A^t_{\mmin}.
\end{equation}
\end{lemma}
\begin{proof}
The $L^2$-orthogonality follows directly from Green's Formula (Proposition \ref{Green's formula}) and in adjusted form \eqref{e:green-adjusted}. In fact, for $s\ge\frac{d}{2}$,
$u\in H^{s+\tfrac d2}(\mathscr{M};E)$ and $v\in \ker A^t_{\mmin}$\/, we have
\[(Au,v)_{L^2(\mathscr{M};F)}\ =\ (u, A^tv)_{L^2(\mathscr{M};E)}+(\wt J \wt \rho^du,\wt \rho^dv)_{L^2(\Si;F'^d)}\ = 0.\]
Next we prove
\begin{equation}\label{e:L^2-decomposition}
L^2(\mathscr{M};F)\ =\ \image A_C\oplus \ker A^t_{\mmin},
\end{equation}
where $C:=C^{\ort}(A)$ denotes the $L^2$-orthogonalized \Calderon\ projection defined in Lemma \ref{l:calderon-ort}.
By Remarks \ref{r:regular-wellposed-boundary}c, b, the \Calderon\ projection is a well-posed
boundary condition; hence $A_C \colon \mathcal{D}(A_C)\to L^2(\mathscr{M};F)$ is Fredholm, where
$\mathcal{D}(A_C)=\{u\in H^d(\mathscr{M};E) \mid C\wt{\rho}^d u=0\}$.
Thus $\image A_C$ is closed, and we have the decomposition $L^2(\mathscr{M};F) =\image A_C\oplus\ker\,(A_C)^*$, where we consider $A_C$ as an unbounded densely defined operator from $L^2(\mathscr{M};E)$ to $L^2(\mathscr{M};F)$ and denote its adjoint by $(A_C)^*$.
So \eqref{e:L^2-decomposition} will follow from
\begin{equation}\label{e:ker-calderon-boundary}
\ker\,(A_C)^*\ =\ \ker A^t_{\mmin}\/.
\end{equation}
Now we shall prove \eqref{e:ker-calderon-boundary}.
In fact, according to \cite[Proposition 1.2.6]{Frey2005On},
\begin{equation*
(A_C)^*= A^t_{\mmax,C^{\ad}} \text{ with } C^{\ad}:=(\wt J^t)^{-1}(\Id-C^t)\wt J^t.
\end{equation*}
Note that $C^{\ad}\in \Psi_0(\Si;F'^d, F'^d)$ is idempotent and defines a well-posed boundary condition for $A^t$:
According to our assumption $C=C^{\ort}$, we have $C=C^t$. By Corollary \ref{c:l2-orthogonality-of-calderon-in-seeley}, we have
\[\wt J^t \bigl(\image C_{\frac{d}{2}}(A^t)\bigr)= \image \bigl(\Id-C^{\ort}_{\frac{d}{2}}(A)\bigr).\] Then we get
$\image C_{\frac{d}{2}}(A^t)=\image C_{\frac{d}{2}}^{\ad}$, where $C_{\frac{d}{2}}^{\ad}\colon H^{\frac{d}{2}}(\Si;F'^d)\to H^{\frac{d}{2}}(\Si;F'^d)$. Thus
\begin{equation}\label{e:calderon-adjoint-boundary-condition}
C_{\frac{d}{2}}^{\ad} \colon \image C_{\frac{d}{2}}(A^t) \to \image C_{\frac{d}{2}}^{\ad}
\end{equation}
is just the identity.
So by \cite[Proposition 2.1.2]{Frey2005On}, $C^{\ad}$ is a well-posed boundary condition for $A^t$.
Then by Remark \ref{r:regular-wellposed-boundary}b, we have $A^t_{\mmax,C^{\ad}}=A^t_{C^{\ad}}$, thus
\begin{eqnarray*}
\ker A^t_{\mmax,C^{\ad}}\ &=&\ \ker A^t_{C^{\ad}} \\
&=&\ \{u\in H^d(\mathscr{M};F)\mid A^tu=0, C^{\ad}\wt\rho^du=0\} \\
&=&\ \{u\in H^d(\mathscr{M};F)\mid A^tu=0, \wt\rho^du=0\}=\ker A^t_{\mmin},
\end{eqnarray*}
where in the last line we used
\[\image C_{\frac{d}{2}}(A^t)= \{\wt\rho^du\mid u\in H^d(\mathscr{M};F), A^tu=0\} \tand \eqref{e:calderon-adjoint-boundary-condition}.\]
Now \eqref{e:ker-calderon-boundary} is done.
Note that $\ker A_{\mmin}^t$ consists of smooth sections and is finite-dimensional.
Thus we can use \eqref{e:L^2-decomposition}, i.e., the decomposition in $L^2(\mathscr{M};F)$ to get our results for $s\geq \frac{d}{2}$\/:
\begin{align*}
H^{s-\frac{d}{2}}(\mathscr{M};F)\ =&\ L^2(\mathscr{M};F)\cap H^{s-\frac{d}{2}}(\mathscr{M};F) \\
=&\ (\image A_C\oplus \ker A^t_{\mmin})\cap H^{s-\frac{d}{2}}(\mathscr{M};F)\\
=&\
\bigl( \image A_C\cap H^{s-\frac{d}{2}}(\mathscr{M};F)\bigr) \oplus \ker A^t_{\mmin}\\
=&\ \image A_{s+\frac{d}{2},C}\oplus \ker A^t_{\mmin},
\end{align*}
where $\mathcal{D}(A_{s+\frac{d}{2},C})=\{u\in H^{s+\frac{d}{2}}(\mathscr{M};E)\mid C\wt \rho^du=0\}$ and we have used higher regularity for well-posed boundary conditions of Remark \ref{r:regular-wellposed-boundary}d.
So $\image A_{s+\frac{d}{2},C}$ is finite-codimensional in $H^{s-\frac{d}{2}}(\mathscr{M};F)$.
Since \[\image A_{s+\frac{d}{2},C}\subset \image A_{s+\frac{d}{2}}\subset H^{s-\frac{d}{2}}(\mathscr{M};F),\]
the space $\image A_{s+\frac{d}{2}}$ is also finite-codimensional and thus closed in $H^{s-\frac{d}{2}}(\mathscr{M};F)$. So we get (\ref{e:s>d/2-L^2-decomposition}).
\qed
\end{proof}
\begin{proposition}\label{p:kernel-cont-for-s-ge-dhalf}
Let $s \geq \frac{d}{2}$\/. If $\dim \ker A^t_{b,\mmin}=\kappa_-$ is constant for all $b\in B$, then Assumption \ref{a:continuous-family-for-s-ge-dhalf}, i.e., that the family $\bigl(A_{b,s+\frac{d}{2}}\bigr)_{b\in B}$ is continuous in the operator norm, implies that the family $\bigl(\ker A_{b,s+\frac{d}{2}}\bigr)_{b\in B}$ of closed linear subspaces is continuous in $H^{s+\tfrac d2}(\mathscr{M};E)$.
\end{proposition}
\begin{proof}
Assumption \ref{a:continuous-family-for-s-ge-dhalf} implies that the graphs $\bigl(\Graph(A_{b,s+\frac{d}{2}})\bigr)_{b\in B}$ form a continuous family of closed linear subspaces of $H^{s+\tfrac d2}(\mathscr{M};E) \x H^{s-\tfrac d2}(\mathscr{M};F)$.
Here we impose the gap topology of Definition \ref{d:gap} on the space of closed linear subspaces of the product space. Actually, the two claims are equivalent
by \cite[Theorem IV.2.23 a)]{Ka95}.
For Banach spaces $X,Y$ and any bounded linear map $Q \colon X\to Y$, we recall the elementary formulae
\[
\Graph(Q)+ X\x\{0\}\ =\ X\x\image Q \ \tand \ \Graph(Q)\cap (X\x\{0\})\ =\ \ker Q\x\{0\}.
\]
Together with Lemma \ref{l:strongdecomposition}, for $X:=H^{s+\tfrac d2}(\mathscr{M};E)$, $Y:=H^{s-\tfrac d2}(\mathscr{M};F)$ and $Q$ right-Fredholm, i.e., $\image Q$ is closed and finite-codimensional, we have
\begin{equation*
\dim \frac{X\x Y}{\Graph(Q)+X\x\{0\}}\ =\ \dim \frac{Y}{\image Q}\
\fequal{\text{for $Q=A_{b,s+\frac{d}{2}}$}}\ \dim\ker A^t_{b,\mmin}\ = \kappa_-\/.
\end{equation*}
Now we consider the following two continuous families of closed subspaces of $X\times Y$: the family
$M_b:= \Graph(A_{b,s+\frac{d}{2}})$ with $b$ running in $B$, and
the constant family
$N_b:= H^{s+\frac{d}{2}}(\mathscr{M};E)\x\{0\}$. By Lemma \ref{l:strongdecomposition}, the spaces
\[M_b+N_b= \Graph(A_{b,s+\frac{d}{2}}) + H^{s+\frac{d}{2}}(\mathscr{M};E)\x\{0\} = H^{s+\tfrac d2}(\mathscr{M};E) \x \image A_{b,s+\frac{d}{2}}\] are closed. Then by Lemma \ref{l:closed-continuous}b, the constancy of $\kappa_-$ implies that the family
\[
M_b\cap N_b= \Graph(A_{b,s+\frac{d}{2}}) \cap (H^{s+\frac{d}{2}}(\mathscr{M};E)\x\{0\}) = \ker A_{b,s+\frac{d}{2}}\x\{0\}
\]
is continuous on $B$, hence so is $\bigl(\ker A_{b,s+\frac{d}{2}}\bigr)_{b\in B}$, and the proposition is proved.\qed
\end{proof}
Now we turn to the Cauchy traces $\wt \rho^d(\ker A_{b,s+\frac{d}{2}})$. Note that for $s\ge \frac{d}{2}$ the Cauchy trace operator $\wt \rho^d\colon H^{s+\frac{d}{2}}(\mathscr{M};E)\to H^s(\Si;E'^d)$ is surjective and bounded.
\begin{proposition}\label{p:Cauchy-traces-varying}
In addition to Assumption \ref{a:continuous-family-for-s-ge-dhalf}, we assume that for all $b\in B$, $\dim \ker A_{b,\mmin}=\kappa_+$ and $\dim \ker A^t_{b,\mmin}=\kappa_-$ are constant. Then the family \[\bigl(\wt \rho^d(\ker A_{b,s+\frac{d}{2}})\bigr)_{b\in B}=\bigl(\image C_s(A_b)\bigr)_{b\in B}\] forms a continuous family of closed subspaces in $H^s(\Si;E'^d)$ for all $s\geq\frac{d}{2}$\/.
\end{proposition}
Our proof of
Proposition \ref{p:Cauchy-traces-varying} will use the following functional-analytic estimate.
\begin{lemma}\label{l:generalized-projection}
Let $X$, $Y$ be Banach spaces and let $p\colon X\to Y$ be a surjective bounded linear map. Then there exist positive constants $c$ and $\bar{c}$ such that for any closed linear subspaces $M,N$ of $X$ with $M,N\>\ker p$, we have
\[
\bar{c}\delta(M,N) \leq \delta(p(M),p(N))\leq c \delta(M,N),
\]
where $\delta(\cdot,\cdot)$ is defined in Definition \ref{d:gap}a.
\end{lemma}
\begin{proof}
Note that $\ker p$ is a closed linear subspace of $X$.
For the quotient map $q \colon X\to X/\ker p$,
we have
$\delta(M,N)=\delta(q(M),q(N))$ (cf. \cite[Lemma A.3.1(d)]{BoZh14}).
We define the induced map $\tilde p\colon X/\ker p \to Y$ by $\tilde p(x+\ker p):=p(x)$, then $p=\tilde p\circ q$.
Since the bounded linear transformation $\tilde p\colon X/\ker p\to Y$ is bijective,
the Inverse Mapping Theorem implies that $\tilde p$ is a homeomorphism. So the lemma holds.\qed
\end{proof}
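In finite dimensions, the two-sided gap comparability of Lemma \ref{l:generalized-projection} can be checked numerically. The following Python sketch is purely illustrative (the map $p$, the subspaces $M$, $N$ and the constant $\kappa=\|\tilde p\|\,\|\tilde p^{-1}\|$ are ad-hoc choices, not part of the proof): it computes the one-sided gap of two subspaces via orthonormal bases and verifies $\kappa^{-1}\delta(M,N)\le \delta(p(M),p(N))\le \kappa\,\delta(M,N)$ for subspaces containing $\ker p$.

```python
# Numerical illustration of the gap comparability under a surjection p.
# All data here (p, M, N, kappa) are ad-hoc choices for this toy check.
import numpy as np

def onb(M, tol=1e-12):
    """Orthonormal basis of the column span of M (rank-revealing, via SVD)."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol]

def gap(M, N):
    """One-sided gap delta(span M, span N) = ||(I - P_N) Q_M||_2."""
    QM, QN = onb(M), onb(N)
    PN = QN @ QN.T
    return np.linalg.norm((np.eye(M.shape[0]) - PN) @ QM, 2)

p = np.array([[2.0, 0.0, 0.0],          # surjective onto R^2, ker p = span{e3}
              [0.0, 1.0, 0.0]])
e1 = np.array([[1.0], [0.0], [0.0]])
e3 = np.array([[0.0], [0.0], [1.0]])
v  = np.array([[1.0], [0.5], [0.0]])
M = np.hstack([e1, e3])                  # M and N both contain ker p
N = np.hstack([v, e3])

gx = gap(M, N)                           # gap upstairs in X = R^3
gy = gap(p @ M, p @ N)                   # gap downstairs in Y = R^2
kappa = 2.0                              # ||p~||*||p~^{-1}||: singular values 2 and 1
```

Here $\tilde p$ restricted to $(\ker p)^\perp$ has singular values $2$ and $1$, so $\kappa=2$; in the Hilbert-space setting the quotient gap $\delta(q(M),q(N))$ coincides with $\delta(M,N)$, which is why the gap can be computed directly in $X$.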
\begin{proof}[of Proposition \ref{p:Cauchy-traces-varying}]
By Proposition \ref{p:kernel-cont-for-s-ge-dhalf}, the family $\bigl(\ker A_{b,s+\frac{d}{2}}\bigr)_{b\in B}$ of closed linear subspaces is continuous in $H^{s+\tfrac d2}(\mathscr{M};E)$.
For closed subspaces $M_b:=\ker A_{b,s+\frac{d}{2}},N_b:=\ker \bigl(\wt \rho^d|_{H^{s+\frac{d}{2}}(\mathscr{M};E)}\bigr)$ of $H^{s+\frac{d}{2}}(\mathscr{M};E)$,
\[
\image C_s(A_b)=\wt \rho^d(\ker A_{b,s+\frac{d}{2}})\
=\ \wt \rho^d(M_b+N_b).
\]
Moreover by Lemma \ref{l:s-Amin-A-semifredholm} and this proposition's assumption, the spaces
\[
M_b\cap N_b\ =\ \ker A_{b,s+\frac{d}{2}}\cap \ker \wt \rho^d=\ker A_{b,\mmin}
\]
are of finite constant dimension $\kappa_+$ for all $b\in B$.
Since $\image C_s(A_b)$ is closed in $H^s(\Si;E'^d)$, the subspace
\[M_b+N_b = (\wt \rho^d)^{-1}\bigl(\image C_s(A_b)\bigr)\]
is closed in $H^{s+\frac{d}{2}}(\mathscr{M};E)$. So by Lemma \ref{l:closed-continuous}b, the continuous variation of $M_b=\ker A_{b,s+\frac{d}{2}}$ and the constancy of the family $N_b=\ker \bigl(\wt \rho^d|_{H^{s+\frac{d}{2}}(\mathscr{M};E)}\bigr)$ imply that the family $\left(M_b+N_b\right)_{b\in B}$ is continuous. From Lemma \ref{l:generalized-projection} we get the continuous variation of $\wt \rho^d\bigl(\ker A_{b,s+\frac{d}{2}}\bigr)=\wt\rho^d(M_b+N_b)$\/.\qed
\end{proof}
Next we provide the non-trivial passage from the continuity of the
Cauchy data spaces to the continuity of the \Calderon\ projections.
Our arguments are based on the following observation:
given a family of bounded projections in a Banach space, if their images and kernels vary continuously in the gap topology, then the family is continuous in the operator norm. More precisely, we have
\begin{lemma}\label{l:projector-varying1}
Let $X$ be a Banach space and $B$ be a topological space.
Let $\left(P_b\in \Bb(X)\right)_{b\in B}$ be a family of projections, that is,
$P_b^2=P_b$ for every $b\in B$. If either
\begin{equation}\label{e:convergence-Pb-Pb0}
(1) \lim_{b\to b_0}\delta(\image P_b,\image P_{b_0})= 0 \tand \lim_{b\to b_0}\delta(\ker P_b,\ker P_{b_0})= 0,
\end{equation}
or \begin{equation}\label{e:convergence-Pb0-Pb}
(2) \lim_{b\to b_0}\delta(\image P_{b_0},\image P_b)= 0 \tand \lim_{b\to b_0}\delta(\ker P_{b_0},\ker P_b)= 0;
\end{equation}
then
\begin{equation}\label{e:convergence-operatornorm}
\lim_{b\to b_0}\|P_b-P_{b_0}\|= 0.
\end{equation}
\end{lemma}
\begin{proof}
We will use the quantity $\gamma(\cdot,\cdot)$ in (\ref{e:mimimal-gap}) to get the estimate of the operator norm $\|P_b-P_{b_0}\|:=\sup_{z\in X, \|z\|=1}\|(P_b-P_{b_0})z\|$.
\par
(1) First we prove (\ref{e:convergence-Pb-Pb0}) $\Rightarrow$ (\ref{e:convergence-operatornorm}).
First we recall the definition and properties of $\gamma(\cdot,\cdot)$\/.
Since $X=\image P_b\oplus \ker P_b$,
by the definition of $\gamma(\cdot,\cdot)$ in (\ref{e:mimimal-gap}), for any $x'\in\image P_b\/, y'\in \ker P_b$, we have
\begin{equation}\label{e:minimal-gap2}
\|x'+y'\|\geq \|x'\| \gamma(\image P_b, \ker P_b) \tand \|x'+y'\|\geq \|y'\| \gamma(\ker P_b, \image P_b);
\end{equation}
and $\gamma(\image P_b, \ker P_b)>0$, $\gamma(\ker P_b,\image P_b)>0$ (cf. \cite[Theorem IV.4.2]{Ka95}).
Then we use $\delta(\cdot,\cdot)$ and $\gamma(\cdot,\cdot)$ to give the estimate of the norm $\|P_b-P_{b_0}\|$.
Take $\delta_1:=\delta(\image P_b,\image P_{b_0})$, $\delta_2:=\delta(\ker P_b,\ker P_{b_0})$.
By the definition of $\delta(\cdot,\cdot)$ (see also \cite[IV (2.3)]{Ka95}), for any $\varepsilon>0$
and any $z'=x'+y'$ with $x'\in \image P_b$, $y'\in \ker P_b$, we can correspondingly choose $x\in \image P_{b_0}$, $y\in \ker P_{b_0}$ such that
\begin{equation}\label{e:x'-y'-x'prime-y'prime}
\|x'-x\|\leq (\delta_1+\varepsilon)\|x'\|,\quad \|y'-y\|\leq (\delta_2+\varepsilon)\|y'\|.
\end{equation}
So we have
\begin{align*}
\ &\ \|(P_b-P_{b_0})z'\|
\ =\ \|(P_b-P_{b_0})(x'+y')\|\\
=&\ \|x'-P_{b_0}(x'+y')+P_{b_0}(x+y)-x\|\\
=&\ \|x'-x+P_{b_0}(x'+y')-P_{b_0}(x+y)\|\\
\leq& \ \|x'-x\|+\|P_{b_0}\|(\|x'-x\|+\|y'-y\|)\\
\leq& \ (\|P_{b_0}\|+1)(\delta_1+\delta_2+\varepsilon)(\|x'\|+\|y'\|)\ \text{ by \eqref{e:x'-y'-x'prime-y'prime}}\\
\leq&\ (\|P_{b_0}\|+1)(\delta_1+\delta_2+\varepsilon) (\frac{\|x'+y'\|}{\gamma(\image P_b,\ker P_b)}
+\frac{\|x'+y'\|}{\gamma(\ker P_b,\image P_b)})\ \text{ by \eqref{e:minimal-gap2}}.
\end{align*}
Since $\varepsilon>0$ and $z'\in X$ are both arbitrary, we have
\begin{equation}\label{e:estimate-operatornorm-projection}
\|P_b-P_{b_0}\|\leq (\|P_{b_0}\|+1)(\delta_1+\delta_2)\left(\frac{1}{\gamma(\image P_b,\ker P_b)}+\frac{1}{\gamma(\ker P_b,\image P_b)}\right).
\end{equation}
Finally, we give the positive lower bound estimate of $\gamma(\image P_b,\ker P_b)$.
By \cite[Lemma 1.4]{Ne68}, if $\gamma(\image P_{b_0},\ker P_{b_0})-\delta_1\cdot
\gamma (\image P_{b_0},\ker P_{b_0})-\delta_1-\delta_2>0$, then
\[\gamma (\image P_b,\ker P_b) \geq \frac{\gamma(\image P_{b_0},\ker P_{b_0})-\delta_1\cdot
\gamma (\image P_{b_0},\ker P_{b_0})-\delta_1-\delta_2}{1+\delta_2}.\]
Together with \eqref{e:convergence-Pb-Pb0}, we have
\begin{equation}\label{e:lowerbound-gamma-imageP-kerP}
\liminf_{b\to b_0}\gamma (\image P_b,\ker P_b)\ge \gamma (\image P_{b_0},\ker P_{b_0})>0.
\end{equation}
Similarly,
\begin{equation}\label{e:lowerbound-gamma-kerP-imageP}
\liminf_{b\to b_0}\gamma (\ker P_b,\image P_b)\ge \gamma (\ker P_{b_0},\image P_{b_0})>0.
\end{equation}
Combining \eqref{e:estimate-operatornorm-projection}, \eqref{e:lowerbound-gamma-imageP-kerP} and \eqref{e:lowerbound-gamma-kerP-imageP},
we get \eqref{e:convergence-operatornorm}.
\par
(2) Now we prove (\ref{e:convergence-Pb0-Pb}) $\Rightarrow$ (\ref{e:convergence-operatornorm}). Take $\delta_3:=\delta(\image P_{b_0},\image P_b)$, $\delta_4:=\delta(\ker P_{b_0},\ker P_b)$. Similar to \eqref{e:estimate-operatornorm-projection}, we have
\begin{equation}\label{e:estimate-operatornorm-projection2}
\|P_b-P_{b_0}\| \leq (\|P_b\|+1)(\delta_3+\delta_4)
\left(\frac{1}{\gamma(\image P_{b_0},\ker P_{b_0})}+\frac{1}{\gamma(\ker P_{b_0},\image P_{b_0})}\right).
\end{equation}
Take $\alpha:=\frac{1}{\gamma(\image P_{b_0},\ker P_{b_0})}+\frac{1}{\gamma(\ker P_{b_0},\image P_{b_0})}$.
Since $\|P_b\|\leq\|P_b-P_{b_0}\|+\|P_{b_0}\|$, we have
\[\|P_b\|(1-\alpha(\delta_3+\delta_4))\leq \|P_{b_0}\|+\alpha (\delta_3+\delta_4).
\]
Together with \eqref{e:convergence-Pb0-Pb} and \eqref{e:estimate-operatornorm-projection2}, we get \eqref{e:convergence-operatornorm}.\qed
\end{proof}
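The estimate \eqref{e:estimate-operatornorm-projection} becomes completely explicit in a two-dimensional toy model (an ad-hoc illustration, not part of the argument): for oblique projections with fixed kernel $\operatorname{span}\{(0,1)\}$ and image $\operatorname{span}\{(\cos\theta,\sin\theta)\}$ one finds $\|P_\theta-P_0\|=|\tan\theta|$, while $\delta_1=|\sin\theta|$, $\delta_2=0$ and $\gamma=|\cos\theta|$, so the right-hand side of \eqref{e:estimate-operatornorm-projection} evaluates to $4|\tan\theta|$.

```python
# Toy family of oblique projections in R^2: image span{(cos t, sin t)},
# kernel fixed = span{(0, 1)}; an ad-hoc illustration of the norm estimate.
import math

def P(t):
    # P(t) = [[1, 0], [tan t, 0]]: idempotent, image span{(cos t, sin t)},
    # kernel span{(0, 1)} (valid for |t| < pi/2)
    return [[1.0, 0.0], [math.tan(t), 0.0]]

def opnorm(A):
    # spectral norm of a 2x2 matrix (largest singular value, closed form)
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    s = a*a + b*b + c*c + d*d
    det = a*d - b*c
    disc = max(s*s - 4.0*det*det, 0.0)
    return math.sqrt((s + math.sqrt(disc)) / 2.0)

def diff_norm(t):
    """||P(t) - P(0)||, which equals |tan t| in this model."""
    D = [[P(t)[i][j] - P(0.0)[i][j] for j in range(2)] for i in range(2)]
    return opnorm(D)

def rhs_bound(t):
    # right-hand side of the estimate: (||P_0||+1)(delta_1+delta_2)(2/gamma)
    # with ||P_0|| = 1, delta_1 = |sin t|, delta_2 = 0, gamma = |cos t|
    return 2.0 * abs(math.sin(t)) * (2.0 / abs(math.cos(t)))
```

As $\theta\to 0$ both sides tend to $0$, exactly the mechanism of the lemma; the bound overshoots only by the factor $4$.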
By the preceding lemma and the definition of the gap, we can conclude
\begin{corollary}\label{c:sufficient-condition-families-of-projections}
A necessary and sufficient condition for the continuity (in the operator norm) of a family of bounded projections in a fixed Banach space, parameterized by a topological space, is that their kernels and images both vary continuously in the gap topology.
\end{corollary}
\begin{proof}[of Theorem \ref{t:main} for $s\geq\frac{d}{2}$]
According to Corollary \ref{c:l2-orthogonality-of-calderon-in-seeley}, we have, for $s\geq \frac{d}{2}$
\[\ker C^{\ort}_s(A)=\wt{J}^t\left(\image C_s(A^t)\right).\] So under the assumptions of Theorem \ref{t:main}, by Proposition \ref{p:Cauchy-traces-varying}, we get that, for $s\geq \frac{d}{2}$,
$\bigl(\image C^{\ort}_s(A_b)\bigr)_{b\in B}$ and $\bigl(\ker C^{\ort}_s(A_b)\bigr)_{b\in B}$ are both continuous in $H^s(\Si;E'^d)$. Then by Corollary \ref{c:sufficient-condition-families-of-projections}, we get that the family
$\bigl(C^{\ort}_s(A_b)\bigr)_{b\in B}$ is continuous in the operator norm $\|\cdot\|_{s,s}$ for all $s\geq \frac{d}{2}$.
\qed
\end{proof}
\subsection{Proof of our main theorem for $s< \frac{d}{2}$}\label{ss:s<halfd}
Interpolation theory can be applied readily to intermediate Sobolev spaces between two given Sobolev spaces to establish an estimate for the operator norm of an intermediate operator; see \textsc{\Calderon}'s \cite{Cal64-intermediate} or \cite{LM72} by \textsc{J.-L. Lions} and \textsc{Magenes}.
We give a slimmed-down version of interpolation theory for intermediate spaces.
\begin{definition}[Interpolation property] We follow \cite[Definitions 21.4 and 21.5]{Tar07}.
Let $\EE_0$ and $\EE_1$ be normed spaces with $\EE_1\hookrightarrow \EE_0$ continuously embedded and dense.
\newline (a) An {\em intermediate space} between $\EE_1$ and $\EE_0$ is any normed space $\EE$ such that $\EE_1 \< \EE \< \EE_0$ (with continuous embeddings).
\newline (b) An {\em interpolation space} between $\EE_1$ and $\EE_0$ is any intermediate space $\EE$ such that every linear mapping from $\EE_0$ into itself which is continuous from $\EE_0$ into itself and from $\EE_1$ into itself is automatically continuous from $\EE$ into itself. It is said to be of {\em exponent} $\theta$ (with $0 < \theta < 1$), if there exists a constant $c_1$ such that
\begin{equation}\label{e:interpolation}
\norm{A}_{\Bb(\EE,\EE)}\ \le\ c_1\, \norm{A}_{\Bb(\EE_1,\EE_1)}^{1-\theta}\, \norm{A}_{\Bb(\EE_0,\EE_0)}^{\theta}\ \text{ for all $A\in\Bb(\EE_1,\EE_1)\cap\Bb(\EE_0,\EE_0)$}.
\end{equation}
\newline (c)
Moreover, if $\EE_0$ and $\EE_1$ are Banach spaces, for $0 < \theta < 1$, we can define the \textit{complex interpolation space} $[\EE_1,\EE_0]_{\theta}$ as in loc.\ cit.
\end{definition}
\begin{remark}
The construction of the complex interpolation space uses analytic functions with values in the Banach space
$\EE_0$. Using the classical Three Lines Theorem (a consequence of the maximum modulus principle), one can show that $[\EE_1,\EE_0]_{\theta}$ with a kind of quotient norm is also a Banach space (cf. \cite[Section 1.14.1]{LM72} or \cite[Section 4.2]{Taylor96}). By \cite[Lemma 21.6]{Tar07}, the interpolation property holds for $[\EE_1,\EE_0]_{\theta}$ with $c_1=1$ in \eqref{e:interpolation}\/.
\end{remark}
\begin{definition}\label{d:sobolev scale}(cf. \cite[Definition 2.5]{BrLe01})
Slightly more generally, we call a family $(H^s)_{s\in\RR}$
a \textit{scale of Hilbert spaces} if
\begin{description}
\item[(1)] $H^s$ is a Hilbert space for each $s\in\RR$,
\item[(2)] $H^{s'}\hookrightarrow H^s$ embeds continuously for $s\leq s'$,
\item[(3)] if $s<t$, $0<\theta<1$, then the \textit{complex interpolation space} belongs to the scale with
\[
[H^t,H^s]_{\theta}=H^{(1-\theta)t+\theta s},
\]
\item[(4)] $H^{\infty}:=\cap_{s\in \RR_+}H^s$ is dense in $H^t$ for each $t\in\RR$,
\item[(5)] the $H^0$-scalar product, denoted by $(\cdot,\cdot)$, restricted to $H^{\infty}$ extends to a perfect pairing between $H^s$ and $H^{-s}$, denoted by $\langle\cdot,\cdot\rangle_{s,-s}$\/, for all $s\in\RR$.
\end{description}
\end{definition}
Let $(H^s)_{s\in \RR}$ be a scale of Hilbert spaces.
\begin{definition}\label{d:0order-operator}
A linear map $T\colon H^\infty\to H^\infty$ is called an \textit{operator of order $0$}, if it extends to a continuous linear map $T_s\colon H^s \to H^s$ for all $s\in \RR$. We denote the vector space of all operators of order $0$ by $\operatorname{Op}^0((H^s)_{s\in\RR})$\/. For $T_s\in \Bb(H^s)$, we denote its operator norm by $\|T_s\|_{s,s}$\/.
\end{definition}
\begin{lemma}\label{l:s<d-half}
Let $(H^s)_{s\in\RR}$ be a scale of Hilbert spaces and $T\in \operatorname{Op}^0((H^s)_{s\in\RR})$. We
assume that the continuous extension $T_0 \colon H^0\to H^0$ is self-adjoint.
Then
\begin{description}
\item[(1)] for $t>0$, $\|T_{-t}\|_{-t,-t}=\|T_t\|_{t,t}$\,;
\item[(2)] for $s_0<s<s_1$, $\|T_s\|_{s,s}\leq\ (\|T_{s_1}\|_{s_1,s_1})^{\tfrac{s-s_0}{s_1-s_0}}\,
(\|T_{s_0}\|_{s_0,s_0})^{\tfrac{s_1-s}{s_1-s_0}}$\/.
\end{description}
\end{lemma}
\begin{proof}
(1) Fix any $s\in\RR$. Let $(H^s)^*$ denote the space of bounded linear functionals on $H^s$.
The norm of $\phi\in(H^s)^*$ is given by
\[\|\phi\|:=\sup_{f\in H^s, \|f\|_s\leq 1}|\phi(f)|.\]
By Definition \ref{d:sobolev scale}(5),
$H^{-s}$ can be identified with $(H^s)^*$ by the isometric isomorphism,
\begin{eqnarray*}
H^{-s} &\rightarrow& (H^s)^*, \\
h &\mapsto& \langle\cdot,h\rangle_{s,-s},
\end{eqnarray*}
where isometric means: if $\phi(f):=\langle f,h\rangle_{s,-s}$\/ for every $f\in H^s$, then $\|\phi\|=\|h\|_{-s}$.
According to the above identification,
we can define the adjoint operator of $T_s$
\begin{equation}\label{e:adjiont-of-T}
\begin{gathered}
(T_s)^* \colon H^{-s}\to H^{-s}\ \text{ by setting for $h\in H^{-s}$}\\
\lla f,(T_s)^*h\rra_{s,-s}\ :=\ \lla T_sf,h\rra_{s,-s}\quad \text{for all $f\in H^s$},
\end{gathered}
\end{equation}
then $(T_s)^*$ is also a bounded linear operator and
\begin{equation}\label{e-T-adjoint}
\|(T_s)^*\|_{-s,-s}=\|T_s\|_{s,s}\/.
\end{equation}
For $t>0$, we claim that $(T_t)^*=T_{-t}$\/.
In fact, since $t>0$, $T_0|_{H^t}=T_t$\/, $T_{-t}|_{H^0}=T_0$\/, and for any $f\in H^t\subset H^0$, $h\in H^0\subset H^{-t}$, we have
\begin{equation}\label{e:selfadjoint-T0}
\lla T_tf,h\rra_{t,-t}=(T_tf,h)=(T_0f,h)=(f,T_0h)=\lla f,T_0h\rra_{t,-t}=\lla f,T_{-t}h\rra_{t,-t}\/,
\end{equation}
where we have used the assumption that $T_0 \colon H^0\to H^0$ is self-adjoint.
So by \eqref{e:adjiont-of-T} and \eqref{e:selfadjoint-T0}, for any $f\in H^t,h\in H^0$, we have
\[
\lla f,(T_t)^*h\rra_{t,-t}\ =\ \lla f,T_{-t}h\rra_{t,-t}\/.
\]
This implies
\[
(T_t)^*h=T_{-t}h\ \ \ \text{for any $h\in H^0$}.
\]
Since $(T_t)^*,T_{-t}$ are
bounded linear operators on $H^{-t}$ and since $H^0$ is dense in $H^{-t}$, we get
\begin{equation}\label{e:T-adjoint-negative}
(T_t)^*=T_{-t} \ \text{ on $H^{-t}$}.
\end{equation}
Finally by \eqref{e-T-adjoint} and \eqref{e:T-adjoint-negative}, we get $\|T_{-t}\|_{-t,-t}=\|(T_t)^*\|_{-t,-t}=\|T_t\|_{t,t}$\/.
\par
(2) Since $[H^{s_1},H^{s_0}]_{\theta}=H^{(1-\theta)s_1+\theta s_0}$, for $0<\theta<1$,
by the interpolation property for $[H^{s_1},H^{s_0}]_{\theta}$ (cf.\ \eqref{e:interpolation}), we obtain, for $s_0<s<s_1$,
\begin{equation*}\label{interpolation theory}
\|T_s\|_{s,s}\ \leq\ (\|T_{s_1}\|_{s_1,s_1})^{\tfrac{s-s_0}{s_1-s_0}}\,(\|T_{s_0}\|_{s_0,s_0})^{\tfrac{s_1-s}{s_1-s_0}}.
\end{equation*} \qed
\end{proof}
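Both parts of Lemma \ref{l:s<d-half} can be tested on a finite-dimensional toy scale (an illustration only; the weights and the matrix $T$ below are ad-hoc choices): take $H^s:=(\RR^3,\|\Lambda^s\cdot\|)$ with $\Lambda=\operatorname{diag}\bigl((1+k^2)^{1/2}\bigr)$, so that a symmetric matrix $T$ is self-adjoint on $H^0$ and its $H^s$-operator norm is $\|\Lambda^s T\Lambda^{-s}\|_2$; one can then verify numerically the duality $\|T_{-t}\|=\|T_t\|$ and the interpolation inequality.

```python
# Toy Hilbert scale H^s = (R^3, ||Lam^s x||) with Lam = diag((1+k^2)^{1/2}):
# an ad-hoc finite-dimensional stand-in for the Sobolev scale.
import numpy as np

lam = np.array([(1.0 + k*k) ** 0.5 for k in range(3)])
T = np.array([[0.0, 1.0, 0.0],           # symmetric => self-adjoint on H^0
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

def norm_s(s):
    """Operator norm of T: H^s -> H^s, i.e. ||Lam^s T Lam^{-s}||_2."""
    return np.linalg.norm(np.diag(lam**s) @ T @ np.diag(lam**(-s)), 2)
```

Part (1) holds here because $\Lambda^{-t}T\Lambda^{t}=(\Lambda^{t}T\Lambda^{-t})^{\mathsf T}$ for symmetric $T$ and diagonal $\Lambda$, and transposition preserves the spectral norm; part (2) is the log-convexity of $s\mapsto\|\Lambda^s T\Lambda^{-s}\|_2$.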
\begin{theorem}\label{t:s<d-half}
Let $B$ be a topological space and $T_b\in \operatorname{Op}^0((H^s)_{s\in\RR})$ for all $b\in B$. Assume that the extended bounded linear maps $T_{b,0} \colon H^0\to H^0$ are self-adjoint for all $b\in B$.
If $\bigl(T_{b,t}\in \Bb(H^t)\bigr)_{b\in B}$ is continuous on $B$ in the operator norm for some $t\in \RR_+$, then $\bigl(T_{b,s}\in \Bb(H^s)\bigr)_{b\in B}$ is continuous on $B$ in the operator norm for all $s\in [-t,t]$.
\end{theorem}
\begin{proof}
For any $b_1,b_2\in B$, the linear map $T_{b_1}-T_{b_2}\in \operatorname{Op}^0((H^s)_{s\in\RR})$.
According to Lemma \ref{l:s<d-half}, we have
\[
\|T_{b_1,-t}-T_{b_2,-t}\|_{-t,-t}=\|T_{b_1,t}-T_{b_2,t}\|_{t,t}\/,
\]
and
\[
\|T_{b_1,s}-T_{b_2,s}\|_{s,s}\leq\ (\|T_{b_1,s_1}-T_{b_2,s_1}\|_{s_1,s_1})^{\tfrac{s-s_0}{s_1-s_0}}\,
(\|T_{b_1,s_0}-T_{b_2,s_0}\|_{s_0,s_0})^{\tfrac{s_1-s}{s_1-s_0}}\/,
\]
where $s_0\leq s\leq s_1$; here we apply this with $s_0:=-t$ and $s_1:=t$. So for any $s\in[-t,t]$, the continuity
of $\bigl(T_{b,s}\in \Bb(H^s)\bigr)_{b\in B}$ on $B$ in operator norm follows.\qed
\end{proof}
For the chain of Sobolev spaces over our closed manifold $\Sigma$ and $s_0 < s_1$ we set $\EE_0\ :=\ H^{s_0}(\Sigma;E'^d)$ and $\EE_1\ :=\ H^{s_1}(\Sigma;E'^d)$.
We exploit that the Sobolev spaces are Hilbert (or Hilbertable) spaces and admit a densely defined self-adjoint positive operator $\Lambda$ in $\EE_0$\/ with domain $\mathcal{D}(\Lambda)=\EE_1$.
\begin{proposition}[Interpolation between Sobolev spaces]\label{p:interpolation}
For each $s\in ]s_0,s_1[$ the Sobolev space $H^{s}(\Sigma;E'^d)$ is an interpolation space between
$\EE_1:=H^{s_1}(\Sigma;E'^d)$ and $\EE_0:=H^{s_0}(\Sigma;E'^d)$ of exponent
\[
\theta(s)\ =\ \frac{s_1-s}{s_1-s_0}\,.
\]
More precisely, we have for all $\theta\in ]0,1[$ and corresponding $s=(1-\theta)s_1+\theta s_0$\/:
\begin{description}
\item [(1) Identifying Sobolev spaces with interpolation spaces,]\cite[Definition 1.2.1 and Section 1.7.1]{LM72}:
$H^{s}(\Sigma;E'^d)=\mathcal{D}(\Lambda^{1-\theta})=[\EE_1,\EE_0]_{\theta}$ with equivalent norms. The norm on $[\EE_1,\EE_0]_{\theta}$ is equivalent to the graph norm of $\Lambda^{1-\theta}$, i.e., $\bigl(\norm{u}_{\EE_0}^2 + \norm{\Lambda^{1-\theta} u}_{\EE_0}^2\bigr)^{1/2}$\/.
\item [(2) Interpolation property of (Sobolev) norms,]\cite[Proposition 1.2.3]{LM72}: There exists a constant $c$ such that
$\norm{u}_{[\EE_1,\EE_0]_{\theta}}\
\le\ c\, \norm{u}_{\EE_1}^{1-\theta}\, \norm{u}_{\EE_0}^{\theta}$\ for all $u\in \EE_1$\/.
\end{description}
\end{proposition}
\begin{remark}
For our Hilbert spaces we have $[\EE_1,\EE_0]_{\theta}=\mathcal{D}(\Lambda^{1-\theta})$. The proof can be found in \cite[Theorem 1.14.1]{LM72} or \cite[Section 4.2]{Taylor96}. Then statements (1), (2) are immediate from the definition of the Sobolev spaces; for (2) see also \cite[Theorem 7.22]{Grubb:2009} with \textsc{Grubb}'s four-line proof in the Euclidean case based on the H\"older Inequality.
\end{remark}
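In a Fourier-type picture, the interpolation property (2) of Proposition \ref{p:interpolation} reduces to the H\"older inequality, as in \textsc{Grubb}'s proof cited above: with weights $w_k$ and $\norm{u}_s^2=\sum_k w_k^{2s}|u_k|^2$, H\"older with exponents $1/(1-\theta)$ and $1/\theta$ gives the inequality with $c=1$. The following toy check is illustrative only (the weights $w_k=(1+k^2)^{1/2}$ and the sequence $u$ are ad-hoc stand-ins for Sobolev norms computed in an eigenbasis):

```python
# Toy check of ||u||_s <= ||u||_{s1}^{1-theta} ||u||_{s0}^{theta} (with c = 1)
# for weighted l^2 norms; weights and data are ad-hoc model choices.
import math

def sobolev_norm(u, s, w):
    """||u||_s = (sum_k w_k^{2s} |u_k|^2)^{1/2}."""
    return math.sqrt(sum(wk**(2.0*s) * uk*uk for wk, uk in zip(w, u)))

w = [(1.0 + k*k) ** 0.5 for k in range(6)]     # model weights (1+k^2)^{1/2}
u = [1.0, -0.5, 0.3, 0.7, -0.2, 0.1]
s0, s1, theta = 0.0, 2.0, 0.25
s = (1.0 - theta)*s1 + theta*s0                # intermediate index s = 1.5
```

The point is that $w_k^{2s}|u_k|^2=(w_k^{2s_1}|u_k|^2)^{1-\theta}(w_k^{2s_0}|u_k|^2)^{\theta}$, so summing and applying H\"older yields the claim term by term.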
According to Proposition \ref{p:interpolation} and the facts about the chain of Sobolev spaces over a closed manifold (cf. Section \ref{ss:sobolev}), the family $\bigl( H^s(\Sigma;E'^d)\bigr)_{s\in\RR}$ satisfies
Definition \ref{d:sobolev scale}.
\begin{proof}[of Theorem \ref{t:main} for $s<\frac{d}{2}$]
We set $H^s=H^s(\Sigma;E'^d)$, $s\in\RR$, and $T_b=C^{\ort}(A_b)$\/, $b\in B$ in Theorem \ref{t:s<d-half}. By the continuity results for $s\geq \frac{d}{2}$ in Section \ref{ss:proof-for-s-ge-d-half}, we obtain our Main Theorem.\qed
\end{proof}
\section*{Appendix: Weaker conditions than Assumption (ii)}
In this Appendix, we show that Assumption (ii) in Theorem \ref{t:main} can be weakened slightly by refining the analysis above.
First, we give an example concerning special perturbations of a formally self-adjoint elliptic operator:
\begin{theorem}\label{c:appendix--A-b}
Let $A\colon \Ci(\mathscr{M};E)\to\Ci(\mathscr{M};E)$ be a formally self-adjoint elliptic operator of order $d$, i.e., $A=A^t$.
Denote by $I \colon E\to E$ the identity bundle map. Then for any $s\in \RR$, the family of $L^2$-orthogonalized Calder{\'o}n projections $\bigl(C^{\ort}_s(A-bI)\bigr)_{b\in \CC}$ is continuous at $b=0$ in the operator norm of the corresponding Sobolev space $H^s(\Si;E'^d)$.
\end{theorem}
\begin{proof}
According to Theorem \ref{t:s<d-half} and Proposition \ref{p:interpolation}, we only need to consider the case
$s\geq \frac{d}{2}$.
Then by Corollary \ref{c:l2-orthogonality-of-calderon-in-seeley}, Lemmas \ref{l:generalized-projection} and \ref{l:projector-varying1}, we only need to prove for $s\geq \frac{d}{2}$
\begin{equation}\label{e:convergence-ker+rho}
\lim_{b\to0}\delta\bigl(\ker (A_{s+\frac{d}{2}}-bI)+\ker \wt\rho^d,\ker A_{s+\frac{d}{2}}+\ker \wt\rho^d\bigr)=0,
\end{equation}
where $\ker (A_{s+\frac{d}{2}}-bI)+\ker \wt\rho^d=(\wt \rho^d)^{-1}\bigl(\image C_s(A-bI)\bigr)$, $\ker (A_{s+\frac{d}{2}}-bI)$ and $\ker \wt\rho^d$ are all closed subspaces of $H^{s+\frac{d}{2}}(\mathscr{M};E)$.
Let $s\geq \frac{d}{2}$\/.
Since $\ker A_{s+\frac{d}{2}}\times \{0\}=\Graph(A_{s+\frac{d}{2}})\cap (H^{s+\frac{d}{2}}(\mathscr{M};E)\times \{0\})$ and $\Graph(A_{s+\frac{d}{2}})+H^{s+\frac{d}{2}}(\mathscr{M};E)\times\{0\}=H^{s+\frac{d}{2}}(\mathscr{M};E)\times \image A_{s+\frac{d}{2}}$ is closed,
by \cite[Proposition A.3.5a]{BoZh14},
we have
\[\delta\bigl(\ker (A_{s+\frac{d}{2}}-bI),\ker A_{s+\frac{d}{2}}\bigr)\leq \frac{2\delta\bigl(\Graph(A_{s+\frac{d}{2}}-bI),\Graph(A_{s+\frac{d}{2}})\bigr)}{\gamma(\Graph(A_{s+\frac{d}{2}}),H^{s+\frac{d}{2}}(\mathscr{M};E) \times \{0\})}.\]
So we get
\begin{equation}\label{e:convergence-kerA-bI}
\lim_{b\to0}\delta\bigl(\ker (A_{s+\frac{d}{2}}-bI),\ker A_{s+\frac{d}{2}}\bigr)=0.
\end{equation}
Again by \cite[Proposition A.3.5a]{BoZh14}, we have
\begin{equation}\label{e:convergence-ker-cap-ran}
\lim_{b\to0}\delta\bigl(\ker (A_{s+\frac{d}{2}}-bI)\cap \image A_{s+\frac{3d}{2}},\ker A_{s+\frac{d}{2}}\cap \image A_{s+\frac{3d}{2}}\bigr)=0.
\end{equation}
Since $C:=C^{\ort}(A)$ is a well-posed boundary condition, by Lemma \ref{l:strongdecomposition},
we have $\image A_{s+\frac{d}{2}}=\image A_{{s+ \frac{d}{2}},C}$ and then for $b\neq 0$
\begin{align*}
\ker (A_{s+\frac{d}{2}}-bI)\subseteq \image A_{s+ \frac{d}{2}}\cap H^{s+\frac{d}{2}}(\mathscr{M};E) &=\image A_{s+ \frac{d}{2},C}\cap H^{s+\frac{d}{2}}(\mathscr{M};E) \\
&=\image A_{s+ \frac{3d}{2},C}= \image A_{s+\frac{3d}{2}}.
\end{align*}
So for $b\neq 0$,
\begin{equation}\label{e:kerA-bI-bnot0-imageA}
\ker (A_{s+\frac{d}{2}}-bI)\subseteq \ker (A_{s+\frac{d}{2}}-bI)\cap\image A_{{s+ \frac{3d}{2}}}.
\end{equation}
By Lemma \ref{l:strongdecomposition}, we also have
\begin{equation}\label{e:imageA-cap-kerAmin-0}
\ker A_{s+\frac{d}{2}}\cap\image A_{s+ \frac{3d}{2}}\cap \ker \wt\rho^d=\{0\}.
\end{equation}
By \cite[Lemma 1.4]{Ne68}, (\ref{e:kerA-bI-bnot0-imageA}), (\ref{e:convergence-ker-cap-ran}) and (\ref{e:imageA-cap-kerAmin-0}),
we have
\[\delta(\ker (A_{s+\frac{d}{2}}-bI)+\ker \wt\rho^d,\ker A_{s+\frac{d}{2}}+\ker \wt\rho^d)
\leq \frac{\delta(\ker (A_{s+\frac{d}{2}}-bI),\ker A_{s+\frac{d}{2}})}{\gamma(\ker (A_{s+\frac{d}{2}}-bI),\ker \wt\rho^d)},\]
and
\[\liminf_{0\neq b\to0}\gamma(\ker (A_{s+\frac{d}{2}}-bI),\ker \wt\rho^d)\geq \gamma(\ker A_{s+\frac{d}{2}}\cap\image A_{{s+ \frac{3d}{2}}},\ker \wt \rho^d)>0.\]
So together with (\ref{e:convergence-kerA-bI}), we get \eqref{e:convergence-ker+rho}.
\qed
\end{proof}
In general, we will show that Assumption (ii) in Theorem \ref{t:main}
can be weakened to Assumption (ii') in the following Theorem \ref{t:weaker(ii)}.
Let $B,\mathscr{M},\Sigma,E,F,d$, $(A_b)_{b\in B}$ be given as in Notation \ref{n:basic-notations}.
For $s\geq \frac{d}{2}$, fix $b_0\in B$ and let
\begin{align*}
A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min})&=\{u\in H^{s+\frac{d}{2}}(\mathscr{M};E)\mid A_{b,s+\frac{d}{2}}u\in \ker A^t_{b_0,\min}\}, \\
(A^t)^{-1}_{b,s+\frac{d}{2}}(\ker A_{b_0,\min})&=\{u\in H^{s+\frac{d}{2}}(\mathscr{M};F)\mid A^t_{b,s+\frac{d}{2}}u\in \ker A_{b_0,\min}\}.
\end{align*}
Clearly,
\[\ker A_{b,s+\frac{d}{2}}\subset A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min}),\
\ker A^t_{b,s+\frac{d}{2}}\subset (A^t)^{-1}_{b,s+\frac{d}{2}}(\ker A_{b_0,\min}).\]
Without Assumption (ii), we still have
\begin{lemma}\label{l:gap-right-A'inverse-Amin}
Let $s \geq \frac{d}{2}$\/. Assumption \ref{a:continuous-family-for-s-ge-dhalf}, i.e., that the family $\bigl(A_{b,s+\frac{d}{2}}\bigr)_{b\in B}$ is continuous in the operator norm, implies
\begin{equation}\label{e:gap-right-A'inverse-Amin}
\hat{\delta}(\ker A_{b_0,s+\frac{d}{2}}, A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min}))\to 0,
\ \ \ \text{as $b\to b_0$.}
\end{equation}
\end{lemma}
\begin{proof}
Assumption \ref{a:continuous-family-for-s-ge-dhalf} implies that the graphs $\bigl(\Graph(A_{b,s+\frac{d}{2}})\bigr)_{b\in B}$ form a continuous family of closed linear subspaces of $H^{s+\tfrac d2}(\mathscr{M};E) \x H^{s-\tfrac d2}(\mathscr{M};F)$.
For $s\ge \frac{d}{2}$ and $b\in B$,
\begin{align*}
&\ \{(u,A_{b,s+\frac{d}{2}}u)\in H^{s+\frac{d}{2}}(\mathscr{M};E)\times \ker A^t_{b_0,\min}\} \\
= &\ \Graph(A_{b,s+\frac{d}{2}})\cap (H^{s+\tfrac d2}(\mathscr{M};E)\times \ker A^t_{b_0,\min}).
\end{align*}
By Lemma \ref{l:strongdecomposition},
$\Graph(A_{b_0,s+\frac{d}{2}})\cap (H^{s+\tfrac d2}(\mathscr{M};E)\times \ker A^t_{b_0,\min})=\ker A_{b_0,s+\frac{d}{2}}\times \{0\}$.
Since the following proof holds for any $s\ge \frac{d}{2}$,
we fix an $s\ge \frac{d}{2}$ and write shorthand $A_b:=A_{b,s+\frac{d}{2}}$,
$X:=H^{s+\frac{d}{2}}(\mathscr{M};E)$, $Y:=H^{s-\frac{d}{2}}(\mathscr{M};F)$.
First, we prove that Assumption \ref{a:continuous-family-for-s-ge-dhalf} implies
\begin{equation}\label{e:gap-cap-graphA+X-kerAtmin}
\lim_{b\to b_0}\hat{\delta}(\Graph(A_b)\cap (X\times \ker A^t_{b_0,\min}),\Graph(A_{b_0})\cap (X\times \ker A^t_{b_0,\min}))= 0.
\end{equation}
By Lemma \ref{l:closed-continuous}a, we just need to prove that Assumption \ref{a:continuous-family-for-s-ge-dhalf} implies
\begin{equation}\label{e:gap-graphA+X-kerAtmin}
\lim_{b\to b_0}\hat{\delta}(\Graph(A_b)+ (X\times \ker A^t_{b_0,\min}),\Graph(A_{b_0})+ (X\times \ker A^t_{b_0,\min}))= 0.
\end{equation}
In fact, for any $b\in B$,
$\Graph(A_b)+X\times \ker A^t_{b_0,\min}=X\times (\image A_b+\ker A^t_{b_0,\min})$.
So by Lemma \ref{l:strongdecomposition},
$\Graph(A_{b_0})+X\times \ker A^t_{b_0,\min}=X\times Y$.
On one hand, the closed subspace $\Graph(A_b)+X\times \ker A^t_{b_0,\min}\subset X\times Y$.
On the other hand, by \cite[Lemma 1.4]{Ne68},
\begin{align*}
&\ \delta(\Graph(A_{b_0})+X\times \ker A^t_{b_0,\min},\Graph(A_b)+X\times \ker A^t_{b_0,\min})\\
\leq &\ \frac{\delta(\Graph(A_{b_0}),\Graph(A_b))}{\gamma(\Graph(A_{b_0}),X\times\ker A^t_{b_0,\min})}.
\end{align*}
So we get \eqref{e:gap-graphA+X-kerAtmin}. Thus \eqref{e:gap-cap-graphA+X-kerAtmin} holds.
Then, by the definition of the gap, Assumption \ref{a:continuous-family-for-s-ge-dhalf}
and \eqref{e:gap-cap-graphA+X-kerAtmin} imply \eqref{e:gap-right-A'inverse-Amin}.
In fact, on one hand,
\[ \delta(\ker A_{b_0},A^{-1}_b(\ker A^t_{b_0,\min}))
\leq \delta(\Graph(A_{b_0})\cap (X\times \{0\}),\Graph(A_b)\cap (X\times \ker A^t_{b_0,\min}));
\]
on the other hand,
\begin{eqnarray*}
&&\delta( A^{-1}_b(\ker A^t_{b_0,\min}),\ker A_{b_0})\\
&\leq & \sqrt{\|A_b\|^2+1} \cdot\delta(\Graph(A_b)\cap (X\times \ker A^t_{b_0,\min}),\Graph(A_{b_0})\cap (X\times \ker A^t_{b_0,\min})).
\end{eqnarray*}
\qed
\end{proof}
We also have the following lemma analogous to Lemma \ref{l:s-Amin-A-semifredholm}:
\begin{lemma}\label{l:s-Amin-A-inverse-Atmin}
For $s\geq \frac{d}{2}$, \[A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d=
A^{-1}_{b,d}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d\] is finite-dimensional and consists of smooth sections.
\end{lemma}
\begin{proof}
Since $\ker A_{b,\min}=\ker A_{b,s+\frac{d}{2}}\cap \ker \wt \rho^d $ and $\ker A^t_{b_0,\min}$ are both finite-dimensional, $A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d$ is finite-dimensional for $s\geq \frac{d}{2}$.
Since $\ker A^t_{b_0,\min}\subset C^{\infty}(\mathscr{M};F)$, by the interior regularity for elliptic operators, we have
\[A^{-1}_{b,d}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d\subset \{u\in C^{\infty}(\mathscr{M};E)\mid A_bu\in \ker A^t_{b_0,\min} \tand \wt \rho^d u=0\}.\]
Obviously, we have for $s\geq \frac{d}{2}$,
\begin{align*}
&\ \{u\in C^{\infty}(\mathscr{M};E)\mid A_bu\in \ker A^t_{b_0,\min} \tand \wt \rho^d u=0\}\\
\subset\ &\ A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d \\
\subset\ &\ A^{-1}_{b,d}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d
\end{align*}
So we get the equality.
\qed
\end{proof}
Moreover, we have
\begin{lemma}\label{l:dimension-Ainverse-Atmin-cap-rho}
Let $s\geq \frac{d}{2}$. Then (1) $\ker A_{b,\min}\subset A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d$;
(2) Assumption \ref{a:continuous-family-for-s-ge-dhalf} implies that
$\dim (A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d)\leq \dim Z_{+,0}(A_{b_0})$ for $b$ in a sufficiently small neighbourhood of $b_0$ in $B$.
\end{lemma}
\begin{proof}
(1) is obvious.
(2) follows from Lemma \ref{l:gap-right-A'inverse-Amin}, \cite[Proposition A.3.5a]{BoZh14} and \cite[Corollary IV.2.6]{Ka95}.
In fact, $Z_{+,0}(A_{b_0})=\ker A_{b_0,\min}=\ker A_{b_0,s+\frac{d}{2}}\cap \ker \wt \rho^d$ and
$\lim_{b\to b_0}\delta\bigl(A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d,\ker A_{b_0,s+\frac{d}{2}}\cap \ker \wt \rho^d\bigr)=0$.
\qed
\end{proof}
Now, we can prove
\begin{theorem}\label{t:weaker(ii)}
Assume that
\begin{enumerate}[(i)]
\item for $s\ge \frac{d}{2}$\/, the two families of bounded extensions
\[
\bigl(A_{b, s+\frac{d}{2}}\colon
H^{s+\frac{d}{2}}(\mathscr{M};E) \too H^{s-\frac{d}{2}}(\mathscr{M};F)\bigr)_{b\in B}
\]
and
\[\bigl(A_{b, s+\frac{d}{2}}^t\colon H^{s+\frac{d}{2}}(\mathscr{M};F) \too H^{s-\frac{d}{2}}(\mathscr{M};E)\bigr)_{b\in B}
\]
are continuous in the respective operator norms $\norm{\cdot}_{s+\frac{d}{2},s-\frac{d}{2}}$\/, and that the family of adjusted Green's forms (of Equation \eqref{e:J-adjusted}) $\bigl(\tilde J^t_{b,s} \colon H^s(\Si;F'^d) \to H^s(\Si;E'^d)\bigr)_{b\in B}$
is continuous in the operator norm $\norm{\cdot}_{s,s}$\/;
\item [(ii')] $\dim (A^{-1}_{b,d}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d)=\dim Z_{+,0}(A_{b_0})$ and \\
$\dim ((A^t)^{-1}_{b,d}(\ker A_{b_0,\min})\cap \ker \wt \rho^d)=\dim Z_{-,0}(A_{b_0})$ hold for $b$ in a neighbourhood of $b_0$ in $B$.
\end{enumerate}
Then for any $s\in \RR$, the family of $L^2$-orthogonalized Calder{\'o}n projections $\bigl(C^{\ort}_s(A_b)\bigr)_{b\in B}$ is continuous at $b_0$ in the operator norm of the corresponding Sobolev space $H^s(\Si;E'^d)$.
\end{theorem}
\begin{proof}
According to Theorem \ref{t:s<d-half} and Proposition \ref{p:interpolation}, we only need to prove the case
$s\geq \frac{d}{2}$.
Let $s\geq \frac{d}{2}$ in the following.
By Lemmas \ref{l:gap-right-A'inverse-Amin}, \ref{l:s-Amin-A-inverse-Atmin} and \ref{l:closed-continuous}b,
Assumption (i) and
$\dim (A^{-1}_{b,d}(\ker A^t_{b_0,\min})\cap \ker \wt \rho^d)=\dim Z_{+,0}(A_{b_0})$ imply
\begin{equation}\label{e:convergence-Ainverse+kerrho}
\lim_{b\to b_0}\delta(A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min})+\ker \wt \rho^d, \ker A_{b_0,s+\frac{d}{2}}+\ker \wt \rho^d )=0.
\end{equation}
Since $\ker A_{b,s+\frac{d}{2}}\subset A^{-1}_{b,s+\frac{d}{2}}(\ker A^t_{b_0,\min})$, \eqref{e:convergence-Ainverse+kerrho} implies
\[\lim_{b\to b_0}\delta(\ker A_{b,s+\frac{d}{2}}+\ker \wt \rho^d,\ker A_{b_0,s+\frac{d}{2}}+\ker \wt \rho^d )=0.\]
Similarly, (i) and $\dim ((A^t)^{-1}_{b,d}(\ker A_{b_0,\min})\cap \ker \wt \rho^d)=\dim Z_{-,0}(A_{b_0})$ imply
\[\lim_{b\to b_0}\delta(\ker A^t_{b,s+\frac{d}{2}}+\ker \wt \rho^d,\ker A^t_{b_0,s+\frac{d}{2}}+\ker \wt \rho^d )=0.\]
According to Corollary \ref{c:l2-orthogonality-of-calderon-in-seeley}, we have
\[\image C_s^{\ort}(A_b)=\wt \rho^d(\ker A_{b,s+\frac{d}{2}})\ \ \tand \ \ \ker C_s^{\ort}(A_b)= \wt J^t\wt\rho^d(\ker A^t_{b,s+\frac{d}{2}}).\]
Then applying Lemmas \ref{l:generalized-projection} and \ref{l:projector-varying1}, we get the continuity of $L^2$-orthogonalized Calder{\'o}n projections in the operator norm of the corresponding Sobolev space $H^s(\Si;E'^d)$ for $s\geq \frac{d}{2}$, then by the discussion above we get the same conclusion for all $s\in \RR$.
\qed
\end{proof}
\begin{remark}
(a) Theorem \ref{c:appendix--A-b} can be seen as a direct corollary of Theorem \ref{t:weaker(ii)}.
For $A=A^t$ and $b\in\CC$, $(A-bI)^{-1}(\ker A_{\min})=\ker (A-bI)+\ker A_{\min}$.
So for small $b\neq 0$, we have $\dim((A-bI)^{-1}(\ker A_{\min})\cap\ker \wt\rho^d)=\dim \ker A_{\min}$ and $Z_{+,0}(A-bI)=\{0\}$.
(b) By Lemma \ref{l:dimension-Ainverse-Atmin-cap-rho},
Assumption (ii) in Theorem \ref{t:main} implies Assumption (ii') in Theorem \ref{t:weaker(ii)}.
\end{remark}
\section{Introduction}
Satellite communication systems have been greatly developing in the domain of broadcasting, navigation, rescue, and disaster relief because of their potentiality to provide wide coverage and achieve high data rate transmission \cite{7230282}.
For most cases in previous generations, satellite systems were considered completely independent from terrestrial communication \cite{8795462}. However, a potential shortcoming is that the satellite link degrades in the presence of shadowing, which occurs when the line-of-sight (LOS) link between the satellite and the terrestrial user is blocked by obstacles \cite{8081808,caixuesong2}.
The high-speed railway (HSR) is one of the most challenging scenarios in the fifth-generation mobile communication system (5G) \cite{Liuyu1}, whose demands for high data rate transmission and high reliability services have grown rapidly \cite{Liuyu2}.
On the one hand, the terrestrial cellular network can provide low cost coverage for high reliability applications in HSR environment through its non-LOS (NLOS) communication.
On the other hand, the satellite system can mitigate the problems of overload and congestion by providing wide coverage to complement and extend the dense terrestrial cells, especially for the terrestrial wireless communication in HSR areas.
Thus, a hybrid/integrated satellite-terrestrial cooperative communication system can realize genuinely ubiquitous coverage for future HSR communications \cite{2011MIMO,7105655}.
In this case, terrestrial mobile users can make full use of the spatial diversity gain by receiving independent multipath fading signals from satellites and terrestrial base stations. Consequently, the effectiveness and reliability of transmission will be greatly improved \cite{2011111}.
In order to realize this vision, the integrated satellite-terrestrial communication systems should be carefully designed, especially by analyzing the interference between the satellite link and the terrestrial link based on realistic channel characteristics \cite{fanwei2,fanwei3}.
Hence, it is essential to capture joint channel characteristics in cooperative satellite-terrestrial systems in order to make realistic performance assessments and channel modeling for future intelligent rail transportation \cite{ZhouTao1,ZhouTao3}.
Most of the existing performance analysis for satellite and terrestrial communication systems were studied based on pure mathematical models \cite{7343438,7373246,7308010,7156170,ZTE}.
However, the existing results can hardly be applied for the satellite-terrestrial channel characterization for HSR due to the particular geometrical and physical characteristics of the HSR environment.
Therefore, there is a lack of deep investigation for satellite-terrestrial channel based on a typical HSR scenario, which causes limited accuracy on coverage prediction and interference analysis.
Thus, in this study, we characterize the satellite-terrestrial channel at 22.6 GHz band comprehensively through simulation and modeling, with the following contributions:
\begin{itemize}
\item We reconstruct a typical HSR model and conduct extensive ray-tracing (RT) simulations in four terrestrial and satellite-terrestrial communication links with two weather conditions. Based on RT simulation results, the four links are characterized in terms of key channel parameters, including root-mean-square (RMS) delay spread (DS), Rician $K$-factor (KF), azimuth angular spread of arrival/departure (ASA/ASD), and elevation angular spread of arrival/departure (ESA/ESD).
\item We predict the propagation behavior of all the objects in the simulation scenario, as well as the interaction between them. Besides, through calculating the signal-to-interference ratio (SIR), the interference between terrestrial HSR system and satellite-terrestrial system is evaluated.
\end{itemize}
The rest of this paper is organized as follows: Section II addresses the scenario modeling and the simulation setup. Section III describes the excess propagation attenuation. Key channel parameters are analyzed and characterized in Section IV. Finally, conclusions and further work are drawn in Section V.
\section{Railway Environment Reconstruction and Simulation Setup}
\subsection{Antenna Model}
For the satellite-terrestrial system, the satellite transmitter (Tx) employs the antenna called APSMLA609V01 which is provided by ITU, and the receiver (Rx) antenna is selected according to ITU-R S.465-6 \cite{465-6}. The antenna patterns of the satellite-terrestrial system are depicted in Fig. \ref{fig:Antenna}(a) and Fig. \ref{fig:Antenna}(b).
As for the terrestrial HSR system, both the Tx and Rx employ the same antenna. The antenna pattern is depicted in Fig. \ref{fig:Antenna}(c).
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/Antenna.pdf}\\
\caption{Antenna patterns used in ray-tracing simulations: a) Satellite antenna pattern; b) Satellite UE antenna pattern; and c) Terrestrial antenna pattern.}
\label{fig:Antenna}
\end{figure}
\subsection{Simulation Scenario}
For characterizing the channel in an HSR environment, it is important to define a scenario model with distinctive propagation features. In this paper, the three-dimensional (3D) model of a typical HSR scenario, reconstructed via the SketchUp tool, is depicted in Fig. \ref{fig:3D model} \cite{8319730}.
Several buildings, traffic signs, billboards, crossing bridges, train stations, etc., are defined in the vicinity of the railway, composing a realistic HSR scenario.
All the objects are modeled according to their typical geometric shapes, and different materials are assigned to their surfaces.
Therefore, the electromagnetic behavior of each object could be effectively established, and the realistic description of the HSR environment allows the analysis of key channel parameters.
\begin{figure}[!t]
\center
\includegraphics[width=0.7\columnwidth,draft=false]{figure//scenario.pdf}\\
\caption{3D model of the railway scenario for ray-tracing simulation}
\label{fig:3D model}
\end{figure}
The train travels at a constant speed of 300 km/h during the 500 m movement in the scenario. 1441 samples are extracted in the simulation, corresponding to a sampling distance of 0.347 m.
For satellite-terrestrial links, the Tx is located on a geostationary satellite (GEO) called Koreasat 6, which is positioned at 116$^{\circ}$E above the equator, at a distance of approximately 37470 km from the target HSR scenario. The Rx is mounted at the rear of the train with a total height of 5.2 m, which includes the train height (4.5 m) and the antenna bracket.
The presence of terrestrial links will affect the SIR of satellite-terrestrial links, and vice versa. Hence, for the SIR analysis between the terrestrial HSR system and satellite-terrestrial system, an additional communication link is included in the simulation. The terrestrial Tx is placed at the top of the steep wall with a height of 26 m, and the Rx is similarly assembled to the rear of the train with a total height of 4.7 m. Both the terrestrial and satellite-terrestrial links are depicted in Fig. \ref{fig:Communication links}.
The abbreviations in this figure are noted as follows: BS and TrUE are short for base station (i.e. Tx) and train user equipment (i.e. Rx) for the terrestrial HSR system, respectively. SA and SaUE are short for satellite antenna (i.e. Tx) and satellite user equipment (i.e. Rx) for the satellite-terrestrial system, respectively.
\begin{figure}[!t]
\center
\includegraphics[width=0.65\columnwidth,draft=false]{figure/InterferenceScenario.pdf}\\
\caption{Communication links for interference analysis}
\label{fig:Communication links}
\end{figure}
Table \ref{Table:scenario configuration} summarizes the scenario configurations for the terrestrial HSR system and the satellite-terrestrial system. The communication scenario is the same for both systems; the main differences are the location of the Tx and the antennas selected for the Tx and Rx.
\begin{table}[!t]
\centering
\caption{Scenario configurations for terrestrial HSR system and satellite-terrestrial system}
\label{Table:scenario configuration}
\begin{tabular}{c|c|l|l}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
Frequency & \multicolumn{3}{l}{22.1-23.1 GHz} \\ \hline\rule{0pt}{8pt}
Bandwidth & \multicolumn{3}{l}{1 GHz} \\ \hline\rule{0pt}{8pt}
Antenna & \multicolumn{3}{l}{Directional antenna} \\ \hline\rule{0pt}{8pt}
\multirow{8}{*} {Terrestrial HSR system} & \multirow{4}{*}{Tx} & Power & 20 dBm \\ \cline{3-4}\rule{0pt}{8pt}
& & Maximum antenna gain & 16 dBi \\ \cline{3-4}\rule{0pt}{8pt}
& & Antenna beamwidth & 20 degree \\ \cline{3-4}\rule{0pt}{8pt}
& & Height & 26 m \\ \cline{2-4}\rule{0pt}{8pt}
& \multirow{3}{*}{Rx} & Maximum antenna gain & 22 dBi \\ \cline{3-4}\rule{0pt}{8pt}
& & Antenna beamwidth & 20 degree \\ \cline{3-4}\rule{0pt}{8pt}
& & Height & 4.7 m \\ \hline\rule{0pt}{8pt}
\multirow{8}{*}{Satellite-terrestrial system} & \multirow{4}{*}{Tx} & Power & 40.6 dBm \\ \cline{3-4}\rule{0pt}{8pt}
& & Maximum antenna gain & 53 dBi \\ \cline{3-4}\rule{0pt}{8pt}
& & Antenna beamwidth & 1 degree \\ \cline{3-4}\rule{0pt}{8pt}
& & Height & 37469.3 km \\ \cline{2-4}\rule{0pt}{8pt}
& \multirow{3}{*}{Rx} & Maximum antenna gain & 32 dBi \\ \cline{3-4}\rule{0pt}{8pt}
& & Antenna beamwidth & 3 degree \\ \cline{3-4}\rule{0pt}{8pt}
& & Height & 5.2 m \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table}
Furthermore, the rainfall can significantly affect the performance of the wireless communication system since it causes additional attenuation to wave propagation, especially for the satellite-terrestrial link \cite{618-13}. Therefore, both communication systems are characterized for rainy and sunny weather conditions. In total, the simulation contains 8 cases which are summarized in Table \ref{Table:Analysis cases}. Suffix ``-R'' and ``-S'' represent the rainy and sunny weather condition, respectively.
\begin{table}[!t]
\centering
\caption{Analysis cases for the satellite-terrestrial channel}
\label{Table:Analysis cases}
\small
\begin{tabular}{c|c|c|c|c|c}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\textbf{Tx} & \textbf{Rx} & \textbf{Weather} & \textbf{Signal} & \textbf{Interference} & \textbf{Terminology} \\ \hline\rule{0pt}{8pt}
\multirow{4}{*}{BS} & \multirow{2}{*}{TrUE} & Rainy & \checkmark & & BS2TrUE-R \\ \cline{3-6}\rule{0pt}{8pt}
& & Sunny & \checkmark & & BS2TrUE-S \\ \cline{2-6}\rule{0pt}{8pt}
& \multirow{2}{*}{SaUE} & Rainy & & \checkmark & BS2SaUE-R \\ \cline{3-6}\rule{0pt}{8pt}
& & Sunny & & \checkmark & BS2SaUE-S \\ \hline\rule{0pt}{8pt}
\multirow{4}{*}{SA} & \multirow{2}{*}{SaUE} & Rainy & \checkmark & & SA2SaUE-R \\ \cline{3-6}\rule{0pt}{8pt}
& & Sunny & \checkmark & & SA2SaUE-S \\ \cline{2-6}\rule{0pt}{8pt}
& \multirow{2}{*}{TrUE} & Rainy & & \checkmark & SA2TrUE-R \\ \cline{3-6}\rule{0pt}{8pt}
& & Sunny & & \checkmark & SA2TrUE-S \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table}
\subsection{Simulation Setup}
As a deterministic modeling method, RT simulations can provide full information of multipath effects in multiple domains and build accurate site-specific channel models.
It has been successfully used for different applications \cite{ZhouTao2,fanwei1,Guanke2,Guanke3,caixuesong1,caixuesong3,chenxiaoming}.
The RT simulator employed in this study, CloudRT, is jointly developed by Beijing Jiaotong University and Technische Universit{\"a}t Braunschweig.
It can trace rays corresponding to various propagation mechanisms, such as direct rays, reflected rays, scattered rays, etc., and is validated and calibrated by a large number of measurements at sub-6 GHz \cite{Abbas2015Simulation} and terahertz (THz) band \cite{Priebe2013Stochastic}.
More than ten properties of each ray can be output from RT results, such as reflection order, time of arrival, received power, AoA, AoD, EoA, EoD, etc.
More information on the CloudRT can be found in tutorial \cite{hedanping} and at http://www.raytracer.cloud/.
The setup for RT simulations is detailed in Table \ref{Table:SimulationSetUp}.
\begin{table}[!t]
\centering
\caption{Ray-tracing simulation setup}
\label{Table:SimulationSetUp}
\small
\begin{tabular}{c|l|l|l}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Propagation\\ mechanism\end{tabular}}
& Direct & \multicolumn{2}{l}{\checkmark}\\ \cline{2-4}\rule{0pt}{8pt}
& Reflection & \multicolumn{2}{l}{up to the 2$^{nd}$ order}\\ \cline{2-4}\rule{0pt}{8pt}
& Diffraction & \multicolumn{2}{l}{Uniform theory of diffraction (UTD)} \\ \cline{2-4}\rule{0pt}{8pt}
& Scattering & \multicolumn{2}{l}{Directive scattering model} \\ \cline{2-4}\rule{0pt}{8pt}
& Transmission & \multicolumn{2}{l}{\checkmark} \\ \hline\rule{0pt}{8pt}
\multirow{5}{*}{Material} & Building & \multicolumn{2}{l}{Marble, Toughened glass} \\ \cline{2-4}\rule{0pt}{8pt}
& Steep wall, Cutting walls & \multicolumn{2}{l}{Brick} \\ \cline{2-4}\rule{0pt}{8pt}
& Railway furniture, Train & \multicolumn{2}{l}{Metal} \\ \cline{2-4}\rule{0pt}{8pt}
& Tree & \multicolumn{2}{l}{Wood} \\ \cline{2-4}\rule{0pt}{8pt}
& Ground & \multicolumn{2}{l}{Concrete} \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table}
Table \ref{Table:EM parameter} summarizes the electromagnetic (EM) parameters of the involved materials, where $\varepsilon _{r}^{'}$ is the real part of the relative permittivity, $\tan\delta$ is the loss tangent, and $S$ and $\alpha$ are the scattering coefficient and scattering exponent of the directive scattering model \cite{Vittorio}\cite{wanglonghe2019hindawi}.
In particular, the parameters of wood and concrete are calibrated in \cite{Wang}.
\begin{table}[!t]
\centering
\caption{EM parameters of different materials}
\label{Table:EM parameter}
\small
\begin{tabular}{c|c|c|c|c|c|c}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
Material & Marble & Toughened glass & Brick & Metal & Wood & Concrete \\ \hline
$\varepsilon _{r}^{'}$ & 3.0045 & 1.0538 & 1.9155 & 1 & 6.6 & 5.4745 \\ \hline
$\tan\delta$ & 0.2828 & 23.9211 & 0.0568 & 10$^{7}$ & 0.9394 & 0.0021 \\ \hline
$S$ & 0.0022 & 0.0025 & 0.0019 & 0.0026 & 0.0086 & 0.0011 \\ \hline
$\alpha$ & 15.3747 & 5.5106 & 49.5724 & 17.7691 & 13.1404 & 109 \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table}
\section{Excess Propagation Attenuation}
Apart from the attenuation due to the classic propagation mechanisms already modeled in the CloudRT, the additional propagation attenuation caused by several effects is significant for millimeter wave (mmWave) communication links; it must therefore be considered and added into the CloudRT in this study.
\subsection{Excess Propagation Attenuation for Terrestrial Links}
For terrestrial links, the attenuation due to atmospheric gases and rain is considered to be of great influence \cite{530-17}.
The attenuation due to the absorption by oxygen and water vapour is always present, and should be included in the calculation of total propagation attenuation at frequencies above 10 GHz. The calculation method for the attenuation due to atmospheric gases is given in Recommendation ITU-R P.530-17 \cite{530-17}. Assuming a maximum link length of 0.6 km in the designed terrestrial HSR scenario, the maximum attenuation by atmospheric gases is around 0.12 dB.
Although the rain attenuation can be ignored at frequencies below 5 GHz, it must be included in attenuation calculations at higher frequencies, where its importance increases rapidly. Based on the technique for estimating long-term statistics of rain attenuation given in ITU-R P.530-17, the maximum value of the rain attenuation for terrestrial links should be no greater than 8.1074 dB.
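The long-term statistics in these ITU-R methods build on a power-law specific attenuation $\gamma_R = k R^{\alpha}$ (dB/km), scaled by the path length. A minimal sketch of this step is given below; the coefficients $k$ and $\alpha$ here are illustrative placeholders, not the tabulated ITU-R P.838 values for 22.6 GHz.

```python
def rain_attenuation_db(rate_mm_h, path_km, k, alpha):
    """Power-law rain attenuation: specific attenuation (dB/km) times path length.

    k and alpha are frequency- and polarization-dependent regression
    coefficients tabulated in ITU-R P.838; the values used below are
    illustrative placeholders only.
    """
    gamma = k * rate_mm_h ** alpha  # specific attenuation in dB/km
    return gamma * path_km

# Hypothetical coefficients and rain rate, for illustration only
a_rain = rain_attenuation_db(rate_mm_h=25.0, path_km=0.6, k=0.1, alpha=1.0)
```

With these placeholder values, a 25 mm/h rain rate over the 0.6 km link gives 1.5 dB; the actual long-term bound quoted above additionally accounts for the rainfall-rate statistics and an effective path-length factor.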
\subsection{Excess Propagation Attenuation for Satellite-Terrestrial Links}
The excess propagation attenuation for satellite-terrestrial links is the sum of several different elements. Generally, at elevation angles above $10^\circ$ (the elevation angle is $45^\circ$ in this paper), only the attenuation due to atmospheric gases, rain, clouds and possible scintillation is significant \cite{618-13}.
The gaseous attenuation which is entirely caused by absorption depends mainly on the frequency, elevation angle, altitude above sea level and water vapour density.
The details for calculating the gaseous attenuation are given in ITU-R P.676-11 \cite{676-11}.
For the scenario considered here, the typical value of the gaseous attenuation ($A_G$) is 0.7071 dB.
Due to the uncertainty of the occurrence time and region of rainfall, it is impossible to calculate the rain attenuation exactly. After decades of observation and research, the calculation of long-term rain attenuation statistics from the point rainfall rate has been summarized in ITU-R P.618-13 \cite{618-13}.
The typical value of attenuation by rain ($A_R$) is 30.0162 dB.
The attenuation due to clouds and fog has a great influence on the wave propagation of satellite communication links; it can be calculated from the total columnar content of cloud liquid water according to ITU-R P.840-7 \cite{840-7}.
The typical value of attenuation by clouds and fog ($A_C$) is 2.1677 dB.
The effect of tropospheric scintillation on the signal intensifies as the carrier frequency increases, especially above 10 GHz. A general technique for predicting the cumulative distribution of tropospheric scintillation is given in ITU-R P.618-13.
The typical value of attenuation by tropospheric scintillation ($A_S$) is 0.7638 dB.
\subsection{Total Excess Propagation Attenuation}
Since two weather conditions (rainy and sunny) are considered in the RT simulations, the total excess propagation attenuation is obtained for each case separately.
For terrestrial links, the total excess propagation attenuation is 8.2274 dB for the rainy day and 0.12 dB for the sunny day.
For satellite-terrestrial links, the total attenuation represents the combined effect of atmospheric gases, rain, clouds and tropospheric scintillation. A general method for calculating total attenuation ($A_T$) is given by \cite{618-13}
\begin{equation}
\begin{split}
A_{T-Rainy}&=A_G+\sqrt{(A_R+A_C)^2+A^2_S}=32.90\; {\rm dB}\\
A_{T-Sunny}&=A_G+\sqrt{A^2_C+A^2_S}=3.01\; {\rm dB}
\end{split}
\end{equation}
Accordingly, the typical value of total attenuation is 32.90 dB for the rainy day and 3.01 dB for the sunny day in satellite-terrestrial links.
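The combination formula above can be checked numerically from the component values quoted in this section; the sketch below simply evaluates the ITU-R P.618-13 rule.

```python
import math

def total_attenuation_db(a_g, a_c, a_s, a_r=0.0):
    """ITU-R P.618-13 combination of gaseous, rain, cloud and
    scintillation attenuation components (all in dB)."""
    return a_g + math.sqrt((a_r + a_c) ** 2 + a_s ** 2)

# Component values quoted in the text (dB)
A_G, A_R, A_C, A_S = 0.7071, 30.0162, 2.1677, 0.7638

a_rainy = total_attenuation_db(A_G, A_C, A_S, a_r=A_R)  # about 32.90 dB
a_sunny = total_attenuation_db(A_G, A_C, A_S)           # about 3.01 dB
```

Evaluating the rule with the quoted components reproduces the 32.90 dB (rainy) and 3.01 dB (sunny) totals stated above.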
\section{Channel Characterization and Key Parameter Analysis}
Based on extensive RT simulation results, the channel characteristics for the terrestrial HSR system and the satellite-terrestrial system with two weather conditions are given by the following related key parameters: received power, RMS delay spread (DS), Rician $K$-factor (KF), ASA, ASD, ESA and ESD.
Based on the simulation results, it is identified that the impact of rainfall on DS, KF, ASA, ASD, ESA and ESD in this scenario setup is negligible. Thus, the impact of the weather condition on the channel characterization will mainly be considered on the SIR analysis.
Unless otherwise specified, the data and results in the following subsections are obtained under the rainy weather condition.
\subsection{Received Power and Power Delay Profile}
The received power for both satellite-terrestrial and terrestrial links is depicted in Fig. \ref{fig:RP}, where the green solid lines represent the direct component (i.e. LOS path), the blue dotted lines depict the ensemble of multipath components (i.e. NLOS path), and the red solid lines indicate the total received power.
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/ReceivedPower.pdf}\\
\caption{Received power of four communication links: a) SA2SaUE; b) BS2TrUE; c) SA2TrUE; and d) BS2SaUE.}
\label{fig:RP}
\end{figure}
For all four links, deep fading evidently occurs in two consecutive sections, where the moving distance of the Rx is approximately 20-40 m and 60-90 m, respectively. The deep fading results from the obstruction of the direct path caused by crossing bridges over the railway.
As for satellite-terrestrial links (i.e. SA2SaUE and SA2TrUE), the received power is approximately a fixed value along most of the train displacement as depicted in Fig. \ref{fig:RP}(a) and Fig. \ref{fig:RP}(c).
This is because there exists a permanent direct path between the satellite and the train Rx antenna, and the impact of the multipath components on the received signal is extremely small due to the narrow antenna beamwidth used for satellite communications.
Moreover, compared with the TrUE, the much narrower antenna beamwidth of the SaUE causes the direct path to be obstructed by the pylons more often (see Fig. \ref{fig:Pylon}), leading to a series of deep fades at 150, 250, 350 and 450 m in Fig. \ref{fig:RP}(a) and Fig. \ref{fig:RP}(c).
\begin{figure}[!t]
\center
\includegraphics[width=0.6\columnwidth,draft=false]{figure/Pylon.pdf}\\
\caption{Pylons in the HSR environment}
\label{fig:Pylon}
\end{figure}
Furthermore, the received power of the terrestrial links (i.e., BS2TrUE and BS2SaUE) decreases as the train gradually moves away from the base station.
This is not only due to the increase of the free-space path loss with the propagation distance, but also because the direct path is not aligned with the main lobe of the Rx antenna in the elevation plane, which results in relatively low power of the direct path. This antenna misalignment is depicted in Fig. \ref{fig:MainLobe}.
Although the LOS component of the terrestrial links is also obstructed when the train runs under the crossing bridges and pylons, the effect is not as pronounced as in the satellite-terrestrial links, because of the lower incidence elevation angle and the rich multipath components contributed by the surrounding objects.
\begin{figure}[!t]
\center
\includegraphics[width=0.6\columnwidth,draft=false]{figure/MainLobe.pdf}\\
\caption{The antenna misalignment as the train moves}
\label{fig:MainLobe}
\end{figure}
Moreover, the power delay profiles (PDPs) for both satellite-terrestrial links and terrestrial links are depicted in Fig. \ref{fig:PDP}.
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/PDP.pdf}\\
\caption{PDPs of four communication links: a) SA2SaUE; b) BS2TrUE; c) SA2TrUE; and d) BS2SaUE.}
\label{fig:PDP}
\end{figure}
\subsection{RMS Delay Spread}
RMS delay spread is an important measure that quantifies the dispersion effect due to propagation in the time delay domain, to which the communication systems might be sensitive. RMS delay spread is defined as the square root of the second central moment of the power delay profile (PDP) \cite{Rappaport2002Wireless} as in:
\begin{linenomath*}
\begin{equation}\label{eq:DS}
{\sigma _\tau } = \sqrt {\frac{{\sum\limits_{n = 1}^N {{\tau _n}^2 \cdot {P_n}} }}{{\sum\limits_{n = 1}^N {{P_n}} }} - {{\left( {\frac{{\sum\limits_{n = 1}^N {{\tau _n} \cdot {P_n}} }}{{\sum\limits_{n = 1}^N {{P_n}} }}} \right)}^2}}
\end{equation}
\end{linenomath*}
where ${\sigma _\tau }$ is the RMS delay spread, $P_n$ and $\tau _n$ are the power and the excess delay of the $n^{th}$ multipath, respectively.
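The RMS delay spread formula above can be evaluated directly over the per-ray powers and delays produced by the RT simulator; a minimal sketch (powers in linear units, delays in seconds):

```python
import math

def rms_delay_spread(powers, delays):
    """RMS delay spread: square root of the power-weighted second
    central moment of the excess delays."""
    p_tot = sum(powers)
    mean_tau = sum(t * p for t, p in zip(delays, powers)) / p_tot
    mean_tau2 = sum(t * t * p for t, p in zip(delays, powers)) / p_tot
    return math.sqrt(mean_tau2 - mean_tau ** 2)

# Sanity check: two equal-power paths 100 ns apart -> sigma_tau = 50 ns
sigma = rms_delay_spread([1.0, 1.0], [0.0, 100e-9])
```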
In order to quantify the RMS delay spread for four links in the HSR environment, the obtained results are fitted by normal distribution of mean value $\mu$ and standard deviation $\sigma$. These values are depicted in Table \ref{Table:Channel_Parameters}, including the normal distribution fitting values of other key channel parameters described in the following subsections.
\begin{table*}[!t]
\centering
\caption{Extracted key channel parameters of four communication links}
\label{Table:Channel_Parameters}
\scriptsize
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\multirow{2}{*}{\textbf{Link}} & \multicolumn{2}{c|}{\textbf{DS {[}ns{]}}} & \multicolumn{2}{c|}{\textbf{KF {[}dB{]}}}& \multicolumn{2}{c|}{$\bm{ASA\ [^{\circ}]}$} & \multicolumn{2}{c|}{$\bm{ASD\ [^{\circ}]}$} & \multicolumn{2}{c|}{$\bm{ESA\ [^{\circ}]}$} & \multicolumn{2}{c}{$\bm{ESD\ [^{\circ}]}$} \\ \cline{2-13}\rule{0pt}{8pt}
& $\mu_{DS}$ & $\sigma_{DS}$ & $\mu_{KF}$ & $\sigma_{KF}$ & $\mu_{ASA}$ & $\sigma_{ASA}$ & $\mu_{ASD}$ & $\sigma_{ASD}$ & $\mu_{ESA}$ & $\sigma_{ESA}$ & $\mu_{ESD}$ & $\sigma_{ESD}$ \\ \hline\rule{0pt}{8pt}
BS2TrUE-R & 0.71 & 0.48 & 26.61 & 21.96 & 0.31 & 4.06 & 0.15 & 0.18 & 3.40 & 1.77 & 0.43 & 0.22 \\ \hline\rule{0pt}{8pt}
SA2TrUE-R & 2.41 & 0.36 & 53.44 & 7.67 & 0.02 & 0.02 & 0.16 & 0.39 & 0.06 & 0.08 & 0 & 0 \\ \hline\rule{0pt}{8pt}
BS2SaUE-R & 3.63 & 18.72 & 26.26 & 18.98 & 3.33 & 10.84 & 0.28 & 0.28 & 3.97 & 4.88 & 0.31 & 0.15 \\ \hline\rule{0pt}{8pt}
SA2SaUE-R & 2.42 & 0.33 & 56.76 & 1.27 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table*}
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/DS.pdf}\\
\caption{RMS delay spread values of four communication links: a) SA2SaUE; b) BS2TrUE; c) SA2TrUE; and d) BS2SaUE.}
\label{fig:DS}
\end{figure}
The cumulative distribution functions (CDFs) of the RMS delay spread values for satellite-terrestrial and terrestrial communication links are depicted in Fig. \ref{fig:DS}.
As depicted in Fig. \ref{fig:DS}(a) and Fig. \ref{fig:DS}(c), the RMS delay spreads of the satellite-terrestrial links are similar, with values below 3 ns at 90\% probability, which means that most of the strong multipath components (MPCs) are concentrated around the LOS path and the effect of MPCs on the satellite link is limited. This is consistent with the simulation results, in which the scattered rays are mainly concentrated on the top of the train.
However, an unexpected RMS delay spread value of 200 ns is found for the BS2SaUE link, as depicted in Fig. \ref{fig:DS}(d). This is caused by a reflected path with a relatively long delay and high power, produced by a distant metallic noise barrier. This reflected path can be observed in Fig. \ref{fig:RMSsnapshot}.
\begin{figure}[!t]
\center
\includegraphics[width=0.6\columnwidth,draft=false]{figure//RMSsnapshot.pdf}\\
\caption{One reflected path in the simulation of the BS2SaUE link}
\label{fig:RMSsnapshot}
\end{figure}
\subsection{Rician $K$-factor}
The Rician $K$-factor is a significant parameter to quantify the channel fading severity, which is defined as the ratio of the power of the strongest component to the total power of the remaining components in the received signal \cite{6899647}. Thus, the Rician $K$-factor can be calculated according to its definition:
\begin{linenomath*}
\begin{equation}\label{eq:KF}
\centering
KF\left( {dB} \right) = 10 \cdot {\rm{lo}}{{\rm{g}}_{10}} \left(\frac{{{P_{{\rm{strongest}}}}}}{{\sum {{P_{{\rm{remaining}}}}} }} \right)
\end{equation}
\end{linenomath*} where $KF$ is the Rician $K$-factor, ${P_{\rm{strongest}}}$ and ${P_{\rm{remaining}}}$ are the power of the strongest component and each remaining component, respectively.
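The Rician $K$-factor definition above can be sketched directly over the per-ray powers (linear scale) from the RT output:

```python
import math

def rician_k_factor_db(powers):
    """Rician K-factor: power of the strongest component over the
    summed power of all remaining components, in dB."""
    strongest = max(powers)
    remaining = sum(powers) - strongest
    return 10.0 * math.log10(strongest / remaining)

# One dominant ray of power 100 against two weak rays of power 5 each
kf = rician_k_factor_db([100.0, 5.0, 5.0])  # 100 / (5 + 5) -> 10 dB
```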
The fitting results of the Rician $K$-factor are summarized in Table \ref{Table:Channel_Parameters} and the CDFs are compared in Fig. \ref{fig:KF}.
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/KF.pdf}\\
\caption{Rician $K$-factor values of four communication links: a) SA2SaUE; b) BS2TrUE; c) SA2TrUE; and d) BS2SaUE.}
\label{fig:KF}
\end{figure}
As shown in the table and the figures, $\mu_{KF}$ is around 55 dB for the satellite-terrestrial links. The large mean values of the Rician $K$-factor result from the strong contribution of the direct component compared with the other rays.
However, the Rician $K$-factor values of the terrestrial links are significantly smaller than those of the satellite-terrestrial links, with values below 0 dB in approximately 10\% of the recorded samples. This is because of the richness of multipath components in the terrestrial links, which increases ${P_{\rm{remaining}}}$.
\subsection{Angular Spread}
The four angular spreads (ASA, ASD, ESA, and ESD) are calculated following the approach of the 3rd Generation Partnership Project (3GPP) standards \cite{3GPP}
\begin{linenomath*}
\begin{equation}\label{eq:AS}
\sigma _{\rm{AS}} = \sqrt {\frac{{\sum\limits_{n = 1}^N {{{\left( {{\theta _{n,\mu }}} \right)}^2} \cdot {P_n}} }}{{\sum\limits_{n = 1}^N {{P_n}} }}}
\end{equation}
\end{linenomath*}
where $\sigma _{\rm{AS}}$ is the angular spread, $P_n$ is the power of the $n^{th}$ multipath component, and ${\theta _{n,\mu }}$ is defined by:
\begin{linenomath*}
\begin{equation}\label{eq:theta}
{\theta _{n,\mu }} = \bmod \left( {{\theta _n} - {\mu _\theta} + \pi ,2\pi } \right) - \pi
\end{equation}
\end{linenomath*}
where $\theta _n$ is the AoA/AoD/EoA/EoD of the $n^{th}$ multipath and ${\mu _\theta }$ is calculated by
\begin{linenomath*}
\begin{equation}\label{eq:mu}
{\mu _\theta } = \frac{{\sum\limits_{n = 1}^N {{\theta _{n }} \cdot {P_n}} }}{{\sum\limits_{n = 1}^N {{P_n}} }}
\end{equation}
\end{linenomath*}
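The three angular-spread formulas above can be sketched as follows, with angles in degrees, the power-weighted mean computed first, and the wrap performed modulo $360^{\circ}$:

```python
import math

def angular_spread_deg(angles_deg, powers):
    """Power-weighted angular spread, angles in degrees."""
    p_tot = sum(powers)
    # Power-weighted mean angle mu_theta
    mu = sum(a * p for a, p in zip(angles_deg, powers)) / p_tot
    # Wrap each deviation into [-180, 180)
    dev = [((a - mu + 180.0) % 360.0) - 180.0 for a in angles_deg]
    # Square root of the power-weighted second moment of the deviations
    return math.sqrt(sum(d * d * p for d, p in zip(dev, powers)) / p_tot)

# Two equal-power rays at +/-10 degrees -> a spread of 10 degrees
spread = angular_spread_deg([-10.0, 10.0], [1.0, 1.0])
```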
The normal distribution fitting values of the angular spreads (ASA, ASD, ESA, ESD) for each link are summarized in Table \ref{Table:Channel_Parameters}. The mean values of the angular spreads are very small for the satellite-terrestrial links, which indicates that the MPCs are relatively fewer than in the terrestrial links and are mainly concentrated in the LOS direction.
In the terrestrial links, the ESA and ESD values are larger than the ASA and ASD values, which implies that most of the multipath components arrive from the elevation direction. This is consistent with our simulation results, in which a large number of reflected components come mainly from the ground, and scattering occurs mostly on the surfaces of objects on both sides of the rail track. This can be observed in Fig. \ref{fig:ASsnapshot}.
\begin{figure}[!t]
\center
\includegraphics[width=0.65\columnwidth,draft=false]{figure//ASsnapshot.pdf}\\
\caption{Reflected and scattered rays for terrestrial links}
\label{fig:ASsnapshot}
\end{figure}
\subsection{Co-channel Interference Analysis}
Frequency reuse and interference are inseparable topics. Since the satellite links and the terrestrial links use the same spectrum in this study, co-channel interference exists between the two links. Co-channel interference means that the carrier frequencies of the desired signal and the interference signal are the same, so the receiver cannot discriminate between them, which increases the difficulty of detecting the desired signal.
Since the interference signal provides no information, this signal will contribute to a degradation of the SIR. The SIR can then be expressed as:
\begin{linenomath*}
\begin{equation}
SIR({\rm dB})=P_{\rm{signal}}({\rm dBm})-P_{\rm{interference}}({\rm dBm})
\end{equation}
\end{linenomath*}
where SIR is the signal-to-interference ratio, $P _{\rm{signal}}$ is the useful received power from the corresponding Tx, and $P_{\rm{interference}}$ is the unwanted received power from the other Tx. The SIR is then evaluated for both satellite-terrestrial and terrestrial HSR systems.
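The SIR expression above, together with the coverage probability read from its CDF, can be sketched as follows; the per-snapshot powers used here are hypothetical illustration values, not the simulated ones.

```python
def sir_db(p_signal_dbm, p_interference_dbm):
    """SIR in dB from the useful and interfering received powers in dBm."""
    return p_signal_dbm - p_interference_dbm

def coverage_probability(sir_samples_db, threshold_db):
    """Fraction of snapshots whose SIR exceeds the threshold
    (the complement of the CDF at that threshold)."""
    return sum(1 for s in sir_samples_db if s > threshold_db) / len(sir_samples_db)

# Hypothetical (signal, interference) powers along the route, in dBm
snapshots = [(-60, -90), (-65, -92), (-80, -75)]
samples = [sir_db(s, i) for s, i in snapshots]        # [30, 27, -5] dB
prob = coverage_probability(samples, 0.0)             # 2 of 3 above 0 dB
```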
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/SIR.pdf}\\
\caption{Signal-to-interference ratio analysis results: a) Satellite-terrestrial system; b) CDF of satellite-terrestrial system; c) Terrestrial HSR system; and d) CDF of terrestrial HSR system.}
\label{fig:SIR}
\end{figure}
Fig. \ref{fig:SIR}(a) depicts the SIR results obtained at the satellite Rx (i.e. SaUE) in the satellite-terrestrial system. Signals from the SA2SaUE link are assumed to be useful signals while signals from the BS2SaUE link will act as interference signals.
It is obvious that the troughs of the SIR result from the deep fading of \emph{P$_{\rm{signal}}$}. Apart from these troughs, the SIR values are around -30 dB under the rainy weather condition. By calculating the CDF of the SIR, as depicted in Fig. \ref{fig:SIR}(b), the SIR coverage probability can be obtained. The probability that the SIR is higher than the 0 dB threshold is approximately 2$\%$, which indicates that the interference from terrestrial BSs has a great impact on the effective satellite link of the satellite-terrestrial communication system.
Similarly, Fig. \ref{fig:SIR}(c) presents the SIR obtained at the terrestrial Rx (i.e., the TrUE) in the terrestrial HSR system. Signals from the BS2TrUE link are treated as useful signals, while signals from the SA2TrUE link act as interference.
The peaks of the SIR occur shortly after the Rx moves below the crossing bridges: at that moment there is a direct path from the BS but none from the satellite. Conversely, the bottoms of the SIR occur just after the Rx leaves the crossing bridges, when there is a direct path from the satellite but none from the BS. Apart from these peaks and bottoms, the SIR values are around 60 dB under rainy weather conditions, as marked in Fig. \ref{fig:SIR}(c). From the CDF of the SIR depicted in Fig. \ref{fig:SIR}(d), the probability that the received SIR exceeds 40 dB is approximately 98$\%$, which is reliable enough for future intelligent rail transportation applications. Evidently, interference from the satellite antennas has little impact on the effective terrestrial link of the terrestrial HSR system.
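The coverage probabilities quoted above are read off the empirical CDF of the simulated SIR samples; a minimal sketch of that computation (with made-up sample values, not the simulation data) is:

```python
def coverage_probability(sir_samples_db, threshold_db):
    """Fraction of samples whose SIR exceeds the threshold,
    i.e. 1 - empirical CDF evaluated at the threshold."""
    above = sum(1 for s in sir_samples_db if s > threshold_db)
    return above / len(sir_samples_db)

# Hypothetical SIR samples in dB:
samples = [65.0, 58.0, 61.0, 12.0, 70.0]
print(coverage_probability(samples, 40.0))  # 0.8
```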
\subsection{Effect of Weather Conditions on SIR}
According to the methods for estimating long-term rain attenuation statistics on terrestrial and satellite-terrestrial communication links given in ITU-R P.530-17 and ITU-R P.618-13, the attenuation due to rain grows with the length of the communication link. Rainfall therefore has a greater impact on the signal received from the satellite than on the signal received from the terrestrial BSs.
Thus, for the terrestrial HSR system, rain attenuates the interference signal more than the useful signal, which leads to higher SIR values on a rainy day. For the satellite-terrestrial system, in contrast, rain attenuates the useful signal more than the interference signal, which results in lower SIR values on a rainy day. This is depicted in Fig. \ref{fig:SIR}.
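The scaling argument can be sketched with the power-law form used by the ITU-R rain models, in which a specific attenuation $\gamma = kR^{\alpha}$ (dB/km) is multiplied by an effective path length. The coefficients $k$ and $\alpha$ below are illustrative placeholders, not the actual ITU-R P.838 values for 22.6 GHz:

```python
def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.1, alpha=1.0):
    """Rain attenuation: specific attenuation (power law in the rain
    rate, ITU-R P.838 form) times an effective path length.
    k and alpha are illustrative placeholders, not the actual
    P.838 coefficients at 22.6 GHz."""
    gamma = k * rain_rate_mm_h ** alpha  # specific attenuation, dB/km
    return gamma * path_km

# Same rain rate, different effective path lengths through the rain:
print(rain_attenuation_db(25.0, 2.0))   # 5.0  (short terrestrial path)
print(rain_attenuation_db(25.0, 12.0))  # 30.0 (longer slant path)
```

The linear dependence on path length is what makes the slant satellite path suffer substantially more rain attenuation than the short terrestrial path at the same rain rate.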
\section{Conclusion}
In this paper, the satellite-terrestrial channel at 22.6 GHz is characterized for a typical HSR environment. The CloudRT platform is used to extract the key channel parameters of a realistic 3D HSR scenario. The objects in the scenario are defined and reconstructed according to their typical geometries and materials. Channel characterization and respective conclusions are drawn based on simulation results.
Obstacles that commonly appear in HSR scenarios, such as crossing bridges and pylons, severely affect the performance of wireless communication systems, since they can block the LOS path and thereby cause deep fading of the received power.
Compared with terrestrial links, this phenomenon is more pronounced in satellite-terrestrial links, since for satellite links the LOS path contributes more than the other multipath components, which in turn makes satellite links more sensitive to shadowing.
The maximum rain attenuation for the terrestrial links of the scenario in this paper is no greater than 8 dB, while the typical rain attenuation for the satellite-terrestrial links is around 30 dB.
Through analyzing the channel parameters under different weather conditions, we conclude that rainfall barely influences channel parameters such as the Rician $K$-factor, RMS delay spread, and angular spreads, but it does influence the received power and the corresponding interference between the terrestrial link and the satellite-terrestrial link.
When the Rx antenna is mounted on the top of the train, large-scale objects such as buildings and train stations provide strong reflected and scattered contributions that significantly influence the wireless channel; small-scale objects such as billboards and traffic signs should not be neglected either, since they strongly affect channel parameters such as the Rician $K$-factor and RMS delay spread.
The SIR between satellite-terrestrial and terrestrial communication systems is also analyzed.
The SIR values for satellite-to-terrestrial interference are around 60 dB, indicating that interference from satellite antennas has little impact on the effective terrestrial links; this essentially meets the requirement for good communication performance in a 5G mmWave channel. On the contrary, the SIR values for terrestrial-to-satellite interference are around -30 dB, which shows that satellite-terrestrial communication links are severely affected by the presence of terrestrial links, since the interference from terrestrial BSs is comparable in strength to the useful signal received from the satellite antennas.
The channel characterization analysis and the key channel parameter extraction provided in this paper are suitable for an effective link budget of the satellite-terrestrial channel in a realistic HSR environment, and will help the research community understand the propagation channel when designing mmWave technologies and communication systems for future intelligent rail transportation.
Future work will address the mmWave satellite-terrestrial channel characterization in more scenarios and potential system configurations.
\acknowledgments
This work was supported in part by Institute of Information \& communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2018-0-00792, QoE improvement of open Wi-Fi on public transportation for the reduction of communication expense), and in part by IITP grant funded by the Korea government (MSIT) (No.2018-0-00175, 5G AgiLe and fLexible integration of SaTellite And cellulaR). Readers can access the data of this paper in the 4TU.Centre for Research Data by the link {\color{blue} \underline{\emph{https://doi.org/10.4121/uuid:f6c34e2e-fa34-4bd6-9046-1c350a9bb5db}}}.
\section{Introduction}
While planning to write some \add[FS]{complex} document together with some of my colleagues the same discussion was started over and over again: should \LaTeX\ or some office suite like \change[PS]{Microsoft Office}{OpenOffice.org} be used? Strengths of \LaTeX\ are its deterministic behavior, its reliable handling of split documents and unreached typesetting of formulas. On the other hand, current office suites provide the user with several features that are at least very desirable when collaboratively writing a document: they provide integrated merging facilities and are able to track changes and attach notes to the text. The merging issue can be reasonably handled by version tracking systems like SVN or CVS,\note[FS]{Should we emphasize that UNIX diff only works on line basis?} but there was no acceptable solution to the issue of change tracking available. Of course, some \remove[PS]{militant} \LaTeX\ purists tried to convince me that all change tracking can be handled by insertion of \LaTeX\ \textit{comments}. I have tried to handle one project like this but it did not work out! The main reason was that reading and editing of large documents is mostly handled in \annote[PS]{DVI}{what about PDF? The changes can be seen in PDF as well ...} format and not on the \LaTeX\ source level -- but \LaTeX\ comments cannot be seen in DVI! Especially, if you have sent one version of the document to a colleague and you want to skim quickly over it in order to see what has been changed.
While returning from a project meeting and staring out of the train's window I had the idea how we could combine the ``best of both worlds'' for collaborative text editing: by adding change tracking and note facilities to \LaTeX ! This is the basic idea of the \texttt{trackchanges} \LaTeX\ package. But this is only one part of the change tracking convenience offered by an office suite. The second part of the story is that changes and notes need to be accepted or rejected! This is the goal of the other programs of the \textbf{trackchanges} open source project hosted on sourceforge\footnote{Please visit \url{http://trackchanges.sourceforge.net}}
\end{document}
\section{Introduction}
While planning to write some complex document together with some of my colleagues the same discussion was started over and over again: should \LaTeX\ or some office suite like OpenOffice.org be used? Strengths of \LaTeX\ are its deterministic behavior, its reliable handling of split documents and unreached typesetting of formulas. On the other hand, current office suites provide the user with several features that are at least very desirable when collaboratively writing a document: they provide integrated merging facilities and are able to track changes and attach notes to the text. The merging issue can be reasonably handled by version tracking systems like SVN or CVS, but there was no acceptable solution to the issue of change tracking available. Of course, some \LaTeX\ purists tried to convince me that all change tracking can be handled by insertion of \LaTeX\ \textit{comments}. I have tried to handle one project like this but it did not work out! The main reason was that reading and editing of large documents is mostly handled in DVI format and not on the \LaTeX\ source level -- but \LaTeX\ comments cannot be seen in DVI! Especially, if you have sent one version of the document to a colleague and you want to skim quickly over it in order to see what has been changed.
While returning from a project meeting and staring out of the train's window I had the idea how we could combine the ``best of both worlds'' for collaborative text editing: by adding change tracking and note facilities to \LaTeX ! This is the basic idea of the \texttt{trackchanges} \LaTeX\ package. But this is only one part of the change tracking convenience offered by an office suite. The second part of the story is that changes and notes need to be accepted or rejected! This is the goal of the other programs of the \textbf{trackchanges} open source project hosted on sourceforge\footnote{Please visit \url{http://trackchanges.sourceforge.net}}
\end{document}
\chapter{Court scene - multiple murderer}
\annote[novi]{\emph{Cut to a courtroom. Severe atmosphere.}}{In the Flying Circus there is never a ``severe'' atmosphere.}
\noindent\\ Judge:
Michael Norman Randall, you have been found guilty of the murder of Arthur Reginald Webster, Charles Patrick Trumpington, Marcel Agnes Bernstein, Lewis Anona Rudd, John Malcolm Kerr, Nigel Sinclair Robinson, \change[ym]{Norman Arthur Potter}{Harry Potter}, \add[novi]{Thing 1 \& Thing 2, }\add[ym]{Humpty Dumpty, }\add[three]{Superman, }\add[quatro]{Cold Fusion, }\add[pyat]{The Lorax, }\add[one more]{Mr. T, }\add{The Dread Pirate Roberts, }Felicity Jayne Stone, Jean-Paul Reynard, Rachel Shirley Donaldson, Stephen Jay Greenblatt, Karl-Heinz Mullet, Belinda Anne Ventham, Juan-Carlos Fernandez, Thor Olaf Stensgaard, Lord Kimberley of Pretoria\note{Isn't Pretoria in South Africa?}, \remove{Lady Kimberley of Pretoria, }The Right Honourable Nigel Warmsly Kimberley, Robert Henry Noonan\add[novi]{, Your Mom} and Felix James Bennett, on or about the morning of the 19th December 1972\refneeded[novi]{}. Have you anything to say before I pass sentence?
\noindent\\ Randall:
\annote[ym]{Yes, sir. I'm very sorry.}{That was short.}
\noindent\\ Judge:
Very sorry?
\noindent\\ Randall:
Yes, sir. It was a very very bad thing to have done and I'm really very ashamed of myself. I can only say it won't happen again. To have murdered so many people in such a short space of time is really awful, and I really am very, very, very sorry that I did it, and also that I've taken up so much of the court's valuable time listening to the sordid details of these senseless killings of mine. I would particularly like to say, a very personal and sincere 'sorry' to you, m'lud, for my appalling behaviour throughout this trial. I'd also like to say sorry to the police, for putting them to so much trouble (shot of three heavily bandaged exhausted-looking policemen behind him) for the literally hours of work they've had to put in, collecting evidence and identifying corpses and so forth. You know I think sometimes we ought to realize the difficult and often dangerous work involved in tracking down violent criminals like myself and I'd just like them to know that their fine work is at least appreciated by me.
\noindent\\\emph{The policemen look embarrassed.}
\noindent\\ First Policeman:
No, no, we were only doing our job.
\noindent\\ \annote{Second Policeman:
No, no, no, no.
\noindent\\ Randall:
It's very good of you to say that, but I know what you've been through.
\noindent\\ First Policeman:
No, no, we've had worse.
\noindent\\ Third Policeman:
It was plain sailing apart from the arrest.}{That could have been left out.}
\noindent\\ Randall:
I know and I'm grateful. I'd like to apologize too to the prosecuting counsel for dragging him in here morning after morning in such lovely weather.
\remove[ym]{\noindent\\ Counsel:
Well, I would have had to come in anyway.
\noindent\\ Randall:
Ah good, but what a presentation of a case!
\noindent\\ Counsel:
Oh thank you.
\noindent\\ Randall:
No, no, it's a privilege to watch you in action. I never had a chance.
\noindent\\ Counsel:
Oh yes you did.
\noindent\\ Randall:
Not after that summing up. Great.
\noindent\\ Counsel:
Oh thank you. (very chuffed)}
\noindent\\ Randall:
And now I must come to the jury. What can I say. I've dragged you in here, day after day, keeping you away from your homes, your jobs, your loved ones, just to hear the private details of my petty atrocities.
\noindent\\ Foreman:
No, no, it was very \change{interesting}{fascinating}.
\noindent\\ Randall:
But you could have had a much nicer case.
\noindent\\ Foreman:
No, no, murder's much more fun.
\noindent\\ First Juryman:
Yes and so many of them.
\noindent\\ Second Juryman:
Excellent.
\noindent\\ Third Juryman:
We've had a terrific time. (the jury applauds)
\noindent\\ Randall:
(blows his nose, does a Dickie Attenborough) I'm sorry, I'm very moved. And so, m'lud, it only remains for you to pass the most savage sentence on me that the law can provide.
\noindent\\ Judge:
Well er... not necessarily.
\noindent\\ Randall:
No, m'lud, the full penalty of the law is hardly sufficient. I insist I must be made an example of.
\noindent\\ Judge:
Well yes and no. I mean society at large...
\noindent\\ Randall:
Oh no, m'lud. Not with mass murder.
\noindent\\ Judge:
But in this case, (to court) don't you think?
\noindent\\ Court:
Yes, yes!
\noindent\\ Randall:
Oh, come on, m'lud, you've got to give me life.
\noindent\\ Court:
No, no, no, no.
\noindent\\ Randall:
(to court at large) Well, ten years at least.
\noindent\\ Judge:
Ten years!
\noindent\\ Court:
Shame. Shame!
\noindent\\ Randall:
Well five then. Be fair.
\noindent\\ Judge:
No, no. I'm giving you three months.
\noindent\\ Randall:
Oh no, that's so embarrassing. I won't hear of it. Give me six...please.
\noindent\\ Judge:
Well, all right. Six months.
\noindent\\ Randall:
Thank you, m'lud.
\noindent\\ Judge:
But suspended.
\noindent\\ Randall:
Oh no.
\noindent\\ Court:
Hooray. (they applaud)
\noindent\\ Foreman:
Three cheers for the defendant. Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ All:
For he's a jolly good fellow, For he's a jolly good fellow, For he's a jolly good fellow...
\noindent\\ Voice \emph{(off)}:
Which nobody can deny.
\end{document}
\chapter{Court scene - multiple murderer}
\emph{Cut to a courtroom. Severe atmosphere.}
\noindent\\ Judge:
Michael Norman Randall, you have been found guilty of the murder of Arthur Reginald Webster, Charles Patrick Trumpington, Marcel Agnes Bernstein, Lewis Anona Rudd, John Malcolm Kerr, Nigel Sinclair Robinson, Harry Potter, Thing 1 \& Thing 2, Humpty Dumpty, Superman, Cold Fusion, The Lorax, Mr. T, The Dread Pirate Roberts, Felicity Jayne Stone, Jean-Paul Reynard, Rachel Shirley Donaldson, Stephen Jay Greenblatt, Karl-Heinz Mullet, Belinda Anne Ventham, Juan-Carlos Fernandez, Thor Olaf Stensgaard, Lord Kimberley of Pretoria, The Right Honourable Nigel Warmsly Kimberley, Robert Henry Noonan, Your Mom and Felix James Bennett, on or about the morning of the 19th December 1972. Have you anything to say before I pass sentence?
\noindent\\ Randall:
Yes, sir. I'm very sorry.
\noindent\\ Judge:
Very sorry?
\noindent\\ Randall:
Yes, sir. It was a very very bad thing to have done and I'm really very ashamed of myself. I can only say it won't happen again. To have murdered so many people in such a short space of time is really awful, and I really am very, very, very sorry that I did it, and also that I've taken up so much of the court's valuable time listening to the sordid details of these senseless killings of mine. I would particularly like to say, a very personal and sincere 'sorry' to you, m'lud, for my appalling behaviour throughout this trial. I'd also like to say sorry to the police, for putting them to so much trouble (shot of three heavily bandaged exhausted-looking policemen behind him) for the literally hours of work they've had to put in, collecting evidence and identifying corpses and so forth. You know I think sometimes we ought to realize the difficult and often dangerous work involved in tracking down violent criminals like myself and I'd just like them to know that their fine work is at least appreciated by me.
\noindent\\\emph{The policemen look embarrassed.}
\noindent\\ First Policeman:
No, no, we were only doing our job.
\noindent\\ Second Policeman:
No, no, no, no.
\noindent\\ Randall:
It's very good of you to say that, but I know what you've been through.
\noindent\\ First Policeman:
No, no, we've had worse.
\noindent\\ Third Policeman:
It was plain sailing apart from the arrest.
\noindent\\ Randall:
I know and I'm grateful. I'd like to apologize too to the prosecuting counsel for dragging him in here morning after morning in such lovely weather.
\noindent\\ Randall:
And now I must come to the jury. What can I say. I've dragged you in here, day after day, keeping you away from your homes, your jobs, your loved ones, just to hear the private details of my petty atrocities.
\noindent\\ Foreman:
No, no, it was very fascinating.
\noindent\\ Randall:
But you could have had a much nicer case.
\noindent\\ Foreman:
No, no, murder's much more fun.
\noindent\\ First Juryman:
Yes and so many of them.
\noindent\\ Second Juryman:
Excellent.
\noindent\\ Third Juryman:
We've had a terrific time. (the jury applauds)
\noindent\\ Randall:
(blows his nose, does a Dickie Attenborough) I'm sorry, I'm very moved. And so, m'lud, it only remains for you to pass the most savage sentence on me that the law can provide.
\noindent\\ Judge:
Well er... not necessarily.
\noindent\\ Randall:
No, m'lud, the full penalty of the law is hardly sufficient. I insist I must be made an example of.
\noindent\\ Judge:
Well yes and no. I mean society at large...
\noindent\\ Randall:
Oh no, m'lud. Not with mass murder.
\noindent\\ Judge:
But in this case, (to court) don't you think?
\noindent\\ Court:
Yes, yes!
\noindent\\ Randall:
Oh, come on, m'lud, you've got to give me life.
\noindent\\ Court:
No, no, no, no.
\noindent\\ Randall:
(to court at large) Well, ten years at least.
\noindent\\ Judge:
Ten years!
\noindent\\ Court:
Shame. Shame!
\noindent\\ Randall:
Well five then. Be fair.
\noindent\\ Judge:
No, no. I'm giving you three months.
\noindent\\ Randall:
Oh no, that's so embarrassing. I won't hear of it. Give me six...please.
\noindent\\ Judge:
Well, all right. Six months.
\noindent\\ Randall:
Thank you, m'lud.
\noindent\\ Judge:
But suspended.
\noindent\\ Randall:
Oh no.
\noindent\\ Court:
Hooray. (they applaud)
\noindent\\ Foreman:
Three cheers for the defendant. Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ All:
For he's a jolly good fellow, For he's a jolly good fellow, For he's a jolly good fellow...
\noindent\\ Voice \emph{(off)}:
Which nobody can deny.
\end{document}
\chapter{Court scene - multiple murderer}
\emph{Cut to a courtroom. Severe atmosphere.}
\noindent\\ Judge:
Michael Norman Randall, you have been found guilty of the murder of Arthur Reginald Webster, Charles Patrick Trumpington, Marcel Agnes Bernstein, Lewis Anona Rudd, John Malcolm Kerr, Nigel Sinclair Robinson, Norman Arthur Potter, Felicity Jayne Stone, Jean-Paul Reynard, Rachel Shirley Donaldson, Stephen Jay Greenblatt, Karl-Heinz Mullet, Belinda Anne Ventham, Juan-Carlos Fernandez, Thor Olaf Stensgaard, Lord Kimberley of Pretoria, Lady Kimberley of Pretoria, The Right Honourable Nigel Warmsly Kimberley, Robert Henry Noonan and Felix James Bennett, on or about the morning of the 19th December 1972. Have you anything to say before I pass sentence?
\noindent\\ Randall:
Yes, sir. I'm very sorry.
\noindent\\ Judge:
Very sorry?
\noindent\\ Randall:
Yes, sir. It was a very very bad thing to have done and I'm really very ashamed of myself. I can only say it won't happen again. To have murdered so many people in such a short space of time is really awful, and I really am very, very, very sorry that I did it, and also that I've taken up so much of the court's valuable time listening to the sordid details of these senseless killings of mine. I would particularly like to say, a very personal and sincere 'sorry' to you, m'lud, for my appalling behaviour throughout this trial. I'd also like to say sorry to the police, for putting them to so much trouble (shot of three heavily bandaged exhausted-looking policemen behind him) for the literally hours of work they've had to put in, collecting evidence and identifying corpses and so forth. You know I think sometimes we ought to realize the difficult and often dangerous work involved in tracking down violent criminals like myself and I'd just like them to know that their fine work is at least appreciated by me.
\noindent\\\emph{The policemen look embarrassed.}
\noindent\\ First Policeman:
No, no, we were only doing our job.
\noindent\\ Second Policeman:
No, no, no, no.
\noindent\\ Randall:
It's very good of you to say that, but I know what you've been through.
\noindent\\ First Policeman:
No, no, we've had worse.
\noindent\\ Third Policeman:
It was plain sailing apart from the arrest.
\noindent\\ Randall:
I know and I'm grateful. I'd like to apologize too to the prosecuting counsel for dragging him in here morning after morning in such lovely weather.
\noindent\\ Counsel:
Well, I would have had to come in anyway.
\noindent\\ Randall:
Ah good, but what a presentation of a case!
\noindent\\ Counsel:
Oh thank you.
\noindent\\ Randall:
No, no, it's a privilege to watch you in action. I never had a chance.
\noindent\\ Counsel:
Oh yes you did.
\noindent\\ Randall:
Not after that summing up. Great.
\noindent\\ Counsel:
Oh thank you. (very chuffed)
\noindent\\ Randall:
And now I must come to the jury. What can I say. I've dragged you in here, day after day, keeping you away from your homes, your jobs, your loved ones, just to hear the private details of my petty atrocities.
\noindent\\ Foreman:
No, no, it was very interesting.
\noindent\\ Randall:
But you could have had a much nicer case.
\noindent\\ Foreman:
No, no, murder's much more fun.
\noindent\\ First Juryman:
Yes and so many of them.
\noindent\\ Second Juryman:
Excellent.
\noindent\\ Third Juryman:
We've had a terrific time. (the jury applauds)
\noindent\\ Randall:
(blows his nose, does a Dickie Attenborough) I'm sorry, I'm very moved. And so, m'lud, it only remains for you to pass the most savage sentence on me that the law can provide.
\noindent\\ Judge:
Well er... not necessarily.
\noindent\\ Randall:
No, m'lud, the full penalty of the law is hardly sufficient. I insist I must be made an example of.
\noindent\\ Judge:
Well yes and no. I mean society at large...
\noindent\\ Randall:
Oh no, m'lud. Not with mass murder.
\noindent\\ Judge:
But in this case, (to court) don't you think?
\noindent\\ Court:
Yes, yes!
\noindent\\ Randall:
Oh, come on, m'lud, you've got to give me life.
\noindent\\ Court:
No, no, no, no.
\noindent\\ Randall:
(to court at large) Well, ten years at least.
\noindent\\ Judge:
Ten years!
\noindent\\ Court:
Shame. Shame!
\noindent\\ Randall:
Well five then. Be fair.
\noindent\\ Judge:
No, no. I'm giving you three months.
\noindent\\ Randall:
Oh no, that's so embarrassing. I won't hear of it. Give me six...please.
\noindent\\ Judge:
Well, all right. Six months.
\noindent\\ Randall:
Thank you, m'lud.
\noindent\\ Judge:
But suspended.
\noindent\\ Randall:
Oh no.
\noindent\\ Court:
Hooray. (they applaud)
\noindent\\ Foreman:
Three cheers for the defendant. Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ All:
For he's a jolly good fellow, For he's a jolly good fellow, For he's a jolly good fellow...
\noindent\\ Voice \emph{(off)}:
Which nobody can deny.
\end{document}
\section{Introduction}
Satellite communication systems have been greatly developing in the domain of broadcasting, navigation, rescue, and disaster relief because of their potentiality to provide wide coverage and achieve high data rate transmission \cite{7230282}.
For most cases in previous generations, satellite systems were considered completely independent from terrestrial communication \cite{8795462}. However, a potential shortcoming is that the satellite system degrades in the presence of shadowing, which occurs when the line-of-sight (LOS) link between the satellite and the terrestrial user is blocked by obstacles \cite{8081808,caixuesong2}.
The high-speed railway (HSR) is one of the most challenging scenarios in the fifth-generation mobile communication system (5G) \cite{Liuyu1}, whose demands for high data rate transmission and high reliability services have grown rapidly \cite{Liuyu2}.
On the one hand, the terrestrial cellular network can provide low cost coverage for high reliability applications in HSR environment through its non-LOS (NLOS) communication.
On the other hand, the satellite system can mitigate the problems of overload and congestion by providing wide coverage to complement and extend the dense terrestrial cells, especially for the terrestrial wireless communication in HSR areas.
Thus, an integrated satellite-terrestrial cooperative communication system can realize genuinely ubiquitous coverage for future HSR communications \cite{2011MIMO,7105655}.
In this case, terrestrial mobile users can make full use of the spatial diversity gain by receiving independently faded multipath signals from satellites and terrestrial base stations. Consequently, the effectiveness and reliability of transmission will be greatly increased \cite{2011111}.
In order to realize this vision, the integrated satellite-terrestrial communication systems should be carefully designed, especially by analyzing the interference between the satellite link and the terrestrial link based on realistic channel characteristics \cite{fanwei2,fanwei3}.
Hence, it is essential to capture joint channel characteristics in cooperative satellite-terrestrial systems in order to make realistic performance assessments and channel modeling for future intelligent rail transportation \cite{ZhouTao1,ZhouTao3}.
Most of the existing performance analyses for satellite and terrestrial communication systems have been based on purely mathematical models \cite{7343438,7373246,7308010,7156170,ZTE}.
However, these results can hardly be applied to satellite-terrestrial channel characterization for HSR, owing to the particular geometric and physical characteristics of the HSR environment.
The satellite-terrestrial channel therefore lacks a deep investigation based on a typical HSR scenario, which limits the accuracy of coverage prediction and interference analysis.
Thus, in this study, we comprehensively characterize the satellite-terrestrial channel in the 22.6 GHz band through simulation and modeling, with the following contributions:
\begin{itemize}
\item We reconstruct a typical HSR model and conduct extensive ray-tracing (RT) simulations of four terrestrial and satellite-terrestrial communication links under two weather conditions. Based on the RT simulation results, the four links are characterized in terms of key channel parameters, including the root-mean-square (RMS) delay spread (DS), Rician $K$-factor (KF), and the azimuth and elevation angular spreads of arrival and departure (ASA/ASD, ESA/ESD).
\item We analyze the propagation behavior of all the objects in the simulation scenario, as well as the interactions between them. In addition, by calculating the signal-to-interference ratio (SIR), we evaluate the interference between the terrestrial HSR system and the satellite-terrestrial system.
\end{itemize}
The rest of this paper is organized as follows: Section II addresses the scenario modeling and the simulation setup. Key channel parameters are analyzed and characterized in Section III. Finally, conclusions and further work are drawn in Section IV.
\section{Railway Environment Reconstruction and Simulation Setup}
\subsection{Antenna Model}
For the satellite-terrestrial system, the satellite transmitter (Tx) employs the antenna called APSMLA609V01 which is provided by ITU, and the receiver (Rx) antenna is selected according to ITU-R S.456-6 \cite{465-6}. The antenna patterns of the satellite-terrestrial system are depicted in Fig. \ref{fig:Antenna}(a) and Fig. \ref{fig:Antenna}(b).
As for the terrestrial HSR system, both the Tx and Rx employ the same antenna. The antenna pattern is depicted in Fig. \ref{fig:Antenna}(c).
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/Antenna.pdf}\\
\caption{Antenna patterns used in ray-tracing simulations: a) Satellite antenna pattern; b) Satellite UE antenna pattern; and c) Terrestrial antenna pattern.}
\label{fig:Antenna}
\end{figure}
\subsection{Simulation Scenario}
For characterizing the channel in an HSR environment, it is important to define a scenario model with distinctive propagation features. In this paper, the three-dimensional (3D) model of a typical HSR scenario, reconstructed with the SketchUp tool, is depicted in Fig. \ref{fig:3D model} \cite{8319730}.
Several buildings, traffic signs, billboards, crossing bridges, train stations, etc., are defined in the vicinity of the railway, composing a realistic HSR scenario.
All the objects are modeled according to their typical geometric shapes, and different materials are assigned to their surfaces.
Therefore, the electromagnetic behavior of each object could be effectively established, and the realistic description of the HSR environment allows the analysis of key channel parameters.
\begin{figure}[!t]
\center
\includegraphics[width=0.7\columnwidth,draft=false]{figure//scenario.pdf}\\
\caption{3D model of the railway scenario for ray-tracing simulation}
\label{fig:3D model}
\end{figure}
The train travels at a constant speed of 300 km/h over the 500 m covered in the scenario. 1441 samples are extracted in the simulation, corresponding to a sampling distance of 0.347 m.
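These figures are mutually consistent; a quick sanity check of the sampling distance and the corresponding sampling interval in time:

```python
# 500 m traversed with 1441 samples -> 1440 uniform intervals.
distance_m = 500.0
n_samples = 1441
spacing_m = distance_m / (n_samples - 1)

speed_m_s = 300.0 * 1000.0 / 3600.0  # 300 km/h in m/s
dt_s = spacing_m / speed_m_s         # time between consecutive samples

print(round(spacing_m, 3))       # 0.347
print(round(dt_s * 1000.0, 2))   # 4.17 (ms)
```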
For the satellite-terrestrial links, the Tx is located on the geosynchronous (GEO) satellite Koreasat 6, which is positioned at 116$^{\circ}$E above the equator, at a distance of approximately 37470 km from the target HSR scenario. The Rx is mounted at the rear of the train with a total height of 5.2 m, which includes the train height (4.5 m) and the antenna bracket.
The presence of terrestrial links will affect the SIR of satellite-terrestrial links, and vice versa. Hence, for the SIR analysis between the terrestrial HSR system and satellite-terrestrial system, an additional communication link is included in the simulation. The terrestrial Tx is placed at the top of the steep wall with a height of 26 m, and the Rx is similarly assembled to the rear of the train with a total height of 4.7 m. Both the terrestrial and satellite-terrestrial links are depicted in Fig. \ref{fig:Communication links}.
The abbreviations in this figure are noted as follows: BS and TrUE are short for base station (i.e. Tx) and train user equipment (i.e. Rx) for the terrestrial HSR system, respectively. SA and SaUE are short for satellite antenna (i.e. Tx) and satellite user equipment (i.e. Rx) for the satellite-terrestrial system, respectively.
\begin{figure}[!t]
\center
\includegraphics[width=0.65\columnwidth,draft=false]{figure/InterferenceScenario.pdf}\\
\caption{Communication links for interference analysis}
\label{fig:Communication links}
\end{figure}
Table \ref{Table:scenario configuration} summarizes the scenario configurations for the terrestrial HSR system and the satellite-terrestrial system. The communication scenarios for both systems are the same; the main differences are the Tx locations and the antennas selected for Tx and Rx.
\begin{table}[!t]
\centering
\caption{Scenario configurations for terrestrial HSR system and satellite-terrestrial system}
\label{Table:scenario configuration}
\begin{tabular}{c|c|l|l}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
Frequency & \multicolumn{3}{l}{22.1-23.1 GHz} \\ \hline\rule{0pt}{8pt}
Bandwidth & \multicolumn{3}{l}{1 GHz} \\ \hline\rule{0pt}{8pt}
Antenna & \multicolumn{3}{l}{Directional antenna} \\ \hline\rule{0pt}{8pt}
\multirow{8}{*} {Terrestrial HSR system} & \multirow{4}{*}{Tx} & Power & 20 dBm \\ \cline{3-4}\rule{0pt}{8pt}
& & Maximum antenna gain & 16 dBi \\ \cline{3-4}\rule{0pt}{8pt}
& & Antenna beamwidth & 20 degree \\ \cline{3-4}\rule{0pt}{8pt}
& & Height & 26 m \\ \cline{2-4}\rule{0pt}{8pt}
& \multirow{3}{*}{Rx} & Maximum antenna gain & 22 dBi \\ \cline{3-4}\rule{0pt}{8pt}
& & Antenna beamwidth & 20 degree \\ \cline{3-4}\rule{0pt}{8pt}
& & Height & 4.7 m \\ \hline\rule{0pt}{8pt}
\multirow{8}{*}{Satellite-terrestrial system} & \multirow{4}{*}{Tx} & Power & 40.6 dBm \\ \cline{3-4}\rule{0pt}{8pt}
& & Maximum antenna gain & 53 dBi \\ \cline{3-4}\rule{0pt}{8pt}
& & Antenna beamwidth & 1 degree \\ \cline{3-4}\rule{0pt}{8pt}
& & Height & 37469.3 km \\ \cline{2-4}\rule{0pt}{8pt}
& \multirow{3}{*}{Rx} & Maximum antenna gain & 32 dBi \\ \cline{3-4}\rule{0pt}{8pt}
& & Antenna beamwidth & 3 degree \\ \cline{3-4}\rule{0pt}{8pt}
& & Height & 5.2 m \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table}
Furthermore, rainfall can significantly affect the performance of a wireless communication system, since it causes additional attenuation to wave propagation, especially for the satellite-terrestrial link \cite{618-13}. Therefore, both communication systems are characterized for rainy and sunny weather conditions. In total, the simulation contains 8 cases, which are summarized in Table \ref{Table:Analysis cases}. The suffixes ``-R'' and ``-S'' denote the rainy and sunny weather conditions, respectively.
\begin{table}[!t]
\centering
\caption{Analysis cases for the satellite-terrestrial channel}
\label{Table:Analysis cases}
\small
\begin{tabular}{c|c|c|c|c|c}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\textbf{Tx} & \textbf{Rx} & \textbf{Weather} & \textbf{Signal} & \textbf{Interference} & \textbf{Terminology} \\ \hline\rule{0pt}{8pt}
\multirow{4}{*}{BS} & \multirow{2}{*}{TrUE} & Rainy & \checkmark & & BS2TrUE-R \\ \cline{3-6}\rule{0pt}{8pt}
& & Sunny & \checkmark & & BS2TrUE-S \\ \cline{2-6}\rule{0pt}{8pt}
& \multirow{2}{*}{SaUE} & Rainy & & \checkmark & BS2SaUE-R \\ \cline{3-6}\rule{0pt}{8pt}
& & Sunny & & \checkmark & BS2SaUE-S \\ \hline\rule{0pt}{8pt}
\multirow{4}{*}{SA} & \multirow{2}{*}{SaUE} & Rainy & \checkmark & & SA2SaUE-R \\ \cline{3-6}\rule{0pt}{8pt}
& & Sunny & \checkmark & & SA2SaUE-S \\ \cline{2-6}\rule{0pt}{8pt}
& \multirow{2}{*}{TrUE} & Rainy & & \checkmark & SA2TrUE-R \\ \cline{3-6}\rule{0pt}{8pt}
& & Sunny & & \checkmark & SA2TrUE-S \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table}
\subsection{Simulation Setup}
As a deterministic modeling method, RT simulations can provide full information of multipath effects in multiple domains and build accurate site-specific channel models.
It has been successfully used for different applications \cite{ZhouTao2,fanwei1,Guanke2,Guanke3,caixuesong1,caixuesong3,chenxiaoming}.
The RT simulator employed in this study, CloudRT, is jointly developed by Beijing Jiaotong University and Technische Universit{\"a}t Braunschweig.
It can trace rays corresponding to various propagation mechanisms, such as direct rays, reflected rays, scattered rays, etc., and has been validated and calibrated against a large number of measurements at sub-6 GHz \cite{Abbas2015Simulation} and in the terahertz (THz) band \cite{Priebe2013Stochastic}.
More than ten properties of each ray can be output from RT results, such as reflection order, time of arrival, received power, AoA, AoD, EoA, EoD, etc.
More information on CloudRT can be found in the tutorial \cite{hedanping} and at http://www.raytracer.cloud/.
The setup for RT simulations is detailed in Table \ref{Table:SimulationSetUp}.
\begin{table}[!t]
\centering
\caption{Ray-tracing simulation setup}
\label{Table:SimulationSetUp}
\small
\begin{tabular}{c|l|l|l}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\multirow{5}{*}{\begin{tabular}[c]{@{}c@{}}Propagation\\ mechanism\end{tabular}}
& Direct & \multicolumn{2}{l}{\checkmark}\\ \cline{2-4}\rule{0pt}{8pt}
& Reflection & \multicolumn{2}{l}{up to the 2$^{nd}$ order}\\ \cline{2-4}\rule{0pt}{8pt}
& Diffraction & \multicolumn{2}{l}{Uniform theory of diffraction (UTD)} \\ \cline{2-4}\rule{0pt}{8pt}
& Scattering & \multicolumn{2}{l}{Directive scattering model} \\ \cline{2-4}\rule{0pt}{8pt}
& Transmission & \multicolumn{2}{l}{\checkmark} \\ \hline\rule{0pt}{8pt}
\multirow{5}{*}{Material} & Building & \multicolumn{2}{l}{Marble, Toughened glass} \\ \cline{2-4}\rule{0pt}{8pt}
& Steep wall, Cutting walls & \multicolumn{2}{l}{Brick} \\ \cline{2-4}\rule{0pt}{8pt}
& Railway furniture, Train & \multicolumn{2}{l}{Metal} \\ \cline{2-4}\rule{0pt}{8pt}
& Tree & \multicolumn{2}{l}{Wood} \\ \cline{2-4}\rule{0pt}{8pt}
& Ground & \multicolumn{2}{l}{Concrete} \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table}
Table \ref{Table:EM parameter} summarizes the electromagnetic (EM) parameters of the involved materials, where $\varepsilon _{r}^{'}$ is the real part of the relative permittivity, $\tan\delta$ is the loss tangent, and $S$ and $\alpha$ are the scattering coefficient and scattering exponent of the directive scattering model \cite{Vittorio}\cite{wanglonghe2019hindawi}.
In particular, the parameters of wood and concrete are calibrated in \cite{Wang}.
\begin{table}[!t]
\centering
\caption{EM parameters of different materials}
\label{Table:EM parameter}
\small
\begin{tabular}{c|c|c|c|c|c|c}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
Material & Marble & Toughened glass & Brick & Metal & Wood & Concrete \\ \hline
$\varepsilon _{r}^{'}$ & 3.0045 & 1.0538 & 1.9155 & 1 & 6.6 & 5.4745 \\ \hline
$\tan\delta$ & 0.2828 & 23.9211 & 0.0568 & 10$^{7}$ & 0.9394 & 0.0021 \\ \hline
$S$ & 0.0022 & 0.0025 & 0.0019 & 0.0026 & 0.0086 & 0.0011 \\ \hline
$\alpha$ & 15.3747 & 5.5106 & 49.5724 & 17.7691 & 13.1404 & 109 \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table}
\section{Excess Propagation Attenuation}
Apart from the attenuation due to the classic propagation mechanisms already modeled in CloudRT, the additional propagation attenuation caused by several atmospheric effects is significant for millimeter wave (mmWave) communication links; it must therefore be considered and is added to CloudRT in this study.
\subsection{Excess Propagation Attenuation for Terrestrial Links}
For terrestrial links, the attenuation due to atmospheric gases and rain is considered to be of great influence \cite{530-17}.
The attenuation due to absorption by oxygen and water vapour is always present, and should be included in the calculation of the total propagation attenuation at frequencies above 10 GHz. The calculation method for the attenuation due to atmospheric gases is given in Recommendation ITU-R P.530-17 \cite{530-17}. Assuming a maximum link length of 0.6 km for the designed terrestrial HSR scenario, the maximum attenuation by atmospheric gases in this case is around 0.12 dB.
Although rain attenuation can be ignored at frequencies below 5 GHz, it must be included in attenuation calculations at higher frequencies, where its importance increases rapidly. Based on the technique for estimating long-term statistics of rain attenuation given in ITU-R P.530-17, the maximum rain attenuation for terrestrial links should be no greater than 8.1074 dB.
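As a rough sanity check (not the full ITU-R P.530-17 procedure), the excess attenuation over a short terrestrial link scales linearly with path length; the 0.2 dB/km specific attenuation below is simply the value implied by 0.12 dB over the 0.6 km maximum link, assumed here for illustration:

```python
# Sketch: excess attenuation over a terrestrial link, approximated as a
# specific attenuation (dB/km) times path length (km). The 0.2 dB/km
# value is an illustrative assumption, not the full ITU-R model.

def link_attenuation_db(gamma_db_per_km: float, distance_km: float) -> float:
    """Excess attenuation accumulated along a line-of-sight path."""
    return gamma_db_per_km * distance_km

gas_att = link_attenuation_db(0.2, 0.6)  # atmospheric gases, max link length
print(round(gas_att, 2))
```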
\subsection{Excess Propagation Attenuation for Satellite-Terrestrial Links}
The excess propagation attenuation for satellite-terrestrial links is the sum of several different elements. Generally at elevation angles above $10^\circ$ (which is $45^\circ$ in this paper), only the attenuation due to atmospheric gases, rain, clouds and possible scintillation will be significant \cite{618-13}.
The gaseous attenuation which is entirely caused by absorption depends mainly on the frequency, elevation angle, altitude above sea level and water vapour density.
The details for calculating the gaseous attenuation are given in ITU-R P.676-11 \cite{676-11}.
The typical value of the gaseous attenuation ($A_G$) here is 0.7071 dB.
Due to the uncertainty of the occurrence time and region of rainfall, it is impossible to calculate the rain attenuation exactly. After decades of observation and research, the calculation of long-term rain attenuation statistics from the point rainfall rate has been summarized in ITU-R P.618-13 \cite{618-13}.
The typical value of attenuation by rain ($A_R$) is 30.0162 dB.
The attenuation due to clouds and fog also has a considerable influence on wave propagation over satellite communication links; it can be calculated from the total columnar content of cloud liquid water according to ITU-R P.840-7 \cite{840-7}.
The typical value of attenuation by clouds and fog ($A_C$) is 2.1677 dB.
The effect of tropospheric scintillation on the signal intensifies as the carrier frequency increases, especially above 10 GHz. A general technique for predicting the cumulative distribution of tropospheric scintillation is given in ITU-R P.618-13.
The typical value of attenuation by tropospheric scintillation ($A_S$) is 0.7638 dB.
\subsection{Total Excess Propagation Attenuation}
Since two weather conditions (rainy day and sunny day) are considered in the RT simulations, the total excess propagation attenuation is obtained for each case.
For terrestrial links, the total excess propagation attenuation is 8.2274 dB for the rainy day and 0.12 dB for the sunny day.
For satellite-terrestrial links, the total attenuation represents the combined effect of atmospheric gases, rain, clouds and tropospheric scintillation. A general method for calculating total attenuation ($A_T$) is given by \cite{618-13}
\begin{equation}
\begin{split}
A_{T-Rainy}&=A_G+\sqrt{(A_R+A_C)^2+A^2_S}=32.90\; {\rm dB}\\
A_{T-Sunny}&=A_G+\sqrt{A^2_C+A^2_S}=3.01\; {\rm dB}
\end{split}
\end{equation}
Accordingly, the typical value of total attenuation is 32.90 dB for the rainy day and 3.01 dB for the sunny day in satellite-terrestrial links.
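The combining rule above can be checked numerically; a minimal sketch using the typical component values quoted in this section is:

```python
import math

# Typical component attenuations (dB) quoted above, from the ITU-R models:
# gases (A_G), rain (A_R), clouds and fog (A_C), scintillation (A_S).
A_G, A_R, A_C, A_S = 0.7071, 30.0162, 2.1677, 0.7638

def total_attenuation_db(a_g, a_c, a_s, a_r=0.0):
    """ITU-R P.618-13 combining rule for satellite-terrestrial links."""
    return a_g + math.sqrt((a_r + a_c) ** 2 + a_s ** 2)

print(round(total_attenuation_db(A_G, A_C, A_S, A_R), 2))  # rainy: 32.9 dB
print(round(total_attenuation_db(A_G, A_C, A_S), 2))       # sunny: 3.01 dB
```

The sunny-day case is obtained by setting the rain term to zero, reproducing both totals stated in the text.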
\section{Channel Characterization and Key Parameter Analysis}
Based on extensive RT simulation results, the channel characteristics for the terrestrial HSR system and the satellite-terrestrial system with two weather conditions are given by the following related key parameters: received power, RMS delay spread (DS), Rician $K$-factor (KF), ASA, ASD, ESA and ESD.
Based on the simulation results, it is identified that the impact of rainfall on DS, KF, ASA, ASD, ESA and ESD in this scenario setup is negligible. Thus, the impact of the weather condition on the channel characterization will mainly be considered on the SIR analysis.
Unless otherwise stated, the data and results in the subsequent subsections are obtained for the rainy weather condition.
\subsection{Received Power and Power Delay Profile}
The received power for both satellite-terrestrial and terrestrial links is depicted in Fig. \ref{fig:RP}, where the green solid lines represent the direct component (i.e. LOS path), the blue dotted lines depict the ensemble of multipath components (i.e. NLOS path), and the red solid lines indicate the total received power.
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/ReceivedPower.pdf}\\
\caption{Received power of four communication links: a) SA2SaUE; b) BS2TrUE; c) SA2TrUE; and d) BS2SaUE.}
\label{fig:RP}
\end{figure}
For all four links, deep fading is evident in two consecutive sections, where the moving distance of the Rx is approximately 20-40 m and 60-90 m, respectively. The deep fading results from obstruction of the direct path by crossing bridges over the railway.
As for satellite-terrestrial links (i.e. SA2SaUE and SA2TrUE), the received power is approximately a fixed value along most of the train displacement as depicted in Fig. \ref{fig:RP}(a) and Fig. \ref{fig:RP}(c).
This is because there exists a permanent direct path between the satellite and the train Rx antenna, and the impact of the multipath components on the received signal is minimal due to the narrow antenna beamwidth used for satellite communications.
Moreover, compared with the TrUE, the much narrower antenna beamwidth of the SaUE makes the direct path more prone to obstruction by the pylons (see Fig. \ref{fig:Pylon}), leading to a series of deep fades at 150, 250, 350 and 450 m in Fig. \ref{fig:RP}(a) and Fig. \ref{fig:RP}(c).
\begin{figure}[!t]
\center
\includegraphics[width=0.6\columnwidth,draft=false]{figure/Pylon.pdf}\\
\caption{Pylons in the HSR environment}
\label{fig:Pylon}
\end{figure}
Furthermore, the received power of terrestrial links (i.e., BS2TrUE and BS2SaUE) decreases as the train gradually moves away from the base station.
This is not only due to the increase of the free-space path loss with propagation distance, but also because the direct path is not aligned with the main lobe of the Rx antenna in the elevation plane, which results in relatively low power of the direct path. This antenna misalignment is depicted in Fig. \ref{fig:MainLobe}.
Although the LOS component of the terrestrial links is also obstructed when the train runs under the crossing bridges and pylons, the effect is not as pronounced as in the satellite-terrestrial links, because of the lower incidence elevation angle and the rich multipath components caused by the surrounding objects.
\begin{figure}[!t]
\center
\includegraphics[width=0.6\columnwidth,draft=false]{figure/MainLobe.pdf}\\
\caption{The antenna misalignment as the train moves}
\label{fig:MainLobe}
\end{figure}
Moreover, the power delay profiles (PDPs) for both satellite-terrestrial links and terrestrial links are depicted in Fig. \ref{fig:PDP}.
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/PDP.pdf}\\
\caption{PDPs of four communication links: a) SA2SaUE; b) BS2TrUE; c) SA2TrUE; and d) BS2SaUE.}
\label{fig:PDP}
\end{figure}
\subsection{RMS Delay Spread}
RMS delay spread is an important measure that quantifies the dispersion effect due to propagation in the time delay domain, to which the communication systems might be sensitive. RMS delay spread is defined as the square root of the second central moment of the power delay profile (PDP) \cite{Rappaport2002Wireless} as in:
\begin{linenomath*}
\begin{equation}\label{eq:DS}
{\sigma _\tau } = \sqrt {\frac{{\sum\limits_{n = 1}^N {{\tau _n}^2 \cdot {P_n}} }}{{\sum\limits_{n = 1}^N {{P_n}} }} - {{\left( {\frac{{\sum\limits_{n = 1}^N {{\tau _n} \cdot {P_n}} }}{{\sum\limits_{n = 1}^N {{P_n}} }}} \right)}^2}}
\end{equation}
\end{linenomath*}
where ${\sigma _\tau }$ is the RMS delay spread, $P_n$ and $\tau _n$ are the power and the excess delay of the $n^{th}$ multipath, respectively.
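The definition above translates directly into code. The following sketch evaluates the RMS delay spread from a list of (delay, power) pairs; the two-path input is a hypothetical example, not simulation data:

```python
import math

def rms_delay_spread(delays_ns, powers):
    """Square root of the second central moment of the PDP."""
    p_tot = sum(powers)
    mean = sum(t * p for t, p in zip(delays_ns, powers)) / p_tot
    second = sum(t * t * p for t, p in zip(delays_ns, powers)) / p_tot
    return math.sqrt(second - mean ** 2)

# Two equal-power paths 10 ns apart -> sigma_tau = 5 ns
print(rms_delay_spread([0.0, 10.0], [1.0, 1.0]))
```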
In order to quantify the RMS delay spread for four links in the HSR environment, the obtained results are fitted by normal distribution of mean value $\mu$ and standard deviation $\sigma$. These values are depicted in Table \ref{Table:Channel_Parameters}, including the normal distribution fitting values of other key channel parameters described in the following subsections.
\begin{table*}[!t]
\centering
\caption{Extracted key channel parameters of four communication links}
\label{Table:Channel_Parameters}
\scriptsize
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c}
\specialrule{0.3pt}{2pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\multirow{2}{*}{\textbf{Link}} & \multicolumn{2}{c|}{\textbf{DS {[}ns{]}}} & \multicolumn{2}{c|}{\textbf{KF {[}dB{]}}}& \multicolumn{2}{c|}{$\bm{ASA\ [^{\circ}]}$} & \multicolumn{2}{c|}{$\bm{ASD\ [^{\circ}]}$} & \multicolumn{2}{c|}{$\bm{ESA\ [^{\circ}]}$} & \multicolumn{2}{c}{$\bm{ESD\ [^{\circ}]}$} \\ \cline{2-13}\rule{0pt}{8pt}
& $\mu_{DS}$ & $\sigma_{DS}$ & $\mu_{KF}$ & $\sigma_{KF}$ & $\mu_{ASA}$ & $\sigma_{ASA}$ & $\mu_{ASD}$ & $\sigma_{ASD}$ & $\mu_{ESA}$ & $\sigma_{ESA}$ & $\mu_{ESD}$ & $\sigma_{ESD}$ \\ \hline\rule{0pt}{8pt}
BS2TrUE-R & 0.71 & 0.48 & 26.61 & 21.96 & 0.31 & 4.06 & 0.15 & 0.18 & 3.40 & 1.77 & 0.43 & 0.22 \\ \hline\rule{0pt}{8pt}
SA2TrUE-R & 2.41 & 0.36 & 53.44 & 7.67 & 0.02 & 0.02 & 0.16 & 0.39 & 0.06 & 0.08 & 0 & 0 \\ \hline\rule{0pt}{8pt}
BS2SaUE-R & 3.63 & 18.72 & 26.26 & 18.98 & 3.33 & 10.84 & 0.28 & 0.28 & 3.97 & 4.88 & 0.31 & 0.15 \\ \hline\rule{0pt}{8pt}
SA2SaUE-R & 2.42 & 0.33 & 56.76 & 1.27 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\specialrule{0.3pt}{1pt}{0.5pt}
\specialrule{0.3pt}{0.5pt}{2pt}
\end{tabular}
\end{table*}
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/DS.pdf}\\
\caption{RMS delay spread values of four communication links: a) SA2SaUE; b) BS2TrUE; c) SA2TrUE; and d) BS2SaUE.}
\label{fig:DS}
\end{figure}
The cumulative distribution functions (CDFs) of the RMS delay spread values for satellite-terrestrial and terrestrial communication links are depicted in Fig. \ref{fig:DS}.
As depicted in Fig. \ref{fig:DS}(a) and Fig. \ref{fig:DS}(c), the RMS delay spread values of the satellite-terrestrial links are similar, remaining below 3 ns with a probability of 90\%, which means that most of the strong multipath components (MPCs) are concentrated around the LOS path and the effect of MPCs on the satellite link is limited. This is consistent with the simulation results, in which the scattered rays are mainly concentrated on the top of the train.
However, an unanticipated RMS delay spread value of 200 ns is found for the BS2SaUE link, as depicted in Fig. \ref{fig:DS}(d). This is because of the relatively long time delay and high signal power of a reflected path, which is caused by the distant metallic noise barrier. This reflected path can be observed in Fig. \ref{fig:RMSsnapshot}.
\begin{figure}[!t]
\center
\includegraphics[width=0.6\columnwidth,draft=false]{figure//RMSsnapshot.pdf}\\
\caption{One reflected path in the simulation of the BS2SaUE link}
\label{fig:RMSsnapshot}
\end{figure}
\subsection{Rician $K$-factor}
The Rician $K$-factor is a significant parameter to quantify the channel fading severity, which is defined as the ratio of the power of the strongest component to the total power of the remaining components in the received signal \cite{6899647}. Thus, the Rician $K$-factor can be calculated according to its definition:
\begin{linenomath*}
\begin{equation}\label{eq:KF}
\centering
KF\left( {dB} \right) = 10 \cdot {\rm{lo}}{{\rm{g}}_{10}} \left(\frac{{{P_{{\rm{strongest}}}}}}{{\sum {{P_{{\rm{remaining}}}}} }} \right)
\end{equation}
\end{linenomath*} where $KF$ is the Rician $K$-factor, ${P_{\rm{strongest}}}$ and ${P_{\rm{remaining}}}$ are the power of the strongest component and each remaining component, respectively.
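A minimal sketch of this definition, operating on per-ray powers in linear units, is given below; the two-path input is an illustrative assumption:

```python
import math

def rician_k_factor_db(powers_linear):
    """K-factor: strongest component power over the sum of the rest, in dB."""
    strongest = max(powers_linear)
    remaining = sum(powers_linear) - strongest
    return 10 * math.log10(strongest / remaining)

# A dominant path 100x stronger than the single remaining path -> 20 dB
print(rician_k_factor_db([100.0, 1.0]))
```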
The fitting results of the Rician $K$-factor are summarized in Table \ref{Table:Channel_Parameters} and the CDFs are compared in Fig. \ref{fig:KF}.
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/KF.pdf}\\
\caption{Rician $K$-factor values of four communication links: a) SA2SaUE; b) BS2TrUE; c) SA2TrUE; and d) BS2SaUE.}
\label{fig:KF}
\end{figure}
From the table and the figures, $\mu_{KF}$ is around 55 dB for the satellite-terrestrial links. These large mean values of the Rician $K$-factor result from the strong contribution of the direct component compared to the other rays.
However, the Rician $K$-factor values in terrestrial links are significantly smaller than those in satellite-terrestrial links, falling below 0 dB in approximately 10\% of the recorded values. This is because of the richness of multipath components in terrestrial links, which increases ${P_{\rm{remaining}}}$.
\subsection{Angular Spread}
The four angular spreads (ASA, ASD, ESA, and ESD) are calculated following the approach of the 3rd Generation Partnership Project (3GPP) standard \cite{3GPP}
\begin{linenomath*}
\begin{equation}\label{eq:AS}
\sigma _{\rm{AS}} = \sqrt {\frac{{\sum\limits_{n = 1}^N {{{\left( {{\theta _{n,\mu }}} \right)}^2} \cdot {P_n}} }}{{\sum\limits_{n = 1}^N {{P_n}} }}}
\end{equation}
\end{linenomath*}
where $\sigma _{\rm{AS}}$ is the angular spread, $P_n$ is the power of the $n^{th}$ multipath component, and ${\theta _{n,\mu }}$ is defined by:
\begin{linenomath*}
\begin{equation}\label{eq:theta}
{\theta _{n,\mu }} = \bmod \left( {{\theta _n} - {\mu _\theta} + \pi ,2\pi } \right) - \pi
\end{equation}
\end{linenomath*}
where $\theta _n$ is the AoA/AoD/EoA/EoD of the $n^{th}$ multipath and ${\mu _\theta }$ is calculated by
\begin{linenomath*}
\begin{equation}\label{eq:mu}
{\mu _\theta } = \frac{{\sum\limits_{n = 1}^N {{\theta _{n }} \cdot {P_n}} }}{{\sum\limits_{n = 1}^N {{P_n}} }}
\end{equation}
\end{linenomath*}
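The three equations above (power-weighted mean angle, wrapping to $(-\pi, \pi]$, and the spread itself) can be sketched as one function; the two-path input is a hypothetical example:

```python
import math

def angular_spread(angles_rad, powers):
    """Power-weighted angular spread with (-pi, pi] wrapping, per 3GPP."""
    p_tot = sum(powers)
    # Power-weighted mean angle mu_theta.
    mu = sum(a * p for a, p in zip(angles_rad, powers)) / p_tot
    def wrap(x):
        # mod(theta - mu + pi, 2*pi) - pi, i.e. deviation wrapped to (-pi, pi].
        return (x - mu + math.pi) % (2 * math.pi) - math.pi
    return math.sqrt(sum(wrap(a) ** 2 * p
                         for a, p in zip(angles_rad, powers)) / p_tot)

# Two equal-power paths at +/-0.1 rad around boresight -> spread of 0.1 rad
print(angular_spread([-0.1, 0.1], [1.0, 1.0]))
```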
The normal distribution fitting values of the angular spreads (ASA, ASD, ESA, ESD) for each link are summarized in Table \ref{Table:Channel_Parameters}. The mean values of the angular spreads are very small for satellite-terrestrial links, which indicates that the MPCs are fewer than in terrestrial links and are mainly concentrated in the LOS direction.
In terrestrial links, the ESA and ESD values are larger than the ASA and ASD, which implies that most of the multipath components arrive from the elevation direction. This reflects the simulation results, in which a large number of reflected components come mainly from the ground, and scattering occurs mostly on the surfaces of objects on both sides of the rail track. This can be observed in Fig. \ref{fig:ASsnapshot}.
\begin{figure}[!t]
\center
\includegraphics[width=0.65\columnwidth,draft=false]{figure//ASsnapshot.pdf}\\
\caption{Reflected and scattered rays for terrestrial links}
\label{fig:ASsnapshot}
\end{figure}
\subsection{Co-channel Interference Analysis}
Frequency reuse and interference are inseparable topics. Since the satellite links and the terrestrial links use the same spectrum in this study, co-channel interference exists between these two links. Co-channel interference means that the carrier frequencies of the desired signal and the interference signal are the same and are received by the receiver without discrimination, which increases the difficulty of detecting the desired signal.
Since the interference signal provides no information, this signal will contribute to a degradation of the SIR. The SIR can then be expressed as:
\begin{linenomath*}
\begin{equation}
SIR({\rm dB})=P_{\rm{signal}}({\rm dBm})-P_{\rm{interference}}({\rm dBm})
\end{equation}
\end{linenomath*}
where SIR is the signal-to-interference ratio, $P _{\rm{signal}}$ is the useful received power from the corresponding Tx, and $P_{\rm{interference}}$ is the unwanted received power from the other Tx. The SIR is then evaluated for both satellite-terrestrial and terrestrial HSR systems.
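The SIR per snapshot and the coverage probability derived from its CDF can be sketched as follows; the dBm sample values are illustrative placeholders, not simulation outputs:

```python
# Sketch: per-snapshot SIR in dB and the fraction of snapshots above a
# threshold (the coverage probability read off the CDF).

def sir_db(p_signal_dbm, p_interference_dbm):
    """SIR in dB as the difference of powers expressed in dBm."""
    return p_signal_dbm - p_interference_dbm

def coverage_probability(sirs_db, threshold_db):
    """Fraction of snapshots whose SIR exceeds the threshold."""
    return sum(1 for s in sirs_db if s > threshold_db) / len(sirs_db)

# Hypothetical (signal, interference) power pairs in dBm.
samples = [sir_db(s, i) for s, i in [(-60, -120), (-70, -125), (-95, -90)]]
print(samples)                            # [60, 55, -5]
print(coverage_probability(samples, 40.0))
```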
\begin{figure}[!t]
\center
\includegraphics[width=1\columnwidth,draft=false]{figure/SIR.pdf}\\
\caption{Signal-to-interference ratio analysis results: a) Satellite-terrestrial system; b) CDF of satellite-terrestrial system; c) Terrestrial HSR system; and d) CDF of terrestrial HSR system.}
\label{fig:SIR}
\end{figure}
Fig. \ref{fig:SIR}(a) depicts the SIR results obtained at the satellite Rx (i.e. SaUE) in the satellite-terrestrial system. Signals from the SA2SaUE link are assumed to be useful signals while signals from the BS2SaUE link will act as interference signals.
It is obvious that the minima of the SIR result from the deep fading of $P_{\rm{signal}}$. Except for these minima, the SIR values are around -30 dB for the rainy weather condition. By calculating the CDF of the SIR as depicted in Fig. \ref{fig:SIR}(b), the SIR coverage probability can be obtained. The probability that the SIR is higher than the 0 dB threshold is approximately 2$\%$, which indicates that the interference from terrestrial BSs will have a great impact on the effective satellite link of the satellite-terrestrial communication system.
Similarly, Fig. \ref{fig:SIR}(c) presents the SIR results obtained at the terrestrial Rx (i.e. TrUE) in the terrestrial HSR system. Signals from the BS2TrUE link are assumed to be useful signals while signals from the SA2TrUE link will act as interference signals.
The peaks of the SIR occur shortly after the Rx moves below the crossing bridges: at that moment there is a direct path from the BS but no direct path from the satellite. Conversely, the minima of the SIR occur just after the Rx leaves the crossing bridges, where there is a direct path from the satellite but not from the BS. With the exception of these peaks and minima, the SIR values are around 60 dB for the rainy weather condition, as marked in Fig. \ref{fig:SIR}(c). By calculating the CDF of the SIR as depicted in Fig. \ref{fig:SIR}(d), the probability that the received SIR is greater than 40 dB is found to be approximately 98$\%$, which is reliable enough for future intelligent rail transportation applications. Evidently, the interference from satellite antennas will not have much impact on the effective terrestrial link of the terrestrial HSR system.
\subsection{Effect of Weather Conditions on SIR}
According to the calculation methods for estimating long-term rain attenuation statistics on terrestrial and satellite-terrestrial communication links given in ITU-R P.530-17 and ITU-R P.618-13, the attenuation due to rain is proportional to the distance of the communication link. Accordingly, the rainfall has a greater impact on the received signal from satellite antennas, compared with the received signal from terrestrial BSs.
Thus, for the terrestrial HSR system, the attenuation of the interference signal caused by rainfall is greater than that of the useful signal, which leads to higher SIR values on the rainy day. For the satellite-terrestrial system, in contrast, the attenuation of the useful signal caused by rainfall is greater than that of the interference signal, which results in lower SIR values on the rainy day. This is depicted in Fig. \ref{fig:SIR}.
\section{Conclusion}
In this paper, the satellite-terrestrial channel at 22.6 GHz is characterized for a typical HSR environment. The CloudRT platform is used to extract the key channel parameters of a realistic 3D HSR scenario. The objects in the scenario are defined and reconstructed according to their typical geometries and materials. Channel characterization and respective conclusions are drawn based on simulation results.
Obstacles that commonly appear in HSR scenarios, such as crossing bridges and pylons, will severely affect the performance of the wireless communication systems, since they can block the LOS path and therefore, cause deep fading on the received power.
Compared with terrestrial links, this phenomenon is more pronounced in satellite-terrestrial links, since for satellite links the LOS path contributes more than the other multipath components, which in turn makes satellite links more sensitive to shadowing.
The maximum value of the rain attenuation for terrestrial links of the scenario in this paper should be no greater than 8 dB, and the typical value of the rain attenuation for satellite-terrestrial links is around 30 dB.
Through analyzing the channel parameters for different weather conditions, we conclude that the rainfall would not influence channel parameters like Rician $K$-factor, RMS delay spread, and angular spreads, but it will influence the received power and corresponding interference between terrestrial link and satellite-terrestrial link.
When the Rx antenna is mounted on the top of the train, the large-scale objects like buildings and train stations will provide strong reflected and scattered contributions which have significant influence on the wireless channel, while some small-scale objects such as billboards and traffic signs should also not be neglected since they have a great impact on channel parameters such as Rician $K$-factor and RMS delay spread.
The SIR between satellite-terrestrial and terrestrial communication systems is also analyzed.
The SIR values for satellite-to-terrestrial interference are around 60 dB, which indicates that the interference from satellite antennas will not have much impact on the effective terrestrial links. This basically meets the requirement of good communication performance in a 5G mmWave channel. On the contrary, the SIR values for terrestrial-to-satellite interference are around -30 dB, which indicates that satellite-terrestrial communication links will be severely affected by the existence of terrestrial links, since the strength of the interference from terrestrial BSs is comparable to the received useful signal from satellite antennas.
The channel characterization analysis and the key channel parameter extraction provided in this paper, are suitable for effective link budget of the satellite-terrestrial channel in a realistic HSR environment, which will help the research community understand the propagation channel when designing mmWave technologies and communication system for future intelligent rail transportation.
Future work will address the mmWave satellite-terrestrial channel characterization in more scenarios and potential system configurations.
\acknowledgments
This work was supported in part by the Institute of Information \& communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2018-0-00792, QoE improvement of open Wi-Fi on public transportation for the reduction of communication expense), and in part by the IITP grant funded by the Korea government (MSIT) (No.2018-0-00175, 5G AgiLe and fLexible integration of SaTellite And cellulaR). Readers can access the data of this paper in the 4TU.Centre for Research Data via the link {\color{blue} \underline{\emph{https://doi.org/10.4121/uuid:f6c34e2e-fa34-4bd6-9046-1c350a9bb5db}}}.
\end{document}
\section{Introduction}
While planning to write some complex document together with some of my colleagues the same discussion was started over and over again: should \LaTeX\ or some office suite like OpenOffice.org be used? Strengths of \LaTeX\ are its deterministic behavior, its reliable handling of split documents and unmatched typesetting of formulas. On the other hand, current office suites provide the user with several features that are at least very desirable when collaboratively writing a document: they provide integrated merging facilities and are able to track changes and attach notes to the text. The merging issue can be reasonably handled by version tracking systems like SVN or CVS, but there was no acceptable solution to the issue of change tracking available. Of course, some \LaTeX\ purists tried to convince me that all change tracking can be handled by insertion of \LaTeX\ \textit{comments}. I have tried to handle one project like this but it did not work out! The main reason was that reading and editing of large documents is mostly handled in DVI format and not on the \LaTeX\ source level -- but \LaTeX\ comments cannot be seen in DVI! Especially, if you have sent one version of the document to a colleague and you want to skim quickly over it in order to see what has been changed.
While returning from a project meeting and staring out of the train's window I had the idea how we could combine the ``best of both worlds'' for collaborative text editing: by adding change tracking and note facilities to \LaTeX ! This is the basic idea of the \texttt{trackchanges} \LaTeX\ package. But this is only one part of the change tracking convenience offered by an office suite. The second part of the story is that changes and notes need to be accepted or rejected! This is the goal of the other programs of the \textbf{trackchanges} open source project hosted on sourceforge\footnote{Please visit \url{http://trackchanges.sourceforge.net}}
\end{document}
\chapter{Court scene - multiple murderer}
\annote[novi]{\emph{Cut to a courtroom. Severe atmosphere.}}{In the Flying Circus there is never a ``severe'' atmosphere.}
\noindent\\ Judge:
Michael Norman Randall, you have been found guilty of the murder of Arthur Reginald Webster, Charles Patrick Trumpington, Marcel Agnes Bernstein, Lewis Anona Rudd, John Malcolm Kerr, Nigel Sinclair Robinson, \change[ym]{Norman Arthur Potter}{Harry Potter}, \add[novi]{Thing 1 \& Thing 2, }\add[ym]{Humpty Dumpty, }\add[three]{Superman, }\add[quatro]{Cold Fusion, }\add[pyat]{The Lorax, }\add[one more]{Mr. T, }\add{The Dread Pirate Roberts, }Felicity Jayne Stone, Jean-Paul Reynard, Rachel Shirley Donaldson, Stephen Jay Greenblatt, Karl-Heinz Mullet, Belinda Anne Ventham, Juan-Carlos Fernandez, Thor Olaf Stensgaard, Lord Kimberley of Pretoria\note{Isn't Pretoria in South Africa?}, \remove{Lady Kimberley of Pretoria, }The Right Honourable Nigel Warmsly Kimberley, Robert Henry Noonan\add[novi]{, Your Mom} and Felix James Bennett, on or about the morning of the 19th December 1972\refneeded[novi]{}. Have you anything to say before I pass sentence?
\noindent\\ Randall:
\annote[ym]{Yes, sir. I'm very sorry.}{That was short.}
\noindent\\ Judge:
Very sorry?
\noindent\\ Randall:
Yes, sir. It was a very very bad thing to have done and I'm really very ashamed of myself. I can only say it won't happen again. To have murdered so many people in such a short space of time is really awful, and I really am very, very, very sorry that I did it, and also that I've taken up so much of the court's valuable time listening to the sordid details of these senseless killings of mine. I would particularly like to say, a very personal and sincere 'sorry' to you, m'lud, for my appalling behaviour throughout this trial. I'd also like to say sorry to the police, for putting them to so much trouble (shot of three heavily bandaged exhausted-looking policemen behind him) for the literally hours of work they've had to put in, collecting evidence and identifying corpses and so forth. You know I think sometimes we ought to realize the difficult and often dangerous work involved in tracking down violent criminals like myself and I'd just like them to know that their fine work is at least appreciated by me.
\noindent\\\emph{The policemen look embarrassed.}
\noindent\\ First Policeman:
No, no, we were only doing our job.
\noindent\\ \annote{Second Policeman:
No, no, no, no.
\noindent\\ Randall:
It's very good of you to say that, but I know what you've been through.
\noindent\\ First Policeman:
No, no, we've had worse.
\noindent\\ Third Policeman:
It was plain sailing apart from the arrest.}{That could have been left out.}
\noindent\\ Randall:
I know and I'm grateful. I'd like to apologize too to the prosecuting counsel for dragging him in here morning after morning in such lovely weather.
\remove[ym]{\noindent\\ Counsel:
Well, I would have had to come in anyway.
\noindent\\ Randall:
Ah good, but what a presentation of a case!
\noindent\\ Counsel:
Oh thank you.
\noindent\\ Randall:
No, no, it's a privilege to watch you in action. I never had a chance.
\noindent\\ Counsel:
Oh yes you did.
\noindent\\ Randall:
Not after that summing up. Great.
\noindent\\ Counsel:
Oh thank you. (very chuffed)}
\noindent\\ Randall:
And now I must come to the jury. What can I say. I've dragged you in here, day after day, keeping you away from your homes, your jobs, your loved ones, just to hear the private details of my petty atrocities.
\noindent\\ Foreman:
No, no, it was very \change{interesting}{fascinating}.
\noindent\\ Randall:
But you could have had a much nicer case.
\noindent\\ Foreman:
No, no, murder's much more fun.
\noindent\\ First Juryman:
Yes and so many of them.
\noindent\\ Second Juryman:
Excellent.
\noindent\\ Third Juryman:
We've had a terrific time. (the jury applauds)
\noindent\\ Randall:
(blows his nose, does a Dickie Attenborough) I'm sorry, I'm very moved. And so, m'lud, it only remains for you to pass the most savage sentence on me that the law can provide.
\noindent\\ Judge:
Well er... not necessarily.
\noindent\\ Randall:
No, m'lud, the full penalty of the law is hardly sufficient. I insist I must be made an example of.
\noindent\\ Judge:
Well yes and no. I mean society at large...
\noindent\\ Randall:
Oh no, m'lud. Not with mass murder.
\noindent\\ Judge:
But in this case, (to court) don't you think?
\noindent\\ Court:
Yes, yes!
\noindent\\ Randall:
Oh, come on, m'lud, you've got to give me life.
\noindent\\ Court:
No, no, no, no.
\noindent\\ Randall:
(to court at large) Well, ten years at least.
\noindent\\ Judge:
Ten years!
\noindent\\ Court:
Shame. Shame!
\noindent\\ Randall:
Well five then. Be fair.
\noindent\\ Judge:
No, no. I'm giving you three months.
\noindent\\ Randall:
Oh no, that's so embarrassing. I won't hear of it. Give me six...please.
\noindent\\ Judge:
Well, all right. Six months.
\noindent\\ Randall:
Thank you, m'lud.
\noindent\\ Judge:
But suspended.
\noindent\\ Randall:
Oh no.
\noindent\\ Court:
Hooray. (they applaud)
\noindent\\ Foreman:
Three cheers for the defendant. Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ All:
For he's a jolly good fellow, For he's a jolly good fellow, For he's a jolly good fellow...
\noindent\\ Voice \emph{(off)}:
Which nobody can deny.
\end{document}
\chapter{Court scene - multiple murderer}
\emph{Cut to a courtroom. Severe atmosphere.}
\noindent\\ Judge:
Michael Norman Randall, you have been found guilty of the murder of Arthur Reginald Webster, Charles Patrick Trumpington, Marcel Agnes Bernstein, Lewis Anona Rudd, John Malcolm Kerr, Nigel Sinclair Robinson, Harry Potter, Thing 1 \& Thing 2, Humpty Dumpty, Superman, Cold Fusion, The Lorax, Mr. T, The Dread Pirate Roberts, Felicity Jayne Stone, Jean-Paul Reynard, Rachel Shirley Donaldson, Stephen Jay Greenblatt, Karl-Heinz Mullet, Belinda Anne Ventham, Juan-Carlos Fernandez, Thor Olaf Stensgaard, Lord Kimberley of Pretoria, The Right Honourable Nigel Warmsly Kimberley, Robert Henry Noonan, Your Mom and Felix James Bennett, on or about the morning of the 19th December 1972. Have you anything to say before I pass sentence?
\noindent\\ Randall:
Yes, sir. I'm very sorry.
\noindent\\ Judge:
Very sorry?
\noindent\\ Randall:
Yes, sir. It was a very very bad thing to have done and I'm really very ashamed of myself. I can only say it won't happen again. To have murdered so many people in such a short space of time is really awful, and I really am very, very, very sorry that I did it, and also that I've taken up so much of the court's valuable time listening to the sordid details of these senseless killings of mine. I would particularly like to say, a very personal and sincere 'sorry' to you, m'lud, for my appalling behaviour throughout this trial. I'd also like to say sorry to the police, for putting them to so much trouble (shot of three heavily bandaged exhausted-looking policemen behind him) for the literally hours of work they've had to put in, collecting evidence and identifying corpses and so forth. You know I think sometimes we ought to realize the difficult and often dangerous work involved in tracking down violent criminals like myself and I'd just like them to know that their fine work is at least appreciated by me.
\noindent\\\emph{The policemen look embarrassed.}
\noindent\\ First Policeman:
No, no, we were only doing our job.
\noindent\\ Second Policeman:
No, no, no, no.
\noindent\\ Randall:
It's very good of you to say that, but I know what you've been through.
\noindent\\ First Policeman:
No, no, we've had worse.
\noindent\\ Third Policeman:
It was plain sailing apart from the arrest.
\noindent\\ Randall:
I know and I'm grateful. I'd like to apologize too to the prosecuting counsel for dragging him in here morning after morning in such lovely weather.
\noindent\\ Randall:
And now I must come to the jury. What can I say. I've dragged you in here, day after day, keeping you away from your homes, your jobs, your loved ones, just to hear the private details of my petty atrocities.
\noindent\\ Foreman:
No, no, it was very fascinating.
\noindent\\ Randall:
But you could have had a much nicer case.
\noindent\\ Foreman:
No, no, murder's much more fun.
\noindent\\ First Juryman:
Yes and so many of them.
\noindent\\ Second Juryman:
Excellent.
\noindent\\ Third Juryman:
We've had a terrific time. (the jury applauds)
\noindent\\ Randall:
(blows his nose, does a Dickie Attenborough) I'm sorry, I'm very moved. And so, m'lud, it only remains for you to pass the most savage sentence on me that the law can provide.
\noindent\\ Judge:
Well er... not necessarily.
\noindent\\ Randall:
No, m'lud, the full penalty of the law is hardly sufficient. I insist I must be made an example of.
\noindent\\ Judge:
Well yes and no. I mean society at large...
\noindent\\ Randall:
Oh no, m'lud. Not with mass murder.
\noindent\\ Judge:
But in this case, (to court) don't you think?
\noindent\\ Court:
Yes, yes!
\noindent\\ Randall:
Oh, come on, m'lud, you've got to give me life.
\noindent\\ Court:
No, no, no, no.
\noindent\\ Randall:
(to court at large) Well, ten years at least.
\noindent\\ Judge:
Ten years!
\noindent\\ Court:
Shame. Shame!
\noindent\\ Randall:
Well five then. Be fair.
\noindent\\ Judge:
No, no. I'm giving you three months.
\noindent\\ Randall:
Oh no, that's so embarrassing. I won't hear of it. Give me six...please.
\noindent\\ Judge:
Well, all right. Six months.
\noindent\\ Randall:
Thank you, m'lud.
\noindent\\ Judge:
But suspended.
\noindent\\ Randall:
Oh no.
\noindent\\ Court:
Hooray. (they applaud)
\noindent\\ Foreman:
Three cheers for the defendant. Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ All:
For he's a jolly good fellow, For he's a jolly good fellow, For he's a jolly good fellow...
\noindent\\ Voice \emph{(off)}:
Which nobody can deny.
\end{document}
\chapter{Court scene - multiple murderer}
\emph{Cut to a courtroom. Severe atmosphere.}
\noindent\\ Judge:
Michael Norman Randall, you have been found guilty of the murder of Arthur Reginald Webster, Charles Patrick Trumpington, Marcel Agnes Bernstein, Lewis Anona Rudd, John Malcolm Kerr, Nigel Sinclair Robinson, Norman Arthur Potter, Felicity Jayne Stone, Jean-Paul Reynard, Rachel Shirley Donaldson, Stephen Jay Greenblatt, Karl-Heinz Mullet, Belinda Anne Ventham, Juan-Carlos Fernandez, Thor Olaf Stensgaard, Lord Kimberley of Pretoria, Lady Kimberley of Pretoria, The Right Honourable Nigel Warmsly Kimberley, Robert Henry Noonan and Felix James Bennett, on or about the morning of the 19th December 1972. Have you anything to say before I pass sentence?
\noindent\\ Randall:
Yes, sir. I'm very sorry.
\noindent\\ Judge:
Very sorry?
\noindent\\ Randall:
Yes, sir. It was a very very bad thing to have done and I'm really very ashamed of myself. I can only say it won't happen again. To have murdered so many people in such a short space of time is really awful, and I really am very, very, very sorry that I did it, and also that I've taken up so much of the court's valuable time listening to the sordid details of these senseless killings of mine. I would particularly like to say, a very personal and sincere 'sorry' to you, m'lud, for my appalling behaviour throughout this trial. I'd also like to say sorry to the police, for putting them to so much trouble (shot of three heavily bandaged exhausted-looking policemen behind him) for the literally hours of work they've had to put in, collecting evidence and identifying corpses and so forth. You know I think sometimes we ought to realize the difficult and often dangerous work involved in tracking down violent criminals like myself and I'd just like them to know that their fine work is at least appreciated by me.
\noindent\\\emph{The policemen look embarrassed.}
\noindent\\ First Policeman:
No, no, we were only doing our job.
\noindent\\ Second Policeman:
No, no, no, no.
\noindent\\ Randall:
It's very good of you to say that, but I know what you've been through.
\noindent\\ First Policeman:
No, no, we've had worse.
\noindent\\ Third Policeman:
It was plain sailing apart from the arrest.
\noindent\\ Randall:
I know and I'm grateful. I'd like to apologize too to the prosecuting counsel for dragging him in here morning after morning in such lovely weather.
\noindent\\ Counsel:
Well, I would have had to come in anyway.
\noindent\\ Randall:
Ah good, but what a presentation of a case!
\noindent\\ Counsel:
Oh thank you.
\noindent\\ Randall:
No, no, it's a privilege to watch you in action. I never had a chance.
\noindent\\ Counsel:
Oh yes you did.
\noindent\\ Randall:
Not after that summing up. Great.
\noindent\\ Counsel:
Oh thank you. (very chuffed)
\noindent\\ Randall:
And now I must come to the jury. What can I say. I've dragged you in here, day after day, keeping you away from your homes, your jobs, your loved ones, just to hear the private details of my petty atrocities.
\noindent\\ Foreman:
No, no, it was very interesting.
\noindent\\ Randall:
But you could have had a much nicer case.
\noindent\\ Foreman:
No, no, murder's much more fun.
\noindent\\ First Juryman:
Yes and so many of them.
\noindent\\ Second Juryman:
Excellent.
\noindent\\ Third Juryman:
We've had a terrific time. (the jury applauds)
\noindent\\ Randall:
(blows his nose, does a Dickie Attenborough) I'm sorry, I'm very moved. And so, m'lud, it only remains for you to pass the most savage sentence on me that the law can provide.
\noindent\\ Judge:
Well er... not necessarily.
\noindent\\ Randall:
No, m'lud, the full penalty of the law is hardly sufficient. I insist I must be made an example of.
\noindent\\ Judge:
Well yes and no. I mean society at large...
\noindent\\ Randall:
Oh no, m'lud. Not with mass murder.
\noindent\\ Judge:
But in this case, (to court) don't you think?
\noindent\\ Court:
Yes, yes!
\noindent\\ Randall:
Oh, come on, m'lud, you've got to give me life.
\noindent\\ Court:
No, no, no, no.
\noindent\\ Randall:
(to court at large) Well, ten years at least.
\noindent\\ Judge:
Ten years!
\noindent\\ Court:
Shame. Shame!
\noindent\\ Randall:
Well five then. Be fair.
\noindent\\ Judge:
No, no. I'm giving you three months.
\noindent\\ Randall:
Oh no, that's so embarrassing. I won't hear of it. Give me six...please.
\noindent\\ Judge:
Well, all right. Six months.
\noindent\\ Randall:
Thank you, m'lud.
\noindent\\ Judge:
But suspended.
\noindent\\ Randall:
Oh no.
\noindent\\ Court:
Hooray. (they applaud)
\noindent\\ Foreman:
Three cheers for the defendant. Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ Foreman:
Hip. Hip.
\noindent\\ Court:
Hooray.
\noindent\\ All:
For he's a jolly good fellow, For he's a jolly good fellow, For he's a jolly good fellow...
\noindent\\ Voice \emph{(off)}:
Which nobody can deny.
\end{document}
\section{Introduction}
Given a Riemannian manifold $(M, \mathrm{g})$, its Ricci tensor $\operatorname{Ric}_\mathrm{g}$ is, like the metric itself, a symmetric 2-tensor, and hence \emph{locally} (i.e. at each point) there is a basis of the tangent space that diagonalizes $\operatorname{Ric}_\mathrm{g}$ at that point. In this article we study the problem of diagonalizing $\operatorname{Ric}_\mathrm{g}$ \emph{globally}, in a particular sense that we will describe below. Diagonalizing the Ricci tensor is helpful in studying the Einstein equation (see \cite{da09}), the prescribed Ricci curvature equation, and the Ricci flow on a homogeneous space or, more generally, a cohomogeneity one manifold.
In particular, we are interested in diagonal metrics on closed cohomogeneity one manifolds under the Ricci flow. The Ricci flow is the geometric PDE
\begin{equation}\label{eq:RF}
\frac{\dd \mathrm{g}}{\dd t} = -2\operatorname{Ric}_{\mathrm{g}}, \qquad \mathrm{g}(0) = \mathrm{g}_0
\end{equation}
for evolving in time a given Riemannian metric $\mathrm{g}_0$ on a manifold $M$. A cohomogeneity one manifold $M$ is a manifold with an action by a Lie group $\mathsf{G}$ so that the generic orbit of the group action has codimension $1$. By a diagonal metric on a cohomogeneity one manifold, we mean a metric that is diagonal with respect to a basis consisting of Killing vector fields of the action of $\mathsf{G}$ along a geodesic orthogonal to all orbits. See \cite{ak04}, \cite{aik15}, \cite{iks16} and \cite{bk16} where the Ricci flow on compact cohomogeneity one manifolds is used (sometimes implicitly) to study questions about singularity formation and curvature evolution under the flow.
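As a first illustration of \eqref{eq:RF}, suppose $\mathrm{g}_0$ is Einstein, i.e. $\operatorname{Ric}_{\mathrm{g}_0} = \lambda\, \mathrm{g}_0$. Since the Ricci tensor is invariant under constant rescalings of the metric, the solution is the self-similar family
\begin{equation*}
\mathrm{g}(t) = (1 - 2\lambda t)\, \mathrm{g}_0,
\end{equation*}
which, when $\lambda > 0$, shrinks homothetically and becomes extinct at time $t = \frac{1}{2\lambda}$.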
As we will see, Ricci-diagonality for a cohomogeneity one manifold is equivalent to Ricci-diagonality for a principal orbit, which is a homogeneous space $\mathsf{G}\cdot p \cong \mathsf{G}/\H$. Therefore, initially we focus our attention on the case where $M$ is a Lie group or a homogeneous space with a left-invariant metric. In those cases, the metric on $M$ is completely determined by the inner product on a single tangent space. For a Lie group $\mathsf{G}$, the tangent space at the identity element, $T_e \mathsf{G}$, can be identified with the Lie algebra $\gg$, and we make the following definition.
\begin{maindefn}
A basis $\mathcal{B}$ for a Lie algebra $\gg$ is said to be \emph{stably Ricci-diagonal} if every left-invariant metric $\mathrm{g}$ that is diagonal with respect to $\mathcal{B}$ has Ricci tensor $\operatorname{Ric}_\mathrm{g}$ that is also diagonal with respect to $\mathcal{B}$.
\end{maindefn}
We can similarly define stably Ricci-diagonal bases corresponding to a homogeneous space $\mathsf{G}/\H$. Which bases of a Lie algebra $\gg$ are stably Ricci-diagonal? The question has been answered for nilpotent Lie algebras: Lauret and Will proved \cite{lw13} that stably Ricci-diagonal bases are characterized by the Lie algebraic condition of being \textit{nice}. For the discussion in the present article, we redefine this condition as follows.
\begin{maindefn}
A basis $\mathcal{B} = \{X_1, \cdots, X_n\}$ for a Lie algebra $\mathfrak{g}$ is said to be \emph{nice} if $[X_i , X_j]$ is always a scalar multiple of some element in the basis.
\end{maindefn}
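For example, a $Q$-orthonormal basis $\{e_1, e_2, e_3\}$ of $\mathfrak{so}(3)$ with the cyclic bracket relations
\begin{equation*}
[e_1, e_2] = e_3, \qquad [e_2, e_3] = e_1, \qquad [e_3, e_1] = e_2
\end{equation*}
is nice. In contrast, consider $\mathfrak{su}(3)$ with the basis $e_a = \frac{i}{2}\lambda_a$ built from the Gell-Mann matrices $\lambda_1, \dots, \lambda_8$, which is orthonormal with respect to $Q(X,Y) = -2\operatorname{tr}(XY)$. Using the standard structure constants $f_{345} = \frac{1}{2}$ and $f_{458} = \frac{\sqrt{3}}{2}$, one finds
\begin{equation*}
[e_4, e_5] = -\tfrac{1}{2}\, e_3 - \tfrac{\sqrt{3}}{2}\, e_8,
\end{equation*}
which is not a scalar multiple of a single basis element, so the Gell-Mann basis is not nice.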
Under the assumptions of our article this is equivalent to the definition in \cite{lw13}. The paper \cite{lw13} also provides examples of solvable Lie algebras for which \textit{stably Ricci-diagonal} and \textit{nice} are not equivalent. In this article we show that the two conditions are equivalent for compact groups.
\begin{mainthm}\label{mainthm:nice_stably_Ric_diag}
Let $\mathsf{G}$ be a compact Lie group with biinvariant metric $Q$. Suppose $\mathcal{B} = \{e_i\}$ is a $Q$-orthonormal basis for $\gg$. Then $\mathcal{B}$ is stably Ricci-diagonal for left-invariant metrics on $\mathsf{G}$ if and only if $\mathcal{B}$ is a nice basis.
\end{mainthm}
We also give a characterization of the stably Ricci-diagonal condition for a homogeneous space $\mathsf{G}/\H$. In order to describe it, we need some more notation. Let $Q$ be a biinvariant metric on $\gg$ and let $\mathcal{B}= \{e_l\}_l$ be a $Q$-orthonormal basis for $\gg$. The Lie algebra structure constants $\gamma_{ij}^k$ are defined via $[e_i, e_j] = \displaystyle\sum_k \gamma_{ij}^k e_k$, i.e. $\gamma_{ij}^k = Q([e_i, e_j], e_k)$. Under the adjoint action of $\H$ on $\mathfrak{n} = \mathfrak{h}^\perp$, we have the orthogonal decomposition into irreducible $\H$-modules $\mathfrak{n} = \mathfrak{n}_1 \oplus \cdots \oplus \mathfrak{n}_l$.
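Since $Q$ is biinvariant, each $\operatorname{ad}(e_i)$ is skew-symmetric with respect to $Q$, i.e. $Q([e_i, e_j], e_k) = -Q(e_j, [e_i, e_k])$. Consequently, the structure constants of a $Q$-orthonormal basis are antisymmetric in all three indices:
\begin{equation*}
\gamma_{ij}^k = Q([e_i, e_j], e_k) = -Q(e_j, [e_i, e_k]) = -\gamma_{ik}^j, \qquad \gamma_{ij}^k = -\gamma_{ji}^k,
\end{equation*}
so that in particular $\gamma_{ij}^k = \gamma_{jk}^i = \gamma_{ki}^j$.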
\begin{mainpropn}\label{mainpropn:sRd_homogeneous}
Let $\mathsf{G}/\H$ be a compact homogeneous space and $\mathcal{B}$ a $Q$-orthonormal basis for $\mathfrak{n} = \mathfrak{h}^\perp$. Then $\mathcal{B}$ is stably Ricci-diagonal if and only if $\displaystyle\sum_{\substack{e_\alpha \in \mathfrak{n}_r\\ e_\beta \in \mathfrak{n}_s}} \gamma_{\alpha\beta}^i\gamma_{\alpha\beta}^j = 0$ for all $r$, $s$, $i\neq j$ and $e_i \in \mathfrak{n}_i$, $e_j \in \mathfrak{n}_j$ where $\mathfrak{n}_i$ and $\mathfrak{n}_j$ are modules equivalent under the action of $\operatorname{Ad}(\H)$.
\end{mainpropn}
We will also see that the equations only depend on $i, j$, i.e., are independent of the choice of basis elements $e_i\in \mathfrak{n}_i$, $e_j\in \mathfrak{n}_j$.
\begin{center}
***
\end{center}
In the second part of this article, we focus on cohomogeneity one manifolds. We are interested in the question of whether an invariant diagonal metric on a closed cohomogeneity one manifold $(M, \mathsf{G})$ remains diagonal (in the same basis) when evolved by the Ricci flow. When the answer is affirmative, it implies that there is a \emph{time-independent} frame that diagonalizes the restriction of the metric to each orbit. Such time-independence is not otherwise guaranteed, even though the Ricci flow preserves isometries. Preservation of diagonality has another important geometric consequence, namely that any curve transverse to all orbits which is a geodesic in the initial metric $\mathrm{g}_0$ will remain a geodesic (up to reparametrization) in the evolving metric $\mathrm{g}(t)$. In \cite{bk16}, a crucial step was to show that for the specific cohomogeneity one manifolds under consideration in that paper, diagonality of the metric is preserved under the Ricci flow.
The answer to this question a priori depends on the choice of basis used to describe the metric. At a minimum, the basis $\mathcal{B}$ must be \textit{stably Ricci-diagonal}, since otherwise the Ricci flow equation implies that the metric acquires off-diagonal terms to first order in time. However, even if the basis is known to be stably Ricci-diagonal, it is not clear that the flow preserves diagonality of the metric, since we cannot rule out the possibility of the metric acquiring off-diagonal components at higher order in time. Nevertheless, it seems natural to make the following conjecture:
\begin{mainconj}
Let $\mathcal{B}$ be a stably Ricci-diagonal basis for the cohomogeneity one manifold $(M, \mathsf{G})$, and let $\mathrm{g}_0$ be a metric on $M$ that is diagonal with respect to $\mathcal{B}$. Then the metric $\mathrm{g}(t)$ obtained by evolving $\mathrm{g}_0$ under the Ricci flow is also diagonal in the basis $\mathcal{B}$.
\end{mainconj}
The reason this is a non-trivial question is that an affirmative answer is equivalent to the existence of solutions to a degenerate parabolic system of coupled PDEs in the space ($r$) and time ($t$) variables with overdetermined boundary conditions. See \cite{kr19} for a more detailed discussion on this topic. Thus, this question is distinct from (and more difficult than) the similar question for homogeneous metrics. For the analogous result in the homogeneous setting, it is sufficient that the Ricci tensor of a diagonal metric also be diagonal in the same basis, and the conclusion follows from the existence and uniqueness theorem for ODEs.
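To illustrate the homogeneous argument, take $\mathsf{G} = \mathsf{SU}(2)$ with a $Q$-orthonormal basis $\{e_1, e_2, e_3\}$ of $\mathfrak{su}(2)$ normalized so that $[e_1, e_2] = e_3$, $[e_2, e_3] = e_1$, $[e_3, e_1] = e_2$, and let $\mathrm{g}$ be the left-invariant metric with $\mathrm{g}(e_i, e_j) = \operatorname{diag}(A, B, C)$. A classical computation (in the spirit of Milnor's curvature formulas for left-invariant metrics; other normalizations of the structure constants change the numerical factors) gives a diagonal Ricci tensor with
\begin{equation*}
\operatorname{Ric}(e_1, e_1) = \frac{A^2 - (B - C)^2}{2BC},
\end{equation*}
and the analogous expressions obtained by cyclically permuting $(A, B, C)$. The Ricci flow \eqref{eq:RF} therefore reduces to the ODE system
\begin{equation*}
\dot{A} = \frac{(B - C)^2 - A^2}{BC}, \qquad \dot{B} = \frac{(C - A)^2 - B^2}{CA}, \qquad \dot{C} = \frac{(A - B)^2 - C^2}{AB},
\end{equation*}
and uniqueness of solutions of ODEs guarantees that $\mathrm{g}(t)$ remains diagonal in the basis $\{e_1, e_2, e_3\}$ for as long as it exists.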
We prove that the above conjecture holds for a special class of cohomogeneity one manifolds. To describe this class, we need to introduce some notation. Let $(M, \mathsf{G})$ be a cohomogeneity one manifold with orbit space $M/\mathsf{G} \cong [0,L]$. Then $M$ admits a decomposition into disc bundles over the two non-principal orbits, $M = \mathsf{G}\times_{\mathsf{K}_-}D_-\cup \mathsf{G}\times_{\mathsf{K}_+}D_+$. Here $\H\subset\{\mathsf{K}_-, \mathsf{K}_+\}$ are (isotropy) subgroups of $\mathsf{G}$, and $D_\pm$ are Euclidean discs with $\partial D_\pm = S_\pm = \mathsf{K}_\pm/\H$. Conversely, any collection of groups $\H \subset \{\mathsf{K}_-, \mathsf{K}_+\} \subset \mathsf{G}$ where $\mathsf{K}_\pm/\H$ are spheres, gives rise to a cohomogeneity one manifold via the above union of disk bundles. We denote by $\mathfrak{h}\subset \k_\pm\subset \gg$ the corresponding Lie algebras. Then the tangent space at a point $p$ in $M$ is identified with $\mathfrak{h}^\perp \oplus \operatorname{span}\{\frac{\partial}{\partial r}\}$, where $r$ is a variable parameterizing the orbit space. We use $\mathcal{B}$ to denote a basis for $\mathfrak{h}^\perp$ and define $\mathcal{B}' = \mathcal{B} \cup \{\frac{\partial}{\partial r}\}$.
\begin{mainthm}\label{mainthm:RF_SO}
Let $M$ be a manifold with a cohomogeneity one action by $\mathsf{G} = \mathsf{SO}(n)$. Suppose that the isotropy groups $\H$, $\mathsf{K}_\pm$ are each products of block embeddings of $\mathsf{SO}(k)$'s. Let $\mathcal{B}$ be a basis for $\mathfrak{h}^\perp$ that is a subset of the natural basis $\{E_{ij}\}$ of $\mathfrak{so}(n)$ and $\mathrm{g}_0$ a diagonal metric with respect to $\mathcal{B}'$. Then the metric $\mathrm{g}(t)$ satisfying \eqref{eq:RF} is also diagonal with respect to $\mathcal{B}'$ as long as the flow exists.
\end{mainthm}
Here $\mathfrak{so}(n)$ is the Lie algebra of $\mathsf{SO}(n)$ and $E_{ij}$ is the matrix with a $1$ in the $(i,j)$ entry, a $-1$ in the $(j,i)$ entry and $0$'s in all other entries. As we will see, $\{E_{ij}\}_{1\leq i<j \leq n}$ is a nice basis for $\mathfrak{so}(n)$.
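In fact, niceness of this basis can be verified directly. Writing $E_{ij} = e_i e_j^T - e_j e_i^T$, where here $e_i$ denote the standard coordinate vectors of $\mathbb{R}^n$, a short computation gives
\begin{equation*}
[E_{ij}, E_{kl}] = \delta_{jk} E_{il} + \delta_{il} E_{jk} - \delta_{ik} E_{jl} - \delta_{jl} E_{ik}.
\end{equation*}
If the pairs $\{i,j\}$ and $\{k,l\}$ are disjoint or equal, the bracket vanishes; if they share exactly one index, exactly one Kronecker delta on the right-hand side is nonzero, so the bracket equals $\pm E_{pq}$ for a single pair $p < q$. Hence every bracket of basis elements is a scalar multiple of a basis element.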
In fact, the conclusion of Theorem \ref{mainthm:RF_SO} holds for a slightly larger class of metrics, see Theorem \ref{thm:rf_diag_pres_so_disconnected}. As an application, in the next result we show that the techniques of the main theorem in \cite{bk16} extend to higher dimensions.
\begin{mainthm}\label{mainthm:sec_GZ}
Let $M$ be a manifold with a cohomogeneity one action by $\mathsf{SO}(n)$ with a group diagram where the groups $\H$, $\mathsf{K}_\pm$ are products of $\mathsf{SO}(k)$'s in block embedding, and such that there are two singular orbits each of codimension two. Then $M$ admits a metric $\mathrm{g}_0$ such that $\sec_{\mathrm{g}_0} \geq 0$ and when evolved by the Ricci flow, $\mathrm{g}(t)$ immediately acquires some negatively curved $2$-planes.
\end{mainthm}
This paper is organized as follows. In Section \ref{sec:cohom1mfds} we give some background and explain our notation for invariant cohomogeneity one metrics. The relevant notation for homogeneous spaces is also presented implicitly in this section. In Section \ref{sec:nice_stably_Ric_diag} we prove Theorem \ref{mainthm:nice_stably_Ric_diag} and Proposition \ref{mainpropn:sRd_homogeneous}. In Section \ref{sec:examples} we provide examples and investigate the nice basis condition for some semisimple Lie algebras. Section \ref{sec:SO_grp_diagram} contains the proof of Theorem \ref{mainthm:RF_SO}. Finally, Theorem \ref{mainthm:sec_GZ} is proved in Section \ref{sec:appln}.
\subsection*{Acknowledgements} I am grateful to Wolfgang Ziller and Renato Bettiol for many detailed and helpful comments. I also thank Lee Kennard and William Wylie for their encouragement.
\section{Cohomogeneity one manifolds}\label{sec:cohom1mfds}
In this Section, we recall the definition of a cohomogeneity one group action and describe the structure of invariant metrics on a cohomogeneity one manifold. For more details one may refer to \cite{gz00}, \cite{gz02}.
\subsection{Cohomogeneity one structure}
A Lie group $\mathsf{G}$ is said to act on a manifold $M$ with cohomogeneity one if the orbit space $M/\mathsf{G}$ is one-dimensional (equivalently, if the generic orbits of the group action are hypersurfaces, i.e. have codimension one). If $M$ is compact, this implies that $M/\mathsf{G}$ is either an interval $[0,L]$ or a circle $S^1$. The former is guaranteed when the manifold is simply connected. We will assume from now on that $M/\mathsf{G} = [0,L]$. Let $\pi$ be the quotient map $M\rightarrow M/\mathsf{G}$. The generic orbits, i.e. $\pi^{-1}(r)$ for $r\in(0,L)$, are called principal orbits. The open set $M^0$ formed by the union of all the principal orbits is sometimes referred to as the principal part of $M$. The orbits $B_- = \pi^{-1}(0)$ and $B_+ = \pi^{-1}(L)$ are called singular orbits.
Pick any point $x_- \in B_-$ and let $\gamma(r)$ be a minimal geodesic normal to $B_-$, with $\gamma(0) = x_-$ and meeting the other singular orbit $B_+$ for the first time in $\gamma(L) = x_+$. Then $\gamma(r)$ for $r\in(0,L)$ parametrizes the orbit space. The isotropy group is the same group $\H\subset \mathsf{G}$ at all points $\gamma(r)$ with $0<r<L$ and $\H$ is called the principal isotropy group. The isotropy groups $\mathsf{K}_\pm$ at $x_\pm$ are the singular isotropy groups. Thus each principal orbit is isometric to a homogeneous space $\mathsf{G}/\H$ and the singular orbits $B_\pm$ are isometric to $\mathsf{G}/\mathsf{K}_\pm$ respectively.
By the Slice Theorem, $M$ is composed of disk bundles over the singular orbits $B_-$ and $B_+$, glued along their common boundary $\mathsf{G}/\H$. This also implies that $\mathsf{K}_\pm/\H$ are diffeomorphic to spheres $S^{l_\pm}$. The data $\H\subset \mathsf{K}_\pm\subset \mathsf{G}$ is called a group diagram, and determines the cohomogeneity one manifold up to equivariant diffeomorphism.
\subsection{Invariant metrics}
By symmetry, any invariant metric is completely determined by specifying it along $\gamma$. Thus a cohomogeneity one metric on the principal part of $M$ has the following form:
\begin{equation*}
\mathrm{g} = \dd r^2 + \mathrm{g}_r, \qquad r \in (0, L),
\end{equation*}
where $\mathrm{g}_r$ is a one parameter family of homogeneous metrics on the fixed homogeneous space $\mathsf{G}/\H$. This metric extends across the singular orbits to yield a smooth metric on all of $M$ if and only if the metric and its derivatives satisfy certain differential conditions known as \textit{smoothness conditions} at the endpoints $r=0$ and $r=L$ (see \cite{vz18}).
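The round sphere furnishes the simplest example. The action of $\mathsf{SO}(n)$ on the first $n$ coordinates of $\mathbb{R}^{n+1}$ restricts to a cohomogeneity one action on $S^n$ with group diagram $\mathsf{SO}(n-1) \subset \{\mathsf{SO}(n), \mathsf{SO}(n)\} \subset \mathsf{SO}(n)$; the two singular orbits are the poles, and the round metric takes the form
\begin{equation*}
\mathrm{g} = \dd r^2 + \sin^2(r)\, \mathrm{g}_{S^{n-1}}, \qquad r \in (0, \pi),
\end{equation*}
where $\mathrm{g}_{S^{n-1}}$ is the unit round metric on the principal orbit $S^{n-1} = \mathsf{SO}(n)/\mathsf{SO}(n-1)$. Here the smoothness conditions at $r = 0$ and $r = \pi$ amount to the scale function vanishing with derivative $\pm 1$ at the endpoints, which $\sin(r)$ satisfies.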
\subsection{Diagonal metrics}
We will now explain more carefully what we mean by a diagonal metric on a cohomogeneity one manifold. Let $\H\subset \mathsf{K}_\pm\subset \mathsf{G}$ be the group diagram and let $\mathfrak{h}\subset \mathfrak{k}_\pm\subset \mathfrak{g}$ be the corresponding Lie algebras. Let $Q$ be a biinvariant metric on $\mathfrak{g}$ and $\mathfrak{m}_\pm=\mathfrak{k}_\pm^\perp$, $\mathfrak{p}_\pm=\mathfrak{h}^\perp \cap \mathfrak{k}_\pm$ with respect to this metric. Thus $\mathfrak{g} = \mathfrak{h} \oplus \mathfrak{p}_- \oplus \mathfrak{m}_- = \mathfrak{h} \oplus \mathfrak{p}_+ \oplus \mathfrak{m}_+$.
Let $\{X_i\}_{i=1}^m$ be a $Q$-orthonormal basis for $\mathfrak{h}^\perp$ that respects the decompositions $\mathfrak{h}^\perp = \mathfrak{m}_+\oplus\mathfrak{p}_+ = \mathfrak{m}_-\oplus\mathfrak{p}_-$. Note that the existence of such a basis is itself an assumption on the group diagram: the group diagram described in the following example \emph{does not} admit such a basis.
\begin{example}
Let $\{e_1, e_2, e_3\}$ be a $Q$-orthonormal basis for $\gg = \mathfrak{so}(3)$. Consider the group diagram whose corresponding Lie algebras are given by
\begin{align*}
\k_- &= \operatorname{span}\{e_1\};\,\,\, \k_+ = \operatorname{span}\left\{e_1+e_2-2e_3\right\};\,\,\, \mathfrak{h} = \{0\}\\
\implies \mathfrak{p}_- &= \operatorname{span}\{e_1\};\,\, \mathfrak{m}_- = \operatorname{span}\{e_2, e_3\};\,\, \mathfrak{p}_+ = \operatorname{span}\left\{e_1+e_2-2e_3\right\};\\\mathfrak{m}_+ &= \operatorname{span}\left\{ e_1+e_2+e_3, e_1-e_2 \right\}
\end{align*}
A basis that respects the decomposition $\mathfrak{h}^\perp = \mathfrak{m}_-\oplus\mathfrak{p}_-$ must contain $v_1 = e_1$ as an element. A basis that respects the decomposition $\mathfrak{h}^\perp = \mathfrak{m}_+\oplus\mathfrak{p}_+$ must contain $v_2 = \frac{e_1+e_2-2e_3}{\sqrt{6}}$ as an element. Since $Q(v_1,v_2) = \frac{1}{\sqrt{6}} \neq 0$, no such basis can be $Q$-orthogonal.
\end{example}
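As a quick sanity check (our own illustration, not part of the mathematical argument), the obstruction above can be verified numerically. In the sketch below, vectors are represented by their coordinates with respect to the $Q$-orthonormal basis $\{e_1, e_2, e_3\}$, so that $Q$ becomes the standard dot product:

```python
import numpy as np

# Coordinates with respect to the Q-orthonormal basis {e1, e2, e3} of so(3),
# so that Q is the standard dot product in these coordinates.
v1 = np.array([1.0, 0.0, 0.0])                  # unit vector spanning p_-
v2 = np.array([1.0, 1.0, -2.0]) / np.sqrt(6.0)  # unit vector spanning p_+

# Q(v1, v2) = 1/sqrt(6) != 0, so no Q-orthogonal basis contains both.
Q_v1_v2 = float(v1 @ v2)
```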
The vector space $\mathfrak{h}^\perp$ can be identified with the tangent space to $\mathsf{G}/\H$ at $[\H]$ in the following way. Let $\{X_i^*(r)\}_{i=1}^m$ be Killing vector fields along the curve $\gamma$, defined by
\begin{align*}
X_i^*(r) = \frac{\dd}{\dd s}\exp(s\, X_i)\cdot\gamma(r)\big|_{s=0}
\end{align*}
Then $\{X_i^*(r)\}_{i=1}^m$ is a basis for the tangent space to the orbit $\mathsf{G}\cdot\gamma(r) \cong \mathsf{G}/\H$ at $\gamma(r)$. Also, for $i=1,\,\cdots,\,m$, let $\omega_i$ be the 1-form dual to the vector field $X_i^*$. A diagonal metric is one of the form
\begin{equation*}
\begin{aligned}
\mathrm{g}(r) = h(r)^2 \dd r^2 + \sum_{i=1}^m f_i(r)^2 \omega_i^2, \text{ $ r\in (0,L)$}
\end{aligned}
\end{equation*}
along a fixed geodesic orthogonal to all the orbits.
\begin{remark}
The metric is not necessarily diagonal at points outside the geodesic $\gamma$. The value of $\, \mathrm{g}(X_i^*, X_j^*)$ at an arbitrary point of $M$ is determined by its value along $\gamma$, with the help of the group action. In particular,
\begin{align*}
\mathrm{g}(X_i^*, X_j^*)\big|_{g\H} = \mathrm{g}\big((\operatorname{Ad}_{g^{-1}}X_i)^*, (\operatorname{Ad}_{g^{-1}}X_j)^*\big)\big|_{[\H]}
\end{align*}
Since the metric on the homogeneous space $\mathsf{G}/\H$ is left-invariant but not necessarily biinvariant, the Killing vector fields $X_i^*$ and $X_j^*$ for $i\neq j$ will in general not be orthogonal at points not on $\gamma$.
\end{remark}
In the next example, we provide a basis that respects the group diagram but is neither nice nor stably Ricci-diagonal, as we check explicitly.
\begin{example}
The Kervaire sphere $S^5$ has a cohomogeneity one action (see \cite{gvwz06}) with the following group diagram:
\begin{align*}
\mathsf{G} &= \mathsf{SO}(2)\times \mathsf{SO}(3),\\ \mathsf{K}_- &= \mathsf{SO}(2) = (e^{-i\theta}, \operatorname{diag}(R(d\theta), 1)),\\ \mathsf{K}_+ &= \O(2) = (\det B, \operatorname{diag}(\det B, B)),\\ \H &= \mathbb{Z}_2 = \langle -1, \operatorname{diag}(-1, -1, 1) \rangle,
\end{align*}
where $d$ is an odd integer. We select the following basis for $\gg$, which respects the inclusions $\mathfrak{h}\subset\mathfrak{k}_\pm\subset\gg$ and is orthonormal in the natural biinvariant metric on $\mathsf{G}$:
\begin{align*}
X_1 = \frac{1}{\sqrt{d^2+1}}(-I, dE_{12}),\quad X_2 = \frac{1}{\sqrt{d^2+1}}(dI,E_{12}),\quad X_3 = (0, E_{13}),\quad X_4 = (0, E_{23})
\end{align*}
Then, $[X_3, X_4] = -\frac{d}{\sqrt{d^2+1}}X_1 - \frac{1}{\sqrt{d^2+1}}X_2$, so this is not a nice basis.
This basis is also not stably Ricci diagonal. Indeed, we can choose the metric such that at some point in the interior of the geodesic $\gamma$, the functions $f_i$ all have the same value. At such a point, \cite[Proposition 1.14]{gz02} implies $\operatorname{Ric}(X_1, X_2) = \frac{1}{2}\frac{d}{d^2+1} \neq 0$.
\end{example}
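The bracket computation above can be checked by direct matrix computation; the following sketch (our own illustration, with the assumed conventions $Q(X,Y) = -\tfrac{1}{2}\operatorname{tr}(XY)$ on each factor and basis vectors normalized to be $Q$-orthonormal) verifies that $[X_3, X_4]$ has nonzero components along both $X_1$ and $X_2$, so the basis is not nice:

```python
import numpy as np

def E(n, i, j):
    """Standard basis element E_ij of so(n) (1-indexed)."""
    M = np.zeros((n, n))
    M[i-1, j-1], M[j-1, i-1] = 1.0, -1.0
    return M

def emb(A, B):
    """Embed a pair (A, B) in so(2) + so(3) as a 5x5 block-diagonal matrix."""
    M = np.zeros((5, 5))
    M[:2, :2], M[2:, 2:] = A, B
    return M

Q = lambda X, Y: -0.5 * np.trace(X @ Y)  # biinvariant metric; each E_ij has norm 1

d = 3                                    # any odd integer works here
I2 = E(2, 1, 2)
X1 = emb(-I2, d * E(3, 1, 2)) / np.sqrt(d**2 + 1)
X2 = emb(d * I2, E(3, 1, 2)) / np.sqrt(d**2 + 1)
X3 = emb(np.zeros((2, 2)), E(3, 1, 3))
X4 = emb(np.zeros((2, 2)), E(3, 2, 3))

br = X3 @ X4 - X4 @ X3                   # [X3, X4]
c1, c2 = Q(br, X1), Q(br, X2)            # components along X1 and X2
# Both components are nonzero (and c1 = d * c2), so [X3, X4] is not a
# multiple of a single basis element.
```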
Sometimes we will also need the following notation. The isotropy group $\H$ acts on $\mathfrak{n} = \mathfrak{h}^\perp$ via the adjoint action, and we write $\mathfrak{n} = \mathfrak{n}_1 \oplus \cdots \oplus \mathfrak{n}_l$ as a sum of $Q$-orthogonal irreducible $\H$-modules. Then, by Schur's lemma, $\mathrm{g}|_{\mathfrak{n}_i}$ is a multiple of $Q|_{\mathfrak{n}_i}$, say $\mathrm{g}|_{\mathfrak{n}_i} = f_i(r)^2\, Q|_{\mathfrak{n}_i}$ at $\gamma(r)$. It will usually be clear from context whether we are using a given index to denote an $\H$-module or an individual vector.
\section{Nice bases and stably Ricci-diagonal bases}\label{sec:nice_stably_Ric_diag}
In this Section, we give a Lie-algebraic characterization for a basis to be stably Ricci-diagonal, proving Theorem \ref{mainthm:nice_stably_Ric_diag} and Proposition \ref{mainpropn:sRd_homogeneous}. First, we note some properties of the Lie algebra structure constants, which will enable us to simplify the expression for $\operatorname{Ric}_\mathrm{g}$.
Let $\mathsf{G}$ be a compact group with biinvariant metric $Q$ and $\{e_l\}$ a $Q$-orthonormal basis for $\gg$. If $\gamma_{ij}^k = Q([e_i, e_j], e_k)$ then
\begin{align*}
\gamma_{ij}^k &= -\gamma_{ik}^j = -\gamma_{kj}^i = -\gamma_{ji}^k
\end{align*}
since $\operatorname{ad}_X$ is skew-symmetric in $Q$. In particular, if any two indices are equal, then that structure constant is zero.
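These identities are easy to verify by direct computation; the sketch below (our own illustration) checks them for $\mathfrak{so}(4)$ with the assumed biinvariant metric $Q(X,Y) = -\tfrac{1}{2}\operatorname{tr}(XY)$, for which the $E_{ij}$ are $Q$-orthonormal:

```python
import numpy as np
from itertools import combinations, product

def E(n, i, j):
    """Standard basis element E_ij of so(n) (1-indexed)."""
    M = np.zeros((n, n))
    M[i-1, j-1], M[j-1, i-1] = 1.0, -1.0
    return M

n = 4
basis = [E(n, i, j) for i, j in combinations(range(1, n + 1), 2)]
Q = lambda X, Y: -0.5 * np.trace(X @ Y)

def gamma(i, j, k):
    """Structure constant gamma_ij^k = Q([e_i, e_j], e_k)."""
    return Q(basis[i] @ basis[j] - basis[j] @ basis[i], basis[k])

# gamma_ij^k = -gamma_ik^j = -gamma_kj^i = -gamma_ji^k for all index triples
ok = all(
    np.isclose(gamma(i, j, k), -gamma(i, k, j))
    and np.isclose(gamma(i, j, k), -gamma(k, j, i))
    and np.isclose(gamma(i, j, k), -gamma(j, i, k))
    for i, j, k in product(range(len(basis)), repeat=3)
)
```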
We refer to the formulae for the Ricci curvature derived in \cite[Proposition 1.14]{gz02}. The formulae there are for the Ricci curvature of a cohomogeneity one manifold, but it is easy to read off the Ricci curvature of a principal orbit/homogeneous space by subtracting the second fundamental form contribution, i.e. any term involving derivatives. Using the skew-symmetry to collect terms and simplify, we write the expression for $\operatorname{Ric}_\mathrm{g}$ for a homogeneous space $\mathsf{G}/\H$ as follows:
\begin{equation}\label{eqn:ric_homog_biinv}
\begin{aligned}
\operatorname{Ric}^{\mathsf{G}/\H}(e_i, e_j) = \sum_{\{r,s\}}\left( \frac{f_i^2f_j^2 - (f_r^2- f_s^2)^2}{2f_r^2f_s^2} \sum_{\substack{e_\alpha \in \mathfrak{n}_r\\e_\beta \in \mathfrak{n}_s}} \gamma_{\alpha\beta}^i\gamma_{\alpha\beta}^j \right)
\end{aligned}
\end{equation}
where $e_i\in \mathfrak{n}_i$, $e_j\in \mathfrak{n}_j$ and $i\neq j$. We omit the formula for the restriction of $\operatorname{Ric}_\mathrm{g}$ to a single $\H$-module $\mathfrak{n}_i$, since irreducibility of $\mathfrak{n}_i$ implies $\operatorname{Ric}_\mathrm{g}|_{\mathfrak{n}_i}$ is diagonal. A Lie group $\mathsf{G}$ is a special case of the above, in which case each module $\mathfrak{n}_i$ is $1$-dimensional.
\begin{equation}
\label{eqn:ric_cpt_biinv}
\begin{aligned}
\operatorname{Ric}^\mathsf{G}(e_i,e_j) = \sum_{\{r,s\}} \frac{f_i^2f_j^2 - (f_r^2 - f_s^2)^2}{2f_r^2f_s^2} \gamma_{rs}^i\gamma_{rs}^j
\end{aligned}
\end{equation}
In each case, the sums (with subscript $\{r, s\}$) are over \textit{unordered pairs} of indices $r,s$, denoting $\H$-modules in the case of $\mathsf{G}/\H$ and individual vectors in the case of $\mathsf{G}$.
With these preliminaries, we are now ready to show that a nice basis is stably Ricci-diagonal.
\begin{proposition}\label{propn:Ric-diag}
Consider a homogeneous space $\mathsf{G}/\H$. Let $Q$ be a biinvariant metric on $\mathsf{G}$ and let $\mathfrak{n}$ be an $\operatorname{Ad}_\H$-invariant complement of $\mathfrak{h}$ in $\mathfrak{g}$. Let $\mathcal{B} = \{e_\alpha\}$ be a $Q$-orthonormal basis for $\mathfrak{n}$ such that for any pair $e_i, e_j \in \mathcal{B}$, the projection $[e_i, e_j]_\mathfrak{n}$ of $[e_i, e_j]$ onto $\mathfrak{n}$ is a scalar multiple of a single basis element.
Then $\operatorname{Ric}_\mathrm{g}^{\mathsf{G}/\H}$ is diagonal in the basis $\mathcal{B}$ whenever the metric $\mathrm{g}$ is diagonal with respect to $\mathcal{B}$.
\end{proposition}
\begin{proof}
As remarked above, by irreducibility of the $\mathfrak{n}_i$'s, $\operatorname{Ric}_\mathrm{g}^{\mathsf{G}/\H}$ is diagonal when restricted to a single $\mathfrak{n}_i$. Thus we need only consider the case where $e_i \in \mathfrak{n}_i$, $e_j\in \mathfrak{n}_j$ with $i\neq j$. By assumption, for each pair of basis elements $e_\alpha, e_\beta$, we have $\gamma_{\alpha\beta}^i\neq 0$ for at most one index $i$, so every product $\gamma_{\alpha\beta}^i\gamma_{\alpha\beta}^j$ with $i\neq j$ vanishes. Therefore equation \eqref{eqn:ric_homog_biinv} implies the result.
\end{proof}
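The proposition can also be confirmed numerically for a concrete nice basis. The sketch below (our own illustration) evaluates formula \eqref{eqn:ric_cpt_biinv} for the standard basis $\{E_{ij}\}$ of $\mathfrak{so}(4)$, which is nice, with randomly chosen diagonal metric coefficients, and checks that all off-diagonal Ricci terms vanish:

```python
import numpy as np
from itertools import combinations

def E(n, i, j):
    """Standard basis element E_ij of so(n) (1-indexed)."""
    M = np.zeros((n, n))
    M[i-1, j-1], M[j-1, i-1] = 1.0, -1.0
    return M

n = 4
basis = [E(n, i, j) for i, j in combinations(range(1, n + 1), 2)]
m = len(basis)
Q = lambda X, Y: -0.5 * np.trace(X @ Y)
# Structure constants gam[r, s, i] = gamma_rs^i
gam = np.array([[[Q(basis[r] @ basis[s] - basis[s] @ basis[r], basis[i])
                  for i in range(m)] for s in range(m)] for r in range(m)])

rng = np.random.default_rng(0)
f = rng.uniform(0.5, 2.0, size=m)  # arbitrary diagonal metric g = sum f_i^2 w_i^2

def ric_offdiag(i, j):
    """Off-diagonal Ricci term from eqn (ric_cpt_biinv), sum over unordered pairs."""
    total = 0.0
    for r, s in combinations(range(m), 2):
        coef = (f[i]**2 * f[j]**2 - (f[r]**2 - f[s]**2)**2) / (2 * f[r]**2 * f[s]**2)
        total += coef * gam[r, s, i] * gam[r, s, j]
    return total

max_offdiag = max(abs(ric_offdiag(i, j)) for i, j in combinations(range(m), 2))
```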
In particular, for a Lie group $\mathsf{G}$, the above proposition implies:
\begin{corollary}
\label{cor:nice->sRd}
Let $\mathsf{G}$ be a compact Lie group with biinvariant metric $Q$. Let $\mathcal{B}$ be a $Q$-orthonormal basis for $\gg$ such that $\mathcal{B}$ is a nice basis. Then $\mathcal{B}$ is stably Ricci-diagonal.
\end{corollary}
We will now prove the converse.
\begin{theorem}\label{thm:nice_stably_Ric_diag}
Let $\mathsf{G}$ be a compact group with biinvariant metric $Q$. Let $\mathcal{B} = \{e_i\}$ be a $Q$-orthonormal basis for $\gg$. Suppose that $\mathcal{B}$ is stably Ricci-diagonal. Then $\mathcal{B}$ is a nice basis.
\end{theorem}
\begin{proof}
Let $\mathrm{g} = \sum_{i=1}^n f_i^2\omega_i^2$ be a left-invariant metric on $\mathsf{G}$ that is diagonal with respect to $\mathcal{B}$.
If $\mathcal{B}$ is stably Ricci-diagonal then the right hand side of \eqref{eqn:ric_cpt_biinv} must equal $0$ for every choice of the values $f_k$. This gives equations that the structure constants $\gamma_{ab}^c$ must satisfy; by choosing sufficiently many values of the constants $f_k$, we will show that $\gamma_{rs}^i\gamma_{rs}^j = 0$ whenever $i\neq j$. If we let $\mathrm{g}_0$ be the diagonal metric with $f_k = c$ for each $k$, then $\operatorname{Ric}_{\mathrm{g}_0}(e_i, e_j)=0$ implies
\begin{equation}\label{eq:sum_zero1}
\sum_{\{r,s\}} \gamma_{rs}^i \gamma_{rs}^j = 0.
\end{equation}
Next, fix one $r_0$. Consider the metric $\mathrm{g}_1$ where $f_{r_0} = \sqrt{2}c$ and $f_k=c$ for all $k\neq r_0$. Then $\operatorname{Ric}_{\mathrm{g}_1}(e_i, e_j) = 0$ implies
\begin{align*}
\sum_{s\neq r_0, i, j}\frac{c^4 - (2c^2 - c^2)^2}{4c^4}\gamma_{r_0 s}^i \gamma_{r_0 s}^j &+ \sum_{\substack{\{r,s\}:\\ r,s \neq r_0}} \frac{c^4 - (c^2-c^2)^2}{2c^4} \gamma_{rs}^i \gamma_{rs}^j = 0\\
\implies & \sum_{\substack{\{r,s\}:\\ r,s \neq r_0}}\gamma_{rs}^i \gamma_{rs}^j = 0 \numberthis \label{eq:sum_zeroo}
\end{align*}
Subtracting \eqref{eq:sum_zeroo} from \eqref{eq:sum_zero1} we see that for each fixed $r_0$,
\begin{equation}\label{eq:sum_zero2}
\sum_{s\neq r_0} \gamma_{r_0 s}^i \gamma_{r_0 s}^j = 0
\end{equation}
Now, fix a pair of indices $r_0$ and $r_1$. Consider the diagonal metric $\mathrm{g}_2$ where $f_{r_0} = f_{r_1} = \sqrt{2}c$ and $f_s = c$ for all $s\neq r_0, r_1$. The condition that $\operatorname{Ric}_{\mathrm{g}_2}(e_i,e_j)=0$ can be written as
\small
\begin{multline*}
\begin{aligned}
\sum_{s\neq r_0, r_1} \frac{f_i^2f_j^2 - (f_{r_0}^2-f_s^2)^2}{2f_{r_0}^2f_s^2} \gamma_{r_0 s}^i \gamma_{r_0 s}^j + \sum_{s\neq r_0, r_1} \frac{f_i^2f_j^2 - (f_{r_1}^2-f_s^2)^2}{2f_{r_1}^2f_s^2} \gamma_{r_1 s}^i \gamma_{r_1 s}^j + \frac{f_i^2f_j^2 - (f_{r_0}^2 - f_{r_1}^2)^2}{2f_{r_0}^2f_{r_1}^2}\gamma_{r_0 r_1}^i\gamma_{r_0 r_1}^j\\ + \sum_{\substack{\{r,s\}:\\r,s \neq r_0,r_1}}\frac{f_i^2f_j^2 - (f_s^2-f_r^2)^2}{2f_r^2f_s^2}\gamma_{rs}^i\gamma_{rs}^j = 0\\
\mbox{\normalsize Hence, } \sum_{s\neq r_0, r_1} \frac{c^4 - (2c^2-c^2)^2}{4c^4} \gamma_{r_0 s}^i \gamma_{r_0 s}^j + \sum_{s\neq r_0, r_1} \frac{c^4 - (2c^2-c^2)^2}{4c^4} \gamma_{r_1 s}^i \gamma_{r_1 s}^j + \frac{c^4 - (2c^2 - 2c^2)^2}{8c^4}\gamma_{r_0 r_1}^i\gamma_{r_0 r_1}^j\\ + \sum_{\substack{\{r,s\}:\\r,s \neq r_0,r_1}}\frac{c^4 - (c^2-c^2)^2}{2c^4}\gamma_{rs}^i\gamma_{rs}^j = 0\\
\end{aligned}
\end{multline*}
\normalsize
Therefore we obtain
\begin{equation}\label{eq:sum_zero3}
\gamma_{r_0 r_1}^i\gamma_{r_0 r_1}^j + 4 \sum_{\substack{\{r,s\}:\\ r,s \neq r_0,r_1}}\gamma_{rs}^i\gamma_{rs}^j = 0
\end{equation}
Equation \eqref{eq:sum_zero1} can be written as follows:
\begin{align*}
& - \gamma_{r_0 r_1}^i\gamma_{r_0 r_1}^j + \sum_{s \neq r_0} \gamma_{r_0 s}^i\gamma_{r_0 s}^j + \sum_{s \neq r_1} \gamma_{r_1 s}^i\gamma_{r_1 s}^j + \sum_{\{r,s\}: r,s \neq r_0, r_1}\gamma_{rs}^i\gamma_{rs}^j = 0\\
\mbox{and hence } & - \gamma_{r_0 r_1}^i\gamma_{r_0 r_1}^j + 0 + 0 + \sum_{\{r,s\}:r,s \neq r_0, r_1}\gamma_{rs}^i\gamma_{rs}^j = 0 \mbox{ by $\eqref{eq:sum_zero2}$}\\
\mbox{Thus,}&\\
& - \gamma_{r_0 r_1}^i\gamma_{r_0 r_1}^j + \sum_{\{r,s\}:r,s \neq r_0, r_1}\gamma_{rs}^i\gamma_{rs}^j = 0 \numberthis \label{eq:sumzero}
\end{align*}
Combining \eqref{eq:sumzero} with \eqref{eq:sum_zero3} we obtain
\begin{align*}
\sum_{\{r,s\}:r,s \neq r_0, r_1}\gamma_{rs}^i\gamma_{rs}^j = 0 \mbox{ and }\gamma_{r_0 r_1}^i\gamma_{r_0 r_1}^j = 0
\end{align*}
In particular, $\gamma_{r_0 r_1}^i\gamma_{r_0 r_1}^j = 0$. Since $i$, $j$, $r_0$ and $r_1$ were chosen arbitrarily, this shows that $\gamma_{rs}^i\gamma_{rs}^j = 0$ for all $r$, $s$ and all $i\neq j$. Hence for any fixed $r$ and $s$, there is at most one index $i$ such that $\gamma_{rs}^i \neq 0$. In other words, $[e_r,e_s]$ (if non-zero) is a multiple of a single basis element. Therefore $\mathcal{B}$ is a nice basis.
\end{proof}
As a result, we have:
\begin{proof}[Proof of Theorem \ref{mainthm:nice_stably_Ric_diag}]
A direct consequence of Corollary \ref{cor:nice->sRd} and Theorem \ref{thm:nice_stably_Ric_diag}.
\end{proof}
The proof of Proposition \ref{mainpropn:sRd_homogeneous} is analogous, taking into account that $\mathrm{g}|_{\mathfrak{n}_i} = f_i^2 Q|_{\mathfrak{n}_i}$.
\begin{remark}
The number of equations in Proposition \ref{mainpropn:sRd_homogeneous} can be reduced significantly. Indeed, for each fixed pair of modules $\mathfrak{n}_r$, $\mathfrak{n}_s$, let
\begin{align*}
B(X, Y) = \sum_{\substack{e_\alpha \in \mathfrak{n}_r\\e_\beta \in \mathfrak{n}_s}} Q([e_\alpha, e_\beta], X) Q([e_\alpha, e_\beta], Y) \mbox{ for all } X, Y\in \mathfrak{n}.
\end{align*}
Then, since $\operatorname{Ad}_h$ for $h\in\H$ preserves $Q$ and the Lie bracket, it follows that $B$ is $\operatorname{Ad}(\H)$-invariant. Thus $B|_{\mathfrak{n}_i} = \lambda_i Q|_{\mathfrak{n}_i}$ for some constant $\lambda_i$ and Proposition \ref{mainpropn:sRd_homogeneous} reduces to a single equation for each $i, j, r, s$. Similarly, if the basis $\mathcal{B}$ respects the equivalence between two modules $\mathfrak{n}_i$ and $\mathfrak{n}_j$ then Proposition \ref{mainpropn:sRd_homogeneous} reduces to $1$, $2$, or $4$ equations depending on whether the representations are orthogonal, complex or quaternionic.
\end{remark}
We will now address Ricci-diagonality for cohomogeneity one manifolds. We have:
\begin{proposition}\label{propn:ric_homog}
Let $M$ be a cohomogeneity one manifold with principal part $M^0 = \mathsf{G}/\H \times (0, L)$ and let $\mathrm{g} = \dd r^2 + \mathrm{g}_r$ be a diagonal $\mathsf{G}$-invariant metric on $M^0$. Let $\mathcal{B}$ be a $Q$-orthonormal basis for $\mathfrak{h}^\perp$. Then $\operatorname{Ric}_\mathrm{g}^M$ is diagonal in the basis $\mathcal{B}' = \mathcal{B}\, \cup \{\frac{\partial}{\partial r}\}$ at the point $\gamma(r)$ if and only if $\operatorname{Ric}_{\mathrm{g}_r}^{\mathsf{G}/\H}$ is diagonal in the basis $\mathcal{B}$.
\end{proposition}
\begin{proof}
We can use the formulae for $\operatorname{Ric}_\mathrm{g}$ from \cite[Proposition 1.14]{gz02}.
We will determine when the off-diagonal terms of $\operatorname{Ric}_\mathrm{g}$ are zero. Firstly, we note that for any diagonal metric $\mathrm{g}$, $\operatorname{Ric}_\mathrm{g}(\frac{\partial}{\partial r}, X) = 0$ for any $X$ tangent to the orbit $\mathsf{G}/\H$.
Since the second fundamental form is diagonal, we have
\begin{align*}
\displaystyle \operatorname{Ric}_\mathrm{g}(e_i, e_j) = \operatorname{Ric}_{\mathrm{g}_r}(e_i, e_j)
\end{align*}
whenever $e_i\in \mathfrak{n}_i$ and $e_j\in \mathfrak{n}_j$ lie in distinct $\H$-modules. Therefore, $\operatorname{Ric}_\mathrm{g}(e_i, e_j) = 0$ if and only if $\operatorname{Ric}_{\mathrm{g}_r}(e_i, e_j) = 0$.
If $e_1$, $e_2$ belong to the same $\H$-module $\mathfrak{n}_i$, then
\begin{align*}
\operatorname{Ric}_\mathrm{g}(e_1, e_2) &= \operatorname{Ric}_{\mathrm{g}_r}(e_1, e_2) + \left\{ \frac{-f_i'}{f_i}\sum_s \frac{f_s'}{f_s}\dim \mathfrak{n}_s + \frac{f_i'^2}{f_i^2} - \frac{f_i''}{f_i} \right\} f_i^2 Q(e_1, e_2)
\end{align*}
Thus $\operatorname{Ric}_\mathrm{g}(e_1, e_2) = 0$ if and only if $\operatorname{Ric}_{\mathrm{g}_r}(e_1, e_2) = 0$.
\end{proof}
Thus, for cohomogeneity one manifolds, the problem is reduced to understanding the stably Ricci-diagonal condition for a homogeneous metric on $\mathsf{G}/\H$.
\section{Examples}\label{sec:examples}
In this Section, we examine some standard bases of semisimple Lie algebras to see if any of them are nice (and consequently stably Ricci-diagonal).
\subsection{$\mathfrak{so}(n)$} A basis for the Lie algebra $\mathfrak{so}(n)$ is given by $\{E_{ij}\}_{1\leq i<j\leq n}$. Recall that $E_{ij}$ is defined to be the $n\times n$ matrix with a $1$ in the $(i,j)$ entry, a $-1$ in the $(j,i)$ entry, and $0$ in all other entries. It is easy to see that the brackets are given by
\begin{align*}
[E_{ij}, E_{kl}] = \left\{
\begin{array}{ll}
0 & \mbox{if } \{i,j\} \cap \{k,l\} = \emptyset \\
0 & \mbox{if } \{i,j\} = \{k,l\} \\
E_{il} & \mbox{if } j = k \mbox{ and } i\neq l
\end{array}
\right.
\end{align*}
The skew-symmetry of the Lie bracket and the fact that $E_{ji} = -E_{ij}$ yield all remaining brackets. From this it is clear that $\{E_{ij}\}_{i<j}$ is a nice basis. By Corollary \ref{cor:nice->sRd}, this implies it is stably Ricci-diagonal.
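Niceness of this basis can be confirmed mechanically; the sketch below (our own illustration) checks, for $\mathfrak{so}(5)$, that every commutator of two basis elements is zero or proportional to exactly one basis element:

```python
import numpy as np
from itertools import combinations

def E(n, i, j):
    """Standard basis element E_ij of so(n) (1-indexed)."""
    M = np.zeros((n, n))
    M[i-1, j-1], M[j-1, i-1] = 1.0, -1.0
    return M

n = 5
basis = [E(n, i, j) for i, j in combinations(range(1, n + 1), 2)]
Q = lambda X, Y: -0.5 * np.trace(X @ Y)

def is_nice_pair(X, Y):
    """True if [X, Y] is zero or a scalar multiple of a single basis element."""
    br = X @ Y - Y @ X
    coeffs = [Q(br, Z) for Z in basis]  # components in the Q-orthonormal basis
    return sum(abs(c) > 1e-12 for c in coeffs) <= 1

nice = all(is_nice_pair(X, Y) for X, Y in combinations(basis, 2))
```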
\subsection{$\mathfrak{u}(n)$}
A basis for the Lie algebra $\mathfrak{u}(n)$ is given by $\mathcal{B} = \{E_{pq}\}_{1\leq p<q\leq n}\cup\{F_{pq}\}_{1\leq p<q\leq n}\cup \{H_l\}_{1\leq l\leq n}$, where
\begin{itemize}
\item $E_{pq}$ is as defined earlier.
\item $F_{pq}$ is the matrix with $i$ in the $(p,q)$ and $(q,p)$ entries and $0$ in all other entries.
\item $H_l$ is the matrix with an $i$ in the $(l,l)$ entry and $0$ in the other entries.
\end{itemize}
Then $\mathcal{B}$ is not a nice basis since, for example, $[E_{12}, F_{12}] = \operatorname{diag}(2i, -2i, 0, \cdots, 0) = 2(H_1 - H_2)$ is a combination of two basis elements.
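With the conventions above ($E_{12}$ real skew-symmetric, $F_{12}$ with $i$ in the $(1,2)$ and $(2,1)$ entries), this commutator can be checked directly; the sketch below (our own illustration, taking $n = 3$) verifies it:

```python
import numpy as np

n = 3
E12 = np.zeros((n, n), complex)
E12[0, 1], E12[1, 0] = 1, -1                   # basis element E_12 of u(n)
F12 = np.zeros((n, n), complex)
F12[0, 1] = F12[1, 0] = 1j                     # basis element F_12 of u(n)
H = [np.diag([1j if k == l else 0 for k in range(n)]) for l in range(n)]

br = E12 @ F12 - F12 @ E12
# [E12, F12] = 2(H1 - H2): a combination of two basis elements, so the
# standard basis of u(n) is not nice.
matches = np.allclose(br, 2 * (H[0] - H[1]))
```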
\subsection{$\mathfrak{su}(n)$}
A basis for $\mathfrak{su}(n)$ is given by $\mathcal{B} = \{E_{pq}\}_{1\leq p<q\leq n}\cup\{F_{pq}\}_{1\leq p<q\leq n}\cup \{G_l\}_{1\leq l < n}$ where $E_{pq}$, $F_{pq}$ are as above and $G_l = H_l - H_{l+1}$. This basis is not nice since, for example, $[E_{13}, F_{13}] = 2(H_1 - H_3) = 2(G_1 + G_2)$. Note that $\mathcal{B}$ is also not orthogonal with respect to the biinvariant metric, since $Q(G_1, G_2) \neq 0$. However, for any basis $\mathcal{B}$ for $\mathfrak{su}(n)$ that contains $\{E_{pq}\}\cup\{F_{pq}\}$, the set of brackets of basis elements contains the $\binom{n}{2}$ pairwise non-proportional elements $\{2(H_i - H_j)\}_{i<j}$, all of which lie in the $(n-1)$-dimensional space of trace-free diagonal matrices. Since the elements of $\mathcal{B}$ are linearly independent, at most $n-1$ of them lie in this space, so for $n\geq 3$ some bracket cannot be a multiple of a basis element; thus $\mathcal{B}$ is not a nice basis.
\subsection{$\mathfrak{sp}(n)$}
A basis for the Lie algebra $\mathfrak{sp}(n)$ is given by $\mathcal{B} = \{E_{pq}\}_{1\leq p<q\leq n}\cup\{F_{pq}\}_{1\leq p<q\leq n}\cup\{Y_{pq}\}_{1\leq p<q\leq n}\cup\{Z_{pq}\}_{1\leq p<q\leq n}\cup \{H_l\}_{1\leq l\leq n}\cup \{S_l\}_{1\leq l\leq n}\cup \{T_l\}_{1\leq l\leq n}$, where
\begin{itemize}
\item $Y_{pq}$ is the matrix with $j$ in the $(p,q)$ and $(q,p)$ entries and $0$ in all other entries.
\item $Z_{pq}$ is the matrix with $k$ in the $(p,q)$ and $(q,p)$ entries and $0$ in all other entries.
\item $S_l$ is the matrix with a $j$ in the $(l,l)$ entry and $0$ in the other entries.
\item $T_l$ is the matrix with a $k$ in the $(l,l)$ entry and $0$ in the other entries.
\item $E_{pq}$, $F_{pq}$, and $H_l$ are as defined earlier.
\end{itemize}
Then $\mathcal{B}$ is not a nice basis either, since $[E_{12}, F_{12}] = \operatorname{diag}(2i, -2i, 0, \cdots, 0) = 2(H_1 - H_2)$.
\subsection{$\gg_2$} A basis for the Lie algebra $\gg_2\subset \mathfrak{so}(7)$ is given by
\begin{align*}
X_1 = E_{12} - E_{47},\,\, X_2 = E_{12}+ E_{56},\,\, X_3 = E_{14} + E_{27},\,\, X_4 = E_{14} - E_{36},\,\, X_5 = E_{16} + E_{25}, \\ X_6 = E_{34} + E_{16},\,\,
X_7 = E_{13} + E_{46},\,\, X_8 = E_{13} + E_{57},\,\, X_9 = E_{15} - E_{26},\,\,
X_{10} = E_{15} - E_{37}, \\
X_{11} = E_{17} - E_{24},\,\,
X_{12} = E_{17} + E_{35},\,\,
X_{13} = E_{23} + E_{67},\,\,
X_{14} = E_{45} + E_{67}
\end{align*}
Then $[X_1, X_6] = X_9 - X_{10}$, so this is not a nice basis. Note, however, that this basis is not orthogonal with respect to $Q$ either; for instance, $Q(X_5, X_6) = Q(E_{16}, E_{16}) \neq 0$.
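The bracket $[X_1, X_6] = X_9 - X_{10}$ can be verified by direct matrix computation in $\mathfrak{so}(7)$; the sketch below (our own illustration) does so:

```python
import numpy as np

def E(n, i, j):
    """Standard basis element E_ij of so(n) (1-indexed)."""
    M = np.zeros((n, n))
    M[i-1, j-1], M[j-1, i-1] = 1.0, -1.0
    return M

n = 7
X1  = E(n, 1, 2) - E(n, 4, 7)
X6  = E(n, 3, 4) + E(n, 1, 6)
X9  = E(n, 1, 5) - E(n, 2, 6)
X10 = E(n, 1, 5) - E(n, 3, 7)

br = X1 @ X6 - X6 @ X1
# [X1, X6] = -E26 + E37 = X9 - X10, a combination of two basis elements.
matches = np.allclose(br, X9 - X10)
```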
Thus, the basis $\{E_{ij}\}$ for $\mathfrak{so}(n)$ is the only nice basis amongst the above examples. An interesting question for future investigation is whether semisimple Lie algebras other than $\mathfrak{so}(n)$ admit nice bases, and if so, to describe them.
\section{Group diagrams involving standard basis of $\mathfrak{so}(n)$}\label{sec:SO_grp_diagram}
In this Section, we restrict our attention to group diagrams where the groups $\mathsf{G}, \mathsf{K}_\pm, \H$ are all standard block embeddings of $\mathsf{SO}(k)$ or products of $\mathsf{SO}(k)$'s in $\mathsf{SO}(n)$. For this class of cohomogeneity one manifolds, there exists a nice (and hence stably Ricci-diagonal) basis adapted to the group diagram, namely a basis consisting of $E_{ij}$'s. The main result of this section is the proof of Theorem \ref{mainthm:RF_SO}. The proof proceeds via the observation that diagonal metrics on such a manifold have more symmetries than general $\mathsf{SO}(n)$-invariant metrics.
\begin{proposition}
\label{propn:conj_iso}
Suppose $M$ is a cohomogeneity one manifold with group diagram where $\mathsf{G} = \mathsf{SO}(n)$ and $\H, \mathsf{K}_\pm$ are block embeddings of products of $\mathsf{SO}(k)$'s. Let $\mathcal{B}$ be a basis for $\mathfrak{h}^\perp = \mathfrak{n}$ consisting entirely of $E_{ij}$'s. Let $\mathrm{g}$ be a metric on $M$ that is diagonal in the basis $\mathcal{B}' = \mathcal{B}\cup \{ \frac{\partial}{\partial r} \}$. Then for each diagonal matrix $A \in \O(n)$, there is a map $\Phi_A : M\rightarrow M$ that is an isometry of $\mathrm{g}$.
\end{proposition}
\begin{proof}
We first observe that any isomorphism $\phi : \mathsf{G} \rightarrow \mathsf{G}$ such that $\phi(\mathsf{K}_\pm)\subset \mathsf{K}_\pm$ and $\phi(\H)\subset\H$ induces a diffeomorphism $\Phi$ of $M$. Indeed, fix a geodesic $\gamma$ for which the stabilizer groups are $\mathsf{K}_\pm$, $\H$. Any $p\in M$ is of the form $p = g_p\cdot \gamma(r)$ for some $r$ and $g_p\in\mathsf{G}$. Define $\Phi(g\cdot\gamma(r)) = \phi(g)\cdot\gamma(r)$. Then $\Phi$ is well-defined since, for $h\in\H$, $\Phi(gh\cdot\gamma(r)) = \phi(g)\phi(h)\cdot\gamma(r) = \phi(g)\cdot\gamma(r)$ for $0<r<L$, and similarly for $r = 0, L$. If $\dd(\phi)_e|_{\mathfrak{h}^\perp}$ is an isometry, then $\Phi$ is an isometry on the regular part of $M$ since $\mathsf{G}$ acts by isometries as well.
In our case, we define $\phi_A : \mathsf{G}\rightarrow\mathsf{G}$ as conjugation by a diagonal element $A$ in $\O(n)$. Then $\phi_A$ preserves the groups $\mathsf{K}_\pm$, $\H$ and takes a basis vector $E_{ij}\in\mathcal{B}$ into $\pm E_{ij}$ and hence the induced diffeomorphism $\Phi_A :M\rightarrow M$ is an isometry in the diagonal metric.
\end{proof}
\begin{proposition}
\label{propn:inv_diag}
Let $V$ be a subspace of $\mathfrak{so}(n)$ spanned by a subset of the $E_{ij}$'s. If $\mathrm{g}$ is a metric on $V$ which is invariant under $\operatorname{Ad}_A$ for all diagonal $A\in\O(n)$ then $\mathrm{g}$ is diagonal in the standard basis consisting of $E_{ij}$'s.
\end{proposition}
\begin{proof}
For each pair of linearly independent elements $E_{ij}, E_{kl} \in \mathfrak{so}(n)$, there exists an element $A \in \O(n)$ such that $\operatorname{Ad}_A E_{ij} = E_{ij}$ and $\operatorname{Ad}_A E_{kl} = -E_{kl}$. Indeed, if $\{i,j\} \cap \{k,l\} = \emptyset$ then we can take $A$ to be the diagonal matrix with a $-1$ in the $(k,k)$ entry and $1$'s in the other diagonal entries. If $\{i,j\}\cap \{k,l\} = \{i\}$ then without loss of generality we are considering $E_{ij}$ and $E_{il}$, and we can take $A$ to be the diagonal matrix with $-1$ in the $(i,i)$ and $(j,j)$ entries and $1$'s in the other diagonal entries.
Then, invariance of the metric under the $\operatorname{Ad}_{\O(n)}$-action implies $\mathrm{g}(E_{ij}, E_{kl}) = 0$, since
\begin{align*}
\mathrm{g}(E_{ij}, E_{kl}) = \mathrm{g}(\operatorname{Ad}_A E_{ij}, \operatorname{Ad}_A E_{kl}) = \mathrm{g}(E_{ij}, -E_{kl}) = -\mathrm{g}(E_{ij}, E_{kl}).
\end{align*}
\end{proof}
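Both cases in the proof follow from the general sign rule $\operatorname{Ad}_A E_{ij} = A_{ii}A_{jj}\, E_{ij}$ for diagonal $A \in \O(n)$, which the following sketch (our own illustration, for $n = 4$) verifies over all sign patterns:

```python
import numpy as np
from itertools import combinations, product

def E(n, i, j):
    """Standard basis element E_ij of so(n) (1-indexed)."""
    M = np.zeros((n, n))
    M[i-1, j-1], M[j-1, i-1] = 1.0, -1.0
    return M

n = 4
pairs = list(combinations(range(1, n + 1), 2))
ok = True
for signs in product([1.0, -1.0], repeat=n):
    A = np.diag(signs)                      # diagonal element of O(n)
    for (i, j) in pairs:
        AdA = A @ E(n, i, j) @ A.T          # Ad_A X = A X A^{-1}, and A^{-1} = A^T
        # Ad_A E_ij = (A_ii A_jj) E_ij, so each E_ij is sent to plus/minus itself
        ok = ok and np.allclose(AdA, signs[i-1] * signs[j-1] * E(n, i, j))
```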
We are now ready to prove Theorem \ref{mainthm:RF_SO}.
\begin{proof}[Proof of Theorem \ref{mainthm:RF_SO}]
Let $\mathrm{g}$ be a diagonal metric on $M$. By Proposition \ref{propn:conj_iso}, each diagonal matrix $A\in\O(n)$ yields an additional isometry $\Phi_A$ of $(M, \mathrm{g})$.
Since isometries are preserved under the Ricci flow, each $\Phi_A$ is an isometry of $(M, \mathrm{g}(t))$ as well. Now, by Proposition \ref{propn:inv_diag}, any metric invariant under all the $\Phi_A$'s must be diagonal. Thus $\mathrm{g}(t)$ is diagonal for each $t>0$ as well.
\end{proof}
In fact, the conclusion of Theorem \ref{mainthm:RF_SO} is also true for a slightly larger class of cohomogeneity one group diagrams.
\begin{theorem}
\label{thm:rf_diag_pres_so_disconnected}
Suppose that the group diagram of the manifold $M$ is such that
\begin{enumerate}
\item $\mathsf{G} = \mathsf{SO}(n)$
\item $\H^0, \mathsf{K}_\pm^0$ are block embeddings of products of $\mathsf{SO}(k)$'s
\item $\H \cong \H^0\rtimes B$, $\mathsf{K}_\pm \cong \mathsf{K}_\pm^0 \rtimes B$ where $B$ is a finite subgroup of $\mathsf{G}$ and $B$ acts on $\H^0$ or $\mathsf{K}_\pm^0$ by conjugation, i.e. $B$ is in the normalizer of $\H^0, \mathsf{K}_\pm^0$.
\end{enumerate}
Then the diagonality of metrics is preserved under the Ricci flow.
\end{theorem}
\begin{proof}
In the above situation, $M$ is the quotient by the right action of $B$ on the manifold $\widetilde{M}$ whose group diagram is $\H^0\subseteq \mathsf{K}_\pm^0\subseteq \mathsf{G}$. Therefore, any cohomogeneity one metric $\mathrm{g}$ on $M$ can be lifted to a cohomogeneity one metric $\tilde{\mathrm{g}}$ on $\widetilde{M}$ such that (at points on $\gamma$) $\tilde{\mathrm{g}}$ is invariant under the conjugation action by $B$. If $\mathrm{g}$ is diagonal then so is $\tilde{\mathrm{g}}$.
Evolve the (diagonal) metric $\tilde{\mathrm{g}}$ via the Ricci flow. By Theorem
\ref{mainthm:RF_SO}, the evolving metric $\tilde{\mathrm{g}}(t)$ on $\widetilde{M}$ is diagonal. Since isometries are preserved under the Ricci flow, the (diagonal) evolving metric $\tilde{\mathrm{g}}(t)$ remains invariant under conjugation by $B$, and hence descends to a diagonal metric $\mathrm{g}(t)$ on $M$ that also satisfies the Ricci flow equation with initial metric $\mathrm{g}$. By uniqueness of solutions to the Ricci flow, the metric on $M$ remains diagonal under the Ricci flow.
\end{proof}
The arguments above relied on the choice of the basis elements $\{E_{ij}\}$ for the Lie algebras of $\H$, $\mathsf{K}_\pm$ and $\mathsf{G}$, which was made possible by assuming these groups were block embeddings of products of $\mathsf{SO}(k)$'s. Below we observe that if we want to work with connected Lie groups whose Lie algebras are spanned by standard basis elements $\{E_{ij}\}$ of $\mathfrak{so}(n)$, then the only possibilities for the groups $\H$ and $\mathsf{K}_\pm$ are block embeddings of products of $\mathsf{SO}(k)$'s.
\begin{proposition}
Let $\k$ be a subalgebra of $\mathfrak{so}(n)$ that is spanned (as an $\mathbb{R}$-vector space) by some subset $S$ of the $E_{ij}$'s. Then $\k$ is a direct sum of subalgebras of the form $\mathfrak{so}(k)$ in block embedding, and hence the corresponding connected group $\mathsf{K}$ is a product of $\mathsf{SO}(k)$'s in block embedding.
\end{proposition}
\begin{proof}
Write the set $S$ as the union of a finite number of sets $S_1$, $S_2$, $\cdots$, $S_m$ such that
\begin{enumerate}
\item $\{i, j\} \cap \{k, l\} = \emptyset$ whenever $E_{ij} \in S_\alpha$ and $E_{kl} \in S_\beta$ with $\alpha \neq \beta$.
\item Each $S_\alpha$ is minimal (among subsets of $S$) with respect to property (1).
\end{enumerate}
Let $V_\alpha$ be the vector space spanned by $S_\alpha$. Then $\k = \oplus_{\alpha} V_\alpha$ as a vector space. Further, each $V_\alpha$ is in fact a Lie subalgebra of $\k$. This follows easily from the brackets among the $E_{ij}$'s: the brackets among basis elements in $V_\alpha$ cannot yield any indices that do not occur in $S_\alpha$, so $V_\alpha$ is closed under Lie brackets. Additionally, the brackets between $V_\alpha$ and $V_\beta$ are zero when $\alpha\neq \beta$, simply because the sets $S_\alpha$ and $S_\beta$ have no indices in common. Thus it only remains to show that $V_\alpha \cong \mathfrak{so}(k_\alpha)$ in some block embedding. In fact, if $\Lambda = \{i_1, \cdots, i_l\}$ is the set of indices appearing in $S_\alpha$ then we will show that $V_\alpha$ is the $\mathfrak{so}(l)$ in the block embedding corresponding to the indices $\{i_1, \cdots, i_l\}$.
If $S_\alpha$ has just one element $E_{ij}$ then clearly $V_\alpha = \mathbb{R}\cdot E_{ij} \cong \mathfrak{so}(2)$ in the block embedding corresponding to the indices $i, j$. If $S_\alpha$ has additional elements then without loss of generality there exists an index $k\neq i, j$ such that $E_{ik} \in S_\alpha$, since otherwise we could have split off $\{E_{ij}\}$ as a separate set $S_\beta$, contradicting minimality of $S_\alpha$. Then $[E_{ij}, E_{ik}] = -E_{jk}$, so since $V_\alpha$ is closed under Lie brackets, $E_{jk}\in V_\alpha$, and hence $E_{jk}\in S_\alpha$ because the $E_{ij}$'s are linearly independent. Hence $V_\alpha \supseteq \mathfrak{so}(3) = \langle E_{ij}, E_{jk}, E_{ik} \rangle$. Now if $S_\alpha$ has only these three elements then $V_\alpha \cong \mathfrak{so}(3) \subseteq \mathfrak{so}(n)$ in the block embedding in the indices $i, j, k$ and we are done. If $S_\alpha$ has additional elements then by minimality of $S_\alpha$, there exists an index $l\neq i, j, k$ such that (without loss of generality) $E_{il} \in S_\alpha$. Then $[E_{ij}, E_{il}] = -E_{jl}$ and $[E_{jk}, E_{jl}] = -E_{kl}$, so since $V_\alpha$ is closed under brackets, $E_{jl}, E_{kl} \in S_\alpha$ as well. Hence $V_\alpha \supseteq \mathfrak{so}(4) = \langle E_{ij}, E_{jk}, E_{ik}, E_{il}, E_{jl}, E_{kl} \rangle$. As before, if $S_\alpha$ has no more elements then we are done, as $V_\alpha \cong \mathfrak{so}(4) \subseteq \mathfrak{so}(n)$ in the block embedding in the indices $i, j, k, l$. Continuing this process, we see that
\begin{enumerate}
\item At each stage we obtain $V_\alpha \supset \mathfrak{so}(k)$ (in block embedding) for some $k$.
\item This process must terminate since $V_\alpha$ is finite dimensional.
\end{enumerate}
Hence the termination of this process yields $V_\alpha \cong \mathfrak{so}(k)$ in some block embedding.
\end{proof}
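The growth process in the proof can be imitated computationally: starting from a spanning set of $E_{ij}$'s, repeatedly adjoining brackets recovers the block $\mathfrak{so}(k)$'s on the index components. The sketch below (our own illustration; the helper `lie_closure` is hypothetical, not from the text) takes $S = \{E_{12}, E_{13}, E_{45}\} \subset \mathfrak{so}(5)$ and finds the closure $\mathfrak{so}(3)\oplus\mathfrak{so}(2)$, of dimension $3 + 1 = 4$:

```python
import numpy as np
from itertools import combinations

def E(n, i, j):
    """Standard basis element E_ij of so(n) (1-indexed)."""
    M = np.zeros((n, n))
    M[i-1, j-1], M[j-1, i-1] = 1.0, -1.0
    return M

def lie_closure(gens):
    """Basis of the smallest subspace containing gens and closed under brackets."""
    basis = []
    def add(M):
        # Keep M only if it enlarges the span of the current basis.
        cand = basis + [M]
        A = np.array([B.flatten() for B in cand])
        if np.linalg.matrix_rank(A, tol=1e-9) > len(basis):
            basis.append(M)
            return True
        return False
    for g in gens:
        add(g)
    grew = True
    while grew:                              # adjoin brackets until closed
        grew = False
        for X, Y in combinations(list(basis), 2):
            if add(X @ Y - Y @ X):
                grew = True
    return basis

n = 5
gens = [E(n, 1, 2), E(n, 1, 3), E(n, 4, 5)]  # index components {1,2,3} and {4,5}
dim_k = len(lie_closure(gens))               # dim so(3) + dim so(2) = 4
```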
\section{Instantaneous behaviour of Grove-Ziller metrics under Ricci flow}\label{sec:appln}
In this Section, we use the result of the previous section to study the Ricci flow behaviour of certain cohomogeneity one $\sec\geq 0$ metrics. In particular, we prove Theorem \ref{mainthm:sec_GZ} from the Introduction, which extends the techniques of \cite{bk16} to higher dimensional cohomogeneity one manifolds. First, we derive the Ricci flow equations for a diagonal cohomogeneity one metric, assuming it evolves through other diagonal metrics. In the expressions below, $b_i = K(e_i, e_i)$, where $K(\cdot, \cdot)$ is the Killing form of $\gg$, and $e_i\in\mathfrak{n}_i$. Also, $m$ denotes the dimension of $\mathfrak{h}^\perp$.
\begin{proposition}\label{propn:RFeqns}
Let $\mathrm{g}(t)$ be a time-dependent diagonal cohomogeneity one metric evolving by the Ricci flow. Then the components $h, f_1, \cdots, f_m$ of $\mathrm{g}(t)$ satisfy the following system of PDEs:
\begin{equation}\label{eq:RFcohom1}
\begin{aligned}
&h_t = \sum_{j=1}^m \left(\frac{{f_j}_{rr}}{hf_j} - \frac{{f_j}_rh_r}{h^2f_j} \right)\\
&{f_i}_t = \frac{{f_i}_{rr}}{h^2} - \frac{{f_i}_rh_r}{h^3} + \frac{{f_i}_r}{h}\sum_{j=1}^m \frac{{f_j}_r}{hf_j} - \frac{{f_i}_r^2}{h^2f_i} - \sum_{j,k=1}^m\frac{f_i^4-2f_k^4}{4f_if_j^2f_k^2}{\gamma_{jk}^i}^2 + \frac{b_i}{2f_i} \\
&t\in (0,T),\, r\in(0,L),\, i = 1,\cdots m
\end{aligned}
\end{equation}
\end{proposition}
\begin{proof}
A time-dependent diagonal metric $\mathrm{g}$ and diagonal Ricci tensor can be written as:
\begin{align*}
\mathrm{g}(r,t) &= h(r,t)^2\,\dd r^2 + \sum_{i=1}^m f_i(r,t)^2\, \omega_i^2\\
\operatorname{Ric}_\mathrm{g}(r,t) &= \operatorname{Ric}_\mathrm{g}\left(\frac{\partial}{\partial r}, \frac{\partial}{\partial r}\right)\,\dd r^2 + \sum_{i=1}^m \operatorname{Ric}_\mathrm{g}(X_i^*, X_i^*)\,\omega_i^2
\end{align*}
Differentiating the metric term by term with respect to $t$ yields
\begin{align*}
\frac{\dd\mathrm{g}}{\dd t} = 2hh_t\,\dd r^2 + \sum_{i=1}^m 2f_i{f_i}_t\, \omega_i^2
\end{align*}
On the other hand, by \cite[Proposition 1.14]{gz02} the Ricci tensor can be written in terms of the metric and the structure constants $\gamma_{ij}^k$ as
\begin{align}
\operatorname{Ric}(e_i, e_i) &= -\frac{b_i}{2} + \sum_{j,k=1}^m\frac{f_i^4-2f_k^4}{4f_j^2f_k^2}{\gamma_{jk}^i}^2 + \left\{ -\frac{{f_i}_r}{hf_i}\sum_{j=1}^m \frac{{f_j}_r}{hf_j} + \frac{{f_i}_r^2}{h^2f_i^2} - \frac{{f_i}_{rr}}{h^2f_i} + \frac{{f_i}_rh_r}{h^3f_i} \right\}f_i^2
\end{align}
Substituting these in the Ricci flow equation \eqref{eq:RF} and comparing coefficients then yields the result.
\end{proof}
We now use this system of PDEs to study the Ricci flow behaviour of sectional curvature on a special class of cohomogeneity one manifolds.
\begin{theorem}\label{thm:sec_GZ}
Let $M$ be a cohomogeneity one manifold with the action of $\mathsf{SO}(n)$ with a group diagram where the groups $\H$, $\mathsf{K}_\pm$ are products of $\mathsf{SO}(k)$ in block embedding, and such that there are two singular orbits each of codimension two. Then $M$ admits a metric $\mathrm{g}$ such that $\sec_\mathrm{g} \geq 0$ and when evolved by the Ricci flow, $\mathrm{g}$ immediately acquires some negatively curved $2$-planes.
\end{theorem}
\begin{proof}
Since the cohomogeneity one manifold $(M, \mathsf{G})$ has codimension $2$ singular orbits, by Grove-Ziller \cite{gz00}, $M$ admits a $\mathsf{G}$-invariant metric $\g_{\rm GZ}$ with $\sec \geq 0$. By the construction in \cite{gz00}, one can arrange that the metric is diagonal in a basis coming from the standard basis vectors of $\mathfrak{so}(n)$. By Theorem \ref{mainthm:RF_SO}, the evolving metric $\mathrm{g}(t)$ will be diagonal in the same basis, and hence the components of the metric will satisfy \eqref{eq:RFcohom1}.
By the Grove-Ziller construction, up to relabelling of indices, the functions $f_i$ that determine the metric have qualitative behaviour as follows. At a singular orbit, i.e. $r=0$, $f_1$ vanishes, and the remaining functions $f_i$ are equal and constant in a neighborhood of $r=0$. As a consequence, if we define $\mu(r)$ to be the $2$-plane spanned by $\frac{\partial}{\partial r}$ and $X_2$ then $\sec_{\g_{\rm GZ}}\mu(r) = -\frac{f_2''}{f_2} = 0$ for $r$ close to $0$. Here $'$ denotes derivative with respect to arclength along $\gamma(r)$. We will compute the first variation of $\sec_{\mathrm{g}(t)}\mu(r)$ at $t=0$.
Using the assumptions about the $f_i$'s in a neighborhood of $r=0$ for the metric $\mathrm{g}_{GZ}$,
\begin{align*}
\frac{\dd}{\dd t}\sec_{\mathrm{g}(t)}(\mu(r))\big|_{t=0} = -\frac{\dd}{\dd t}\left(\frac{f_2''}{f_2}\right)\Big|_{t=0} = -\frac{({f_2})_{rrt}}{f_2}\big|_{t=0}
\end{align*}
By regularity of $f_2$, $({f_2})_{rrt} = (({f_2})_t)_{rr}$, which we can compute by differentiating twice with respect to $r$ the equation in \eqref{eq:RFcohom1} corresponding to $f_2$. We compute this derivative at $t=0$ (i.e. for the metric $\g_{\rm GZ}$) and for $r>0$ close to $0$. In this region, $f_i = c$ for $i>1$, for some constant $c$. Hence $({f_i})_r = ({f_i})_{rr} = 0$ for each $i>1$, so the expression for $({f_2})_t$ in a neighborhood of $r=0$ reduces to
\begin{align*}
({f_2})_t\big|_{t=0} &= \frac{b_2}{2f_2} - \sum_{j,k} \frac{f_2^4 - 2f_k^4}{4f_2f_j^2f_k^2}(\gamma_{jk}^2)^2\\
&= \frac{b_2}{2f_2} - \frac{f_2^4 - 2f_1^4}{4f_2f_3^2f_1^2}(\gamma_{31}^2)^2 - \frac{f_2^4 - 2f_3^4}{4f_2f_1^2f_3^2}(\gamma_{13}^2)^2
\end{align*}
Summands where neither of $j$, $k$ is $1$ vanish because of the condition that $f_i = c$ for all $i>1$. Also, without loss of generality we have assumed that $3$ is the unique index $j$ such that $\gamma_{1j}^2 = \gamma_{j1}^2$ is non-zero. That there is only one such index follows from the fact that $\{E_{ij}\}$ is a nice basis. If the basis is not nice then there will be additional summands of the same form as the second and third summand in the above expression, with $3$ replaced by the suitable index $j$ for which $\gamma_{1j}^2 = \gamma_{j1}^2$ is non-zero. Therefore,
\begin{align*}
({f_2})_t\big|_{t=0} &= \frac{b_2}{2f_2} - \frac{2(f_2^4 - f_3^4) - 2f_1^4}{4f_2f_3^2f_1^2}(\gamma_{13}^2)^2\\
&= \frac{b_2}{2c} + \frac{f_1^2}{2c^3}(\gamma_{13}^2)^2\\
\implies ({f_2})_{trr}\big|_{t=0} &= \frac{(\gamma_{13}^2)^2}{c^3}\cdot(({f_1})_r^2 + f_1({f_1})_{rr})
\end{align*}
By the smoothness conditions at a singular orbit, $f_1(r=0) = ({f_1})_{rr}(r=0) = 0$ and $({f_1})_r(r=0) = a$ for some $a \in \mathbb{Z}_+$. Therefore for small enough $r>0$, $({f_1})_r^2 + f_1({f_1})_{rr}>0$, hence $({f_2})_{trr}>0$ and $\frac{\dd}{\dd t}\sec(\mu(r))\big|_{t=0}<0$. We conclude that for small enough $t>0$, $\sec_{\mathrm{g}(t)}(\mu(r))<0$.
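The algebraic step from $({f_2})_t$ to $({f_2})_{trr}$ can be verified numerically with any smooth profile $f_1$ vanishing at $r=0$; all constants below are arbitrary placeholders.

```python
# Numerical check (placeholder constants) that differentiating
# (f_2)_t = b_2/(2c) + gamma^2 f_1^2/(2c^3) twice in r gives
# (gamma^2/c^3) (f_1_r^2 + f_1 f_1_rr), for any smooth f_1 with f_1(0) = 0.
b2, c, gam, a = -3.0, 1.2, 0.7, 2.0
f1   = lambda r: a * r - 0.1 * r ** 3
f1r  = lambda r: a - 0.3 * r ** 2
f1rr = lambda r: -0.6 * r
g    = lambda r: b2 / (2 * c) + gam ** 2 * f1(r) ** 2 / (2 * c ** 3)

r, h = 0.3, 1e-4
lhs = (g(r + h) - 2 * g(r) + g(r - h)) / h ** 2     # central 2nd difference
rhs = gam ** 2 / c ** 3 * (f1r(r) ** 2 + f1(r) * f1rr(r))
assert abs(lhs - rhs) < 1e-5
```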
\end{proof}
\section{Introduction}
The precise determination of the element $|V_{ub}|$ of the CKM matrix is an
important test of the flavour structure of the Standard Model (SM) and is crucial
in the indirect search for New Physics.
The latest global fit to the Unitarity Triangle (UT), including
all flavour-changing observables except a direct
determination of $|V_{ub}|$, predicts
$|V_{ub}|= (3.44\pm0.16)\times 10^{-3}$ \cite{UTfit}. This value agrees within errors
with the {\it exclusive} determination, that relies on lattice QCD or
light-cone sum rules \cite{Flynn,Ball} and that is still affected by somewhat large
theoretical errors. An {\it inclusive} analysis is in principle the cleanest method
to precisely determine $|V_{ub}|$. This is based on the comparison between the
decay rate of $B\to X_u \ell \nu$ measured by experiments and the
corresponding theoretical prediction.
The latest HFAG world average \cite{HFAG} yields an inclusive
$|V_{ub}|$ which is about $2.5 \sigma$ higher than the value preferred by the
global UT fit, calling for a deeper investigation of the process.
The theoretical description of $B\to X_u \ell \nu$ is based on a local
Operator Product Expansion. Inclusive quantities are organized in a
double series in $\alpha_s$ (perturbative QCD corrections)
and in $1/m_b$ (Heavy Quark Expansion).
The very same method was successfully applied to the $b \rightarrow c$ decay
and led to a precise determination of $|V_{cb}|$, within 2\%. The description of
charmless decays is more involved due to the dominant charmed background that
needs to be rejected by experiments imposing very stringent cuts. These cuts
can spoil the convergence of the OPE introducing sensitivity to nonlocal
effects, such as the motion of the $b$ quark inside the meson (Fermi motion),
that can be parameterized in terms of a light-cone distribution function
(or ``shape function''). The
lowest integer moments of the distribution function
are constrained by the OPE \cite{Bigi:1993ex} and they are expressed
in terms of the $b$ quark mass and of the same 5 and 6 dimensional operators
that contribute to $B\to X_c \ell \nu$. Such expressions are universal,
i.e. independent of the process, and shared by the
radiative decay $B\to X_s \gamma$ only as long as $1/m_b$ corrections are neglected.
An OPE-based treatment of shape function effects in $B\to X_s \gamma$
including subleading ($1/m_b$) effects was developed
in \cite{benson} and turned out to describe well experimental data.
In \cite{Gambino:2007rp} a similar procedure was undertaken for the case
of semileptonic decays, where many complications arise, mostly due to the kinematics
taking place at different $q^2$. In the
following we illustrate the main features of this procedure and show some
meaningful results.
\section{Theoretical framework}
\subsection{Perturbative corrections in a Wilsonian approach}
All observables describing the $B\to X_u \ell \nu$ decay can be
extracted {\it via} integration over the triple differential width:
\begin{eqnarray} \label{eq:aquila_normalization}
\frac{d^3 \Gamma}{dq^2 \,dq_0 \,dE_\ell} &=&
\frac{G_F^2 |V_{ub}|^2}{8\pi^3}
\Bigl\{
q^2 W_1- \left[ 2E_\ell^2-2q_0 E_\ell + \frac{q^2}{2} \right] W_2
+ q^2 (2E_\ell-q_0) W_3 \Bigr\}\times \nonumber \\&&
\quad\quad\quad\quad \times \theta \left(q_0-E_\ell-\frac{q^2}{4E_\ell} \right)
\ \theta(E_\ell) \ \theta(q^2) \ \theta(q_0-\sqrt{q^2}),
\end{eqnarray}
where $q_0$ and $E_\ell$ are the total leptonic and the charged lepton energies
in the $B$ meson rest frame, $q^2$ is the leptonic invariant mass and $W_{1-3}$
are the three structure functions relevant in the case of massless lepton.
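The product of step functions carves out exactly the massless-lepton phase space; a quick Monte Carlo check with toy kinematics (arbitrary energy ranges) confirms that physical configurations automatically satisfy all four constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    El, Enu = rng.uniform(0.1, 2.5, size=2)   # massless lepton energies (toy)
    cos_th = rng.uniform(-1.0, 1.0)           # opening angle of the pair
    q0 = El + Enu                             # total leptonic energy
    q2 = 2.0 * El * Enu * (1.0 - cos_th)      # leptonic invariant mass squared
    # the four theta functions of the triple differential width:
    assert q0 - El - q2 / (4.0 * El) >= -1e-12
    assert q2 >= 0.0 and q0 >= np.sqrt(q2) - 1e-12
```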
Perturbative corrections to the structure functions $W_{1-3}$ to order ${\cal O}(\alpha_s)$ have
long been known \cite{dfn}, whereas ${\cal O}(\alpha_s^2 \beta_0)$ corrections
recently appeared in \cite{Gambino:2006wk}. Both calculations were performed in the
{\it on-shell} scheme.
It has been stressed several times in the literature \cite{kinetic} that an
{\it on-shell} definition of the $b$ quark mass becomes ambiguous as soon as
power suppressed terms are included. The {\it pole} mass is better traded for
a {\it running} mass $m_b(\mu)$. To this purpose, perturbative corrections to
order ${\cal O}(\alpha_s^2 \beta_0)$ are calculated anew in \cite{Gambino:2007rp}
in the presence of a ``hard'' Wilsonian cutoff $\mu \sim 1$ GeV. This
new scale separates the perturbative regime of gluons with energies larger than $\mu$
from the ``soft'' (non-perturbative) regime of gluons with energies lower than $\mu$.
The contributions of soft gluons are then
absorbed into a redefinition of the heavy quark parameters, consistent with the way
they are extracted from fits to $B\to X_c \ell \nu$ moments in the
{\it kinetic} scheme \cite{Gambino:2004qm,BF}. Physical observables are of course
independent of the cutoff.
\subsection{Fermi motion}
As already mentioned in the Introduction, Fermi motion is encoded in a distribution
function, whose lowest integer moments are constrained by the local OPE.
As soon as $1/m_b$ corrections are retained, such moments cease to be universal:
they have different expressions for each of the three structure functions in eq.
\eqref{eq:aquila_normalization} and show an explicit $q^2$ dependence. To preserve
generality we introduce three separate distribution functions, one for each
of the structure functions, depending on the light-cone component of the
$b$ quark momentum ($k_+$), on $q^2$ and on the Wilsonian cutoff ($\mu$). Hadronic structure
functions are then defined {\it via} a convolution of the perturbative structure functions
with the distribution functions, whose expression is derived at the leading order in $1/m_b$
and $\alpha_s$ and assumed to be valid also at higher orders:
\begin{equation} \label{eq:conv2}
W_i(q_0,q^2) \sim
\int dk_+ \ F_i(k_+,q^2,\mu) \ W_i^{pert}
\left[ q_0 - \frac{k_+}{2} \left( 1 - \frac{q^2}{m_b M_B} \right), q^2,\mu \right]
\end{equation}
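A toy discretization of this convolution can be sketched as follows, using an exponential {\it Ansatz} for $F_i$ and a Gaussian stand-in for $W_i^{pert}$; the values of $m_b$, $M_B$, $\Lambda$ and all widths below are illustrative placeholders, not fitted quantities.

```python
import numpy as np

mb, MB, Lam = 4.6, 5.279, 0.6              # illustrative parameter values
kp = np.linspace(-Lam, 2.0, 4000)          # light-cone component k_+
dk = kp[1] - kp[0]
F = np.exp(-(kp + Lam) / 0.4)              # exponential Ansatz (toy)
F /= F.sum() * dk                          # zeroth moment normalized to 1

def W_pert(q0, q2):
    # smooth stand-in for a perturbative structure function (not the real W_i)
    return np.exp(-(q0 - 2.0) ** 2 / 0.1)

def W_had(q0, q2):
    # hadronic W_i as the k_+ convolution with the q^2-dependent shift
    shifted = q0 - 0.5 * kp * (1.0 - q2 / (mb * MB))
    return (F * W_pert(shifted, q2)).sum() * dk

assert abs(F.sum() * dk - 1.0) < 1e-9
assert W_had(2.0, 8.0) > 0.0               # smeared, finite value
```

Note how the shift multiplying $k_+$ shrinks as $q^2$ grows towards $m_bM_B$, anticipating the reduced impact of Fermi motion at high $q^2$ discussed below.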
Model-dependence resides only in the {\it Ansatz} employed for
the distribution functions. In the analysis of \cite{Gambino:2007rp} a set of
about 80 different functional forms, inspired by those already present
in the literature (exponential, Gaussian, Roman, hyperbolic), is tested and the
related uncertainty on $|V_{ub}|$ turns out to be rather small
(see Sec. \ref{results}).
\subsection{The high $q^2$ region}
The impact of Fermi motion becomes irrelevant at high $q^2$, and the formalism
developed above is no longer applicable. The OPE itself shows a number of pathological
features in this kinematical region, due to the emergence of unsuppressed higher order
terms. For instance, the OPE predicts a variance of
the distribution functions which decreases with increasing $q^2$, eventually reaching
negative values. Moreover, Wilson coefficients
of power suppressed operators become more and more important
and already the coefficient of the Darwin term ($\rho_D^3$) shows a divergence at $q^2 = m_b^2$:
\begin{equation}
\frac{d\Gamma}{d\hat{q}^2}\sim \frac{\rho_D^3}{6m_b^3}\left[
20\, {\hat q}^6 +66\, {\hat q}^4+48\, \hat{q}^2+74 -\frac{96}{1-\hat{q}^2}\right]+...,
\quad \hat{q}^2=\frac{q^2}{m_b^2}
\label{q2ope}
\end{equation}
This singularity is removed at the level of the total rate
by a one-loop penguin diagram that mixes
the Weak Annihilation (WA) four-quark operator into the Darwin operator
\cite{bumoments,Ossola:2006uz}. However, as we are interested in differential distributions
as well, a dedicated treatment of the high-$q^2$ region ($q^2 > q^2_* \sim 11\ \text{GeV}^2 $) is
mandatory.
In \cite{Gambino:2007rp} two different methods are proposed and their difference
is used to estimate the associated uncertainty:
\begin{itemize}
\item[{\it a)}] we model the tail in a way consistent
with positivity of the spectra and including a WA contribution ($X$)
through a Dirac-$\delta$ localized at the endpoint (default method):
\begin{equation} \frac{d\Gamma}{d\hat{q}^2}\sim
\frac{\rho_D^3}{6m_b^3}\left[ 20\, {\hat q}^6 +66\, {\hat q}^4+48\, \hat{q}^2+74
-\frac{96\,(1-e^{-\frac{(1-\hat{q}^2)^2}{b^2}} )} {1-\hat{q}^2}\right]+ X\,
\delta(1-\hat{q}^2)+...
\label{q2mod}
\end{equation}
\item[{\it b)}] we extend the Fermi motion description of low $q^2$ to the
high $q^2$ region, {\it freezing} the shape function at $q^2 = {q_*}^2$
and using it in the convolution of eq. \eqref{eq:conv2} at higher $q^2$.
\end{itemize}
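The effect of the exponential damping in prescription {\it a)} can be checked numerically; the value of $b$ below is an arbitrary placeholder and the WA delta term is omitted.

```python
import math

def bracket_ope(qh2):
    # the bracket of eq. (3): singular as qh2 = q^2/m_b^2 -> 1
    return 20*qh2**3 + 66*qh2**2 + 48*qh2 + 74 - 96.0/(1.0 - qh2)

def bracket_model(qh2, b=0.2):
    # the bracket of eq. (4), WA delta term omitted; b is a placeholder value
    x = 1.0 - qh2
    reg = 0.0 if x == 0 else 96.0 * (1.0 - math.exp(-x*x/(b*b))) / x
    return 20*qh2**3 + 66*qh2**2 + 48*qh2 + 74 - reg

# the damped bracket stays finite at the endpoint,
# approaching 20 + 66 + 48 + 74 = 208
assert abs(bracket_model(1.0) - 208.0) < 1e-9
assert bracket_ope(0.999) < bracket_model(0.999) < 1e6
```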
\section{Results and theoretical uncertainties} \label{results}
We take advantage of some of the latest experimental
measurements to extract values of $|V_{ub}|$ using the described
framework. We leave the task of averaging these results to a future,
hopefully dedicated, experimental analysis. We consider:
\begin{itemize}
\item [\bf A] Belle analysis with $M_X\le1.7\,\mbox{GeV}$ and $E_\ell>1.0 \,\mbox{GeV}$ \cite{belle1};
\item [\bf B] Belle and Babar analyses with $M_X\le1.7\,\mbox{GeV}$, $q^2>8\,\mbox{GeV}^2$,
and $E_\ell>1.0 \,\mbox{GeV}$ \cite{belle1,babar2};
\item [\bf C] Babar analysis with $E_\ell>2.0\,\mbox{GeV}$ \cite{babar1}
\end{itemize}
\begin{table}[t]
\center{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
cuts & {\small $|V_{ub}|\times 10^3$} & $f$ & exp & par & pert & {\small
tail model}
& $q_*^2$ & $X$ & ff &tot th\\ \hline
{\bf A} \cite{belle1} & 3.87 & 0.71 & 6.7 & 3.5 & 1.7 & 1.6 &
2.0 &$_{-2.7}^{+0.0}$ &$^{+2.4}_{-1.1}$ & $\pm 4.7^{+2.4}_{-3.8}$ \\
\hline
{\bf B} \cite{belle1,babar2} & 4.44 & 0.38 & 7.3 & 3.5 & 2.6 & 3.0 & 4.0
& $_{-5.0}^{+0.0}$ & $^{+1.4}_{-0.5}$ & $\pm 6.6_{-5.5}^{+1.4}$ \\ \hline
{\bf C} \cite{babar1} & 4.05 & 0.30 & 5.7 & 4.2 &3.3 & 1.8 & 0.9
&$_{-6.2}^{+0.0}$ & $^{+1.2}_{-0.7}$ & $\pm 5.7^{+1.2}_{-6.9}$ \\
\hline
\end{tabular}}
\caption{\small Values of $|V_{ub}|$ obtained using different experimental results
and their experimental and theoretical
uncertainties (in percentage) due to various sources (see text).
$f$ is the estimated fraction of events. }
\end{table}
Results are summarized in Table 1 and were obtained from a C++ code
available upon request. The reported values of $|V_{ub}|$
are obtained using the default setting, namely an exponential {\it Ansatz}
for the distribution functions, the prescription {\it a)} for the high $q^2$ tail
at $X=0$
(see previous section) and the central values of the fit in \cite{BF} as input parameters, at
$\mu$ = 1 GeV. $f$ is the estimated fraction of events, whereas the
following columns show different sources of uncertainty, namely:
\begin{itemize}
\item Experimental error (exp).
\item Parametric error (par): it is extracted taking into account all correlations
between non-perturbative parameters \cite{BF} and varying
$\alpha_s = 0.22$ by $\pm 0.02$ as uncorrelated. The uncertainty on $m_b$ is
by far dominating.
\item Perturbative error (pert).
\item Errors related to the high $q^2$ region: we consider the difference between
methods {\it a)} and {\it b)} (tail model), the value of $q^2$ at which the
modelling sets in ($q_*^2$) and the error due to WA effects ($X$).
We let $X$ vary in a range consistent with
the 90\% confidence level bound set by CLEO on the size of WA \cite{cleo_WA},
namely $0 \le X \le 0.04$. We stress that the error related to $X$ is asymmetric and
points to a lower value of $|V_{ub}|$.
\item Functional form dependence (ff), estimated using about 80 different
{\it Ans\"atze} for the distribution functions.
\end{itemize}
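The totals in Table 1 appear consistent with adding the symmetric errors (par, pert, tail model, $q_*^2$) in quadrature and the asymmetric ones ($X$ and ff) linearly; this reading is our reconstruction, not stated explicitly in the text, so treat it as an assumption.

```python
import math

# rows: symmetric errors, (X up, X down), (ff up, ff down), all in percent
rows = {
    'A': ([3.5, 1.7, 1.6, 2.0], (0.0, 2.7), (2.4, 1.1)),
    'B': ([3.5, 2.6, 3.0, 4.0], (0.0, 5.0), (1.4, 0.5)),
    'C': ([4.2, 3.3, 1.8, 0.9], (0.0, 6.2), (1.2, 0.7)),
}
for name, (sym, (Xp, Xm), (ffp, ffm)) in rows.items():
    tot = math.sqrt(sum(e * e for e in sym))   # quadrature sum
    print(name, round(tot, 1), round(Xp + ffp, 1), round(Xm + ffm, 1))
# -> A 4.7 2.4 3.8 / B 6.6 1.4 5.5 / C 5.7 1.2 6.9, matching Table 1
```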
It is worth stressing that all combinations of cuts considered include the
high $q^2$ region discussed in the previous section which, as we showed,
is plagued by poorly controlled effects. However, Belle measurements {\bf A} and
{\bf B} can be easily combined in order to obtain an estimate of $|V_{ub}|$
with an {\it upper} cut on $q^2$, namely for the combination $M_X\le1.7\,\mbox{GeV}$,
$E_\ell>1.0 \,\mbox{GeV}$, and $q^2<8\,\mbox{GeV}^2$. This yields, within our framework, a value
of $|V_{ub}|$ much lower than in all other cases ($|V_{ub}|=3.18\times 10^{-3}$),
which might signal some bias in the treatment of the high $q^2$ region either
on the experimental or on the theoretical side. Only a dedicated experimental analysis
with an upper cut on $q^2$ could probably shed some light on this issue.
\section{Summary and References}
We presented a new approach for dealing with the triple differential width of
$B \to X_u \ell \nu$ decays, in a framework characterised by a hard Wilsonian cutoff
$\mu \sim 1$ GeV. The method developed takes into account all known perturbative and
non-perturbative corrections. Fermi motion is treated at the subleading level as
well.
Some problems related to the high $q^2$ region of the process were pointed out, that
were probably underestimated in the past and that still deserve a deeper
investigation.
We also presented some numerical results with the associated uncertainties and
put forward the suggestion of a new experimental analysis with an upper cut
on $q^2$.
\section{Introduction}
Cloud radio access networks (CRAN) are expected to be the core new network architecture in next generation mobile radio systems \cite{Andrews_Buzzi_Choi_Hanly_Lozano_Soong_Zhang}. To support the ever increasing demand for high-speed data, base-stations are increasingly deployed in smaller cell sizes with a progressive move towards full spectrum reuse. By connecting the numerous base-stations via high-speed links to centralized cloud computing processors, CRAN provides an efficient cellular architecture that enables large-scale interference management through coordinated and joint signal processing. While the majority of recent works focus on a single-cloud scenario and neglect intercloud interference, this letter considers the more practical multicloud scenario and addresses the user-to-cloud assignment problem.
The model considered in this paper is a practical realization of a CRAN system over a dense multicell network. It consists of a radio access network comprising several clouds, as opposed to the single-cloud scenario assumed in the recent CRAN literature, e.g., see \cite{Andrews_Buzzi_Choi_Hanly_Lozano_Soong_Zhang} and references therein. A multicloud model is recently considered in \cite{Park_Simeone_Sahin_Shamai_WCL_letter}; however, the problem addressed in \cite{Park_Simeone_Sahin_Shamai_WCL_letter} is based on a pre-known association of clouds and users. The user-to-cloud assignment problem studied in this paper is also related to the base-station association problem which is well studied in the literature of wireless networks. However, the majority of the previous works either focus on centralized solutions to the problem \cite{Han_Farrokhi_Ji_Liu}, or derive distributed solutions for specific utilities, e.g. log-rate maximization \cite{Shen_Yu_JSAC}. Most importantly, the methods in the multiple-input-multiple-output (MIMO) base-station scenario are unsatisfactory in the distributed antenna infrastructure supported in CRANs, as the base-stations connected to one CRAN are not co-located, and so the channel cannot be simply averaged over different paths as in \cite{Shen_Yu_JSAC}.
This paper formulates an optimization problem that maximizes a \textit{generic network-wide utility function} subject to network connectivity constraints where each user cannot be connected to more than one cloud at a time, and each cloud operates according to a resource budget constraint, e.g., the number of users each cloud serves cannot exceed the number of base-stations' antennas connected to the cloud, so as to preserve high system multiplexing gain. The problem is formulated as a generalized assignment problem (GAP), which is an NP-hard problem. The majority of the available solutions in the literature of GAP, both in computer science and operational research, are centralized in nature, e.g. \cite{Shmoys_Tardos}. The main contribution of this paper is that it solves the multicloud association problem using an iterative auction approach, first proposed in \cite{Luo_Chakraborty_Sycara}, utilizing a knapsack-subroutine \cite{knapsack_book}. The proposed method can be implemented in a distributed fashion across the multicloud network, and only requires a reasonable amount of information exchange between the clouds. The paper further proposes a centralized heuristic algorithm with low computational complexity. Simulation results show that the proposed algorithms provide appreciable performance improvements as compared to the conventional cloud-less assignment solutions.
\section{System Model and Problem Formulation}
\subsection{System Model}
Consider the downlink of a multicloud radio access network, composed of $C$ clouds each
serving $B$ base-stations, over a network comprising $U$ users. The base-stations are assumed to be connected to the clouds via high-capacity digital links. We further assume that base-stations and users are equipped with single antennas.
Let $\mathcal{C}=\{1,\cdots,C\}$ denote the set of clouds, and $\mathcal{U}=\{1,\cdots,U\}$ be the set of users. Each user $u\in \mathcal{U}$ can be assigned to one and only one cloud $c \in \mathcal{C}$. Furthermore, every cloud $c \in \mathcal{C}$ has its own resource budget constraint, e.g., the constraint on the number of users that it can be connected to.
Let $h_{cbu}\in {\mathbb C}$ be the channel from the $b$th BS of the $c$th cloud to the $u$th user, and let ${\textbf{h}}_{cu}\in {\mathbb C}^{B\times 1}$ be the channel vector from the $c$th cloud to the $u$th user, i.e., ${\textbf{h}}_{cu}= [{{h}}_{c1u},\cdots, {{h}}_{cBu}]^T$. Define $\textbf{w}_{cu} \in {\mathbb C}^{B\times 1}$ to be the transmit beamformer over cloud $c$'s BSs for user $u$, which is fixed throughout this paper.
\subsection{Problem Formulation}
Let $r_{cu}$ be the generic reward of associating user $u$ to cloud $c$, and $A_{cu}$ be the binary association variable which is equal to 1 if user $u$ is associated to cloud $c$, and zero otherwise. We focus on solving the generalized cloud-association problem, where each user can be connected to one cloud at most, and where every cloud has it own resource connectivity constraint. The paper considers the following network-wide optimization problem:
\begin{eqnarray}
\label{generalized_optimization_problem}
& \max & \sum_{c,u}r_{cu}A_{cu} \\
& {\rm s.t.\ } & \sum_{c\in \mathcal{C}}A_{cu}\leq 1,\quad \forall u\in \mathcal{U}\nonumber\\
& & \sum_{u\in \mathcal{U}}\alpha_{cu}A_{cu}\leq K_c,\quad \forall c\in \mathcal{C}\nonumber\\
& & A_{cu}\in \{0,1\}, \forall (c,u)\in \mathcal{C}\times \mathcal{U}, \nonumber
\end{eqnarray}
where the optimization is over the binary variable $A_{cu}$, and where the constraint $\sum_{u\in \mathcal{U}}\alpha_{cu}A_{cu}\leq K_c$ denotes the resource connectivity constraint of cloud $c$. For example, $\alpha_{cu}=1$ and $K_c=B$ physically mean that cloud $c$ spatially multiplexes $B$ users at most, since the cluster of base-stations served by cloud $c$ behaves as one distributed antenna system of $B$ antennas.
This paper focuses on solving problem (\ref{generalized_optimization_problem}) by assuming that, at this stage, once a user is associated with a cloud, it is served by all base-stations of that cloud at a fixed transmission power. In a nutshell, all beamforming vectors $\textbf{w}_{cu}$ are fixed throughout this paper. More precisely, each beamforming vector $\textbf{w}_{cu}$ is set to the ones-vector scaled by some fixed power value $P_c$, $\forall$ $c\in \mathcal{C}$. The insight behind such an assumption is that, in the downlink, calculating the benefit of associating user $u$ to cloud $c$ becomes independent of the association of other users across the network. Finding the index and power value of the base-stations serving each user within each cloud is an extra stage which eventually corrects for the appropriate beamforming vectors, e.g., \cite{Shen_Yu_JSAC}. But this second stage falls outside the scope of this paper.
\section{User-to-Cloud Association}
This section proposes a distributed algorithm to solve the cloud-association problem (\ref{generalized_optimization_problem}). It is based on the iterative auction-based approach presented in \cite{Luo_Chakraborty_Sycara}. The algorithm, called the distributed cloud-association algorithm (DCAA), can be implemented in a distributed fashion across the network. The paper further presents a centralized heuristic algorithm with low computational complexity.
\subsection{Distributed Cloud-Association Algorithm (DCAA)}
The main idea in DCAA is that each cloud $c$ bids for users which maximize cloud $c$'s net benefit, taking into consideration the penalty-tag $\lambda_u$, which can be seen as the price of being associated with a certain user $u$. The net benefit of cloud $c$ if it is assigned to user $u$ becomes $\pi_{cu}=r_{cu}-\lambda_u$. Each cloud $c$ strives to be assigned to users that maximize its overall net benefit: $ \sum_{u}\pi_{cu} A_{cu}$. The algorithm iteratively proceeds in updating the assignment of each cloud, in view of the other clouds' assignment.
\subsubsection{DCAA Description}
At each iteration $t$, let $\lambda_u(t-1)$ be the starting price that has to be paid for a cloud to be assigned to user $u$. The net benefit from assigning cloud $c$ to user $u$ becomes $\pi_{cu}(t-1)=r_{cu}-\lambda_u(t-1)$. Cloud $c$, then, bids for users which maximize its overall net benefit. In other terms, at iteration $t$, cloud $c$ solves the following optimization problem:
\begin{eqnarray}
\label{knapsack_optimization_problem}
& \max & \sum_{u}\pi_{cu}(t-1)A_{cu} \\
& {\rm s.t.\ }& \sum_{u\in \mathcal{U}}\alpha_{cu}A_{cu}\leq K_c \nonumber\\
& & A_{cu}\in \{0,1\}, \forall u\in \mathcal{U}, \nonumber
\end{eqnarray}
where the maximization is over the binary variable $A_{cu}$. Problem (\ref{knapsack_optimization_problem}) is a knapsack problem, which is an NP-hard problem. There exists, however, a fully polynomial time approximation scheme which finds the optimal approximate solution of (\ref{knapsack_optimization_problem}) to any specified degree and outputs the set of users $\mathcal{U}_{c}(t)$; see \cite{knapsack_book} and references therein.
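For intuition, an exact dynamic-programming solver for the per-cloud subproblem (\ref{knapsack_optimization_problem}) can be sketched as follows; a production system would substitute the FPTAS cited above for large instances, and all names and the toy instance are illustrative.

```python
def knapsack(profits, weights, cap):
    # Exact 0/1 knapsack by dynamic programming over integer weights; in
    # DCAA proper, an FPTAS would replace this for large instances.
    best = {0: (0.0, frozenset())}     # total weight -> (profit, chosen users)
    for u, (p, w) in enumerate(zip(profits, weights)):
        if p <= 0:
            continue                   # only positive net benefits can help
        for wt, (val, sel) in list(best.items()):
            nw = wt + w
            if nw <= cap and (nw not in best or best[nw][0] < val + p):
                best[nw] = (val + p, sel | {u})
    return max(best.values(), key=lambda t: t[0])

val, users = knapsack([5.0, 4.0, 3.0], [2, 2, 1], cap=3)
assert val == 8.0 and users == {0, 2}
```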
After solving (\ref{knapsack_optimization_problem}), cloud $c$ bids for the set of users in $\mathcal{U}_{c}(t)$ and updates their prices as $\lambda_u(t)=r_{cu}$. Since $\mathcal{U}_{c}(t)$ solves the maximization problem (\ref{knapsack_optimization_problem}), we have: $\forall u \in \mathcal{U}_{c}(t)$, $r_{cu}-\lambda_u(t-1)> 0$. Setting $\lambda_u(t)=r_{cu}$, therefore, guarantees the increase of the price of user $u$, i.e. $\lambda_u(t)>\lambda_u(t-1)$. Such increase in the price of user $u$ makes user $u$ less favorable to clouds $c', \forall c'\neq c $ in the next iteration $(t+1)$. Note that after running the algorithm for all clouds, if user $u$ remains among the set of users associated with cloud $c$, the price of user $u$ is reset to zero before solving problem (\ref{knapsack_optimization_problem}) for cloud $c$, so that the net benefit of associating user $u$ to cloud $c$ is at its maximum $r_{cu}$, after user $u$ shows a mutual interest in cloud $c$.
At iteration $t$, without loss of generality, consider cloud $c=\mod(t-1,C)+1$, where $\mod(.,.)$ represents the modulo operator, which simply allows iterating over all clouds in a sequential manner as the iteration index increases. Let $\beta_{cu}(t)$ denote the bids of cloud $c$ to users $u\in \mathcal{U}_{c}(t)$. The algorithm described above, called DCAA, can be summarized as follows:
\begin{enumerate}
\item Set the iteration index $t=1$, and the initial set of users' prices $\lambda_u(0)=0, \forall u\in \mathcal{U}$.
\item At each iteration $t$, consider cloud $c=\mod(t-1,C)+1$.
\item If $t\leq C$, go to step 5.
\item If $t>C$, $\forall u\in \mathcal{U}_{c}(t-C)$, reset the prices of users that are still associated with cloud $c$, i.e., if there exist some users $u\in \mathcal{U}_{c}(t-C)$ such that $\lambda_u(t-1)=\beta_{cu}(t-C)$, then set their prices $\lambda_u(t-1)$ to zero.
\item Calculate the net benefits $\pi_{cu}(t-1)=r_{cu}-\lambda_u(t-1)$, and solve the knapsack problem of cloud $c$, i.e. problem (\ref{knapsack_optimization_problem}), which determines the updated set of users associated with cloud $c$, denoted by $\mathcal{U}_{c}(t)$:
\begin{enumerate}
\item $\forall u\in \mathcal{U}_c(t)$, update the bids of cloud $c$ to users $u$ as $\beta_{cu}(t)=r_{cu}$, and the prices of users $ u\in \mathcal{U}_c(t)$ to $\lambda_u(t)=\beta_{cu}(t)$.
\item $\forall u\notin \mathcal{U}_c(t)$, keep the prices unchanged, i.e., $\lambda_u(t)=\lambda_u(t-1)$.
\end{enumerate}
\item Set $t=t+1$; go to step 2; and stop at convergence.
\end{enumerate}
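The steps above can be sketched in code for the special case of unit weights $\alpha_{cu}=1$, where each cloud's knapsack subproblem reduces to keeping its $K_c$ best positive net benefits; the toy instance and all names are illustrative only.

```python
from collections import Counter

def dcaa(r, K, n_rounds=30):
    # Simplified DCAA sketch assuming alpha_cu = 1 for all (c, u).
    C, U = len(r), len(r[0])
    price = [0.0] * U
    holder = {}                        # user -> cloud holding the winning bid
    held = [set() for _ in range(C)]
    for t in range(n_rounds * C):
        c = t % C
        for u in held[c]:              # step 4: reset prices of users that
            if holder.get(u) == c:     # are still associated with cloud c
                price[u] = 0.0
        net = sorted(((r[c][u] - price[u], u) for u in range(U)), reverse=True)
        held[c] = {u for v, u in net[:K[c]] if v > 0}
        for u in held[c]:              # step 5a: bid and raise the prices
            price[u] = r[c][u]
            holder[u] = c
    return {u: c for c in range(C) for u in held[c] if holder.get(u) == c}

assign = dcaa([[3.0, 1.0, 2.0], [2.5, 2.0, 0.5]], K=[1, 1])
assert assign == {0: 0, 1: 1}          # stable outcome on this toy instance
# feasibility: each user has at most one cloud, each cloud at most K_c users
assert all(n <= 1 for n in Counter(assign.values()).values())
```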
\begin{theorem}
The iterative auction-based algorithm DCAA is guaranteed to converge in a finite number of iterations with an approximation ratio $(1+\gamma)$, where $\gamma\in [1,+\infty)$ is the approximation ratio of the subroutine knapsack algorithm used in step 5 above. In other terms, the solution reached by DCAA, denoted $f^{DCAA}$, is $(1+\gamma)$ away from the global optimal solution $f^*$: $(1+\gamma)f^{DCAA}\geq f^*$.
\end{theorem}
Steps for the proof of Theorem 1 are omitted in this paper as they mirror Theorems 1 and 2 of \cite{Luo_Chakraborty_Sycara}.
\subsubsection{Distributed Implementation}
To implement DCAA at iteration $t$, cloud $c=\mod(t-1,C)+1$ utilizes the set of prices $\lambda_u(t-1)$, the set of benefits $r_{cu}$, the set of weights $\alpha_{cu}$, the set of users $\mathcal{U}_{c}(t-C)$ associated with cloud $c$ at iteration $(t-C)$, and the set of bids $\beta_{cu}(t-C)$ of cloud $c$ for users $u \in \mathcal{U}_{c}(t-C)$.
$r_{cu}$, $\alpha_{cu}$, $\mathcal{U}_{c}(t-C)$, and $\beta_{cu}(t-C)$ are all available at cloud $c$. The set of prices $\lambda_u(t-1)$ set during iteration $t-1$ is the output of cloud $c'$'s operation, where $c'=\mod(t-2,C)+1$. A distributed implementation of DCAA is, therefore, possible by a reasonable and simple exchange of users' prices from cloud $c'$ to cloud $c$.
\subsection{Centralized Heuristic Cloud-Association Algorithm (CHCAA)}
DCAA solves the cloud-association problem (\ref{generalized_optimization_problem}) using the knapsack routine which is NP-hard in general. This section presents an alternative low complexity, yet centralized, heuristic to solve (\ref{generalized_optimization_problem}). The method, denoted by centralized heuristic cloud-association algorithm (CHCAA), associates users to clouds based on the individual utilities $r_{cu}$. Let $\bf{R}$ be the $C\times U$ matrix whose entries are the potential individual utilities $r_{cu}$, i.e., the $(c,u)$th entry of the matrix $\bf{R}$ is ${\bf{R}}_{c,u}=r_{cu}$.
At each step, find the largest entry of the matrix $\bf{R}$, call it ${\bf{R}}_{c^{max},u^{max}}$. User $u^{max}$ then maps to cloud $c^{max}$, as long as the resource constraint of cloud $c^{max}$ is still satisfied. Once user $u^{max}$ gets associated with a certain cloud, delete the column of $\bf{R}$ containing ${\bf{R}}_{c^{max},u^{max}}$, so that user $u^{max}$ cannot be connected to other clouds in subsequent steps. Repeat the above procedure and stop when all users are associated with one cloud each, or when all clouds' resource constraints are violated with the addition of one more user. As the simulations results suggest, DCAA and CHCAA show a similar performance, and they both outperform conventional systems using the classical cloud-less assignment solutions.
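The greedy procedure can be sketched as follows; treating the resource constraint as a per-cloud user-slot budget ($\alpha_{cu}=1$, $K_c$ slots, as in the simulation setup) is a simplifying assumption, and all names are illustrative.

```python
# Sketch of CHCAA: repeatedly pick the largest remaining entry of R,
# associate that user with that cloud if the cloud still has capacity,
# and remove the user's column. Capacity is modeled as a slot budget
# (alpha_cu = 1), matching the simulation setup.

def chcaa(R, capacities):
    """R[c][u] = r_cu; capacities[c] = K_c in user slots."""
    C, U = len(R), len(R[0])
    assigned, load = {}, [0] * C
    remaining = set(range(U))
    while remaining:
        best = max(((R[c][u], c, u) for c in range(C)
                    for u in remaining if load[c] < capacities[c]),
                   default=None)
        if best is None:          # every cloud is full
            break
        _, c_max, u_max = best
        assigned[u_max] = c_max   # user u_max maps to cloud c_max
        load[c_max] += 1
        remaining.remove(u_max)   # delete the user's column
    return assigned
```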
\section{Simulations}
This section evaluates the performance of the proposed methods in a 7-cell CRAN network, which comprises $C=7$ clouds, $B=3$ base-stations per cloud, and several users distributed across the network. The clouds are located at the center of each cell, and the distance between adjacent clouds is varied in the simulations. The simulations consider the sum-rate maximization problem, i.e. $r_{cu}=\log_2(1+\text{SINR}_{cu})$, where $\text{SINR}_{cu}$ is the signal-to-interference plus noise ratio of user $u$ when associated with cloud $c$. Further, for illustration, we choose $\alpha_{cu}=1$ and $K_c=B$ $\forall (c,u)$, so as to impose the constraint that each cloud can multiplex at most $B$ users.
\begin{figure}
\begin{center}
\rotatebox{0}{\scalebox{0.4}{\includegraphics{./DCAA_CHCAA_nrealizations.eps}}}
\caption{Sum-rate in bps/Hz for a different number of realizations over a network comprising 7 clouds and 3 base-stations per cloud. Total number of users is 28 users, and the intercell distance is 0.5 km.} \label{DCAA_CHCAA_nrealizations}
\end{center}
\end{figure}
Fig.~\ref{DCAA_CHCAA_nrealizations} illustrates the sum-rate performance of the proposed cloud-association algorithms in bps/Hz for different channel realizations, for a network comprising 28 users with an intercell distance of 0.5 km. The figure shows that the distributed cloud-association algorithm (DCAA) and the centralized heuristic cloud-association algorithm (CHCAA) perform similarly. The difference between the two is that CHCAA has a low computational complexity as compared to DCAA, which is an iterative algorithm involving a knapsack solution at each iteration; DCAA, on the other hand, can be implemented in a distributed fashion across the different clouds. Fig.~\ref{DCAA_CHCAA_nrealizations} further shows that both DCAA and CHCAA outperform the cloud-less base-station association solution for all realizations of the channel, which highlights the importance of using clouds for associating users in CRAN networks.
\begin{figure}
\begin{center}
\rotatebox{0}{\scalebox{0.4}{\includegraphics{./Gain_of_DCAA_vs_BS_association.eps}}}
\caption{Percentage gain in sum-rate of the proposed algorithm as compared to base-station association in the absence of clouds for different number of users. The network comprises 7 clouds and 3 base-stations per cloud. The intercell distance is 0.5 km.} \label{Gain_of_DCAA_vs_BS_association}
\end{center}
\end{figure}
To illustrate the gain of the cloud-association algorithms as a function of the number of users, Fig.~\ref{Gain_of_DCAA_vs_BS_association} shows the percentage gain in sum-rate for DCAA as compared to the cloud-less base-station association, for a network of 0.5 km intercell distance. As shown in the figure, when the number of users increases, the performance gain due to cloud association increases and reaches up to a 60\% improvement when the average number of users per cell is 4 (i.e. a total of 28 users). Such an increase in gain is due to the fact that, for a larger number of users, interference becomes higher, and so the role of cloud-association as an interference mitigation technique becomes more pronounced.
\section{Conclusions}
Optimization in cloud-radio access networks is a topic of significant interest for emerging wireless networks. The paper utilizes an auction-based iterative algorithm to solve the cloud-association problem. The algorithm can be implemented in a distributed fashion across the multiple clouds using a reasonable amount of information exchange between the clouds. The paper further proposes a centralized heuristic algorithm with low computational complexity.
\section*{Acknowledgements}
The authors wish to thank Lingzhi Luo from Carnegie Mellon University for his support and helpful discussions.
\bibliographystyle{IEEEtran}
\section{Introduction}
Relativistic heavy ion collisions are an abundant source of strangeness. As strange
quarks have to be newly produced during the hot and dense stage of the collision,
they are thought of carrying information on the properties of the matter that was
created \cite{Koch:1986ud}. Together with other probes like the elliptic flow and jet quenching,
the enhancement of strange particle production is discussed \cite{Adams:2005dq,Back:2004je,Arsene:2004fa,Adcox:2004mh,Ollitrault:1992bk,Rischke:1996nq,Sorge:1996pc,Heiselberg:1998es,Scherer:1999qq,Soff:1999yg,Brachmann:1999xt,Csernai:1999nf,Zhang:1999rs,Kolb:2000sd,Bleicher:2000sx,Stoecker:2004qu,Zhu:2005qa,Petersen:2006vm,Gazdzicki:2004ef,Gazdzicki:1998vd} as a possible signal for the creation of a deconfined phase.\\
Although abundantly produced, the strong interactions of strange hadrons are not
well understood. Such interactions are not only important for the description of
the hadronic phase of a heavy ion collision but also play an important role for
the description of dense hadronic matter. In this context hyperon interactions are
key to understand the phase structure of QCD at large densities and the interior of
compact stars. One way to tackle the problem of hyperon interactions is to study
the formation of hyperclusters and/or hypernuclei. Hypernuclear physics offers a
direct
experimental way to study hyperon--nucleon ($YN$) and hyperon--hyperon
($YY$) interactions ($Y=\Lambda,\Sigma,\Xi,\Omega$).
The nucleus serves as a laboratory offering the unique opportunity
to study basic properties of hyperons and their interactions. Even the confirmation
or exclusion of the existence for such objects can be used as an input for models
that try to describe hyperonic interactions.\\
More exotic forms of deeply bound objects with strangeness have been proposed
\cite{Bodmer:1971we}
as states of matter, either consisting of baryons or quarks.
The H di-baryon was predicted by Jaffe
\cite{Jaffe:1976yi} and later, many more bound di-baryon states with strangeness were
proposed
using quark potentials \cite{Goldman:1987ma,Goldman:1998jd} or the Skyrme
model \cite{Schwesinger:1994vd}.
However, the non-observation of multi-quark bags, e.g. strangelets is still one of
the open problems of
intermediate and high energy physics. Lattice calculations suggest that the
H-dibaryon is a weakly unbound system \cite{Wetzorke:2002mx}, while recent lattice studies report
that there could be strange di-baryon systems including $\Xi$'s that can be
bound \cite{Beane:2011iw}. Because of the size of these clusters, lattice studies are usually very demanding on computational resources and suffer from large lattice artifacts, so it is not yet clear whether lattice QCD predicts a loosely bound H-dibaryon or an unbound one \cite{Beane:2010hg,Beane:2011xf,Inoue:2010es,Buchoff:2012ja}. An experimental confirmation of such a state would therefore be an enormous advance in the
understanding of the hyperon interaction.\\
For completeness we also include in our analysis a hypothetical N$\Lambda$
di-baryon with mass 2.054 GeV (see Table 1), a weakly bound state of a
$\Lambda$-hyperon and a neutron. The search for such an exotic object is underway
at GSI \cite{Saito}.\\
Hypernuclei have long been known to exist and to be produced in heavy ion
collisions \cite{nucl-th/9412035,Ahn:2001sx,Takahashi:2001nm,arXiv:1010.2995}.
The recent discoveries of the first anti-hypertriton \cite{star2010} and anti-$\alpha$ \cite{star2011} (the
largest anti-particle cluster ever reported) has fueled the interest in the field
of hypernuclear physics.
Metastable exotic multi-hypernuclear objects (MEMOs)
as well as purely hyperonic systems of $\Lambda$'s and $\Xi$'s
were introduced in \cite{Schaffner:1992sn,Schaffner:1993nn} as the hadronic
counterparts to
multi-strange quark bags \cite{Gilson:1993zs,SchaffnerBielich:1996eh}.\\
Hypernuclear clusters can be produced and studied in various experimental setups, e.g. from proton or anti-proton induced reactions \cite{Gaitanos:2011fy} as well as pion and kaon beams \cite{Faessler:1974xn,Chrien:1979wu,Akei:1990gb,Dohrmann:2004xy,Hashimoto:2006aw}.
In this work we will focus on the production of hypernuclei in high energy
collisions of Au+Au ions \cite{Baltz:1993jh}. In such systems strangeness is produced abundantly
and is likely to form clusters of different sizes. Our aim is to
determine which processes are most efficient in searching for hypernuclei including
exotic ones. Presently, we can distinguish two
distinct mechanisms for hypercluster formation in heavy ion collisions. First,
the absorption of hyperons in the spectator fragments of non central heavy ion
collisions. In this scenario one is interested in hyperons which propagate with
velocities close to the initial velocities of the nuclei, i.e., in the vicinity
of nuclear spectators \cite{Ko:1985gp,Gaitanos:2007mm,Gaitanos:2009at,Botvina:2011jt}.
The hyper-systems obtained here are rather large and moderately excited, decaying into hyperfragments later on \cite{Botvina:2011jt,Botvina:2007pd}.
Alternatively, (hyper-)nuclear clusters can emerge from the hot and dense fireball
region of the reaction. In this scenario the cluster is formed at, or shortly after,
the (chemical-)freeze out of the system. A general assumption is, that these
clusters are then formed through coalescence of different newly produced
hadrons \cite{Scheibl:1998tk}. To estimate the production yield we can employ two distinct approaches which allow us to estimate the theoretical uncertainties associated with different treatment of the process.
First we use a hadronic transport model to provide us with the phase space information of
all hadrons produced in a heavy ion collision. This information then serves as an
input for a coalescence prescription. On the other hand it has been shown \cite{Becattini:1997rv,Cleymans:1990mn,Andronic:2005yp} that thermal models consistently describe the production yields of hadrons (and nuclei
\cite{Andronic:2008gu}) very well. We can therefore assume thermal production of clusters from a fluid
dynamical description to heavy ion collisions.\\
Both approaches differ significantly in their assumptions and one would expect
to obtain different results, depending on the method used. Hence it has been proposed (e.g. see \cite{Cho:2010db,Cho:2011ew}) that the yield of an exotic
hadronic state may depend strongly on its structure.
The purpose of this paper is therefore to comprehensively compare hypernuclei and di-baryon production in a coalescence and a thermal/hydrodynamical approach, and to interpret the differences. One particularly important point is that we deliberately compare two distinctly different models to explore the robustness of our predictions. In this way we can estimate systematic differences introduced by the features of the two models, for example differences in the baryon stopping or in the hyperon phase space distributions.
\section{Thermal production from the UrQMD hybrid model}
\begin{table}[t]
\begin{tabular}{|c|c|c|c|}
\hline
Cluster & Mass [GeV] & Chem. Pot. & Spin Deg.\\ \hline\hline
$d$& 1.878 & $2 \mu_B$ & 3\\ \hline
$\{N \Lambda\}$& 2.054 & $2 \mu_B - \mu_S$ & 3\\ \hline
$\{\Lambda \Lambda\}$& 2.232 & $2 \mu_B - 2 \mu_S$& 1 \\ \hline
$\{N \Xi\}$& 2.260 & $2 \mu_B - 2 \mu_S$& 1 \\ \hline
$\{\Lambda \Xi\}$& 2.437 & $2 \mu_B - 3 \mu_S$& 1 \\ \hline
$\{\Xi \Xi\}$ & 2.636 & $2 \mu_B - 4 \mu_S$& 1 \\ \hline
$^{3}He$& 2.817 & $3 \mu_B$& 2 \\ \hline
$^{4}He$& 3.756 & $4 \mu_B$& 1 \\ \hline
$^{3}_{\Lambda}H$ & 2.994 & $3 \mu_B -\mu_S$& 2\\ \hline
$^{4}_{\Lambda}H$ & 3.933 & $4 \mu_B -\mu_S$& 1\\ \hline
$^{5}_{\Lambda}He$ & 4.866 & $5 \mu_B -\mu_S$& 2\\ \hline
$^{4}_{\Lambda \Lambda}He$ & 4.110 & $4 \mu_B -2 \mu_S$& 1\\ \hline
\end{tabular}
\caption{Properties of all considered multibaryonic states \label{table1}}
\end{table}
The hybrid approach used in this work is based on the integration of a hydrodynamic
evolution into the UrQMD transport model
\cite{Petersen:2008dd,Petersen:2008kb,Steinheimer:2007iy}.
During the first phase of the evolution the particles are described by UrQMD
as a string/hadronic cascade. Once the two colliding nuclei have passed through
each other the hydrodynamic evolution starts at the time
$t_{start}=2R/\sqrt{\gamma_{c.m.}^2-1}$, where $\gamma_{c.m.}$ denotes the Lorentz
factor of the colliding nuclei in their center of mass frame.
While the spectators continue to propagate in the cascade, all other particles,
i.e. their baryon charge densities and energy-momentum densities, are mapped to
the hydrodynamic grid. By doing so one explicitly forces the system into a local
thermal equilibrium for each cell. In the hydrodynamic part we solve the
conservation equations for energy and momentum as well as the net baryon number
current, while for the net strange number we assume it to be conserved and equal
to zero locally. Solving only
the equations for the net baryon number is commonly accepted in hydrodynamical
models, although we have shown in earlier publications \cite{Steinheimer:2008hr}
that net strangeness may fluctuate locally. It is planned to also
implement an explicit propagation for the net strange density.\\
The hydrodynamic evolution is performed using the SHASTA algorithm
\cite{Rischke:1995ir}. At the end of the hydrodynamic phase the fields are
mapped to particle degrees of freedom using the Cooper-Frye equation
\cite{Cooper:1974mv} with the properties of the clusters, which serve as input
for the computation, being listed in Table \ref{table1}. The transition from
the hydrodynamic prescription to the transport simulation is done gradually in
transverse slices of thickness 0.2 fm, once all cells in a given slice have an
energy density lower than five times the ground state energy density (see also
\cite{Steinheimer:2009nn}). The temperature at $\mu_B=0$ that corresponds to such
a switching density is roughly $T=170$ MeV, which is close to what is expected
to be the critical temperature. Detailed information on the transition curve in
the phase diagram
can be found in \cite{Petersen:2008dd}.
In this work we neglected final state interactions of the clusters produced. This can be justified, as previous works have shown that
final state interactions reduce e.g. the deuteron yield by only about $20 \%$ \cite{Oh:2009gx}. \\
For an extensive description of the model the reader is referred to
\cite{Petersen:2008dd,Steinheimer:2009nn}.\\
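As a stand-alone illustration of the statistical weights that enter such a thermal computation, the Boltzmann-approximation density of a cluster species can be sketched as below; the snippet (Python, standard library only, with illustrative parameter values) only shows how the masses, degeneracies and chemical potentials of Table \ref{table1} enter, and is not the output of the hybrid model.

```python
# Sketch: Boltzmann-approximation thermal density of a cluster species,
# n_i = g_i/(2 pi^2) * m_i^2 * T * K_2(m_i/T) * exp(mu_i/T)  (hbar = c = 1,
# GeV units). Illustrative only -- the actual yields come from the full
# Cooper-Frye procedure of the hybrid model.
import math

def bessel_k2(x, n=2000, tmax=20.0):
    """K_2(x) from the integral representation
    K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt (trapezoid rule)."""
    h = tmax / n
    f = lambda t: math.exp(-x * math.cosh(t)) * math.cosh(2.0 * t)
    total = 0.5 * (f(0.0) + f(tmax))
    for i in range(1, n):
        total += f(i * h)
    return total * h

def boltzmann_density(mass, degeneracy, T, mu):
    """Thermal number density in GeV^3."""
    return degeneracy / (2.0 * math.pi ** 2) * mass ** 2 * T \
        * bessel_k2(mass / T) * math.exp(mu / T)

# Example with illustrative T, mu_B, mu_S (GeV): hypertriton vs. 3He,
# with mu = 3*mu_B - mu_S and 3*mu_B respectively, as in Table 1.
T, mu_B, mu_S = 0.160, 0.02, 0.005
ratio = boltzmann_density(2.994, 2, T, 3 * mu_B - mu_S) \
    / boltzmann_density(2.817, 2, T, 3 * mu_B)
```

With these illustrative values the heavier hypertriton is Boltzmann-suppressed relative to $^{3}He$.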
\section{Coalescence from the DCM--QGSM approach}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig1.eps}
\caption{Mass dependence of calculated invariant yields of light fragments and
hyperfragments produced in central Au+Au collisions at 11.5 A GeV/c compared with
experimental data \cite{Armstrong:2000gz} for Au + Pb collisions. The lines are empirical
interpolations of the results.
\label{fig1coal}
}
\end{center}
\end{figure}
Another model used to describe the dynamical stage of the reaction is the
intra-nuclear cascade model developed in Dubna \cite{Toneev:1990vj,Amelin:1989ve}.
(We refer to it as the Dubna Cascade Model - DCM.) The DCM is based on
the Monte-Carlo solution of a set of the
Boltzmann-Uehling-Uhlenbeck relativistic kinetic equations with
the collision terms, including cascade-cascade
interactions. For particle energies below 1~GeV it is sufficient to
consider only nucleons, pions and deltas. The model includes a proper
description of pion and baryon dynamics for particle production and
absorption processes.
At energies higher than about 10~GeV, the Quark-Gluon String Model (QGSM)
is used to describe elementary hadron collisions.
The QGSM considers the two lowest SU(3) multiplets in
mesonic, baryonic and antibaryonic sectors, so interactions between almost
70 hadron species are treated on the same footing.
The two energy regimes noted above are bridged by extending the QGSM
downward in beam energy \cite{Amelin:1989ve}.
For the present study the coalescence model has been modified in comparison
with its initial formulation in \cite{Toneev:1983cb}. As usual, the coalescence model
forms a deuteron from a proton and a neutron produced after the cascade stage of
the reaction if their relative momenta lie within a sphere of radius $p_C$, comparable
to the deuteron's momentum. The same momentum criterion can be used to describe
formation of tritons, $^3$He, and $\alpha$-particles. In particular, the parameters
$p_C(d)=90$, $p_C(t)=108$, $p_C(^3\mathrm{He})=108$, and $p_C(\alpha)=115$ MeV/c were adopted
to reproduce the experimental data \cite{Toneev:1983cb}. An approach disregarding the spatial
coordinates of nucleons can be justified only for collisions with moderate energy
deposition in nuclei since the region for final state interaction is small enough.
However, this is not the case for central heavy ion collisions.
Here we assume that the coalescence criterion
used to form the composite particles includes the proximity of nucleons in both
momentum and coordinate space. The coordinate coalescence parameters are determined
by the relation $r_C=\hbar / p_C$, with the same values of $p_C$ as were used
in \cite{Toneev:1983cb}. As a first approximation we use the same coalescence parameters for both
conventional fragments and hyperfragments. An example of the calculated invariant yields of the fragments
produced in the central Au + Au collisions at projectile momentum $11.5 A$ GeV is
shown in Fig.~\ref{fig1coal}. One can see that at this energy the coalescence
model qualitatively reproduces the experimental data for conventional fragments.
The fragment yields closely follow an exponential dependence, with a penalty
factor of approximately 50 for each added nucleon, in agreement with the data.
Since the same coalescence parameters were used, a similar penalty
factor is obtained for hyperfragments, which is supplemented by additional
suppression if the neutron is replaced by a $\Lambda$.\\
For the following results we fixed the coalescence parameters as described, with a fit to the data at $11.5 A$ GeV, and assume that they do not change with beam energy. This allows us to predict cluster production over a wide range of experimental setups.
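The resulting pair criterion can be sketched as follows; computing the momentum difference as a plain vector difference (rather than in the pair rest frame), as well as the function names and example numbers, are illustrative simplifications.

```python
# Sketch of the two-particle coalescence criterion: a pair forms a cluster
# if the momentum difference is inside a sphere of radius p_C and the
# spatial distance is inside r_C = hbar/p_C (capped at 4 fm, as in the
# text). Taking |dp| as a plain vector difference is an illustrative
# simplification of the full treatment.
import math

HBARC = 197.327  # hbar*c in MeV fm

def coalesces(p1, p2, x1, x2, p_C, r_max=4.0):
    """p1, p2: momentum 3-vectors in MeV/c; x1, x2: positions in fm."""
    r_C = min(HBARC / p_C, r_max)
    return math.dist(p1, p2) < p_C and math.dist(x1, x2) < r_C
```

For the deuteron value $p_C=90$ MeV/c this gives $r_C \approx 2.2$ fm.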
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig2.eps}
\caption{Yields per event of different di-baryons in the mid rapidity region ($|y|<0.5$) of most central collisions of Pb+Pb/Au+Au. Shown are the results from the thermal production in the UrQMD hybrid model (lines) as compared to coalescence results with the DCM model (symbols). The small bars on the right hand axis denote results on di-baryon yields from a previous RQMD calculation at $\sqrt{s_{NN}}=200$ GeV \cite{SchaffnerBielich:1999sy}. In addition, the black lines and symbols depict results for the production rate of $\Lambda$'s from both models, compared to data (grey crosses) from \cite{Ahmad:1991nv,Mischke:2002wt,Alt:2008qm}.
\label{dibmidy}
}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig3.eps}
\caption{Yields per event of different (hyper-)nuclei in the mid rapidity region ($|y|<0.5$) of most central collisions of Pb+Pb/Au+Au. Shown are the results from the thermal production in the UrQMD hybrid model (lines) as compared to coalescence results with the DCM model (symbols).
\label{hypmidy}
}
\end{center}
\end{figure}
\section{Results}
Figures \ref{dibmidy} and \ref{hypmidy} show our results for the mid rapidity
yields ($|y|<0.5$) of di-baryons and hypernuclei as a function of the beam energy
$E_{lab}$. In our calculations we considered most central ($b<3.4$ fm) Pb+Pb/Au+Au
collisions at $E_{lab}=1$ - $160 A$ GeV. In addition, figure \ref{dibmidy} shows
the $\Lambda$ yield (black lines and squares) for the two different models compared
to data \cite{Ahmad:1991nv,Mischke:2002wt,Alt:2008qm}. In these figures, the UrQMD hybrid model calculations are shown as
lines, while the DCM Coalescence results are depicted as symbols. A striking feature
of our comparison is that, above $E_{lab} \sim 10 A$ GeV, both computations for most
(hyper-)nuclei and di-baryons agree very well. At lower energies the
strange cluster production is suppressed in the transport model due to the non-equilibrium of
strangeness. In the thermal calculations restrictions of energy and momentum
conservation, resulting in a phase space reduction for produced strange particles,
strongly decreases strange particle yields \cite{Becattini:1997rv,Cleymans:1990mn,Andronic:2005yp}. This behavior was also
observed in a core-corona implementation in the hybrid model \cite{Steinheimer:2011mp}.\\
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig4.eps}
\caption{Full acceptance yields per event of different di-baryons created in most central collisions of Pb+Pb/Au+Au. Shown are the results from the thermal production in the UrQMD hybrid model (lines) as compared to coalescence results with the DCM model (symbols).
\label{dibally}
}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig5.eps}
\caption{Full acceptance yields per event of different (hyper-)nuclei created in most central collisions of Pb+Pb/Au+Au. Shown are the results from the thermal production in the UrQMD hybrid model (lines) as compared to coalescence results with the DCM model (symbols).
\label{hypally}
}
\end{center}
\end{figure}
An instructive result is that the yields of most hypernuclei have a
maximum (or saturation) around 10--20 A GeV of beam energy. Therefore,
the investigation of hypernuclei can be effectively pursued at these energies.
On the other hand, the dependence of their yields up to energies of $\sim$200
A GeV can help to clarify the mechanisms of hypernuclei production.
Noticeably, the yields of di-baryons including $\Xi$ hyperons differ strongly
between the two models; for the double-$\Xi$ state the difference is as large
as one order of magnitude. This discrepancy can be understood by noting that
the DCM model produces considerably fewer $\Xi$'s (by a factor of about 5)
than the UrQMD hybrid model, so that di-baryon formation is also strongly
suppressed (note that the experimental $\Xi$ yield is quite well
reproduced by the UrQMD-hybrid model \cite{Steinheimer:2009zzb,Steinheimer:2011mp}).\\
Di-baryon production rates have also been calculated in a coalescence approach using
the RQMD model for $\sqrt{s_{NN}}=200$ GeV collisions of Au nuclei \cite{SchaffnerBielich:1999sy}. To relate
our calculations to these results, they are indicated as the colored bars on the
right axis of figure \ref{dibmidy}. The RQMD model used was in particular tuned
to reproduce multi strange particle yields (such as the $\Xi$) and the results
are therefore close to the ones obtained with our thermal/hydrodynamic approach.
Figures \ref{dibally} and \ref{hypally} show the integrated ($4 \pi$)
yields for all considered clusters as a function of beam energy. As with
the midrapidity results there is a remarkable agreement between both approaches.
However, the integrated yields of non-strange nuclei at high energies are
systematically larger in the coalescence approach, although the mid-rapidity
yield was smaller. This observation can be explained when the rapidity
distribution of the nuclei is considered. In the coalescence approach the
probability to produce a nucleus increases with rapidity and in particular
in the fragmentation region, where the nucleons have small relative transverse
momenta and can easily coalesce.
\begin{table}[b]
\begin{tabular}{|c|c|c|c|c|}
\hline
$p_C$=& 5 & 20 & 50 & 90 \\ \hline\hline
$\Lambda$N& 4.4 $\cdot 10^{-4}$ & 2.7 $\cdot 10^{-2}$ & 3.0 $\cdot 10^{-1}$& 2.1 \\ \hline
$\Lambda\Lambda$& 3.0$\cdot 10^{-5}$ & 1.2$\cdot 10^{-3}$ & 6.6$\cdot 10^{-3}$ & 5.6$\cdot 10^{-2}$ \\ \hline
$\Xi$N & $ < 10^{-6}$ & 1.0$\cdot 10^{-3}$ & 1.1$\cdot 10^{-2}$ & 1.0$\cdot 10^{-1}$ \\ \hline
$\Xi\Lambda$ & $ < 10^{-6}$ & 7.4$\cdot 10^{-5}$ & 5.8$\cdot 10^{-4}$ & 1.0 $\cdot 10^{-2}$ \\ \hline
$\Xi\Xi$ & $ < 10^{-6}$ & $ < 10^{-6}$ & 3.8$\cdot 10^{-4}$ & 7.2$\cdot 10^{-4}$\\ \hline
\end{tabular}
\caption{Dependence of yield of strange dibaryons (per one event) on momentum
coalescence parameter $p_C$ (in MeV/c), in central $(b<3.5$ fm$)$ Au+Au collisions
at $20 A$ GeV \label{tablecoal}}
\end{table}
In addition we point out that the coalescence results depend
on the parameters of the model. As mentioned, in the presented results the parameter $p_C$ for $\Lambda$'s was taken equal
to that of the nucleons. However, the hyperon-hyperon
and hyperon-nucleon interactions are not very well known
and we expect that these parameters may be different for clusters containing
$\Lambda$'s or even $\Xi$'s. In table \ref{tablecoal} we demonstrate how the yields of strange dibaryon nuclei
depend on the momentum parameter $p_C$. As discussed previously, we have
restricted the $r_C$ parameter accordingly, imposing the empirical limitation,
motivated by the properties of the nuclear force, that $r_C$ cannot be larger
than 4 fm. One can see that we expect a very large variation of
the yields depending on the parameters. For instance, the probability of a bound
$\Lambda$--nucleon state may decrease by many orders of magnitude, if we assume a small $p_C$
corresponding to a low binding energy of this state.
Usually the parameters are fixed by comparison
with experiment. Nevertheless, ratios of hypernuclei yields should not change in
the coalescence model.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig6.eps}
\caption{Yields of anti-particle clusters in the mid rapidity region ($|y|<0.5$) of most central collisions of Pb+Pb/Au+Au as a function of $\sqrt{s_{NN}}$. Shown are only the results from the thermal production in the UrQMD hybrid model (lines with symbols).
\label{antimidy}
}
\end{center}
\end{figure}
When the beam energy of the collisions is increased, the system created becomes
almost net-baryon free. This means that the probability to create an anti-particle
cluster approaches that of the particle cluster. Figure \ref{antimidy} shows the
results for anti-particle cluster production at mid-rapidity ($|y|<0.5$) in
collisions of Pb+Pb/Au+Au at center of mass energies of
$\sqrt{s_{NN}}=3$ - $200$ GeV. We show only results for the UrQMD hybrid model
because the DCM calculations are restricted to energies up to $E_{lab}=160 A$ GeV,
and the event statistics needed for a meaningful estimate of such rare clusters
become very demanding.
The yields of the anti-particle clusters show a monotonic increase with beam
energy. They show that, at the highest RHIC energy (and at the LHC) the
reconstruction of $_{\Lambda}^{4}\rm{He}$ might be a feasible task.
\subsection{A special ratio}
In the following we will discuss the double ratio $R_{H}$ defined as:
\begin{equation}
R_{H}=\left(^{3}_{\Lambda}H/^{3}He\right)\cdot\left(p/\Lambda\right)
\end{equation}
for collisions of Pb+Pb/Au+Au and a wide range of beam energies. This ratio is
especially interesting as, in thermal production, it does not depend on the
chemical potentials of the particles (the fugacities cancel), and any canonical
correction factors for strangeness cancel as well. It has been proposed that this
ratio is sensitive to the local correlation of strangeness and baryon number,
therefore being a measure of $c_{BS}$ \cite{Zhang:2009ba}.\\
\begin{equation}
c_{BS}=-3\frac{\left\langle N_B N_S\right\rangle- \left\langle N_B \right\rangle \left\langle N_S\right\rangle}{\left\langle N_S^2 \right\rangle - \left\langle N_S \right\rangle^2}
\end{equation}
To calculate $R_H$ we use the above obtained yields for hypernuclei and the
proton and $\Lambda$ yields from the same model. For the hadrons the feed
down from resonances is taken into account, as well as the feed down to the
$^{3}He$ from the hypertriton.\\
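The fugacity cancellation can be made explicit in the Boltzmann approximation (a sketch; the full treatment includes the feed-down corrections just mentioned). The thermal density of a species $i$ is
\begin{equation}
n_i \propto g_i\, m_i^2\, T\, K_2\!\left(m_i/T\right)\, e^{\mu_i/T},
\end{equation}
and, with the assignments of Table \ref{table1}, $\mu_{^{3}_{\Lambda}H}=3\mu_B-\mu_S$, $\mu_{^{3}He}=3\mu_B$, $\mu_p=\mu_B$ and $\mu_{\Lambda}=\mu_B-\mu_S$, the chemical potentials in $R_H$ sum to
\begin{equation}
\left(3\mu_B-\mu_S\right)-3\mu_B+\mu_B-\left(\mu_B-\mu_S\right)=0,
\end{equation}
so that the thermal $R_H$ depends only on the masses, the degeneracies and the temperature.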
Our results for $R_H$ are shown in figure \ref{rh} as an excitation function
of the beam energy $\sqrt{s_{NN}}$. $R_H$ is evaluated for the mid rapidity
region of most central ($b<3.4$ fm) heavy ion collisions. The lines depict
results from the UrQMD-hybrid model and the symbols denote DCM coalescence results.
Experimental data are depicted as green symbols with error bars. Because
experiments usually cannot distinguish between $\Lambda$'s and $\Sigma^0$'s,
we show $R_H$ in the cases where the $\Lambda$ yield includes $\Sigma^0$
(black solid line and squares) and where the yield is corrected for the
$\Sigma^0$ (red dashed line and circles). This is in fact important as there is
no experimental indication for a bound $_{\Sigma^0}^3 H$ hypernucleus.\\
The double ratio $R_H$ from the hybrid model turns out to be almost energy
independent. The same behavior has been observed in previous thermal calculations \cite{Andronic:2010qu}.
On the other hand, the coalescence result increases with decreasing beam energy
and is in general larger than the thermal result.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig7.eps}
\caption{The Strangeness Population Factor $R_H=(^{3}_{\Lambda}H/^{3}He)\cdot(p/\Lambda)$ as a function of $\sqrt{s_{NN}}$ for most central collisions of Pb+Pb/Au+Au. We compare results from the thermal production in the UrQMD hybrid model (lines) with coalescence results with the DCM model (symbols). The red line and symbols denote values of $R_H$ where the $\Lambda$ yield has been corrected for the $\Sigma^0$ contribution.
\label{rh}
}
\end{center}
\end{figure}
To understand this behavior we plotted the single ratios
$^{3}_{\Lambda}H/^{3}He$ and $\Lambda/p$ from our two approaches
(lines hybrid model and symbols DCM coalescence) in figure \ref{special}.
Here it is obvious that even though the DCM calculation produces fewer
$\Lambda$'s per proton, the hypernuclei-to-nuclei ratio is still larger.
Hence, the $\Lambda$ is more likely to form a hypernucleus. There seems to
be a stronger correlation in the transport calculation than in the hydrodynamic
description. In fact the qualitative behavior of $R_H$ closely resembles the
behavior that is expected for $c_{BS}$, the baryon-strangeness correlation,
for a hadronic gas \cite{Koch:2005vg}. This observation leads to the conclusion that the
information on correlations of baryon number and strangeness is lost in the
thermal calculation because here $R_H$ essentially only depends on the
temperature. On the other hand, in the microscopic treatment the correlation
information survives and $R_H$ captures the trend of $c_{BS}$.
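For reference, the event-by-event estimator behind the definition of $c_{BS}$ above can be sketched as follows; the sample events in the usage note are illustrative placeholders, not model output.

```python
# Sketch of an event-by-event estimator for the baryon-strangeness
# correlation c_BS = -3 (<N_B N_S> - <N_B><N_S>) / (<N_S^2> - <N_S>^2).
# Input events (pairs of net baryon number and net strangeness per event)
# are illustrative placeholders for model or detector output.

def c_bs(events):
    """events: iterable of (N_B, N_S) pairs, one per event."""
    events = list(events)
    n = len(events)
    mean_b = sum(b for b, _ in events) / n
    mean_s = sum(s for _, s in events) / n
    cov_bs = sum(b * s for b, s in events) / n - mean_b * mean_s
    var_s = sum(s * s for _, s in events) / n - mean_s ** 2
    return -3.0 * cov_bs / var_s
```

As a sanity check, a sample in which every unit of strangeness is carried by a $\Lambda$-like baryon ($N_S=-N_B$ event by event) gives $c_{BS}=3$, while uncorrelated samples give $c_{BS}\approx 0$.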
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\textwidth]{fig8.eps}
\caption{Single ratios of $(^{3}_{\Lambda}H/^{3}He)$ (black solid line and circles) and $\Lambda/p$ (blue dashed line and squares) from the UrQMD hybrid model (lines) and DCM model (symbols).
\label{special}
}
\end{center}
\end{figure}
\section{Conclusion}
We have presented results on hyper-nuclei, anti-nuclei and di-baryon production
in heavy ion collisions over a wide beam energy range. To explore the
theoretical uncertainties we applied two distinct approaches: firstly, the
thermal production with the UrQMD-hydro hybrid model and secondly, the
coalescence calculation within the Dubna hadron cascade model. Concerning most
hyper-nuclei and di-baryons both approaches agree well in their predictions
which gives us confidence in the robustness and significance of the obtained
results. We find that both the non-equilibrium and thermal models may be
considered as appropriate approaches to describe strange cluster
production. In agreement with previous studies we demonstrate that the most
promising energy range to produce hyper-clusters will be provided by the FAIR
and NICA facilities, $E_{lab}\approx 10$ - $20 A$ GeV. Anti-matter clusters
heavier than $\bar{t}$ are only feasible at RHIC and LHC energies.\\
The most interesting result of our study is the apparent difference in the
double ratio $R_H$ when we compare our thermal results with the coalescence.
This difference indicates that the information on correlations of baryon number
and strangeness is visible in the microscopic coalescence approach, while it is
washed out in the thermal picture. This could open the opportunity to
directly measure the strangeness-baryon correlation, which may be sensitive to
the onset of deconfinement. The present status of the experimental data
unfortunately does not allow for a comprehensive comparison with our model
calculations. We hope that this situation will improve in the upcoming RHIC
energy scan and FAIR experiments.
\section*{Acknowledgments}
This work has been supported by GSI and Hessian initiative for excellence (LOEWE)
through the Helmholtz International Center for FAIR (HIC for FAIR). J.~S. acknowledges a Feodor
Lynen fellowship of the Alexander von Humboldt foundation. This work was supported by the Office of Nuclear Physics in the US
Department of Energy's Office of Science under Contract No. DE-AC02-05CH11231. I.M. acknowledges partial support from grant NS-215.2012.2 (Russia). The computational resources were provided by the LOEWE Frankfurt Center for Scientific Computing (LOEWE-CSC).
\section{Introduction}\label{sec: intro}
Many investigations of quantum chromodynamics (QCD) and similar strongly-interacting models concentrate on a direct extraction of physical observables present
in these theories, see e.\,g.~\cite{Aoki:2016frl, Eichmann:2016yit} and references therein.~However, during the last few decades a fair amount of attention was
also dedicated to the analysis of the elementary degrees of freedom of these mathematical frameworks, which can generically be labelled as the quark and gluon
fields.~While not being directly detectable in experiments, the quark, gluon and ghost propagators (wherein ghosts arise from gauge-fixing), as well as their
corresponding interaction vertices still attract considerable interest among researchers.~This is because of the supposed connection of these objects to the
phenomena of confinement \cite{Gribov:1977wm, Zwanziger:1993dh, Zwanziger:2001kw, Zwanziger:2003cf, Kugo:1979gm} and dynamical chiral symmetry breaking
\cite{Alkofer:2008tt}, as well as the pivotal role they play in functional studies of bound states \cite{Eichmann:2016yit, Alkofer:2008tt, Vujinovic:2014ioa,
Sanchis-Alepuz:2015qra, Williams:2015cvx, Binosi:2016rxz, Eichmann:2016hgl, Sanchis-Alepuz:2017jjd, Rodriguez-Quintero:2018wma, Vujinovic:2018nko, Eichmann:2019tjk}.
The non-perturbative calculations of the elementary correlators of strongly-interacting theories roughly consist of two complementary methods.~These are the
various continuum approaches \cite{Alkofer:2008tt, Vujinovic:2014ioa, Sanchis-Alepuz:2015qra, Williams:2015cvx, Binosi:2016rxz, Eichmann:2016hgl, Rodriguez-Quintero:2018wma,
Vujinovic:2018nko, Schleifenbaum:2004id, Pawlowski:2005xe, Kellermann:2008iw, Alkofer:2008dt, Huber:2012zj, Huber:2012kd, Aguilar:2013xqa, Aguilar:2013vaa, Pelaez:2013cpa,
Blum:2014gna, Eichmann:2014xya, Cyrol:2014kca, Binosi:2014kka, Mitter:2014wpa, Aguilar:2014lha, Pelaez:2015tba, Binosi:2016wcx, Cyrol:2016tym, Aguilar:2016lbe, Cyrol:2017ewj,
Huber:2017txg, Oliveira:2018fkj, Corell:2018yil, Aguilar:2018csq}, as well as those based on lattice Monte Carlo simulations \cite{Parrinello:1994wd, Alles:1996ka, Boucaud:1998bq,
Skullerud:2002ge, Skullerud:2003qu, Cucchieri:2004sq, Cucchieri:2006tf, Ilgenfritz:2006he, Maas:2007uv, Cucchieri:2008qm, Maas:2011se, Sternbeck:2012mf,Boucaud:2013jwa,
Maas:2013aia, Duarte:2016jhj, Athenodorou:2016oyh, Sternbeck:2016ltn, Boucaud:2017obn, Sternbeck:2017ntv, Vujinovic:2018nqc, Maas:2018ska, Maas:2019tnm}.~While both frameworks
have their specific advantages and disadvantages, the lattice method features a particular issue which we think has not been adequately addressed so far.~Namely, for continuum
theories there exist unique and well-known ``recipes'' for obtaining complete tensor descriptions for virtually any correlator of interest, see e.\,g.~\cite{Hassani:1999sny}.~On
the lattice, such recipes are sorely lacking, and in some situations this leads to systematic errors which are not easy to quantify.~For instance, many lattice investigations of
the quark-gluon and three-gluon vertex employ the corresponding tensor bases from the continuum theory \cite{Parrinello:1994wd, Alles:1996ka, Boucaud:1998bq, Skullerud:2002ge,
Skullerud:2003qu, Boucaud:2013jwa, Duarte:2016jhj, Athenodorou:2016oyh, Boucaud:2017obn,Sternbeck:2017ntv}, and it is hard to say what is the systematic uncertainty associated
with such an approximation.
Here we attempt to resolve some of these matters by invoking symmetry-based arguments.~On symmetric (hyper)cubic lattices in $d$ dimensions, the rotational symmetry of a Euclidean
continuum theory reduces to the hypercubic group $H(d\,)$, whose elements can all be generated from parity transformations and $\pi/2$ rotations around the coordinate axes \cite{
Morty:1962prc}.~This affects the basis decompositions of lattice correlators, since the hypercubic operators induce far weaker constraints on the allowed tensor structures, compared
to the full continuum symmetry group.~A detailed account on how exactly this modifies the tensor representations of certain lattice vertices will be provided later.~To the best of
our knowledge, no systematic investigation of this kind has been performed before, although the issue was approached from different sides in the past, in the context of lattice
perturbation theory \cite{Kawai:1980ja}, improved gauge actions \cite{Weisz:1983bn} and lattice investigations of the anomalous magnetic moment of the muon \cite{Aubin:2015rzx}.
The biggest asset of our approach to vertex tensor bases is its generality.~Since we only employ the constraints coming from hypercubic symmetry, the applicability
of our method does not depend on a particular choice of lattice action or gauge-fixing method, as long as an equal treatment of all the coordinates is maintained in
all aspects of the calculation.~This means that any computations done with our bases can be taken from e.\,g.~the case of the Wilson gauge action \cite{Wilson:1974sk}
to the $\mathcal{O}(a^2)$ tree-level improved one \cite{Weisz:1983bn, Weisz:1982zw, Symanzik:1983gh}, without any alterations in parts of the code which deal only with
correlator form factors.~Apart from this, we will argue that our framework also allows one to (more or less) directly quantify the rotational symmetry breaking effects in
vertex dressing functions, perform tests of continuum extrapolation procedures, and identify special kinematic configurations where the lattice-modified bases get reduced
to their continuum form.~However, in the course of this work it will become clear that all of this comes at a cost:~when deriving the tensor structures of the lattice theory
using symmetry arguments alone, one may easily end up with so many elements that actual calculations with the full basis become very challenging, if not downright impossible.
\!Thus, for any particular problem at hand one has to judge if the potential gains provided by our framework outweigh the considerable rise in algebraic difficulty.
Our paper is organised as follows. In section \ref{sec: basics} we discuss the basic ideas behind our method, and show how scalar and vector quantities get modified on the
lattice, as compared to their continuum counterparts.~In section \ref{sec: vertex_n_propag} we use the same principles to derive the most general tensor decompositions (up to
finite volume effects) for the lattice ghost-gluon vertex and gluon propagator.~In section \ref{sec: numerics} we apply the obtained tensor bases, in their lowest-order
(non-continuum) versions to the gluon and ghost-gluon correlators, as evaluated in numerical Monte Carlo simulations in Landau gauge.~We point to some interesting insights
which come out of these applications, including the fact that the gluon propagator approaches its continuum form at low energies, at a rate which is independent of the
parameters of the numerical lattice implementation.~Based on this observation we also comment on how the lattice studies of the anomalous magnetic moment of the muon may
(not) get affected by discretisation artifacts.~We conclude in section \ref{sec: conclude}.~Most of the purely technical discussions have been relegated to the four
detailed appendices.
\section{Continuum and hypercubic tensors:~basic ideas}\label{sec: basics}
\subsection{Tensor bases in the continuum theory}\label{sec: tens_cont}
We begin the discussion of our method by briefly reviewing some basic facts about tensor descriptions of certain vertex functions in the
continuum.~Most of the points we will cover here are well-known from elementary textbooks, and some of them could even be considered rather
trivial.~Nonetheless, we think it is important to go through these ``trivialities'', in order to fully understand how the arguments change when
going from continuum to discretised spacetimes.~We emphasise that throughout this paper, we shall \textit{not} be using the Einstein summation
convention, since we will frequently encounter non-covariant objects and expressions.~Thus, in relations which feature summations over indices,
the sum symbol will always be explicitly written out.
A continuous, $d$-dimensional Euclidean space is often said to possess an $O(d\,)$ symmetry, meaning that the distances, or scalar products of
vectors in the space, are all preserved under an action of arbitrary orthogonal $d \times d$ matrices.~For orthogonal operators, it holds that the
operation of matrix transposition is equivalent to inversion, or explicitly
\begin{align}\label{eqn: ortho_define}
O_{\mu\nu} = O^{-1}_{\nu\mu} \, , \qquad \quad \mu,\, \nu = 1 \ldots d \, ,
\end{align}
\noindent
where indices $\mu$ and $\nu$ stand for operator components.~The fact that the matrices $O$ represent symmetry transformations of a continuous space means
that all of the quantities in the space (i.\,e.~scalars, vectors, second-rank tensors, etc.) have to be defined with respect to the orthogonal group.~As
an example, take a set of numbers which constitute the components of a vector $v$.~This means that, under arbitrary orthogonal transformations, these
numbers/components satisfy a particular transformation law, namely ($v_\mu$ denotes the $\mu$-th component of $v$):
\begin{align}\label{eqn: cont_trans_law}
v'_\mu = \sum_{\nu = 1}^d O_{\mu\nu} \, v_\nu \, , \qquad \quad \mu = 1 \ldots d .
\end{align}
In the above relation, prime ($'$) signifies the vector components in the transformed system.~Transformation laws like \eqref{eqn: cont_trans_law} put stringent
constraints on the possible momentum dependencies of tensor quantities of various rank.~Take as an example a tensor of rank zero i.\,e.~a scalar function
$S$ which depends on a single momentum variable $p$.~Being a scalar, or an invariant quantity, means that $S(p)$ does not change under arbitrary orthogonal
transformations of $p$.~In other words, for a general orthogonal matrix $O$, it holds that
\begin{align}\label{eqn: scalar_p_trans}
p'_\mu = \sum_{\nu = 1}^d O_{\mu\nu} \, p_\nu \, , \qquad \text{and} \qquad S'(p') = S(p) \, ,
\end{align}
\noindent
where index $\mu$ runs from 1 to $d$, the number of dimensions.~It is well known (see e.\,g.~\cite{Hassani:1999sny}) that the invariance of $S$ implies that it
can only depend on $p$ through the scalar product $p^2$, which is defined in $d$ dimensions as
\begin{align}\label{eqn: p2_cont}
p^2 = \sum_{\mu = 1}^d p^2_\mu = p_1^2 + p_2^2 + \ldots + p_d^2 \, .
\end{align}
Invariance of $p^2$ under arbitrary $d$-dimensional orthogonal transformations follows directly from the property \eqref{eqn: ortho_define}.~Going back to the
function $S$, one sees that the ``demand'' that it remain unchanged under general $O$ matrices leads to the conclusion that it can depend solely on the product
$p^2$.~Similar restrictions follow for tensors of arbitrary rank.~As an example, instead of a scalar quantity, one might be working with some vector $\Gamma$,
which is a function of momentum $p$.~Being a vector, $\Gamma$ has to obey a transformation law akin to \eqref{eqn: cont_trans_law}, meaning that
\begin{align}\label{eqn: gamma_p_trans}
p'_\mu = \sum_{\nu = 1}^d O_{\mu\nu} \, p_\nu \, , \qquad \text{and} \qquad \Gamma'_\mu(p') = \sum_{\nu = 1}^d O_{\mu\nu} \, \Gamma_\nu(p) \, .
\end{align}
Now, even if one had no prior knowledge on the way that $\Gamma$ depends on $p$, a careful consideration of \eqref{eqn: gamma_p_trans} would quickly lead one to
the deduction that $\Gamma$ has to have the form
\begin{align}
\Gamma_\mu(p) = A(p) \, p_\mu \, ,
\end{align}
\noindent
with dressing function (or form factor) $A(p)$ being an orthogonal invariant, i.\,e.~depending on $p^2$ alone.~In words, the vector $\Gamma$ has to be strictly
linear in \textit{components} of $p$, since any non-linear terms with an open vector index (e.\,g.~$p^2_\mu$) would not obey \eqref{eqn: gamma_p_trans}.~For
instance, a structure quadratic in $p$ components, the aforementioned object $p^2_\mu$, would transform under general $O$ matrices as
\begin{align}
p_\mu^2 \rightarrow p'^{\, 2}_\mu = \sum_{\nu = 1}^d \sum_{\rho = 1}^d O_{\mu\nu} O_{\mu\rho} \, p_\nu \, p_\rho \, ,
\end{align}
\noindent
which is clearly incompatible with the vector-like transformation law for $\Gamma$ itself, see \eqref{eqn: gamma_p_trans}.
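As a quick numerical illustration (a sanity check of our own, not part of the derivation), one can verify with a short script that the scalar product $p^2$ is preserved by a generic planar rotation, while the open-index object $p^2_\mu$ fails the vector transformation law \eqref{eqn: gamma_p_trans}; the angle and momentum values below are arbitrary.

```python
import math

# A generic 2-d rotation, with an angle that is not a multiple of pi/2.
theta = 0.73
O = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

p = [1.3, -0.4]
p_prime = [sum(O[i][j] * p[j] for j in range(2)) for i in range(2)]

# The scalar product p^2 is invariant under O ...
assert math.isclose(sum(x * x for x in p_prime), sum(x * x for x in p))

# ... but p_mu^2 is not a vector: squaring the rotated components differs
# from applying O to the vector of squared components.
rotated_then_squared = [x ** 2 for x in p_prime]
squared_then_rotated = [sum(O[i][j] * p[j] ** 2 for j in range(2))
                        for i in range(2)]
assert rotated_then_squared != squared_then_rotated
print("p^2 invariant; p_mu^2 violates the vector law")
```

For hypercubic transformations, by contrast, the same kind of check succeeds for certain non-linear structures, as discussed below.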
We conclude this section with comments on how some of the above observations change when going from continuum spaces to discretised ones.~As stated in the
Introduction, on standard cubic lattices the orthogonal group $O(d\,)$ gets broken down to its hypercubic\footnote{In the context of our work, the term
``hypercubic'' is not really correct since it implies a four-dimensional setting, whereas most of our arguments will not depend on the number of dimensions.
\!Nonetheless, as we do not wish to keep switching between different group names for different dimensions, we will continue this mild abuse of terminology
throughout the rest of this paper.} subgroup $H(d\,)$, which is comprised of $d$-dimensional $\pi/2$ rotations and parity transformations.~We shall see soon
that, when represented as matrices, the hypercubic symmetry operations have a somewhat special structure, which makes the equations like \eqref{eqn: scalar_p_trans}
and \eqref{eqn: gamma_p_trans} far less restrictive than in the case of general orthogonal operators.~For scalar functions depending on momentum $p$, it is by now
well known that the hypercubic group has more invariants than just $p^2$ \cite{Weyl:1939prc}, a fact to which we shall return later.~We will show in this paper
that similar considerations apply for tensors of higher rank as well:~taking again vectors as an example, it will turn out that there are open-index objects
which are non-linear in momentum components (i.\,e.~$p^{\,n}_\mu$, with integer $n > 1$), which despite their non-linearity, still satisfy the adequate vector
transformation law \eqref{eqn: cont_trans_law} under hypercubic symmetry transformations.
\subsection{Hypercubic group as permutations and inversions of coordinates}\label{sec: hyper_perms}
As already mentioned, the group $H(d\,)$ consists of (powers of) $\pi/2$ rotations around the coordinate axes, and parity transformations (sometimes also called
inversions).~Here, we want to show that the hypercubic group can equally well be represented with permutations and inversions of coordinates, since a $\pi/2$ rotation
in an arbitrary plane can always be written as a composition of permutation and inversion transformations.~Demonstrating the aforementioned equivalence is important
since we wish to adopt the ``permutations + inversions'' viewpoint in this paper, because it makes much of the forthcoming analysis easier.
We start with the simplest possible example, that of ``hypercubic'' symmetry transformations in two dimensions.~First, we will need a matrix representation of
a clockwise $\pi/2$ rotation for $d=2$.~If we take some vector $p = (p_1, p_2)$, and denote its clockwise $\pi/2$-rotated version with $p' = (p'_1, p'_2)$, the
operation of rotation can be written down as
\begin{align}
p'_\mu = \sum_{\nu = 1}^2 L^{\pi/2}_{\mu\nu} \, p_\nu \, .
\end{align}
The explicit form of the matrix $L^{\pi/2}$ can be easily deduced with a bit of visual help, shown in Figure \ref{fig: rotation}.~From the Figure it should be
relatively clear that the primed components $p'_\mu$ are related to the un-primed ones $p_\mu$ via
\begin{align}
p'_1 = p_2 \, , \quad \text{and} \quad p'_2 = - p_1 \, ,
\end{align}
\noindent
from which it immediately follows that $L^{\pi/2}$ has a matrix representation
\begin{align}\label{eqn: pi_half_matrix}
L^{\pi/2} = \left[ \begin{array}{cc} 0 & \;\; 1 \\ -1 & \;\; 0 \end{array} \right] \, .
\end{align}
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.64\textwidth]{figures/rotate.pdf}\\
\caption{\textit{Left:}~A graphical representation of a clockwise $\pi/2$ rotation of a two-dimensional vector $p$.~$(p_1, p_2)$ and $(p'_1, p'_2)$ denote, respectively,
components of the vector before and after the rotation.}
\label{fig: rotation}
\end{center}
\end{figure}
Note that it makes no essential difference if one were to look at counter-clockwise rotations, instead of clockwise ones:~the only difference between the two kinds of
matrices is an overall minus sign, which is unimportant for the upcoming arguments.~Also note that the fourth power of the matrix $L^{\pi/2}$ is the identity element:
\begin{align}\label{eqn: roto_2pi}
\left(L^{\pi/2}\right)^4 \, = \, L^{2\pi} \, = \, + \left[ \begin{array}{cc} 1 & \;\; 0 \\ 0 & \;\; 1 \end{array} \right] \, .
\end{align}
The above is to be expected, since a rotation by $2\pi$ is equivalent to making no change to the system at all\,\footnote{A $\pi/2$ rotation in a plane generates a
group isomorphic to the cyclic group $Z_4$, i.\,e.~an Abelian group with four elements (say, $a, b, c$ and $e$, with $e$ the identity element), and the multiplication
law(s) $a^2 = b , \; ab = a^3 = c$, and $a^4 = b^2 = e$.~In general, planar rotations by $2\pi/n$ degrees, with integer $n$, generate groups isomorphic to cyclic groups
$Z_n$ \cite{Morty:1962prc}.}.~Besides the operator $L^{\pi/2}$ and its powers (modulo 4), there are two remaining elementary operations which leave a two-dimensional
hypercube (i.\,e.~a square) intact, and these are the parity transformations.~Their matrix representations are \cite{Morty:1962prc}
\begin{align}\label{eqn: 2d_inver_def}
\rho_1 = \left[ \begin{array}{cc} -1 & \;\; 0 \\ 0 & \;\; 1 \end{array} \right] \, , \qquad
\rho_2 = \left[ \begin{array}{cc} 1 & \; 0 \\ 0 & \; -1 \end{array} \right] \, .
\end{align}
Now, a key observation to be made here is that the matrix $L^{\pi/2}$ of \eqref{eqn: pi_half_matrix} can itself be written as a composition of a parity transformation
and a permutation operator $\Pi^{12}$, or explicitly
\begin{align}\label{eqn: 2d_rot_example}
L^{\pi/2} = \Pi^{12} \cdot \rho_1 = \left[ \begin{array}{cc} 0 & \;\; 1 \\ 1 & \;\; 0 \end{array} \right] \cdot \left[ \begin{array}{cc} -1 & \;\; 0 \\
0 & \;\; 1 \end{array} \right] \, .
\end{align}
$L^{\pi/2}$ can also be obtained as $L^{\pi/2} = \rho_2 \cdot \Pi^{12}$.~In these expressions, $\Pi^{12}$ stands for a permutation which exchanges the first and
second momentum components, i.\,e.~
\begin{align}\label{eqn: 2d_perm_def}
\Pi^{12} \cdot p = \left[ \begin{array}{cc} 0 & \;\; 1 \\ 1 & \;\; 0 \end{array} \right] \cdot \left( \begin{array}{c} p_1 \\ p_2 \end{array} \! \right) =
\left( \begin{array}{c} p_2 \\ p_1 \end{array} \! \right) \, .
\end{align}
Since the matrix $L^{\pi/2}$ and its powers (modulo 4) can all be written as compositions of elementary transformations $\rho_1$, $\rho_2$ and $\Pi^{12}$, one concludes
that these last three operators, and their combinations, exhaust all of the possible symmetry operations of a two-dimensional hypercube.~The argument can be straightforwardly
extended to an arbitrary number of dimensions, by analysing each of the available rotation planes separately.~In three dimensions, for instance, there are three spatial
planes, which one might denote as (12), (13) and (23).~Taking the clockwise $\pi/2$ rotation in the plane (23) as an example, one has (we will leave out the `$\pi/2$'
designation in the following, since it should be clear that we are not working with any other angles of rotation):
\begin{align}\label{eqn: 3d_rot_example}
L^{23} = \left[ \begin{array}{ccc} 1 & \; 0 & \; 0 \\ 0 & \; 0 & \; 1 \\ 0 & -1 & \; 0 \end{array} \right] = \, \Pi^{\, 23} \cdot \rho_2 = \left[ \begin{array}{ccc}
1 & \; 0 & \; 0 \\ 0 & \; 0 & \; 1 \\ 0 & 1 & \; 0 \end{array} \right] \cdot \left[ \begin{array}{ccc} 1 & \; 0 & \; 0 \\ 0 & -1 & \; 0 \\ 0 & \; 0 & \; 1 \end{array}
\right] \, .
\end{align}
Thus, the rotation $L^{23}$ can be represented as a product of two elementary operations, where $\Pi^{\,23}$ permutes the second and third momentum components,
and $\rho_2$ turns momentum $p = (p_1, p_2, p_3)$ into $p' = (p_1, - p_2, p_3)$.~In the same vein, $\pi/2$ rotations in the other two planes follow as adequate
compositions of exchange operators $\Pi^{12}$ and $\Pi^{13}$, with one of the elementary inversions $\rho_k \, (k = 1\ldots 3)$.~It should be relatively clear
that by combining various powers (modulo 4) of rotation matrices $L^{12}$, $L^{13}$ and $L^{23}$, one can generate any symmetry rotation of a three-dimensional
hypercube\,\footnote{In all of the above, we have ignored the symmetry rotations around the (hyper)cube diagonals.~The fact that these too can be decomposed
into permutations and parity transformations (or equivalently, into inversions and $\pi/2$ rotations about the coordinate axes) is demonstrated in some detail in
Appendix \ref{sec: diagonals}.}.~And since these three rotation operators can themselves be decomposed into simpler permutation and inversion transformations, it
follows that combinations of coordinate permutations and inversions exhaust all possible operations which leave a hypercube unchanged, in $d = 3$.~A generalisation
of these arguments to higher dimensions is straightforward.
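These decompositions are easy to verify by explicit matrix multiplication.~The following short script (a plain sanity check, not taken from any lattice code) confirms \eqref{eqn: 2d_rot_example}, the alternative ordering $L^{\pi/2} = \rho_2 \cdot \Pi^{12}$, the three-dimensional example \eqref{eqn: 3d_rot_example}, and the $2\pi$ periodicity \eqref{eqn: roto_2pi}:

```python
def matmul(A, B):
    """Multiply two square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# d = 2: clockwise pi/2 rotation, permutation and parity operators.
L = [[0, 1], [-1, 0]]            # L^{pi/2}
Pi_12 = [[0, 1], [1, 0]]         # swaps the two components
rho_1 = [[-1, 0], [0, 1]]        # flips the sign of the first component
rho_2 = [[1, 0], [0, -1]]        # flips the sign of the second component

assert matmul(Pi_12, rho_1) == L           # L^{pi/2} = Pi^{12} . rho_1
assert matmul(rho_2, Pi_12) == L           # L^{pi/2} = rho_2 . Pi^{12}
L2 = matmul(L, L)
assert matmul(L2, L2) == [[1, 0], [0, 1]]  # (L^{pi/2})^4 = identity

# d = 3: pi/2 rotation in the (23) plane, L^{23} = Pi^{23} . rho_2.
L_23 = [[1, 0, 0], [0, 0, 1], [0, -1, 0]]
Pi_23 = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
rho_2_3d = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]
assert matmul(Pi_23, rho_2_3d) == L_23
print("all decompositions verified")
```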
We wish to conclude this section by briefly going back to the ``point of the whole exercise'', i.\,e.~to why it was important to show that combinations of parity
transformations and permutations cover all of the elements of the symmetry group $H(d\,)$.~Suppose that one wished to find general expressions for hypercubic vectors,
meaning the most general possible functions of given momenta, which transform as vectors under the hypercubic group.~From the above analysis, it should be clear that
it is enough to identify those functions which transform as vectors under \textit{both} permutations and inversions:~such a quantity will surely constitute a vector under
an arbitrary hypercubic symmetry transformation.~In these matters, it will turn out to be very useful to analyse the two kinds of alterations (parity and permutation)
independently from each other, as they obviously have different effects on momentum components.
\subsection{Hypercubic scalars}
Assume that one is working in a theory with a single momentum $p$, in an arbitrary number of dimensions $d$.~We already mentioned that in the continuum, the only available
scalar quantity in this scenario would be the product $p^2$ defined in \eqref{eqn: p2_cont}.~The said product is invariant under general orthogonal operators.~Now, on a
$d$-dimensional hypercube, the very definition of a scalar gets generalised:~instead of being a function which is left unchanged by arbitrary orthogonal matrices, it ``only''
needs to be unvarying under the effects of parity transformations and permutations, in accordance with the analysis of the previous section.~For a $d$-dimensional vector $p$
with components $p = (p_1, p_2, \ldots p_d)$, all of the following functions would constitute hypercubic invariants:
\begin{align}\label{eqn: hyper_scalars}
p^{\,[2n]} = \sum_{\mu = 1}^d p^{\,2n}_\mu = p_1^{\,2n} + p_2^{\,2n} + \ldots + p_d^{\,2n} \, , \qquad n \in N ,
\end{align}
\noindent
with $N$ the set of positive integers.~The bracketed superscript notation (i.\,e.~$[2n]$) was taken from \cite{Becirevic:1999uc, deSoto:2007ht}.~Note that $p^{
[2]} = p^2$ is an invariant of the continuum theory.~Throughout the rest of this paper, we will use the notation $p^2$ instead of $p^{[2]}$, but only for this
particular scalar product:~all the other hypercubic invariants of a single momentum $p$ will follow the convention \eqref{eqn: hyper_scalars}.~The fact that the
quantities \eqref{eqn: hyper_scalars} do not change under arbitrary permutations and inversions of momentum components should be relatively self-evident.~What is
perhaps not so obvious, is that not all functions of the form \eqref{eqn: hyper_scalars} can be algebraically independent from each other.~In \cite{Weyl:1939prc}
it is shown that the number of independent hypercubic scalars, with a single momentum $p$ at one's disposal, does not exceed the number of dimensions of the
theory under consideration.~This is best illustrated with an example, for which we (again) turn to the simplest case of two dimensions.~For $d = 2$, and momentum
$p = (p_1,p_2)$, there are only two independent invariants of the form \eqref{eqn: hyper_scalars}, which one might choose to be (say) $p^2$ and $p^{\,[4]}$.~All the
other hypercubic scalars follow as polynomial functions of these two, for example
\begin{align}
&p^{\,[6]} = p_1^6 + p_2^6 = \frac{1}{2} \left( 3 \, p^2\cdot p^{\,[4]} - \big(p^2\big)^3\right) \, , \nonumber \\
&p^{\,[8]} = p_1^8 + p_2^8 = \frac{1}{6} \left( 3 \, \big(p^{\,[4]}\big)^2 - 3 \, \big(p^2\big)^4 + 6 \, \big(p^2\big)^2 \cdot p^{\,[4]} \right) \, ,
\end{align}
\noindent
and similarly for invariants with higher mass dimensions.~In the same manner, a three-dimensional theory would contain three independent hypercubic scalars (say,
$p^2, p^{\,[4]}$ and $p^{\,[6]}$) and so on.~An elegant proof, for an arbitrary dimension number, can be found in \cite{Weyl:1939prc}, while \cite{deSoto:2007ht}
treats a more specific case of four dimensions.~Note that, for a $d$-dimensional theory, one can choose any $d$ functions of the form \eqref{eqn: hyper_scalars}, and
use them as a ``basis'' for calculations:~the symmetry itself does not dictate which invariants should be chosen.~We shall see soon that similar ambiguities arise
for tensor bases of lattice vertex functions.~To at least partially address the ambiguity, we will always choose the bases according to the ascending order of mass
dimension, meaning that the preference will be given to elements which feature the smallest powers of momentum components.
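The polynomial relations above, and the invariance of the quantities \eqref{eqn: hyper_scalars} under permutations and sign flips of momentum components, can be confirmed with a few lines of exact-arithmetic code (a sketch for the $d=2$ case, with an arbitrarily chosen rational momentum):

```python
from itertools import permutations
from fractions import Fraction

def pn(p, n):
    """Hypercubic invariant p^[n] = sum_mu p_mu^n."""
    return sum(x ** n for x in p)

# Exact rationals, so the polynomial identities hold identically.
p = (Fraction(3, 2), Fraction(-5, 7))   # arbitrary d = 2 momentum

p2, p4 = pn(p, 2), pn(p, 4)
# p^[6] and p^[8] as polynomials in p^2 and p^[4]:
assert pn(p, 6) == Fraction(1, 2) * (3 * p2 * p4 - p2 ** 3)
assert pn(p, 8) == Fraction(1, 6) * (3 * p4 ** 2 - 3 * p2 ** 4
                                     + 6 * p2 ** 2 * p4)

# Every p^[2n] is unchanged by permutations and sign flips of components.
for q in permutations(p):
    for signs in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        r = tuple(s * x for s, x in zip(signs, q))
        assert all(pn(r, 2 * n) == pn(p, 2 * n) for n in (1, 2, 3, 4))
print("identities verified")
```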
To conclude, we want to mention some practical implications of the observations made in this section.~Suppose that one is studying some lattice vertex function,
which depends on momentum $p$, and that one has obtained the corresponding data for vertex form factors.~For reasons discussed above, the said form factors will not
be functions of the product $p^2$ alone, but will also depend on other hypercubic scalars like $p^{\,[4]},p^{\,[6]}$ etc.~In this context, the presence of additional
invariants is an unwanted lattice artifact, which one would generally like to mitigate as much as possible.~To this end, a powerful computational tool has been
developed, the so-called hypercubic extrapolation, where one attempts to extrapolate the available lattice data towards the limit where some of the ``extra'' lattice
invariants $(\text{e.\,g.~}p^{\,[4]})$ vanish.~For some examples on the use of this method, see e.\,g.~\cite{Becirevic:1999uc, deSoto:2007ht, Becirevic:1999hj,
Blossier:2014kta, Boucaud:2018xup} and references therein.
\subsection{Hypercubic vectors}\label{sec: hyper_vector}
As was argued at the end of section \ref{sec: hyper_perms}, the task of finding general expressions for hypercubic vectors amounts to finding the functions which
transform as vectors under both permutations and inversions of momentum components.~We shall split this task into two parts, wherein we analyse the two kinds of
tranformations separately, since they have different effects on vector components.
We shall start with permutations.~We consider a situation with a single momentum variable $p$, in an arbitrary number of dimensions $d$.~Let $\Pi^{\sigma\tau}$
denote a permutation which exchanges the $\sigma$-th and $\tau$-th momentum components, where each index can run from 1 to $d$, and $\sigma = \tau$ corresponds
to the identity matrix.~The operators which swap only two elements at a time are sometimes called transpositions, and the fact that we consider only such matrices
does not diminish the generality of our upcoming results.~This is because an arbitrary permutation can always be broken down into a product of transpositions, in
infinitely many different ways \cite{Hassani:1999sny}:~thus a quantity which transforms as a vector under arbitrary transpositions will also constitute a vector
under any one permutation.~Now, an operator $\Pi^{\sigma\tau}$ is obtained from the $d$-dimensional identity matrix $\mathbb{1}$ by swapping
its $\sigma$-th and $\tau$-th rows \cite{Hassani:1999sny}.~As an example, the matrix $\Pi^{14} = \Pi^{41}$ in (say) four dimensions follows as
\begin{align}
\mathbb{1}_{d=4} = \left[ \begin{array}{cccc} 1 & \; 0 & \; 0 & \; 0 \\ 0 & \; 1 & \; 0 & \; 0 \\ 0 & 0 & \; 1 & \; 0 \\ 0 & \; 0 & \; 0 & \; 1 \end{array} \right]
\overset{\text{1st row $\leftrightarrow$ 4th row}}{\longrightarrow} \left[ \begin{array}{cccc} 0 & \; 0 & \; 0 & \; 1 \\ 0 & \; 1 & \; 0 & \; 0 \\ 0 & 0 & \; 1 & \; 0 \\
1 & \; 0 & \; 0 & \; 0 \end{array} \right] = \Pi^{14}
\, .
\end{align}
It is straightforward to check that the operator $\Pi^{14}$ permutes the first and fourth components of a four-dimensional vector $p$.~The above construction
principle implies that, in terms of matrix components, a transposition $\Pi^{\sigma\tau}$ can be written as
\begin{align}\label{eqn: transpos_one}
\Pi^{\sigma\tau}_{\mu\nu} \, = \, \delta_{\mu\nu} \, , \qquad \text{if} \quad \mu \neq \sigma, \, \tau \, ,
\end{align}
\noindent
whereas for the $\sigma$-th and $\tau$-th rows of $\Pi^{\sigma\tau}$ it holds that
\begin{align}\label{eqn: transpos_two}
\Pi^{\sigma\tau}_{\sigma\tau} \, = \, 1 \, = \, \Pi^{\sigma\tau}_{\tau\sigma} \, ,
\end{align}
\noindent
with all the other elements in the aforementioned rows being zero.~With the help of the above component-wise representation for $\Pi^{\sigma\tau}$, it is
easy to see how an arbitrary $d$-dimensional vector $p$ changes under transpositions.~By plugging in the equations \eqref{eqn: transpos_one} and \eqref{eqn:
transpos_two} into the vector-like transformation law
\begin{align}\label{eqn: vector_trans}
p_\mu \rightarrow p'_\mu = \sum_{\nu = 1}^d \Pi^{\sigma\tau}_{\mu\nu} \, p_\nu \, ,
\end{align}
\noindent
one notes that, regardless of the value of the index $\mu$, the above sum always collapses, i.\,e.~there is always only a single momentum component that
survives the summation.~As an example, for $\mu = \sigma$, one has
\begin{align}\label{eqn: perm_example}
p'_\sigma = \sum_{\nu = 1}^d \Pi^{\sigma\tau}_{\sigma\nu} \, p_\nu \, = \Pi^{\sigma\tau}_{\sigma\tau} \, p_\tau = p_\tau \, ,
\end{align}
\noindent
wherein we used the fact that, in the $\sigma$-th row of $\Pi^{\sigma\tau}$, only the element $\Pi^{\sigma\tau}_{\sigma\tau} = 1$ is non-vanishing.~The full
change of vector $p$ under this transposition is
\begin{align}\label{eqn: perm_vect}
&p'_\mu = \, p_\mu \, , \qquad \text{if} \quad \mu \neq \sigma, \, \tau \, , \nonumber \\
&p'_\sigma = p_\tau \, , \qquad \: p'_\tau = p_\sigma \, .
\end{align}
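As a brief numerical aside (an illustrative sketch, not part of the derivation; the helper `transposition` is our own and uses 0-based indices, unlike the 1-based notation of the text), the row-swap construction and the rule \eqref{eqn: perm_vect} can be checked directly:

```python
import numpy as np

def transposition(d, s, t):
    """Transposition matrix Pi^{st}: the d-dimensional identity
    with rows s and t interchanged (0-based indices)."""
    P = np.eye(d)
    P[[s, t]] = P[[t, s]]   # swap the two rows
    return P

# Pi^{14} in four dimensions, cf. the explicit matrix above
Pi14 = transposition(4, 0, 3)

p = np.array([1.0, 2.0, 3.0, 4.0])
p_prime = Pi14 @ p
# the first and fourth components are exchanged, all others untouched
assert np.allclose(p_prime, [4.0, 2.0, 3.0, 1.0])
# a transposition is its own inverse
assert np.allclose(Pi14 @ Pi14, np.eye(4))
```

The self-inverse property is what makes transpositions the convenient building blocks of the full permutation group.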
With the above machinery set up, it takes little additional effort to show that vector-like transformations akin to \eqref{eqn: perm_vect} are obeyed
by arbitrary polynomial functions of $p$, with an open vector index $\mu$.~In other words, any expression of the form $p_\mu^{\,m}$, with $m \in N$, will
transform as a vector with respect to permutations of momentum components.~Under the action of $\Pi^{\sigma\tau}$, one gets
\begin{align}\label{eqn: multi_trans}
p_\mu^{\,m} \rightarrow \left(\,p'_\mu\,\right)^m \: = \: \overbracket{p'_\mu \cdot p'_\mu \cdot p'_\mu \cdot \ldots \cdot p'_\mu}^{m \: \text{terms}} \: = \:
\underbracket{\sum_{\rho = 1}^d \, \Pi^{\sigma\tau}_{\mu\rho} \, p_\rho \, \sum_{\lambda = 1}^d \, \Pi^{\sigma\tau}_{\mu\lambda} \, p_\lambda \ldots \sum_{
\xi = 1}^d \,\Pi^{\sigma\tau}_{\mu\xi} \, p_\xi}_{m \: \text{terms}} \, .
\end{align}
The above transformation rule involves multiple sums and looks quite different from the way in which the momentum $p$ itself changes.~Nonetheless, there is no
actual difference between \eqref{eqn: vector_trans} and \eqref{eqn: multi_trans}.~To show this, we again concentrate on the case $\mu = \sigma$, for which it holds
\begin{align}\label{eqn: fin_example}
\left(\,p'_\sigma\,\right)^m \: = \: \underbracket{\sum_{\rho = 1}^d \, \Pi^{\sigma\tau}_{\sigma\rho} \, p_\rho \, \sum_{\lambda = 1}^d \, \Pi^{\sigma\tau}_{\sigma\lambda}
\, p_\lambda \ldots \sum_{\xi = 1}^d \,\Pi^{\sigma\tau}_{\sigma\xi} \, p_\xi}_{m \: \text{terms}} \: = \: \overbrace{\Pi^{\sigma\tau}_{\sigma\tau} \, p_\tau \, \Pi^{\sigma
\tau}_{\sigma\tau} \, p_\tau \ldots \Pi^{\sigma\tau}_{\sigma\tau} \, p_\tau}^{m \: \text{terms}} \: = \: \Pi^{\sigma\tau}_{\sigma\tau} \, p_\tau^{\,m} \: = \: \sum_{\nu =
1}^d \,\Pi^{\sigma\tau}_{\sigma\nu} \, p_\nu^{\,m} \,\, .
\end{align}
In obtaining the final result in \eqref{eqn: fin_example}, we again made use of the fact that $\Pi^{\sigma\tau}_{\sigma\tau} = 1$ is the only non-zero entry in the $\sigma$-th
row of the operator $\Pi^{\sigma\tau}$, as well as the fact that $\left(\Pi^{\sigma\tau}_{\sigma\tau}\right)^m = \Pi^{\sigma\tau}_{\sigma\tau}$.~The final form in \eqref{eqn:
fin_example} was written in a suggestive way, to make it clear that $p_\mu^{\,m}$ indeed behaves as a vector under arbitrary transpositions, for any integer value $m$ and
for a fixed index $\mu = \sigma$.~The full transformation rule for the functions $p_\mu^{\,m}$ is
\begin{align}
&\left(\,p'_\mu\,\right)^m = \, p^{\,m}_\mu \, , \qquad \text{if} \quad \mu \neq \sigma, \, \tau \, , \nonumber \\
&\left(\,p'_\sigma\,\right)^m = p^{\,m}_\tau \, , \qquad \: \left(\,p'_\tau\,\right)^m = p^{\,m}_\sigma \, ,
\end{align}
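The statement that component-wise powers commute with permutations can be confirmed with a short NumPy sketch (illustrative only; the particular matrix and momentum values are arbitrary):

```python
import numpy as np

d = 4
Pi = np.eye(d)
Pi[[0, 3]] = Pi[[3, 0]]          # Pi^{14}, in 0-based indexing

p = np.array([0.5, -1.0, 2.0, 3.0])

# for every power m: permuting first and then raising to the m-th power
# gives the same result as raising to the m-th power and then permuting
for m in range(1, 6):
    lhs = (Pi @ p) ** m          # (p'_mu)^m
    rhs = Pi @ (p ** m)          # permuted p_mu^m
    assert np.allclose(lhs, rhs)
```

This is the numerical content of equation \eqref{eqn: fin_example}: since a permutation matrix has a single unit entry per row, the multiple sums collapse to a single term.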
\noindent
and it matches the vector-like change of momentum $p$ itself, see equation \eqref{eqn: perm_vect}.~Thus, when it comes to finding general expressions for hypercubic
vectors, permutations alone offer very few restrictions, as any open-indexed polynomial in $p$ will change in a vector-like fashion under these kinds of operators.
The argument is not complete, however, as we also have to take parity transformations into account.~We will start with the functions $p^{\,m}_\mu$, where $m \in N$,
and see if we can impose some additional constraints on them, by invoking their vector-like nature under inversions.~As an example, let us take the $d$-dimensional
momentum $p = (p_1, p_2,\ldots,p_d)$ and perform a parity transformation on its $\eta$-th component, so that $p'_\eta = - p_\eta$.~Here the index $\eta$ can take on
any value between 1 and $d$.~Our prospective lattice vector $p^{\,m}_\mu$ should transform in exactly the same way as momentum $p$ itself, meaning that
\begin{align}
p^{\,m}_\eta \rightarrow p'^{\,m}_\eta \: = \: - p^{\,m}_\eta \, , \qquad \quad \eta = 1, \ldots, d \, ,
\end{align}
\noindent
with all the other components (with $\mu \neq \eta$) of $p^{\,m}_\mu$ remaining unchanged.~The above considerations lead to the conclusion that $m$ has to be an \textit{odd}
integer.~If $m$ were even, the polynomial functions $p^{\,m}_\eta$ would be completely indifferent to inversions of momentum components, while any combination of even
and odd factors (e.\,g.~$p^{\,2}_\mu + p^{\,3}_\mu$) would have no definite symmetry properties under parity changes.~To conclude, any polynomial expression of the form
\begin{align}\label{eqn: latt_vector}
t_\mu(p) = p_\mu^{\,2k+1} \, , \quad k \in N_0 \, ,
\end{align}
\noindent
will constitute a lattice vector, i.\,e.~it will transform as a vector under coordinate permutations and inversions.~In the above relation, $N_0$ stands for
the set of non-negative integers.~An immediate corollary is that any function which can be expanded in an odd Taylor series in $p_\mu$ would also comprise the
components of a lattice vector.~As an example of this, in lattice perturbation theory one often encounters expressions where the standard continuum momentum
$p_\mu$ is replaced with the following function \cite{Rothe:1992nt, Capitani:2002mp}
\begin{align}\label{eqn: def_hatp}
\hat{p}_\mu = 2 \sin\left(\frac{p_\mu}{2}\right) \, .
\end{align}
It is obvious that the quantity $\hat{p}$ does not change in a vector-like fashion under general orthogonal transformations of $p$, but it does transform as a
vector with respect to inversions and permutations of momentum components.~In other words, $\hat{p}$ is a lattice vector (as it arguably should be), and its
Taylor series expansion would result in a summation over infinitely many terms of the form \eqref{eqn: latt_vector}.
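The lattice-vector nature of $\hat{p}$ can be verified numerically; the sketch below (illustrative, with arbitrary momentum values) checks that $\hat{p}$ commutes with single-component inversions and with transpositions:

```python
import numpy as np

def phat(p):
    # the standard lattice momentum, \hat{p}_mu = 2 sin(p_mu / 2)
    return 2.0 * np.sin(p / 2.0)

p = np.array([0.3, 1.1, -0.7])

# inversion of the second momentum component (0-based index 1)
R = np.diag([1.0, -1.0, 1.0])
# sin is odd, so phat transforms exactly like p itself under parity
assert np.allclose(phat(R @ p), R @ phat(p))

# transposition of the first and third components
Pi = np.eye(3)
Pi[[0, 2]] = Pi[[2, 0]]
# phat acts component-wise, so it also commutes with permutations
assert np.allclose(phat(Pi @ p), Pi @ phat(p))
```

By contrast, $\hat{p}$ does not commute with a generic rotation matrix, which is the numerical counterpart of the statement that $\hat{p}$ is a lattice vector but not a continuum one.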
This finally brings us to a question of some practical relevance.~Given some lattice vector $\Gamma$, which is a function of momentum $p$, what is the suitable
tensor representation of $\Gamma(p)$?~Based on our preceding arguments, any object like \eqref{eqn: latt_vector} could be used as a tensor element when describing
$\Gamma(p)$, since they all behave as vectors with respect to lattice symmetry operations.~At first, this kind of ``infinite freedom'' of choice might seem rather
absurd, especially if one considers the fact that basis decompositions of continuum functions are unique.~However, such ambiguities are nothing new in the world
of lattice calculations, as it is well known that any field theory can be discretised in infinitely many different ways, all of which have the same continuum
limit.~Concerning the vertex tensor parametrisations, we shall partially resolve the ambiguity by adhering to the order of ascending mass dimension, meaning that
tensors with lower powers of $p$ will be preferred.~Then, a tensor description of a vector-like quantity $\Gamma$ would be
\begin{align}
\Gamma_\mu(p) = \sum_{k = 1}^d \mathcal{F}^k \, p_\mu^{\,2k - 1} \, ,
\end{align}
\noindent
with $\mathcal{F}^k$ a form factor of the $k$-th tensor element.~Note that the sum in the above relation does not include infinitely many terms, but rather
terminates at the $d$-th contribution, with $d$ the number of dimensions.~This is because, in a $d$-dimensional space, there can be no more than $d$ linearly
independent basis vectors.~In fact, the notion of dimension is often \textit{defined} as the number of linearly independent vectors needed to span the space
\cite{Hassani:1999sny, Morty:1962prc}.~For concreteness, let us assume a three-dimensional setting, so that a complete tensor description of a lattice vector
$\Gamma_\mu$ should be given by
\begin{align}\label{eqn: basis_primer}
\Gamma_\mu(p) = A(p) \, p_\mu + B(p) \, p_\mu^3 + C(p) \, p_\mu^5 \, .
\end{align}
Now, one might notice an apparent problem with the above arguments.~In $d$ dimensions, \textit{any} collection of $d$ linearly independent elements will constitute a
complete tensor representation, for vector-like quantities.~If basis completeness were the only relevant criterion, then a decomposition like \eqref{eqn: basis_primer}
should not be favoured, for $d=3$, over any other choices of three linearly independent objects with a vector index $\mu$.~This might even include the previously mentioned
structures of the kind $p_\mu^{\,m}$, with an even integer $m$.~However, one ought to remember that the vertex form factors [\,i.\,e.~the functions like $A(p), \, B(p)$
and $C(p)$ of \eqref{eqn: basis_primer}\,] should be hypercubic invariants, and this will not happen for arbitrary choices of tensor parametrisation.~To see why, take the
particular example of the basis \eqref{eqn: basis_primer} and suppose that one has obtained the corresponding projectors $P_\mu$.~Then, the dressing (say) $A(p)$ would follow
as
\begin{align}\label{eqn: a_project}
A(p) = P_\mu^{\,A} \cdot \Gamma_\mu = \sum_{\mu = 1}^d \, P_\mu^{\,A} \, \Gamma_\mu \, .
\end{align}
In order for the contraction \eqref{eqn: a_project} to be a hypercubic invariant, both the projector ($P_\mu^{\,A}$) and the vertex $(\Gamma_\mu)$ itself ought to transform
as hypercubic vectors.~Since the correlator $\Gamma_\mu$ is assumed to be a lattice vector from the outset, the symmetry (or lack thereof) of $A(p)$ is determined completely
by the projector $P_\mu^{\,A}$.~In turn, the transformation properties of $P_\mu^{\,A}$ follow directly from the choice of basis, since the projectors are always linear
combinations of basis elements themselves, see e.\,g.~Appendix \ref{sec: projectors}.~This also means that it takes only a single wrong (non-vector) basis structure
to ruin the symmetry properties for \textit{all} of the involved coefficient functions.~In the case of the parametrisation like \eqref{eqn: basis_primer}, one should feel
somewhat ``safe'' since all the tensor elements behave as hypercubic vectors.~These claims regarding the symmetry features of correlator form factors will be addressed
directly in our Monte Carlo simulations, where we shall compare the values of the dressing functions before and after averaging over all possible permutations and
inversions of momentum components:~we shall see that (within statistical uncertainties) all of the relevant dressings pass the test of hypercubic invariance.
Equipped with these basic facts on how scalar and vector functions get modified on the lattice, compared to their continuum counterparts, we may proceed towards
some practical applications of the knowledge we have acquired.~Namely, we wish to deduce the tensor representations of two concrete correlation functions of lattice
Yang--Mills theory, the ghost-gluon vertex and the gluon propagator.~We shall see if we can learn something interesting about these lattice correlators along the way.
\section{Tensor bases for lattice ghost-gluon vertex and gluon propagator}\label{sec: vertex_n_propag}
\subsection{Ghost-gluon vertex:~continuum basis}
The ghost-gluon vertex is the lowest-order correlation function which encodes the interaction of ghost and gluon fields.~It plays a pivotal role in many truncations of
functional equations of motion (see e.\,g.~\cite{Schleifenbaum:2004id, Kellermann:2008iw, Alkofer:2008dt, Huber:2012zj, Huber:2012kd, Aguilar:2013xqa, Aguilar:2013vaa,
Binosi:2014kka, Huber:2017txg, Alkofer:2004it}), due to its non-renormalisation in Landau gauge \cite{Taylor:1971ff}.~Here, we wish to see how the tensor description of
the function changes when going from continuum to discretised spacetimes, and what this can tell us about the relation between the lattice and continuum versions of the
correlator.
Let us start with the tensor basis in a continuum setting.~In this section, we will keep the discussion independent of the number of dimensions:~a definitive
value for $d$ will be chosen only when we start considering the lattice vertex.~The momenta pertaining to the ghost, antighost and the gluon leg of the function
will be denoted, respectively, with $p,\, q \, \text{and} \, k$.~Due to momentum conservation at the vertex, with $p + q + k = 0$, only two out of these three
momenta are linearly independent, and any two can be chosen for constructing the vertex tensor elements.~We opt to work in terms of vectors $q$ and $p$:~with this
choice, the continuum correlator can be represented as
\begin{align}\label{eqn: vert_cont}
\Gamma_\mu(q, p) = A(q,p) \, q_\mu + B(q,p) \, p_\mu \, .
\end{align}
The projectors for the above basis can be found straightforwardly, with standard methods of linear algebra.~Their construction is explained in Appendix \ref{sec: projectors},
and here we simply cite the final answer:
\begin{align}\label{eqn: ab_project}
P_\mu^{\,A} = \frac{-p^2 \, q_\mu + q\cdot p \, p_\mu}{(q\cdot p)^2 - q^2 \, p^2} \, , \qquad \qquad \: P_\mu^{\,B} = \frac{ q\cdot p \, q_\mu - \, q^2 \, p_\mu}{(q\cdot p)^2
- q^2 \, p^2} \, .
\end{align}
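The defining properties of the projectors \eqref{eqn: ab_project}, and the way they extract the dressings from \eqref{eqn: vert_cont}, can be checked with a short NumPy sketch (illustrative only; the kinematics and the dressing values $A$, $B$ are random/hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
q = rng.normal(size=3)
p = rng.normal(size=3)

qp, q2, p2 = q @ p, q @ q, p @ p
den = qp**2 - q2 * p2            # vanishes only for parallel momenta

# the projectors of eq. (ab_project)
PA = (-p2 * q + qp * p) / den
PB = ( qp * q - q2 * p) / den

# defining properties: PA . q = 1, PA . p = 0, and vice versa for PB
assert np.isclose(PA @ q, 1.0) and np.isclose(PA @ p, 0.0)
assert np.isclose(PB @ q, 0.0) and np.isclose(PB @ p, 1.0)

# contraction with the vertex recovers the dressing functions
A, B = 0.7, -1.3                 # hypothetical dressing values
Gamma = A * q + B * p
assert np.isclose(PA @ Gamma, A)
assert np.isclose(PB @ Gamma, B)
```

Note that `den` is (up to a sign) the Gram determinant of $q$ and $p$, which makes the breakdown for parallel momenta, discussed next, manifest.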
Note that both of the above functions are ill-defined for a kinematic configuration
\begin{align}\label{eqn: paral_momenta}
q_\mu = c\cdot p_\mu \, , \qquad c = \text{const.}
\end{align}
Namely, for the momentum setup of \eqref{eqn: paral_momenta}, the projectors in \eqref{eqn: ab_project} reduce to undefined expressions of the form ``$0/0$''.~For the
purposes of the discussion to follow, it is worthwhile to dwell on the origin of this problem.~Let us thus look at (say) the function $P_\mu^{\, A}$, and its two defining
equations:
\begin{align}
\sum_{\mu = 1}^d P_\mu^{\,A} \, q_\mu = 1 \, , \qquad \text{and} \qquad \sum_{\mu = 1}^d P_\mu^{\,A} \, p_\mu = 0 \, .
\end{align}
For the kinematic choice \eqref{eqn: paral_momenta}, the above set of equations becomes contradictory, as one gets
\begin{align}\label{eqn: contradict}
c \, \sum_{\mu = 1}^d P_\mu^{\,A} \, p_\mu = 1 \, , \qquad \text{and} \qquad \sum_{\mu = 1}^d P_\mu^{\,A} \, p_\mu = 0 \, .
\end{align}
It should be clear that no well-defined object $P_\mu^{\, A}$ can obey the constraints of \eqref{eqn: contradict}.~The same holds for $P_\mu^{\, B}$.~These issues are closely
related to the concept of linear (in)dependence of basis elements.~For the particular configuration \eqref{eqn: paral_momenta}, the vectors $q$ and $p$ are evidently not linearly
independent, since one of the momenta is proportional to the other one.~It is a rather general statement of linear algebra that no well-defined projectors can be constructed
for basis descriptions which feature linearly dependent elements, see e.\,g.~\cite{Hassani:1999sny} or Appendix \ref{sec: projectors}.
The solution for the above problems is simple, and it amounts to using a reduced basis, where needed.~For the kinematic choice of \eqref{eqn: paral_momenta}, any one of the
following descriptions would work
\begin{align}\label{eqn: cont_reduce}
\Gamma_\mu(q, p) = A(q,p) \, q_\mu \, , \qquad \text{or} \qquad \Gamma_\mu(q, p) = B(q,p) \, p_\mu \, .
\end{align}
Basis completeness of these reduced decompositions follows from the kinematics \eqref{eqn: paral_momenta} itself.~Namely, for parallel momenta $q$ and $p$, any one
of the vectors will contain the full information about the vertex, since the other element has no ``new information'' to add.~This constitutes a general rule,
concerning the tensor representations of vertex functions (both continuum and lattice ones):~if a given basis becomes redundant, for a particular kinematic choice,
one is allowed to ``throw away'' the basis elements, until a non-redundant description is reached.~Here, by a redundant decomposition we mean the one where some of
the basis structures can be expressed as linear combinations of other tensor elements.~In the next section, we will see that there exist special kinematic choices
on the lattice, where for reasons of linear dependence, the continuum tensor description of \eqref{eqn: vert_cont} determines the lattice correlator fully.
\subsection{Ghost-gluon vertex:~lattice-modified basis}\label{sec: ghost_latt}
In section \ref{sec: hyper_vector} we already discussed possible tensor representations for lattice vector-valued functions, which depend on a single momentum
variable.~For the ghost-gluon correlator, these arguments need to be generalised to a situation with two independent momenta, in order to capture the full
kinematic dependence of the vertex.~Such a generalisation is rather straightforward, and we shall not provide the details here.~We merely state without proof
that functions of two momenta (say, $q$ and $p$) which transform as vectors under permutations and parity transformations will necessarily have one of the
following two forms:
\begin{align}\label{eqn: taus_lattice_vertex}
&\tau^{1, \, rs}_\mu(q,p) = p_\mu^{\, 2r} q_\mu^{\, 2s + 1} \, , \nonumber \\
&\tau^{2, \, rs}_\mu(q,p) = q_\mu^{\, 2r} p_\mu^{\, 2s + 1} \, , \qquad \text{with} \qquad r, \, s \in N_0 \, .
\end{align}
Of course, any linear combinations of the above structures are also allowed.~At the risk of overstating the obvious, we highlight that for functions of multiple
momenta, the same symmetry transformation (permutation, inversion) always has to be applied to \textit{all} of the vectors involved.~Thus, if one wished to change
the sign on a (say) second momentum component, it has to be done to both vectors $q$ and $p$, so that (in three dimensions)
\begin{align}
q' = (q_1, - q_2, q_3) \, , \qquad p' = (p_1, - p_2, p_3) \, .
\end{align}
The above comes from the fact that the symmetry operations relate to the whole lattice, not just to individual momentum vectors.~For instance, if one wanted to rotate
the space by an angle of $\pi/2$ in a certain plane, there is no way to perform this transformation without affecting all of the lattice momenta equally.~Also, in the absence
of the aforementioned rule, scalar products like $q\cdot p$ would not be invariant under the supposed lattice symmetry operations.
If we now go back to the tensor structures of \eqref{eqn: taus_lattice_vertex}, we see that the lowest-order terms give us the continuum basis elements $q_\mu$ and
$p_\mu$, while the leading-order lattice-induced structures would be $q_\mu^{\,3}, \: p_\mu^{\,3}, \: q_\mu^{\,2} \, p_\mu$ and $p_\mu^{\,2} \,q_\mu$.~For concreteness,
let us say that we work in three spatial dimensions, since this is the case we will consider in our numerical simulations.~For $d=3$, any three linearly independent
vectors would suffice to describe the vertex fully, which means that we only need to add one more element to the continuum representation of \eqref{eqn: vert_cont}.~Any one of
the leading-order lattice modifications would fit equally well, from the symmetry perspective, and we simply choose the additional vector to be $q^{\,3}_\mu$.~This brings
us to the following basis for the lattice correlator
\begin{align}\label{eqn: vert_latt}
\Gamma_\mu(q, p) = E(q,p) \, q_\mu + F(q,p) \, p_\mu \, + G(q,p) \, q^{\,3}_\mu \, .
\end{align}
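In lieu of the lengthy closed-form projectors for \eqref{eqn: vert_latt}, one can obtain them numerically; the sketch below (illustrative only, with random kinematics and hypothetical dressing values) inverts the matrix whose columns are the basis vectors, so that each row satisfies the projector conditions $P_i \cdot \tau_j = \delta_{ij}$:

```python
import numpy as np

rng = np.random.default_rng(2)
q = rng.normal(size=3)
p = rng.normal(size=3)

# basis elements of eq. (vert_latt) as columns: q_mu, p_mu, q_mu^3
T = np.column_stack([q, p, q**3])

# for generic (non-degenerate) kinematics T is invertible, and the rows
# of its inverse are exactly the projectors onto the three basis elements
P = np.linalg.inv(T)

# recover the dressings E, F, G from a synthetic lattice vertex
E, F, G = 0.4, -2.1, 0.9          # hypothetical dressing values
Gamma = E * q + F * p + G * q**3
assert np.allclose(P @ Gamma, [E, F, G])
```

For the special kinematics discussed below, `T` becomes singular and the inversion fails, which is the numerical signature of a redundant basis.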
Again, the construction of the corresponding projectors is explained in Appendix \ref{sec: projectors}, and here we will abstain from providing the full expressions
for these objects, due to their considerable length and complexity.~For the parallel configuration \eqref{eqn: paral_momenta}, one of the continuum basis vectors (either
$q_\mu$ or $p_\mu$) can be neglected in the overall tensor representation, as it contains no information which is not already present in some of the other elements.
\!However, what is arguably more interesting is to identify those kinematic choices where the entire lattice correlator collapses onto the continuum structures
of \eqref{eqn: vert_cont}.~In other words, one may attempt to find such momentum points where \textit{all} of the tensor elements \eqref{eqn: taus_lattice_vertex} become
proportional to the continuum vectors.~While this might seem like an impossible task at first, it is in fact very easy to think of at least one situation where this must
happen:~if both momenta $p$ and $q$ point along the lattice diagonal, with $p = (m,m,m)$ and $q = (n,n,n)$ (where in general $m \neq n$), all of the lattice-induced tensor
elements will become parallel to either $q$ or $p$, with
\begin{align}\label{eqn: taus_diagonal}
&\tau^{1, \, rs}_\mu(q,p) = m^{\, 2r} n^{\, 2s} q_\mu \, , \nonumber \\
&\tau^{2, \, rs}_\mu(q,p) = n^{\, 2r} m^{\, 2s} p_\mu \, , \qquad \text{with} \qquad r, \, s \in N_0 \, .
\end{align}
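The collapse \eqref{eqn: taus_diagonal} onto the continuum vectors is easily verified numerically (an illustrative sketch; the values of $m$, $n$ and the chosen $r$, $s$ are arbitrary):

```python
import numpy as np

m, n = 0.8, 1.5
p = np.array([m, m, m])        # both momenta along the lattice diagonal
q = np.array([n, n, n])

# sample lattice-induced elements of eq. (taus_lattice_vertex):
tau1 = p**2 * q**3             # tau^{1,rs} with r = 1, s = 1
tau2 = q**4 * p                # tau^{2,rs} with r = 2, s = 0

# for diagonal kinematics they reduce to multiples of q_mu and p_mu
assert np.allclose(tau1, m**2 * n**2 * q)
assert np.allclose(tau2, n**4 * p)
```

Every higher structure thus carries no information beyond $q_\mu$ and $p_\mu$ at this kinematic point.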
The fully diagonal setup is also a special case of the parallel kinematics \eqref{eqn: paral_momenta}, meaning that the actual tensor parametrisation shrinks even further to
\eqref{eqn: cont_reduce}.~The fact that the lattice function $\Gamma_\mu(q,p)$ is described exactly by a continuum tensor basis (within statistical errors), for fully diagonal
kinematics, will be demonstrated in our numerical Monte Carlo simulations.~Another example where the lattice modifications of the basis \eqref{eqn: vert_cont} become redundant,
is the one where at least one momentum is diagonal [\,say, $q = (n,n,n)$\,], while the other vector is either on-axis [\,with $p = (m,0,0)$, plus permutations thereof\:] or
is of the form $p = (m,0,m)$ (plus permutations of $p$ components).~Some other interesting cases will be discussed in due course.~What is important to note here is that the
completeness of the continuum tensor description does not imply that the lattice vertex is ``equal'' to its continuum counterpart, and that the two functions could/should be
directly compared to each other.~In all special kinematic cases, the discretisation-induced tensor structures become effectively degenerate with the continuum basis elements,
but this does not mean that the vertex cannot host a myriad of finite-spacing artifacts within its ``continuum'' dressing functions.~Ideally, elaborate continuum extrapolations
should be performed before any serious comparisons between the continuum and lattice functions are made.~The only kinematic choices where the lattice correlators could be
regarded as being truly continuum-like are those in the deep infrared (IR) energy region:~since most lattice corrections have a comparatively high mass dimension [\,see e.\,g.
\!\eqref{eqn: hyper_scalars} and \eqref{eqn: basis_primer}\,], one can naturally expect them to be suppressed at low momenta, thus bringing about the dominance of the continuum terms
(barring the finite volume effects).
\subsection{Gluon propagator:~lattice-modified basis}\label{sec: gluon_lattice}
In this section we want to deduce, using hypercubic symmetry considerations, the tensor description of the lattice gluon propagator.~The arguments we will cover
here also apply to any other (lattice) second-rank tensors which depend on a single momentum $p$, such as the photon two-point function, or the hadronic
electromagnetic current $\Pi_{\mu\nu}$, see e.\,g.~\cite{Aubin:2015rzx}.~To keep things simple, we will refer only to the ``gluon propagator'' in the following
text, since this is the one function which we will study in some detail in numerical Monte Carlo simulations.~We will also employ the corresponding standard notation
$D_{\mu\nu}(p)$.
The tensor parametrisation of the gluon two-point function is well known on the lattice, and is determined unambiguously by the gauge-fixing procedure \cite{Alles:1996ka}.
\!Thus, one might wonder why we are investing effort to tackle a problem which already has a satisfying solution.~We can provide two justifications in this regard.~First,
deriving a basis description which follows purely from symmetry can potentially provide some interesting insights which would otherwise remain hidden.~Second, the calculations
to be carried out here can be seen as a preparation for obtaining symmetry-based decompositions of other lattice correlators of higher rank, like the three-gluon vertex, whose
tensor representations are not fully constrained by gauge-fixing \cite{Vujinovic:2018nqc}.~We start the discussion with the continuum propagator:~the corresponding basis
decomposition is
\begin{align}\label{eqn: gluon_cont}
D_{\mu\nu}(p) = A(p) \, \delta_{\mu\nu} + B(p) \, p_\mu p_\nu ,
\end{align}
\noindent
where indices $\mu$ and $\nu$ run from 1 to $d$.~The above two basis elements are the most general structures which satisfy the appropriate tensor
transformation law
\begin{align}\label{eqn: second_tensor_law}
\tau_{\mu\nu}(p) \rightarrow \tau''_{\mu\nu}(p) = \sum_{\sigma = 1}^d \sum_{\rho = 1}^d O_{\mu\sigma} O_{\nu\rho} \, \tau_{\sigma\rho} ,
\end{align}
\noindent
with $O$s being arbitrary $d$-dimensional orthogonal matrices.~The projectors for the above basis will be provided later.~We now wish to see how the
representation \eqref{eqn: gluon_cont} may be generalised in a discretised theory.~Similarly to hypercubic scalars and vectors, one needs to look for the most
general possible functions which satisfy the transformation rule \eqref{eqn: second_tensor_law}, with matrices $O$ belonging to the hypercubic symmetry
group.~The obvious course of action would be to look for second-rank extensions of the equation \eqref{eqn: latt_vector}, i.\,e.~to find operators with higher
powers in $p_\mu$ and $p_\nu$, which can be added to the continuum tensor description.~The addition of such new terms is indeed possible on a lattice, but
right now we want to discuss a different type of modification of the continuum basis, which is reminiscent of how rotational symmetry breaking manifests
itself in QCD and Yang-Mills studies at finite temperature, see e.\,g.~\cite{Fischer:2018sdj} and references therein.~In particular, we wish to argue that
the tensor structure \eqref{eqn: gluon_cont} generalises to
\begin{align}\label{eqn: latt_glue_2d}
&D_{\mu\mu} = E(p)\, \delta_{\mu\mu} \: + \: F(p) \, p_\mu^{\,2} \, , \qquad \quad \mu = 1, \ldots d \nonumber \\
&D_{\nu\mu} = G(p) \, p_\nu p_\mu \, , \qquad \qquad \qquad \quad \: \: \mu, \, \, \nu = 1, \ldots d \, , \qquad \mu \neq \nu \, ,
\end{align}
\noindent
on discretised spacetimes.~Put in words, the diagonal and off-diagonal components of the lattice gluon propagator are parametrised by different dressing
functions, in contrast to \eqref{eqn: gluon_cont}.~The above splitting comes from the fact that, unlike general orthogonal operators, permutations and inversions
cannot mix diagonal and off-diagonal components of second-rank tensors, and the two kinds of terms transform independently from each other, under hypercubic
symmetry operations.~To demonstrate this, we start (yet again) with an example of a two-dimensional theory.~In section \ref{sec: hyper_perms} we argued that
the three operators $\rho_1, \rho_2$ and $\Pi^{12}$, and their matrix compositions, exhaust all of the symmetry transformations of a square, see equations
\eqref{eqn: 2d_inver_def} and \eqref{eqn: 2d_perm_def}.~Now, under the action of a permutation $\Pi^{12}$, the 2\,$\times$\,2 gluon propagator transforms in the
following fashion
\begin{align}
D\,'' = &\left( \begin{array}{cc} D\,''_{11} & D\,''_{12} \\ D\,''_{21} & D\,''_{22} \end{array} \right) \: = \: \Pi^{12} \cdot D \cdot \left(\Pi^{12}
\right)^T = \nonumber \\[0.12cm]
\left[ \begin{array}{cc} 0 & \;\; 1 \\ 1 & \;\; 0 \end{array} \right] \cdot & \left( \begin{array}{cc} D_{11} & D_{12} \\ D_{21} & D_{22} \end{array}\right)
\cdot \left[ \begin{array}{cc} 0 & \;\; 1 \\ 1 & \;\; 0 \end{array} \right] = \left( \begin{array}{cc} D_{22} & D_{21} \\ D_{12} & D_{11} \end{array}\right) \, .
\end{align}
In the above relations, $T$ stands for a matrix transpose.~The overall effect of the exchange operator $\Pi^{12}$ is thus
\begin{align}\label{eqn: gluon_permuted_2d}
&D\,''_{11} = D_{22} \, , \qquad \: D\,''_{12} = D_{21} \, , \nonumber \\
&D\,''_{21} = D_{12} \, , \qquad \: D\,''_{22} = D_{11} \, .
\end{align}
In the transformation rule \eqref{eqn: gluon_permuted_2d}, a diagonal component (i.\,e.~a term of the form $D_{\mu\mu}$) always changes into another diagonal
component, while an off-diagonal factor (a quantity like $D_{\mu\nu}$, with $\mu \neq \nu$) always changes into another off-diagonal factor.~Thus, the
transformation itself does not combine the diagonal and off-diagonal terms, and the two kinds of contributions change separately from each other, under
the action of $\Pi^{12}$.~The same remark holds also for the operators $\rho_1$ and $\rho_2$ of \eqref{eqn: 2d_inver_def}.~For instance, the $\rho_1$ matrix
changes the propagator as follows
\begin{align}\label{eqn: gluon_reflected_2d}
&D\,''_{11} = D_{11} \, , \qquad \: D\,''_{12} = - D_{12} \, , \nonumber \\
&D\,''_{21} = -D_{21} \, , \qquad \: D\,''_{22} = D_{22} \, .
\end{align}
In \eqref{eqn: gluon_reflected_2d}, there is again no mixing between off-diagonal and diagonal pieces of the gluon two-point function.~The same observation is
also true for the inversion element $\rho_2$.~One concludes that none of the elementary transformations $\rho_1, \rho_2$ and $\Pi^{12}$, and thus also none
of their compositions, combines the propagator terms of the kind $D_{\mu\mu}$ with those of the kind $D_{\mu\nu}$ (with $\mu \neq \nu$).~This is the origin
of the splitting given in equation \eqref{eqn: latt_glue_2d}, for $d=2$.~Of course, one would like to check if the argument generalises naturally to higher
dimensions as well.~The most direct way of testing this would be to take the three- or higher-dimensional permutation and inversion matrices, apply them to
the gluon propagator of appropriate dimensionality, and deduce the corresponding constraints on the correlator's tensor decomposition.~Such a procedure is
however very tedious, as one has to check the overall effect of every elementary permutation and parity operator.~To make matters simpler, we will now try to
formulate relatively general and dimensionality-independent arguments on why hypercubic symmetry transformations cannot combine the off-diagonal and diagonal
components of second rank tensors.~In the process, we will also attempt to extend these considerations to other correlators of interest, like the lattice
three-gluon vertex.
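Before doing so, we note that the two-dimensional rules \eqref{eqn: gluon_permuted_2d} and \eqref{eqn: gluon_reflected_2d} can be reproduced with a few lines of NumPy (an illustrative sketch; the explicit matrix assumed for $\rho_1$, a sign flip of the second component, is our reading of \eqref{eqn: 2d_inver_def}):

```python
import numpy as np

D = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # generic 2x2 "propagator"

Pi12 = np.array([[0.0, 1.0],
                 [1.0, 0.0]])       # the exchange operator
rho1 = np.diag([1.0, -1.0])         # assumed form of the reflection rho_1

# permutation: D'' = Pi12 . D . Pi12^T swaps 11 <-> 22 and 12 <-> 21
Dp = Pi12 @ D @ Pi12.T
assert np.allclose(Dp, [[4.0, 3.0],
                        [2.0, 1.0]])

# inversion: diagonal entries untouched, off-diagonal ones flip sign
Dr = rho1 @ D @ rho1.T
assert np.allclose(Dr, [[1.0, -2.0],
                        [-3.0, 4.0]])
```

In neither case does a diagonal entry end up in an off-diagonal slot, or vice versa, which is the content of the splitting \eqref{eqn: latt_glue_2d}.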
As usual, we will look at the hypercubic group as a composition of permutation and inversion transformations, and will shape our line of reasoning for each
of the two symmetry operations separately.~We start with the easier case of parity changes.~For these matrices, it is somewhat obvious that they cannot
induce mixing of different components of second-rank tensors, or indeed tensors of arbitrary rank.~Namely, the inversion operators always have the overall
structure of a unity matrix $\mathbb{1}$:~the ``only'' difference between parity transformations and the identity element comes from the minus signs, see
e.\,g.~\eqref{eqn: 2d_inver_def} and \eqref{eqn: 3d_rot_example}.~While the minuses are evidently important, the general unity-like composition of these operators
means that they cannot rearrange the components of arbitrary tensors in any non-trivial way, see \eqref{eqn: gluon_reflected_2d} as an example.~This also implies
that inversions cannot mix the diagonal and off-diagonal terms of second-rank tensors, independent of the number of dimensions.
This brings us to permutations.~To understand why permutations cannot mix contributions of the kind $D_{\mu\mu}$ with those of the kind $D_{\mu\nu}$ (with
$\mu \neq \nu$), we will go back to the example of a two-dimensional theory and the transformation rule \eqref{eqn: gluon_permuted_2d}.~The change \eqref{eqn:
gluon_permuted_2d} can be written in a more abstract and concise manner as
\begin{align}\label{eqn: abstract_permute}
11 \leftrightarrow 22 \: , \qquad 12 \leftrightarrow 21 \: .
\end{align}
The above is a symbolic way to say that under a permutation $\Pi^{12}$, the gluon component $D_{11}$ gets exchanged with $D_{22}$, while $D_{12}$
exchanges places with $D_{21}$:~these swaps constitute the full content of the equation \eqref{eqn: gluon_permuted_2d}.~One now notes that the rule \eqref{eqn:
abstract_permute} matches the way in which an ordered set of numbers `$jk$' (with $j\, , k = 1, 2$) transforms under a permutation $1 \leftrightarrow2$.~With
this abstraction, it becomes somewhat obvious why $\Pi^{12}$ cannot mix diagonal and off-diagonal components of second-rank tensors:~there is no possible
permutation of numbers which can turn configurations of the form ``11'' and ``22'' into those of the form ``12'' or ``21'', and vice versa.~The reasoning
straightforwardly extends to an arbitrary dimension number.~In three dimensions, for instance, there are three elementary permutations (these are $1
\leftrightarrow 2 \, , 1 \leftrightarrow 3$ and $ 2 \leftrightarrow 3 $), none of which can turn any of the diagonal configurations (11, 22 and 33) into any
of the off-diagonal ones (12, 13, 23 + permutations).~Thus, the splitting between off-diagonal and diagonal components of the gluon propagator persists also
for $d = 3$, and indeed for any dimension number.~A mathematically more formal version of this heuristic reasoning is given in Appendix \ref{sec: append_permutes}.
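As a quick cross-check of this heuristic, the orbit structure of the index pairs under axis relabellings can be enumerated directly.~The following Python sketch (purely illustrative; the function name is our own) groups all index tuples into orbits of the permutation group $S_d$ and confirms that no orbit mixes diagonal and off-diagonal pairs:

```python
from itertools import permutations, product

def index_orbits(d, rank):
    """Orbits of index tuples under a simultaneous relabelling of the
    d coordinate axes (the permutation part of the hypercubic group)."""
    seen, orbits = set(), []
    for idx in product(range(d), repeat=rank):
        if idx in seen:
            continue
        orbit = {tuple(perm[i] for i in idx) for perm in permutations(range(d))}
        seen |= orbit
        orbits.append(orbit)
    return orbits

# Rank-2 check: every orbit is either purely diagonal or purely
# off-diagonal, i.e. permutations never mix D_mumu with D_munu.
for d in (2, 3):
    for orbit in index_orbits(d, 2):
        assert len({i == j for (i, j) in orbit}) == 1
```

For both $d = 2$ and $d = 3$ this yields exactly two rank-two orbits, a diagonal and an off-diagonal one, in line with the split underlying \eqref{eqn: latt_glue_2d}.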
We wish to point out that the different treatment of diagonal and off-diagonal tensor terms, as in equation \eqref{eqn: latt_glue_2d}, was also noted in the lattice
study of the hadronic vacuum polarisation contribution to the anomalous magnetic moment of the muon \cite{Aubin:2015rzx}.~However, no direct comparison between our
approach and that of \cite{Aubin:2015rzx} is possible, since there an asymmetric four-dimensional lattice was used, with different temporal and spatial extensions.
\!Also, the authors of \cite{Aubin:2015rzx} eventually abandon the explicit consideration of discretisation artifacts in favour of a careful analysis of finite volume
effects, which are here ignored.~We will make a qualitative/semi-quantitative argument about the validity of their approximation later in this paper.
The arguments which had led us to the decomposition \eqref{eqn: latt_glue_2d} can also be applied to other correlators, like the three-gluon vertex.~With a bit of
work, one may quickly deduce that the lattice three-gluon correlator contains five independent ``cycles'', which cannot combine with each other under either
parity or permutation transformations.~The five cycles are 1) $\Gamma_{\mu\mu\mu}$ 2) $\Gamma_{\nu\mu\mu}$ 3) $\Gamma_{\mu\nu\mu}$ 4) $\Gamma_{\mu\mu\nu}$ 5)
$\Gamma_{\mu\nu\rho}$:~for cycles 2) to 4), it holds that $\mu \neq \nu$, while in cycle 5) all the indices $\mu, \, \nu$ and $\rho$ are different from each
other.~In practice, this means that a single tensor entity of the continuum theory, like (say) $p_\mu \, p_\nu \, p_\rho$, will break into five independent
pieces on the lattice, each with its own dressing function.~We leave explicit calculations concerning this vertex function for future studies.
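The five-cycle count itself can be confirmed by brute-force enumeration.~A minimal Python sketch (names are ours, not tied to any lattice code) counts the orbits of index triples under axis permutations:

```python
from itertools import permutations, product

def count_cycles(d, rank):
    """Count the classes of index tuples that cannot mix under axis
    permutations: orbits of {1,...,d}^rank under the S_d relabelling."""
    seen, count = set(), 0
    for idx in product(range(d), repeat=rank):
        if idx in seen:
            continue
        seen |= {tuple(p[i] for i in idx) for p in permutations(range(d))}
        count += 1
    return count

# Three-gluon vertex in d = 3: the five cycles listed in the text.
assert count_cycles(3, 3) == 5
# Gluon propagator: the familiar diagonal/off-diagonal split.
assert count_cycles(3, 2) == 2
```

For $d \geq 3$ the rank-three count stays at five, since the classes are fixed by the equality pattern among the three index slots, and no further patterns exist.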
Going back to the gluon propagator, the equation \eqref{eqn: latt_glue_2d} does not exhaust all the possibilities concerning the correlator's tensor representations
on the lattice.~Namely, with the same arguments as employed in section \ref{sec: hyper_vector}, it can be easily shown that any functions of the form
\begin{align}\label{eqn: gluon_higher}
2 \,\tau_{\mu\nu}(p) \, = \, p_\mu^{\,2k + 1} p_\nu^{\,2n + 1} \: + \:\: p_\nu^{\,2k + 1} \, p_\mu^{\,2n + 1} \, , \qquad \: \: k, \, n \in \mathbb{N}_0 \, , \qquad \: \:
\mu \, , \nu = 1
\ldots d \, ,
\end{align}
\noindent
will satisfy the transformation laws adequate for a second-rank tensor, under permutations and inversions\,\footnote{In principle, the Kronecker tensor $\delta_{\mu\nu}
$ can also receive higher-order lattice corrections, like e.\,g.~$\delta_{\mu\nu}\,p_\mu^{\,2}$ \cite{Kawai:1980ja, Weisz:1983bn}.~However, the only non-vanishing part
of such a term is $\delta_{\mu\mu} \, p^{\,2}_\mu = p_\mu^{\,2}$, which is already present in \eqref{eqn: latt_glue_2d}.~We thus do not consider such contributions separately,
as they are in fact indistinguishable from the diagonal parts of \eqref{eqn: gluon_higher}.}.~The symmetrisation in \eqref{eqn: gluon_higher} was carried out to comply with the
symmetry property of the propagator itself, namely $D_{\mu\nu}(p) = D_{\nu\mu}(p)$.~In \eqref{eqn: gluon_higher}, we did not explicitly indicate a split between the diagonal
and off-diagonal contributions, for reasons of simplicity.~It should be understood that this kind of separate treatment is applicable to the above higher-order tensors,
just as it is for the decomposition \eqref{eqn: latt_glue_2d}.~Among the elements \eqref{eqn: gluon_higher}, the leading-order correction to the continuum term $p_\nu p_\mu$
has the form
\begin{align}\label{eqn: gluon_lead}
2 \, \tau^\text{\,lead}_{\mu\nu}(p) \, = \, p_\mu \, p_\nu^{3} \: + \: p_\nu \, p_\mu^3 \, = \, p_\nu \, p_\mu \, (p_\mu^2 \: + \: p_\nu^2) \, , \qquad \mu \, , \nu = 1
\ldots d \, ,
\end{align}
\noindent
and it appears in gluon propagator representations involving the $\mathcal{O}(a^2)$ improved lattice gauge action \cite{Weisz:1983bn}.~Now, while it is important
to keep in mind that the decomposition \eqref{eqn: latt_glue_2d} can be augmented with higher-order corrections, throughout this paper we will work \textit{only} with
the tensor structures of \eqref{eqn: latt_glue_2d}.~In two dimensions, it actually turns out that this basis is complete, i.\,e.~that it describes the gluon propagator
without any loss of information.~This follows from the fact that, being a symmetric $d \times d$ matrix in $d$ dimensions, the gluon two-point function cannot
contain more than $N_d$ free parameters (for fixed momentum $p$), where \cite{Morty:1962prc}
\begin{align}\label{eqn: nd_def}
N_d = \frac{d\,(d + 1)}{2} \, .
\end{align}
For a two-dimensional theory, $N_d$ equals three, which is exactly the number of free parameters present in \eqref{eqn: latt_glue_2d}.~To solidify the case for
completeness of this basis in two dimensions, in Appendix \ref{sec: projectors} we show that the leading-order correction \eqref{eqn: gluon_lead} can be described
exactly as a linear combination of the elements in \eqref{eqn: latt_glue_2d}.~In three dimensions, $N_d$ is equal to six, and the decomposition \eqref{eqn: latt_glue_2d}
is no longer complete.~In our Monte Carlo simulations, we will show that even for $d = 3$, the lattice-modified representation \eqref{eqn: latt_glue_2d} describes the
propagator rather well, and certainly significantly better than the continuum one.~Of course, showing an (approximate) completeness of a given basis is not enough,
as we argued at the end of section \ref{sec: hyper_vector}\,:~it is always possible to find basis decompositions which are ``trivially'' complete, by virtue of
exhausting all of the free parameters of a correlator at hand.~The real issue is whether the basis in question features form factors which have adequate symmetry
properties.~Therefore the hypercubic invariance of dressing functions pertaining to the decomposition \eqref{eqn: latt_glue_2d} will be tested numerically, and they
will be shown to perform quite well in this regard.~Explicit formulas for calculating the coefficients of \eqref{eqn: latt_glue_2d} will be given later.
To conclude, we want to point to an interesting notion concerning the lattice propagator basis.~Naively, one would expect that the gluon two-point function
becomes more ``continuum-like'' as one approaches the infrared energy region.~In the context of the parametrisation \eqref{eqn: latt_glue_2d}, this suggests
that the dressing functions $F(p)$ and $G(p)$ should become equal to each other, so that the form \eqref{eqn: gluon_cont} is recovered, as one considers smaller
and smaller values for momentum components $p_\mu$.~Indeed, such a behaviour will be confirmed in our numerical calculations.~However, it will also turn out
that the scenario ``$F \approx G$'' is not tied exclusively to the infrared limit, and that there exist alternative lattice kinematics, some at rather high
momenta, where the decomposition \eqref{eqn: latt_glue_2d} effectively reduces to the continuum basis \eqref{eqn: gluon_cont}.
\section{Numerical calculations with lattice-adjusted bases}\label{sec: numerics}
\subsection{General setup and vertex reconstruction}\label{sec: general_setup}
We now wish to perform lattice Monte Carlo calculations with the gluon propagator and ghost-gluon vertex, using both the continuum and lattice-modified tensor
bases for these functions.~Our aim in the following will be roughly threefold.~The first goal is to show that, for general kinematics, the lattice-adjusted
bases are ``more complete'' than their continuum counterparts.~Details on how this (approximate) completeness is tested will be given shortly.~Our second aim
is to demonstrate numerically and analytically that there exist such kinematics on the lattice, where the continuum bases describe the examined correlation
functions without any loss of information.~For the ghost-gluon vertex, the analytic part of this problem was already partly discussed in section \ref{sec: ghost_latt},
while the appropriate calculations for the gluon propagator have been postponed, since they are more involved.~Our third goal is to show numerically that the
lattice-modified tensor bases for these $n$-point functions $(n = 2, 3)$ feature form factors which are invariant under arbitrary permutations and inversions
of momentum components, i.\,e.~that the dressing functions are actual hypercubic invariants.~Since we are mostly concerned with proof-of-principle evaluations
here, in our numerics we will only consider two- and three-dimensional lattices.~While this obviously does not correspond to the physical situation, it still
captures many of the essential features which should be present in higher-dimensional settings.
To begin, we shall provide some details on the setup of our Monte Carlo calculations.~We consider equally-sided lattices in two and three dimensions, with
periodic boundary conditions.~The gauge field configurations are thermalised and subsequently updated for measurements using the standard gauge action of
Wilson \cite{Wilson:1974sk}:
\begin{align}\label{eqn: wilson_action}
S = \frac{\beta}{N_c} \sum_\text{plaq} \text{Re} \left[ \, \text{Tr} \left(\mathbb{1} - U_\text{plaq} \right)\right] \, ,
\end{align}
\noindent
where $N_c$ is the number of colours (in our case $N_c = 2$), and $U_\text{plaq}$ is the Wilson plaquette operator:
\begin{align}\label{eqn: plaquette}
U_\text{plaq}(x) \, = & \,\,\, U_\mu(x)\,U_\nu(x+\hat{\mu})\,U^\dagger_\mu(x+\hat{\nu})\,U^\dagger_\nu(x) \, .
\end{align}
The operators $U_\sigma$ in the above equation belong to the $SU(2)$ gauge group, and are parametrised as $U \equiv U_0\,\mathbb{1} + i\,\vec{U}
\cdot\vec{\sigma}$, with $\mathbb{1}$ standing for a $2 \times 2$ unity element, and $\vec{\sigma} = (\sigma^1,\,\sigma^2,\,\sigma^3)$ being the
vector of Pauli matrices.~The coefficients $(U_0, \, \vec{U})$ are real numbers satisfying $U_0^2 + \vec{U}^2 = 1$.~The symbol $\beta$ in \eqref{eqn:
wilson_action} denotes the bare lattice gauge coupling.
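For concreteness, the group elements and the action \eqref{eqn: wilson_action} can be sketched in a few lines of Python (using \texttt{numpy}; all names and the interface are our own, and no attempt at efficiency is made):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
pauli = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def su2(u0, u):
    """U = u0*1 + i u.sigma, with u0^2 + |u|^2 = 1 (not checked here)."""
    return u0 * s0 + 1j * sum(c * s for c, s in zip(u, pauli))

def plaquette(U_mu_x, U_nu_xmu, U_mu_xnu, U_nu_x):
    """Wilson plaquette U_mu(x) U_nu(x+mu) U_mu(x+nu)^dag U_nu(x)^dag."""
    return U_mu_x @ U_nu_xmu @ U_mu_xnu.conj().T @ U_nu_x.conj().T

def action_term(plaq, beta, Nc=2):
    """Contribution of a single plaquette to the Wilson action."""
    return beta / Nc * np.real(np.trace(s0 - plaq))
```

As a sanity check, identical links along all four plaquette edges give $U_\text{plaq} = \mathbb{1}$, so the corresponding action term vanishes.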
The gauge field configurations are updated by means of the hybrid-over-relaxation algorithm (HOR), consisting of three over-relaxation \cite{Adler:1981sn,
Adler:1987ce} and one heat-bath step:~for the heat-bath sweep, we use the Kennedy-Pendleton procedure \cite{Kennedy:1985nu}.~Starting from a cold initial
guess, we perform 5000 HOR sweeps for thermalisation, while in actual measurements we discard a certain number of updated configurations, to lessen the
effect of autocorrelations.~Concretely, we perform 1.5\,$N$\,HOR updates before each measurement, with $N$ denoting the linear extent of the lattice in one
direction:~as an example, for lattices with $N=32$, we perform 48 HOR steps prior to measurement.~In the end, we use 9600 configurations for evaluations
of the gluon propagator, and 480 configurations for the ghost-gluon vertex, for each $(N,\beta)$ pair considered in this work.~We obtain estimates for
statistical errors via an integrated autocorrelation time analysis, according to the automatic windowing procedure outlined in section 3.3 of \cite{Wolff:2003sm},
with parameter $S = 2.5$.~For all of the calculated quantities in this work, the integrated autocorrelation time was always estimated to be less than 0.75
(recall that $\tau_\text{int} = 0.5$ means no autocorrelations), though this might be an underestimate caused by gauge-fixing, which can ``artificially''
decrease autocorrelations.
One of the basic quantities needed in the upcoming simulations is the lattice gluon potential $A_\nu$, which is defined in terms of the link variables $U_\nu(x)$
as
\begin{align}\label{eqn: intro_latt_glue}
A_\nu(x) \, \equiv \, \frac{1}{2} \left[ \, U_\nu(x) - U^\dagger_\nu(x) \, \right] = i \, \vec{U} \cdot \vec{\sigma} \, .
\end{align}
The colour components of $A_\nu(x)$ can be projected out with appropriate Pauli matrices, where one has $A^b_\nu(x) = (1/2i) \cdot \text{Tr}\,[A_\nu(x) \, \sigma^b]$,
with $b = 1, \, 2, \, 3$.~Some other ingredients, needed in calculations of specific lattice correlation functions, will be discussed in due course.~Concerning our
general numerical setup, there are two remaining issues to clarify.~One is the gauge-fixing procedure:~since we are interested in gauge-dependent quantities, we have
to specify a gauge to work in, lest all our Monte Carlo averages end up being zero.~Here, we shall concentrate exclusively on (lattice) Landau gauge, as it is
computationally by far the easiest one to implement.~Certain other choices will be discussed only briefly in due time.~For gauge-fixing to Landau gauge, we choose the
so-called over-relaxation method \cite{Mandula:1990vs, Cucchieri:1995pn}:~the corresponding iterative steps are explained in detail in e.\,g.~section 3.3 of \cite{
Cucchieri:1995pn}.~The algorithm features a free parameter $\omega \in (1,2)$, which may be tuned to improve convergence.~The ``optimal'' values of $\omega$, for each
set of considered gauge field configurations, can be found in Table \ref{tab: config_details}.~The gauge-fixing process is stopped when $e_6 \leq 10^{-14}$, where \cite{
Cucchieri:1995pn}:
\begin{align}\label{eqn: e6_def}
e_6 \equiv \frac{1}{3\,N\,d} \sum_{\nu = 1}^d \sum_{b = 1}^3 \sum_{x_\nu = 1}^N \frac{\left[ Q_\nu(x_\nu) - \hat{Q}_\nu \right]_b^2}{\left[\hat{Q}_\nu\right]_b^2} \, ,
\end{align}
\noindent
with
\begin{align}\label{eqn: def_q}
Q_\nu(x_\nu) \equiv \sum_{\mu \neq \nu} \sum_{x_\mu} A_\nu(x) \, , \quad \text{and} \quad \hat{Q}_\nu \equiv \frac{1}{N} \sum_{x_\nu = 1}^N Q_\nu(x_\nu) \, .
\end{align}
In \eqref{eqn: def_q}, the index $\nu$ runs from 1 to $d$, and $A_\nu(x)$ is the gluon potential introduced in \eqref{eqn: intro_latt_glue}.~Also, the index $b = 1,\,2,
\,3$ in \eqref{eqn: e6_def} stands for the colour components of the bracketed expressions.~The quantity $e_6$ essentially measures the spatial fluctuations of $Q_\nu$,
defined in \eqref{eqn: def_q}:~according to \cite{Mandula:1987rh}, the functions $Q_\nu$ should be independent of $x_\nu$, for periodic lattices and for gauge field
configurations fixed to Landau gauge.
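For illustration, the quality measure \eqref{eqn: e6_def} can be written compactly in Python (the array layout \texttt{A[nu, b, x]} is our assumption, not that of any particular lattice code):

```python
import numpy as np

def e6(A):
    """Gauge quality e6 for a colour field A[nu, b, x_1, ..., x_d] on an
    N^d periodic lattice (assumes the averages Q_hat are non-zero)."""
    d, N = A.ndim - 2, A.shape[-1]
    total = 0.0
    for nu in range(d):
        # Q_nu(x_nu): sum A_nu over all lattice directions except nu
        axes = tuple(1 + mu for mu in range(d) if mu != nu)
        Q = A[nu].sum(axis=axes)               # shape (3, N): indices (b, x_nu)
        Qhat = Q.mean(axis=-1, keepdims=True)  # average over x_nu
        total += ((Q - Qhat) ** 2 / Qhat ** 2).sum()
    return total / (3 * N * d)
```

For a configuration in Landau gauge the functions $Q_\nu$ should be flat in $x_\nu$, so $e_6$ drops towards zero; the stopping criterion above demands $e_6 \leq 10^{-14}$.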
This brings us to one of the final notions we will need for the upcoming analysis, and this is the vertex reconstruction procedure.~The method is discussed in some
detail in \cite{Vujinovic:2018nqc}, but here we wish to repeat the main ideas.~Vertex reconstruction is a way of quantifying how (un)well some tensor basis describes
a given correlation function.~Suppose that one is working with some generic lattice $n$-point function $\Gamma$, and that one wishes to test if a tensor basis $\tau$,
with the appropriate quantum numbers, describes the correlator $\Gamma$ well.~One approach to doing this would be to assume that the elements $\tau$ form a complete
basis, and that $\Gamma$ can thus be written as a linear combination of these tensor structures:
\begin{align}\label{eqn: recon_rep}
\Gamma = \sum_{j} \: \mathcal{F}^{\,j} \, \tau^{\,j} \, ,
\end{align}
\noindent
with $\tau^{j\,}$ denoting the $j$-th basis element, and $\mathcal{F}^{\,j}$ the corresponding form factor.~The first step in the procedure is to calculate the
dressing functions $\mathcal{F}^{\,j}$ of the lattice vertex in the usual way.~One then reconstructs the correlator via equation \eqref{eqn: recon_rep}, by using
the obtained form factors and the basis $\tau$ itself.~The final part is to compare the reconstructed and the original lattice vertex, in whatever way seems
appropriate.~The main idea behind this method is that, if the basis $\tau$ is truly complete, then no information will be lost when computing the coefficients
$\mathcal{F}^{\,j}$.~Thus, the original and the reconstructed correlator should exactly match.~Any difference between the two correlation functions suggests that
the structures $\tau$ do not contain the full information about $\Gamma$, and the ``size'' of the difference can be seen as a measure of the (un)suitability of
the basis, for given kinematics.~This strategy will be used to test the (approximate) completeness of tensor bases to be considered in the following.
\newpage
Concerning the above procedure, there is one more issue of practical importance to be discussed.~Namely, we will look at vertex/propagator functions with Lorentz indices,
and comparing the original and the reconstructed correlator for each value of these indices would be highly impractical for the presentation of results.~To address
this, in our plots we will always give the results for ratios of index-averaged quantities.~In the case of the gluon two-point function, the said ratio would look like
\begin{align}\label{eqn: recon_ratio}
\mathcal{R} = \frac{D^\text{\,origo}_{|\left\langle \mu\nu \right\rangle|}}{D^\text{\,recon}_{|\left\langle \mu\nu \right\rangle|}} \equiv \frac{\sum_{\mu}
\sum_{\nu} | D_{\mu\nu}^\text{\,origo} |}{\sum_{\mu} \sum_{\nu} | D_{\mu\nu}^\text{\, recon} |}.
\end{align}
In the above relation, superscripts ``origo'' and ``recon'' denote, respectively, the original and the reconstructed correlator, while $|.|$ stands for a (complex
number) absolute value.~The reasons for using the absolute value when evaluating $\mathcal{R}$ are discussed in some detail in section III of \cite{Vujinovic:2018nqc},
and will not be repeated here.~Note that, in all these proceedings, the original correlation functions (``origo'') are the only ones for which statistical uncertainties
are calculated directly, by means of the aforementioned integrated autocorrelation time analysis.~For all the other quantities, like the reconstructed correlators and
ratios akin to \eqref{eqn: recon_ratio}, the corresponding errors are estimated from those of the original function, via error propagation.~Regarding the propagation
of uncertainty itself, we always consider only the leading-order (variance) formulas, meaning that all of the involved variables are treated as if being statistically
independent from each other.~With these important computational details clarified, we may finally proceed towards some actual results.
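Schematically, the whole test fits into a few lines.~The Python sketch below (with a deliberately generic interface; the demo values are arbitrary) extracts form factors with given projectors, rebuilds the correlator according to \eqref{eqn: recon_rep}, and evaluates the ratio \eqref{eqn: recon_ratio}:

```python
import numpy as np

def reconstruct(Gamma, taus, projectors):
    """F^j = sum_{mu,nu} P^j_{mu nu} Gamma_{mu nu}, then rebuild
    Gamma as sum_j F^j tau^j."""
    coeffs = [np.sum(P * Gamma) for P in projectors]
    return sum(F * tau for F, tau in zip(coeffs, taus))

def recon_ratio(orig, recon):
    """Index-averaged comparison R = sum |orig| / sum |recon|."""
    return np.abs(orig).sum() / np.abs(recon).sum()

# Demo with the continuum d = 2 basis {delta, p p} and its projectors:
p = np.array([1.0, 2.0]); p2 = p @ p
delta, pp = np.eye(2), np.outer(p, p)
PA = delta - pp / p2                     # 1/(d-1) = 1 for d = 2
PB = -delta / p2 + 2.0 * pp / p2**2
D = 2.0 * delta + 0.5 * pp               # a propagator this basis spans
R = recon_ratio(D, reconstruct(D, [delta, pp], [PA, PB]))
```

By construction $R = 1$ in this demo, since the basis $\{\delta_{\mu\nu}, \, p_\mu p_\nu\}$ spans the test matrix exactly; for lattice data, deviations of $R$ from unity quantify the information missed by the chosen basis.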
\subsection{Gluon propagator in two dimensions}\label{sec: 2d_glue}
In lattice Monte Carlo simulations, the gluon two-point function can be calculated as
\begin{align}\label{eqn: d_propagator}
D^{\,ab}_{\mu\nu}(p) = \frac{1}{V} \left\langle \tilde{A}^{a}_\mu(p) \, \tilde{A}^{b}_\nu(-p) \right\rangle \, ,
\end{align}
\noindent
with $V = N^d$ being the lattice volume, and $\tilde{A}^{a}_\mu(p)$ the Fourier transform of $A^{a}_\mu(x)$:
\begin{align}\label{eqn: fourier_glue}
\tilde{A}^a_\mu(p) & \equiv \sum_x \, A^a_\mu(x) \, \exp \left[ 2\pi i (p\cdot x + p_\mu/2) \right] \, , \quad \text{where} \nonumber \\
& p_\mu \equiv \frac{2\pi \, n_\mu}{aN} \, , \quad n_\mu \in [0, \, N-1] \, .
\end{align}
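As an aside, the modified transform \eqref{eqn: fourier_glue} is simple to implement; the Python sketch below (lattice units $a = 1$, one Lorentz component at a time; the interface is ours) makes the role of the integer vector $n_\mu$ explicit:

```python
import numpy as np

def gluon_ft(A_mu, mu, n):
    """Discrete FT of one gluon component A_mu[x_1, ..., x_d], with the
    extra half-site phase exp(i p_mu / 2) for p = 2 pi n / N, which
    improves the lattice Landau gauge condition to O(a^2)."""
    N, d = A_mu.shape[0], A_mu.ndim
    x = np.indices(A_mu.shape)               # x[k] = k-th coordinate grid
    phase = sum(n[k] * x[k] for k in range(d)) / N + n[mu] / (2.0 * N)
    return (A_mu * np.exp(2j * np.pi * phase)).sum()
```

The momentum enters via the integer vector $n_\mu$, exactly as in \eqref{eqn: fourier_glue}.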
Note that all of the momenta in our plots and text will be given in terms of the vector $n_\mu$ defined above, with one exception:~components lying exactly half-way on
the lattice sides (corresponding to $p_\mu = \pi$) will be written in the text as `$\pi$'.~One may also observe that the equation \eqref{eqn: fourier_glue} contains an
additional term $p_\mu/2$, as opposed to the standard definition of a discrete Fourier transform:~the purpose of this modification is to make the lattice gluon potential
obey the continuum Landau gauge condition with $\mathcal{O}(a^2)$ corrections, instead of $\mathcal{O}(a)$ ones \cite{Alles:1996ka}.~With lattice gauge field configurations
fixed to Landau gauge, and the Fourier transform of the gluon potential defined according to \eqref{eqn: fourier_glue}, the gluon propagator of Monte Carlo simulations should
have the following colour and tensor structure \cite{Alles:1996ka}:
\begin{align}\label{eqn: gauge_tensor}
D^{\,ab}_{\mu\nu}(p) = \left( \delta_{\mu\nu} - \frac{\hat{p}_\mu \, \hat{p}_\nu}{\hat{p}^2} \right) \, \delta^{ab} D(p) \, ,
\end{align}
\noindent
with the lattice vector $\hat{p} = 2\, \sin(p/2)$.~Henceforth, we shall assume that this function is diagonal in colour space, as indicated above, and will work only with
colour-averaged quantities $D_{\mu\nu} = \frac{1}{3}\sum_{a} D^{aa}_{\mu\nu}$.~The tensor representation \eqref{eqn: gauge_tensor} will not be used for vertex reconstruction
in the upcoming analysis, but it should still be kept in mind since many of the results we will obtain can only be properly understood with the help of \eqref{eqn:
gauge_tensor}.~Also, for comparison purposes, we will plot the results for the dressing function $D(p)$ of \eqref{eqn: gauge_tensor}, which is easily evaluated in $d$ dimensions
as
\begin{align}\label{eqn: dressing_d}
D(p) = \frac{1}{d - 1} \sum_{\mu = 1}^d \, D_{\mu\mu}(p) \, .
\end{align}
The above formula does not apply for vanishing momentum $p$, but since the case $p = 0$ will not be considered in our numerics, this is of
no concern to us.~This finally brings us to the two tensor representations to be actively explored in this and the next section:~we shall
repeat the corresponding definitions for convenience.~The first is the continuum parametrisation for the gluon propagator, given by
\begin{align}\label{eqn: cont_glue}
D_{\mu\nu}(p) = A(p) \, \delta_{\mu\nu} + B(p) \, p_\mu p_\nu ,
\end{align}
\noindent
with the appropriate projectors (assuming $p \neq 0$ and a $d$-dimensional space):
\begin{align}\label{eqn: cont_project_glue}
P^{\,A}_{\mu\nu} = \frac{1}{d-1}\left(\delta_{\mu\nu} \: - \: \frac{p_\mu \, p_\nu}{p^2}\right) \, , \qquad \qquad
P^{\,B}_{\mu\nu} = \frac{1}{d-1}\left(-\frac{\delta_{\mu\nu}}{p^2} \: + \: \frac{ d \, p_\mu \, p_\nu}{(p^2)^2}\right) \, .
\end{align}
The above projectors are constructed explicitly in Appendix \ref{sec: projectors}.~Note that $P^A_{\mu\nu}$ is the standard transverse projector in $d$
dimensions.~The second basis to be scrutinised in detail is the lattice-modified version of \eqref{eqn: cont_glue}, with
\begin{align}\label{eqn: lattice_glue}
&D_{\mu\mu} = E(p)\, \delta_{\mu\mu} \: + \: F(p) \, p_\mu^{\,2} \, , \qquad \quad \mu = 1, \ldots d \nonumber \\
&D_{\nu\mu} = G(p) \, p_\nu p_\mu \, , \qquad \qquad \qquad \quad \: \: \mu, \, \, \nu = 1, \ldots d \, , \qquad \mu \neq \nu \, .
\end{align}
The dressing functions of the above decomposition can be calculated in $d$ dimensions as [\,equations \eqref{eqn: diag_ffs} and \eqref{eqn: off_diag_ff} of Appendix \ref{sec:
projectors}\,]\,:
\begin{align}\label{eqn: latt_dress_glue}
&E(p) \: = \: \frac{ p^{\,[4]} \sum_{\mu} D_{\mu\mu} \:\: - \:\: p^2 \sum_{\mu} p_\mu^2 \, D_{\mu\mu} }{d \, p^{\,[4]} \, - \, (p^2)^2} \, , \nonumber \\[0.2cm]
&F(p) \: = \: \frac{ - p^2 \sum_{\mu} D_{\mu\mu} \:\: + \:\: d \sum_{\mu} p_\mu^2 \, D_{\mu\mu} }{d \, p^{\,[4]} \, - \, (p^2)^2} \, , \nonumber \\[0.2cm]
&G(p) \: = \: \frac{ \sum_{\substack{\mu, \nu \\ \mu \neq \nu}} \: p_\nu p_\mu D_{\mu\nu}}{(p^2)^2 \, - \, p^{\,[4]}} \, .
\end{align}
In the above expressions, all of the sums run from 1 to $d$ [\,with the appropriate restriction $\mu \neq \nu$ in the case of the function $G(p)$\,], and $
p^{\,[4]}$ is a hypercubic invariant $p^{\,[4]} = \sum_{\mu = 1}^{d} \, p_\mu^{\,4}$.~Vertex reconstruction results according to the basis descriptions of
\eqref{eqn: cont_glue} and \eqref{eqn: lattice_glue} are given in Figure \ref{fig: 2d_glue}, for the gluon propagator of two-dimensional lattice Monte Carlo
simulations.~In the same Figure we show the data for the dressing functions of \eqref{eqn: dressing_d} and \eqref{eqn: latt_dress_glue}.~More accurately, instead
of form factors $F(p)$ and $G(p)$ in \eqref{eqn: latt_dress_glue}, in the plots we provide the results for functions $-p^2 \cdot F(p)$ and $-p^2 \cdot G(p)
$\,:~the reason for this choice will become clear shortly.
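The formulas \eqref{eqn: latt_dress_glue} are also easy to verify on synthetic data.~The following Python sketch (names are ours) implements the projections and performs a round trip, building a propagator from known dressings and recovering them:

```python
import numpy as np

def lattice_dressings(D, p):
    """E, F, G of the lattice-modified basis; p must be neither on-axis
    nor diagonal, so that both denominators are non-zero."""
    d = len(p)
    p2, p4 = np.sum(p**2), np.sum(p**4)      # p^2 and the invariant p^[4]
    S0 = np.trace(D)
    S2 = (p**2 * np.diag(D)).sum()
    det = d * p4 - p2**2
    E = (p4 * S0 - p2 * S2) / det
    F = (-p2 * S0 + d * S2) / det
    G = ((np.outer(p, p) * D).sum() - S2) / (p2**2 - p4)
    return E, F, G

# Round trip: build D from known dressings, then recover them.
p = np.array([1.0, 2.0, 3.0])
E0, F0, G0 = 1.3, -0.2, 0.7
D = E0 * np.eye(3) + F0 * np.diag(p**2) + G0 * (np.outer(p, p) - np.diag(p**2))
E, F, G = lattice_dressings(D, p)
assert np.allclose((E, F, G), (E0, F0, G0))
```

The round trip returns $(E_0, F_0, G_0)$ to floating-point accuracy, confirming that \eqref{eqn: latt_dress_glue} inverts the decomposition \eqref{eqn: lattice_glue} for generic (non-axis, non-diagonal) momenta.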
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.41\textwidth]{figures/2d_glue_recon_one.pdf}\includegraphics[width = 0.41\textwidth]{figures/2d_glue_recon_two.pdf}
\includegraphics[width = 0.41\textwidth]{figures/2d_glue_dress_one.pdf}\includegraphics[width = 0.41\textwidth]{figures/2d_glue_dress_two.pdf}
\caption{Upper panel:~Results of vertex reconstruction on a $32^2$ lattice, according to continuum [\,equations \eqref{eqn: cont_glue} and \eqref{eqn:
cont_project_glue}\,] and lattice [\,equations \eqref{eqn: lattice_glue} and \eqref{eqn: latt_dress_glue}\,] decompositions.~Lower panel:~Data for form
factors of \eqref{eqn: dressing_d} and \eqref{eqn: latt_dress_glue}.~Results are plotted as functions of $|p| = \sqrt{p^2}$ (in lattice units), with momenta
defined in terms of vector $n_\mu$ in \eqref{eqn: fourier_glue}.~$\beta$ is the bare coupling of \eqref{eqn: wilson_action}.}
\label{fig: 2d_glue}
\end{center}
\end{figure}
Let us first discuss the data points for vertex reconstruction.~As might be expected, use of the continuum basis \eqref{eqn: cont_glue} leads to appreciable
differences between the reconstructed and the original propagator, with the deviations peaking at about 15 percent, for the considered kinematics.~On the
other hand, within statistical uncertainties there are no discrepancies present for the lattice-modified basis of \eqref{eqn: lattice_glue}.~This is in accord
with the arguments made towards the end of section \ref{sec: gluon_lattice}, wherein we claimed that the basis \eqref{eqn: lattice_glue} is complete, in a
two-dimensional setting.~Some further analytic calculations that support this claim can be found in Appendix \ref{sec: projectors}.~From Figure \ref{fig:
2d_glue}\,b) one also notes that the diagonal momentum point, corresponding to $p = (\pi, \pi)$, is somewhat special as the continuum decomposition describes
the lattice correlator fully.~However, we do not yet want to elaborate on diagonal kinematics in detail, and instead turn our attention to the results of Figure
\ref{fig: 2d_glue}\,c).
Arguably the most interesting feature of the said Figure is that the displayed data points for the functions $D(p)$, $E(p)$ and $-p^2 \cdot F(p)$ seem to lie on top
of each other, i.\,e.~these functions seem to have the same values.~The momenta examined in the plot are all of the form $p = (1,m)$ [\,with $m \in (1,16) $\,],
which is ``very close'' to the kinematic choice $p = (0,m)$.~For momentum vectors of the kind $p = (0,m)$, with non-vanishing $m$, one can easily demonstrate that
the exact equalities $D(p) = E(p) = -p^2 \cdot F(p)$ hold.~For instance, by plugging the vector $p = (0,m)$ into the function $E(p)$ of \eqref{eqn: latt_dress_glue}
(with $d = 2$), one gets
\begin{align}\label{eqn: e_axis_one}
E(p) = \frac{m^4\cdot(D_{11} + D_{22}) - m^4 \, D_{22}}{2 \, m^4 - m^4} = \frac{ m^4 \, D_{11}}{m^4} = D_{11} \, .
\end{align}
To fully evaluate the above expression, we turn to the relation \eqref{eqn: gauge_tensor}.~For on-axis momentum $p = (0,m)$, the representation \eqref{eqn: gauge_tensor}
states that
\begin{align}\label{eqn: e_axis_two}
D_{11} = \left( \delta_{\,11} - \frac{ \sin^2(0) }{\hat{m}^2} \right) \cdot D(p) = D(p) \, ,
\end{align}
\noindent
where $\hat{m} = 2 \, \sin(m/2)$.~Combining the equations \eqref{eqn: e_axis_one} and \eqref{eqn: e_axis_two} gives $E(p) = D(p)$.~In the same manner, one can show that
the relation $D(p) = -p^2 \cdot F(p)$ holds, for on-axis momentum $p$.~In Figure \ref{fig: 2d_glue}\,c), we purposefully do \textit{not} look at the situation $p =
(0,m)$, choosing instead the kinematic points $p = (1,m)$.~This is because we wanted to be able to include also the form factor $G(p)$ of \eqref{eqn: latt_dress_glue}
in the same graph.~Namely, for kinematic configurations like $p = (0,m)$, the function $G(p)$ evaluates to an ambiguous expression ``0/0'', or explicitly
\begin{align}\label{eqn: off_diag_axis}
G(p) = \frac{ p_1 \, p_2 \cdot (D_{12} + D_{21}) }{m^4 - m^4} = \frac{0}{0} \, ,
\end{align}
\noindent
wherein we used the fact that $p_1 \cdot p_2 = 0$, if $p = (p_1, p_2) = (0,m)$.~The dressing $G(p)$ is indeterminate here because the off-diagonal part of the propagator vanishes
for on-axis momentum $p$, since $p_\mu \, p_\nu = 0$ (if $\mu \neq \nu$).~With the help of the representation \eqref{eqn: gauge_tensor}, these results [\,the equalities $D(p) = E(p) =
-p^2 \cdot F(p)$, ill-defined function $G(p)$\,] can be extended to any $d$-dimensional vectors with only a single non-zero component.~Put differently, for on-axis momenta, and for
lattices of arbitrary dimension, the full information about the gluon correlator is contained in its diagonal part $D_{\mu\mu}$, as the off-diagonal terms are anyway zero.~This
reasoning can be taken even further, to an eventual conclusion that the on-axis propagator should be described fully with a continuum basis of \eqref{eqn: cont_glue}.~We will discuss
this last point in detail in the next section, where we analyse the gluon two-point function on a three-dimensional lattice.
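These on-axis identities are straightforward to confirm numerically.~The snippet below (Python, with arbitrary illustrative values for $m$ and $D(p)$) builds $D_{\mu\nu}$ from the transverse form \eqref{eqn: gauge_tensor} at $p = (0, m)$ and checks $D(p) = E(p) = -p^2 \cdot F(p)$:

```python
import numpy as np

m, Dp = 1.7, 0.83                    # arbitrary non-zero test values
p = np.array([0.0, m])
phat = 2.0 * np.sin(p / 2.0)
# Transverse (Landau gauge) propagator with scalar dressing Dp:
D = (np.eye(2) - np.outer(phat, phat) / (phat @ phat)) * Dp
p2, p4 = np.sum(p**2), np.sum(p**4)
S0 = np.trace(D)
S2 = (p**2 * np.diag(D)).sum()
E = (p4 * S0 - p2 * S2) / (2 * p4 - p2**2)
F = (-p2 * S0 + 2 * S2) / (2 * p4 - p2**2)
assert np.isclose(E, Dp) and np.isclose(-p2 * F, Dp)
# The off-diagonal part of D vanishes identically for on-axis momenta:
assert np.allclose(D - np.diag(np.diag(D)), 0.0)
```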
From data in Figure \ref{fig: 2d_glue}\,c) it may also be observed that, as one goes deeper into the infrared (IR) energy region, the off-diagonal dressing $-p^2 \cdot G(p)$ becomes
almost equal to the diagonal form factors $E(p)$ and $-p^2 \cdot F(p)$.~This is what one would expect, because it means that the continuum (Landau gauge) form of the propagator
is recovered at low energies.~However, for the decomposition \eqref{eqn: lattice_glue} it is not so obvious why the relations like $F(p) \approx G(p)$ should hold at small momentum
values.~To fully explore this issue, we first need to discuss diagonal lattice kinematics, with non-zero momenta of the kind $p = (m, m)$.
In terms of the representation \eqref{eqn: lattice_glue}, diagonal kinematics are special in two ways.~First, the dressing functions $E(p)$ and $F(p)$ take an ill-defined
form ``0/0'' in such cases:~this is shown in Appendix \ref{sec: projectors}, for an arbitrary number of dimensions.~This ambiguity in the definitions of $E(p)$ and $F(p)$ is
the reason that some results for the basis \eqref{eqn: lattice_glue} are missing in Figure \ref{fig: 2d_glue}.~The issue has to do with linear dependence of basis elements:~for
diagonal momenta $p = (m,m)$ it holds that $p^{\,2}_\mu = m^2 \, \delta_{\mu\mu} \, (\mu = 1, 2)$, and so the tensor structures of the propagator $D_{\mu\mu}$ become degenerate.
\!This suggests that a reduced tensor description is needed for such momentum points.~The second interesting feature concerning diagonal kinematics is the fact that the
off-diagonal form factor $- p^2 \cdot G(p)$ becomes equal to $D(p)$.~To see this, one may put the momentum $p = (m,m)$ into the definition of $G(p)$ in \eqref{eqn: latt_dress_glue},
and get
\begin{align}\label{eqn: glue_diag}
G(p) = \frac{m^2 \cdot (D_{12} + D_{21})}{ 4 \, m^4 - 2 \, m^4} = \frac{- m^2 \cdot 2 \, \hat{m}^2 D(p) }{(2 \, m^4) \cdot (2 \, \hat{m}^2)} = \frac{-D(p)}{2 \, m^2} \, .
\end{align}
In obtaining \eqref{eqn: glue_diag}, we again used the parametrisation \eqref{eqn: gauge_tensor} for gluon components $D_{21} ~ \text{and} ~ D_{12}$, with $p = (m,m)$.~From
the above result it quickly follows that $-p^2 \cdot G(p) = -2 \,m^2 \cdot G(p) = D(p)$.~With the help of \eqref{eqn: gauge_tensor}, this argument can be generalised to
diagonal momenta $p$ of arbitrary dimension.~The behaviour $-p^2 \cdot G(p) \rightarrow D(p)$ can also be seen in Figure \ref{fig: 2d_glue}\,d), as one approaches the
rightmost point $p = (\pi, \pi)$.~Along the lattice diagonal, it thus holds that the form factors $E(p)$ and $F(p)$ are ill-defined, whereas the off-diagonal dressing
$G(p)$ is proportional to the coefficient function $D(p)$.~This all implies that the continuum tensor description of the gluon propagator should suffice, which is confirmed
numerically in Figure \ref{fig: 2d_glue}\,b).
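The algebra leading to \eqref{eqn: glue_diag} is easy to cross-check numerically. The following stand-alone Python sketch (not part of our analysis code) assumes that \eqref{eqn: gauge_tensor} takes the transverse form $D_{\mu\nu}(p) = (\delta_{\mu\nu} - \hat{p}_\mu \hat{p}_\nu / \hat{p}^{\,2})\,D(p)$ with $\hat{p}_\mu = 2\sin(p_\mu/2)$, and verifies that $-p^2 \cdot G(p) = D(p)$ holds exactly at diagonal momenta:

```python
import numpy as np

# Assumed transverse form of eqn (gauge_tensor):
#   D_{mu nu}(p) = (delta_{mu nu} - phat_mu phat_nu / phat^2) * D(p),
# with phat_mu = 2 sin(p_mu / 2).  D(p) is set to 1 for simplicity.
def lattice_prop(p, D=1.0):
    ph = 2.0 * np.sin(np.asarray(p, float) / 2.0)
    return (np.eye(len(p)) - np.outer(ph, ph) / ph.dot(ph)) * D

# For diagonal momenta p = (m, m), evaluate G(p) as in eqn (glue_diag)
# and confirm -p^2 * G(p) = D(p).
for m in (0.3, 1.0, 2.5):
    p = np.array([m, m])
    Dm = lattice_prop(p)
    G = m**2 * (Dm[0, 1] + Dm[1, 0]) / (4 * m**4 - 2 * m**4)
    assert abs(-p.dot(p) * G - 1.0) < 1e-12   # D(p) = 1 here
```

The check is purely kinematic: it holds for any value of $D(p)$, since both sides of the relation scale linearly with it.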
We may now tackle the question of why the approximate equalities like $ F(p) \approx G(p) $ hold in the infrared region.~For this we will take a look at a specific IR momentum
point, namely the kinematic choice $v = (1,1)$.~In a sense, this vector is doubly exceptional.~First, it is an example of diagonal kinematics, meaning that the relation $-
v^2 \cdot G(v) = D(v)$ must hold exactly, as shown in \eqref{eqn: glue_diag}.~Second, $v$ is kinematically close to the on-axis point $ p = (0,1)$, for which one has the exact
relations $ - p^2 \cdot F(p) = E(p)= D(p) $, as exemplified in \eqref{eqn: e_axis_one} and \eqref{eqn: e_axis_two}.~Putting these two tendencies together, one sees that for any
points $k$ in the vicinity of $v = (1,1)$, the approximate equalities $ - k^2 \cdot F(k) \approx E(k) \approx -k^2 \cdot G(k)$ should hold, indicating the recovery of the
propagator's continuum form.~Note that the coarseness of the lattice plays a central role here.~On very coarse lattices, with only a few momentum points in each direction,
the diagonal vector $v= (1,1)$ is ``far away'' from the on-axis one $p = (0,1)$, and there is no reason to expect that the above relations should hold even approximately at
the ``infrared'' energies.
In most of the above arguments, the representation \eqref{eqn: gauge_tensor} played a crucial role:~without it, it is hard to imagine how the results of Figure \ref{fig:
2d_glue} could be explained analytically.~Nonetheless, at least some of the observations made here should hold regardless of \eqref{eqn: gauge_tensor}.~For instance, the
applicability of the continuum basis along the lattice diagonal should follow solely from the fact that the description \eqref{eqn: lattice_glue} is redundant, if $p = (m,m)
$.~Also, the approximate equalities $ F(p) \approx G(p)$ ought to be true in the infrared region, without any reference to \eqref{eqn: gauge_tensor}, since one expects that
the lattice tensor decomposition reduces to the continuum form at low momenta.~It would thus be interesting to see how some of these results hold up for second-rank lattice
tensors, whose basis elements are not determined (at least not fully) by gauge-fixing.~At the moment, we are not aware of any correlators which would constitute suitable
candidates for such an investigation.
To conclude this section, we want to comment on how the tensor representation \eqref{eqn: lattice_glue} may be used to test some of the continuum extrapolation methods.~We
know that in the continuum, the exact relations like $ - p^2 \cdot F(p) = E(p)$ ought to hold for arbitrary momentum $p$, and not just in the infrared.~It could thus be
potentially useful to check if on a lattice, certain extrapolation methods can bring about the expected continuum behaviour(s) even at relatively high values of $p^2$.~This
would constitute one of the most direct possible tests of how successful some of these methods actually are, at least for the case of gluon two-point function.~In fact, if
one wanted \textit{only} to test such approaches, then there would be no need to consider actual Monte Carlo simulations, since it should be enough to look at (say) the gluon
propagator of lattice perturbation theory.~This would make the corresponding calculations numerically far cheaper, and there would be virtually no restrictions on lattice
sizes and the amount of data one can collect, to perform the said extrapolations with a desired accuracy.~We postpone such endeavours for future studies.
\subsection{Gluon propagator in three dimensions}
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.40\textwidth]{figures/3d_glue_recon_one.pdf}\includegraphics[width = 0.40\textwidth]{figures/3d_glue_recon_two.pdf}
\includegraphics[width = 0.40\textwidth]{figures/3d_glue_recon_three.pdf}\includegraphics[width = 0.40\textwidth]{figures/3d_glue_dress_one.pdf}
\includegraphics[width = 0.40\textwidth]{figures/3d_glue_dress_two.pdf}\includegraphics[width = 0.40\textwidth]{figures/3d_glue_dress_three.pdf}
\caption{Plots a) to c):~Results of propagator reconstruction on a $32^3$ lattice, according to continuum [\,equations \eqref{eqn: cont_glue} and \eqref{eqn:
cont_project_glue}\,] and lattice [\,equations \eqref{eqn: lattice_glue} and \eqref{eqn: latt_dress_glue}\,] decompositions.~Plots d) to f):~Data for form
factors of \eqref{eqn: dressing_d} and \eqref{eqn: latt_dress_glue}.~Results are plotted as functions of $|p| = \sqrt{p^2}$ (in lattice units), with momenta
defined in terms of vector $n_\mu$ in \eqref{eqn: fourier_glue}.~Note the logarithmic $y$ scale in plot d), see text for comments.~$\beta$ is the bare
coupling of \eqref{eqn: wilson_action}.}
\label{fig: 3d_big_glue}
\end{center}
\end{figure}
Some of our results for the gluon propagator on a three-dimensional lattice are given in Figure \ref{fig: 3d_big_glue}.~Concretely, in plots a) through c) we provide
the data regarding the propagator reconstruction at certain kinematic points, using the tensor bases of \eqref{eqn: cont_glue} and \eqref{eqn: lattice_glue}.~In graphs d)
through f), one can find the results for the dressing functions of \eqref{eqn: dressing_d} and \eqref{eqn: latt_dress_glue}.~We note that the reconstruction results are given
for two values of the parameter $\beta$ of \eqref{eqn: wilson_action}, whereas for correlator dressings only one gauge coupling value is considered, to prevent the plots
from getting too cluttered.~Apart from a noisier signal in the case of three dimensions, there are quite a few similarities with the two-dimensional situation.~For instance,
for the near-axis momentum $p = (1, 0, m) ~\, [\,\text{with} ~ m = 1, \ldots, 16\,] $, one can see the same general tendencies as for the corresponding vector $p =(1,m)$ in
two dimensions, both in terms of correlator reconstruction [\,compare graphs \ref{fig: 2d_glue}\,a) and \ref{fig: 3d_big_glue}\,a)\,] and the corresponding form factors
[\,compare plots \ref{fig: 2d_glue}\,c) and \ref{fig: 3d_big_glue}\,d)\,].~Note that in Figure \ref{fig: 3d_big_glue}\,d), we use a logarithmic scale for the $y$ axis,
as otherwise it would be very hard to make out the details at higher momentum values.
The above similarities notwithstanding, the case $d = 3$ also features some substantial differences, compared to the two-dimensional scenario.~Arguably the most obvious
one is the fact that the basis \eqref{eqn: lattice_glue} does not perform so well, with respect to the propagator reconstruction, as in two dimensions.~In particular, in graph
\ref{fig: 3d_big_glue}\,c) one can see that for certain momentum points of the form $p = (\pi, \pi, m)$, the reconstructed correlator deviates appreciably from the original
one.~This is not surprising, as we argued at the end of section \ref{sec: gluon_lattice} that the representation \eqref{eqn: lattice_glue} is not complete for $d \geq 3$, and
that additional structures of the kind \eqref{eqn: gluon_higher} ought to be added to the tensor basis for the gluon two-point function.
Another interesting feature of the three-dimensional propagator, which does not have a proper counterpart in two dimensions, is the recovery of the correlator's continuum
form at non-zero lattice momenta $p = (m, 0, m)$ (or any component permutations thereof).~To be more precise, all of the dressing functions in \eqref{eqn: latt_dress_glue}
are well-defined at such kinematics, and they are all proportional to the form factor $D(p)$, even at high values of $m$.~As an example, using the vector $p = (m,0,m)$ in
the definitions of $E(p)$ and $G(p)$ gives
\begin{align}\label{eqn: special_mom_3d}
&E(p) = \frac{ 2 \, m^4\cdot(D_{11} + D_{22} + D_{33}) - 2 \, m^4 \, (D_{11} + D_{33})}{6 \, m^4 - 4 \, m^4} = \frac{ 2\, m^4 \, D_{22}}{ 2 \, m^4} = D(p) \, , \nonumber
\\[0.25cm] &G(p) = \frac{ m^2\cdot(D_{13} + D_{31}) }{4 \, m^4 - 2 \, m^4} = \frac{ - D(p)}{2\, m^2} \, .
\end{align}
In obtaining the final results in \eqref{eqn: special_mom_3d}, we again made use of the representation \eqref{eqn: gauge_tensor}, for momentum $p = (m , 0, m)$.~In the same
way, one may show that $ - p^2 \cdot F(p) = D(p) $ holds for the aforementioned vectors $p$.~Thus, the two-point function obtains its continuum tensor form.~These results
are confirmed numerically in plots \ref{fig: 3d_big_glue}\,b) and \ref{fig: 3d_big_glue}\,e), as the kinematic point $p = (\pi, 0, \pi)$ is approached from the left.~As already
discussed in the previous section, all of these outcomes ultimately stem from the parametrisation \eqref{eqn: gauge_tensor}, but it would be interesting to see if they also remain
true for second-rank lattice correlators whose tensor bases are not determined completely by gauge-fixing.
It should also be pointed out that the results of \eqref{eqn: special_mom_3d} are not altered in any way if the decomposition \eqref{eqn: lattice_glue} is augmented by additional
tensor structures like \eqref{eqn: gluon_higher}, for momenta of the kind $p = (m,0,m)$.~This is because, for the said kinematics, all of the tensor elements with higher mass
dimension are proportional to the continuum momentum factor $ p_\mu \, p_\nu $.~As an example, for the leading-order correction of \eqref{eqn: gluon_lead} it holds that
\begin{align}\label{eqn: gluon_reduced}
\tau^\text{\,lead}_{\mu\nu}(p) \, = \, p_\nu \, p_\mu \, (p_\mu^2 \: + \: p_\nu^2) \, = 2 \, m^2 \cdot p_\nu \, p_\mu \, , \qquad \: \: \mu \: , \nu = 1 \ldots 3 \, ,
\end{align}
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.44\textwidth]{figures/glue_recon_axis.pdf}
\caption{Results for on-axis propagator reconstruction on a 32$^3$ lattice, according to the continuum [\,equations \eqref{eqn: cont_glue} and \eqref{eqn: cont_project_glue}\,]
tensor decomposition.~Results are given as a function of $|p| = \sqrt{p^2}$, in lattice units.~$\beta$ is the bare lattice coupling of \eqref{eqn: wilson_action}. }
\label{fig: on_axis_glue}
\end{center}
\end{figure}
\noindent
for the kinematic choice $p = (m, 0, m)$ (or any permutations thereof).~The same remark holds for all of the structures akin to \eqref{eqn: gluon_higher}:~for appropriate
momentum $p$, they are all proportional to $p_\mu\,p_\nu$, and can thus be excluded from the propagator's tensor description.~Besides the situation $ p= (m, 0, m)$, this
argument also extends to on-axis configurations $p = (0,0,m)$, as well as the diagonal ones $p = (m,m,m)$.~For all of these kinematic points, the lattice propagator ought
to be described fully by the continuum tensor representation.~Concerning the momenta like $p = (m,0,m)$, as well as the diagonal vectors, we already provided some
numerical evidence for these claims, in Figure \ref{fig: 3d_big_glue}.~Up to now we have avoided looking at exact on-axis configurations, since the off-diagonal dressing
$G(p)$ is ill-defined at such points.~In Figure \ref{fig: on_axis_glue} we correct this omission, by showing the numerical results which confirm that the on-axis gluon
correlator is described exactly by the continuum tensor elements.
There is another interesting thing to be noted from the reconstruction results of Figures \ref{fig: 3d_big_glue} and \ref{fig: on_axis_glue}.~Namely, the bare coupling
$\beta$ of \eqref{eqn: wilson_action} seems to have little to no influence on the deviations between the original and the reconstructed propagator:~these discrepancies appear
to depend almost exclusively on lattice kinematics.~This would also indicate that $\beta$ has no bearing on the rate at which the correlator's continuum form is recovered,
as one goes deeper into the IR region.~To check this, we've taken a look at the ratio of form factors $F(p)$ and $G(p)$ from \eqref{eqn: lattice_glue}, at two different $\beta$
values, to see if the gauge coupling affects the way in which $F(p)/G(p)$ approaches unity at low momenta.~The results are shown in Figure \ref{fig: beta_glue}, and they
support the notion that this ratio depends solely on kinematics, within statistical errors.~To further strengthen this argument, in the same plot we show the data for the
function $R(p) = \sqrt{\hat{p}^2}/\sqrt{p^2}$, which can arguably be used as a measure of ``how fast'' the decomposition \eqref{eqn: gauge_tensor} reduces to the continuum
propagator parametrisation, as $p^2$ decreases.~The fact that $R(p)$ describes most of the $F(p)/G(p)$ points with good accuracy shows that the latter ratio
depends on kinematics alone.~Of course, the $\beta$-independence only holds when the results are given in lattice units, as the coupling controls the value of the lattice
spacing $a$ in physical units.
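Since $R(p)$ is purely kinematic, it can be reproduced independently of any gauge field data. A minimal stand-alone re-implementation (our own, not the paper's analysis code) illustrates how $R(p)$ approaches unity from below as the momentum decreases:

```python
import numpy as np

# R(p) = |phat| / |p|, with phat_mu = 2 sin(p_mu / 2):
# a measure of how quickly the lattice tensor structure of
# eqn (gauge_tensor) reduces to the continuum one.
def R(p):
    p = np.asarray(p, float)
    ph = 2.0 * np.sin(p / 2.0)
    return np.sqrt(ph.dot(ph) / p.dot(p))

# R -> 1 from below at low momenta; no dependence on beta enters anywhere:
for scale in (1.0, 0.5, 0.1):
    print(R(scale * np.array([1.0, 0.0, 1.0])))  # increases towards 1
```

The absence of any coupling-dependent input in this function mirrors the numerical observation that the ratio $F(p)/G(p)$ depends on kinematics alone, when expressed in lattice units.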
If one wished to improve the above situation, so that the ratio of functions $G(p)$ and $F(p)$ goes ``faster'' to unity at low(er) momentum, one would have a few
options to consider.~One possibility would be the use of continuum extrapolation methods, as was already discussed at the end of the previous section.~The other
recourse is to modify the numerical gauge-fixing method, since it is ultimately this procedure, along with the `$p_\mu/2$' modification in \eqref{eqn: fourier_glue},
that brings about the tensor structure of \eqref{eqn: gauge_tensor}.~However, for numerical lattice simulations it is not yet known how to systematically improve the
gauge-fixing algorithms, to a desired order in the lattice spacing $a$, even though some attempts in this direction have been made in the past \cite{Bonnet:1999bw}.
\!This could anyway be an interesting research topic for future studies.~Going back briefly to Figure \ref{fig: beta_glue}, one may also note a relatively large
deviation between $F(p)$ and $G(p)$ at the lowest considered momentum, $p = (1,0,1)$.~We have yet to check if this disagreement is a finite volume artifact, as the
basis \eqref{eqn: lattice_glue} does not take such effects into account.~Finally, in the Figure we also included two vertical lines, which mark the rough location of
the physical momentum $|\hat{p}| \approx 0.316$ GeV, for the two considered $\beta$ values.~This physical scale ultimately has to do with the muon $g-2$ study
of \cite{Aubin:2015rzx}, but since the full related discussion is lengthy and lies a bit outside our current main line of development, we will only provide the details
at the end of this section.
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.44\textwidth]{figures/beta_gluon.pdf}
\caption{Ratio of form factors $F(p)$ and $G(p)$ of \eqref{eqn: latt_dress_glue}, on a 32$^3$ lattice, as a function of $|p| = \sqrt{p^2}$ (in lattice units).~The values for
the function $R(p) = |\hat{p}|/|p|$ are also provided, where $\hat{p}_\mu = 2 \, \sin(p_\mu/2)$.~The two vertical lines denote the physical momentum scale $|\hat{p}| = 0.316$ GeV,
for two $\beta$ couplings of \eqref{eqn: wilson_action} considered in our numerics.~The scale setting and significance of the physical momentum $\hat{p}^2 = 0.1$ GeV$^{\,2}$
are discussed in the text.}
\label{fig: beta_glue}
\end{center}
\end{figure}
As one of the last tests concerning the basis description \eqref{eqn: lattice_glue}, we want to show that the corresponding dressing functions are hypercubic invariants.~To do
so, we shall examine a collection of relatively random momentum points $p$, which are not close to any of the special configurations like e.\,g.~on-axis or diagonal momenta.
\!Our goal is to show that averaging the functions over permutations and inversions of momentum components does not change their value, within statistical errors.~For this
purpose, the form factors calculated at momenta $p = (p_1, p_2, p_3)$ [\,here denoted generically as $\mathcal{F}(p)$\,] will be compared with their appropriate permutation
and parity averages, where for instance
\begin{align}\label{eqn: permute_average}
\mathcal{F}^{\,\text{perms}} = \frac{1}{6} \cdot \left[\, \mathcal{F}(p_1, p_2, p_3) + \mathcal{F}(p_1, p_3, p_2) + \mathcal{F}(p_2, p_1, p_3) + \mathcal{F}(p_2, p_3, p_1) +
\mathcal{F}(p_3, p_1, p_2) + \mathcal{F}(p_3, p_2, p_1)\, \right] \, .
\end{align}
In the same manner, the inversion average is obtained by going over all momenta of the form $ p^{\pm} = (\pm p_1, \pm p_2, \pm p_3)$, with all possible combinations of plus
and minus signs.~For both permutations and parity changes, we've checked that all of the functions which enter the sum like \eqref{eqn: permute_average} have the same sign,
meaning that there can be no accidental cancellations during the averaging procedure.~The results are given in Figure \ref{fig: aver_test_glue}, and they indicate that the
form factors of \eqref{eqn: latt_dress_glue} are indeed invariant under hypercubic symmetry transformations.~This also (in)directly confirms that the gluon propagator itself
constitutes a second-rank tensor with respect to these symmetry operations, a fact which is all but guaranteed by the tensor description \eqref{eqn: gauge_tensor}.~However,
in Monte Carlo simulations, the validity of \eqref{eqn: gauge_tensor} depends crucially on the $p_\mu/2$ modification in the Fourier transform for the gluon potential, see
\eqref{eqn: fourier_glue}.~In the absence of this correction factor, the numerical propagator would in fact not transform as a second-rank tensor under inversions.~This issue
is further discussed in Appendix \ref{eqn: inversions_append}.
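The averaging procedure of \eqref{eqn: permute_average} and its parity analogue are straightforward to implement. The sketch below (a stand-alone illustration; the toy form factor is ours, chosen to be manifestly hypercubic invariant) shows that such averages leave an invariant function unchanged:

```python
import itertools
import numpy as np

# Permutation average, as in eqn (permute_average):
def perm_average(F, p):
    vals = [F(np.array(q)) for q in itertools.permutations(p)]
    return sum(vals) / len(vals)

# Inversion (parity) average over all 2^d sign combinations (+-p_1, ..., +-p_d):
def inversion_average(F, p):
    signs = itertools.product((1.0, -1.0), repeat=len(p))
    vals = [F(np.array(p) * np.array(s)) for s in signs]
    return sum(vals) / len(vals)

# Toy hypercubic invariant: depends only on p^2 and sum_mu p_mu^4.
F = lambda p: p.dot(p) + 0.1 * np.sum(p**4)
p = [1.0, 2.0, 3.0]
assert abs(perm_average(F, p) - F(np.array(p))) < 1e-12
assert abs(inversion_average(F, p) - F(np.array(p))) < 1e-12
```

In the actual data analysis the same comparison is made with the measured form factors in place of the toy function, with agreement expected only within statistical errors.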
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.41\textwidth]{figures/dress_aver_diag.pdf}\includegraphics[width = 0.41\textwidth]{figures/dress_aver_off.pdf}\\
\caption{\textit{Left}:~Test of permutation and inversion invariance of functions $E(p)$ and $F(p)$ of \eqref{eqn: latt_dress_glue}, on a $32^3$ lattice.~Labels ``$E(p)$'' and ``$
- p^2 \cdot F(p)$'' refer to results for a single momentum point $p$, while additional remarks ``perms'' and ``invers'' signify averages over permutations and inversions of $p$
components, see equation \eqref{eqn: permute_average} and accompanying text.\,\textit{Right}:~The same, for the dressing $G(p)$ of \eqref{eqn: latt_dress_glue}.~Data are provided as
functions of $|p| = \sqrt{p^2}$, in lattice units.~$\beta$ is the gauge coupling of \eqref{eqn: wilson_action}.}
\label{fig: aver_test_glue}
\end{center}
\end{figure}
\newpage
To conclude this part, we wish to briefly cover two more points.~First, in deriving the tensor decomposition \eqref{eqn: lattice_glue} we did not make any explicit reference to lattice
Landau gauge, and the basis itself should be applicable to virtually any covariant setting, wherein all of the coordinates are treated equally.~In our Monte Carlo simulations we
chose to work in Landau gauge because it is the easiest one to implement, with numerical considerations of its (linear) covariant generalisations featuring some non-trivial complications,
see \cite{Giusti:1996kf,Cucchieri:2009kk,Bicudo:2015rma,Cucchieri:2018doy} for a related discussion.~Nonetheless, there should be no principal difficulties in using the basis \eqref{eqn:
lattice_glue} and its modifications of the kind \eqref{eqn: gluon_higher} in any covariant calculations, once the numerical gauge-fixing part is done.~In the future we would thus like to
check how some of the more general conclusions of this section hold up in other covariant gauges.
As a second point, we return to Figure \ref{fig: beta_glue} and its vertical lines denoting the momentum scale $|\hat{p}| \approx 0.316$ GeV, for two $\beta$ couplings we considered
in our simulations.~To convert the lattice spacing $a$ into physical units, we've set the string tension to $\sqrt{\sigma} = 0.44$ GeV and used the fit of equation (67) in \cite{
Teper:1998te}:~the fit requires the values of the 1$\times$1 Wilson loops, which are provided in Table \ref{tab: config_details}.~We marked the point(s) $|\hat{p}| \approx 0.316$ GeV
as significant since the momenta around or below the scale $p^2 \approx 0.1$ GeV$^{\,2}$ are presumably the ones for which the hadronic vacuum polarisation contributes the most to the
anomalous magnetic moment of the muon \cite{Aubin:2015rzx, Golterman:2014ksa}.~Now, due to (lattice) vector-current conservation, the hadronic electromagnetic current correlator $\Pi_{\mu\nu}(p)
$ will be described by equation \eqref{eqn: gauge_tensor}, up to higher-order scaling violations which can be ignored at low momenta \cite{Shintani:2010ph}.~This means that, up to certain
effects which we shall discuss shortly, our results in Figure \ref{fig: beta_glue} also apply to $\Pi_{\mu\nu}(p)$ and subsequently to the vacuum polarisation $\Pi(p)$.~The main point
here is that, at the relevant energy scale below 0.1 GeV$^{\,2}$, the discretisation artifacts seen in Fig.~\ref{fig: beta_glue} constitute a sub-percent effect with $F/G > 0.99
$:~we assume that the discrepancy at the lowest momentum is purely due to the finite volume.~To put these finite-spacing effects into perspective, in the muon $g-2$ study of \cite{
Aubin:2015rzx} the finite volume was estimated to incur a systematic uncertainty on the order of ten to fifteen percent.
Of course, our results in Figure \ref{fig: beta_glue} should not actually be directly applied/compared to \cite{Aubin:2015rzx}, because of different scale-setting procedures and
completely different lattice setups (our symmetric three-dimensional lattice versus the \textit{a}symmetric four-dimensional one of \cite{Aubin:2015rzx}).~Nonetheless, it is hard to
imagine that our conclusions on this matter could get modified drastically with more realistic comparisons, and it remains an almost absolute certainty that the discretisation effects
will be by far the sub-dominant source of systematic errors, in lattice studies of the anomalous muon magnetic moment.
\subsection{Ghost-gluon vertex in three dimensions}\label{sec: ghost_glue}
As for the gluon two-point function, we start our discussion on the ghost-gluon vertex by specifying the corresponding numerical procedure.~On the lattice, the ghost-gluon
Green's function can be obtained as the following Monte Carlo average \cite{Cucchieri:2004sq}:
\begin{align}\label{eqn: green_ghost}
\Gamma_\mu^{\,abc}(p, q, k) = \frac{1}{V} \left\langle \left( M^{-1} \right)^{ab}(p,q) \, A_\mu^c (k) \right\rangle \, .
\end{align}
In the above relation, $V$ stands for the lattice volume, $A_\mu^c (k)$ denotes the colour components of the gluon potential of \eqref{eqn: intro_latt_glue}, and $(M^{-1})^{ab}(p,
q)$ is the Fourier transform of the (inverse) Faddeev-Popov operator, i.\,e.
\begin{align}\label{eqn: faddeev_foury}
\left( M^{-1} \right)^{ab} (p,q) = \sum_{x, y} \, \mbox{e}^{\, 2 \pi i \, ( p \cdot x \, + \, q\cdot y)} \: \left( M^{-1} \right)^{ab}(x, y) \, .
\end{align}
The Faddeev-Popov (FP) matrix itself is defined through its action on a scalar test function $\omega^b (x)$, where $b$ is a colour index and one has that \cite{Zwanziger:1993dh}
(in the following, the sums over $y$ and $b$ are implied):
\begin{align}\label{eqn: faddeev_define}
M^{ab}(x, y) \, \omega^b (y) = \delta_{xy} \, & \sum_{\mu} \, G_\mu^{\,ab} (y) \, [\, \omega^b(y) - \omega^b(y^+)\,] - G_\mu^{\,ab} (y^-) \, [\, \omega^b(y^-) -
\omega^b(y)\,] \, + \nonumber \\
& \sum_c f^{abc} \, [\, A^b_\mu(y) \, \omega^c(y^+) - A^b_\mu(y^-) \, \omega^c(y^-) \,] \, .
\end{align}
In the above expression, $y^{\pm}$ stands for $y \pm e_\mu$, with $e_\mu$ being the unit vector in the $\mu$-th direction [\,the dummy index $\mu$ matches the one being
summed over in \eqref{eqn: faddeev_define}\,].~Also, the quantities $G_\mu^{\,ab} (y)$ used in the definition \eqref{eqn: faddeev_define} are equal to
\begin{align}\label{eqn: g_faddeev_def}
G^{\,ab}_\mu (y) = \frac{1}{8} \, \text{Tr} \left( \{\sigma_a, \sigma_b\} \cdot \left[ \, U_\mu (y) \, + \, U_\mu^\dagger(y) \, \right] \right) \: ,
\end{align}
\noindent
where the curly brackets denote an anticommutator and $U_\mu (y)$ are the lattice links.~In writing down the relation \eqref{eqn: faddeev_define}, we've taken into account the fact
that we work in lattice Landau gauge, as otherwise there would be additional terms present.~To compute the Fourier transform of the inverse FP operator [\,i.\,e.~the quantity
\eqref{eqn: faddeev_foury}\,], we used the so-called plane-wave source method \cite{Cucchieri:1997dx}.~The matrix inversion itself is performed via a preconditioned conjugate
gradient (CG) algorithm:~the preconditioning procedure is described in detail in \cite{Sternbeck:2005tk}.~At each iterative CG step we orthogonalise the prospective solution with
respect to the constant subspace, since constant fields constitute zero modes of the FP matrix.~In the end, we use a total of 480 gauge field configurations for the evaluation of
the Green's functions \eqref{eqn: green_ghost}.~The vertex to be studied is extracted from $\Gamma_\mu^{\,abc}$ with a contraction
\begin{align}\label{eqn: final_vertex}
\Gamma_\mu = \frac{1}{6} \, \sum_{abc} \, f^{\,abc} \, \Gamma^{\,abc}_\mu \: .
\end{align}
The colour normalisation factor (1/6) stems from the $SU(N)$ identity $f^{\,ade} \, f^{\,bde} = N \, \delta^{\,ab}$, as applied to the particular case of $SU(2)$ gauge theory we
study here.~In lattice calculations, one is generally not really interested in correlators like \eqref{eqn: green_ghost}, but rather in the so-called amputated vertices, wherein
amputation includes (loosely speaking) dividing out the propagators pertaining to a function like \eqref{eqn: green_ghost} \cite{Parrinello:1994wd}.~Here, the procedure will be
completely ignored, and we shall work directly with \eqref{eqn: final_vertex}.~We do this because amputation can potentially increase the overall statistical uncertainty [\,the
amputated vertex inherits its errors from both \eqref{eqn: green_ghost} and the appropriate propagators\,], while changing none of the quantities we are mostly interested in here.~In
particular, it does not alter the tensor structure of the vertex, the symmetry properties of the dressing functions, nor the \textit{relative} values of vertex form factors,
i.\,e.~the ``sizes'' of vertex dressings relative to one another.~Hence our focus on working directly with \eqref{eqn: final_vertex}.
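The colour algebra underlying \eqref{eqn: final_vertex} can be verified directly. For $SU(2)$ the structure constants are $f^{\,abc} = \epsilon^{\,abc}$, and the sketch below (a stand-alone check with a toy vertex of pure $f^{\,abc}$ colour structure, not our production code) confirms both the identity $f^{\,ade} f^{\,bde} = N \delta^{\,ab}$ with $N = 2$ and the role of the $1/6$ factor:

```python
import numpy as np

# SU(2) structure constants: f^{abc} = epsilon^{abc} (Levi-Civita symbol).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Identity f^{ade} f^{bde} = N delta^{ab}, with N = 2:
K = np.einsum('ade,bde->ab', eps, eps)
assert np.allclose(K, 2.0 * np.eye(3))

# Toy vertex Gamma^{abc}_mu = f^{abc} g_mu: the contraction of
# eqn (final_vertex), with its 1/6 factor, recovers g_mu exactly.
g = np.array([0.4, -1.1, 0.7])
Gamma_abc = np.einsum('abc,m->abcm', eps, g)
Gamma = np.einsum('abc,abcm->m', eps, Gamma_abc) / 6.0
assert np.allclose(Gamma, g)
```

The factor $6 = N(N^2 - 1)$ follows from fully contracting the above identity over the remaining colour index.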
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.39\textwidth]{figures/vertex_recon_one.pdf}\includegraphics[width = 0.39\textwidth]{figures/vertex_recon_two.pdf}
\includegraphics[width = 0.39\textwidth]{figures/vertex_recon_three.pdf}\includegraphics[width = 0.39\textwidth]{figures/vertex_recon_four.pdf}
\includegraphics[width = 0.39\textwidth]{figures/vertex_recon_five.pdf}\includegraphics[width = 0.39\textwidth]{figures/vertex_recon_six.pdf}
\caption{Vertex reconstruction data for the correlator \eqref{eqn: final_vertex} on a 32$^3$ lattice, as functions of $|q| = \sqrt{q^2}$ (in lattice units).~We use the continuum
[\,equations \eqref{eqn: ghost_cont} and \eqref{eqn: ghost_project}\,] and lattice-modified [\,equations \eqref{eqn: ghost_lattice}, \eqref{eqn: project_general}, \eqref{eqn: matrix_vertex}
and \eqref{eqn: determinant}\,] tensor descriptions.~$\beta$ is the gauge coupling of \eqref{eqn: wilson_action}.}
\label{fig: ghost_recon}
\end{center}
\end{figure}
In our vertex reconstruction tests, the correlator \eqref{eqn: final_vertex} will be described by two different tensor parametrisations.~One is the continuum basis, given by
\begin{align}\label{eqn: ghost_cont}
\Gamma_\mu(q, p) = A(q,p) \, q_\mu + B(q,p) \, p_\mu \, ,
\end{align}
\noindent
with the appropriate projectors
\begin{align}\label{eqn: ghost_project}
P_\mu^{\,A} = \frac{-p^2 \, q_\mu + (q\cdot p) \, p_\mu}{(q\cdot p)^2 - q^2 \, p^2} \, , \qquad \qquad \: P_\mu^{\,B} = \frac{ (q\cdot p) \, q_\mu - q^2 \, p_\mu}{(q\cdot p)^2 - q^2 \, p^2} \, .
\end{align}
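That the projectors of \eqref{eqn: ghost_project} isolate the two dressings of \eqref{eqn: ghost_cont} is quickly confirmed numerically; the short stand-alone sketch below (with arbitrary, non-collinear test vectors of our own choosing) recovers the inserted values of $A$ and $B$ exactly:

```python
import numpy as np

# Generic (non-collinear) momenta and known form factors:
rng = np.random.default_rng(1)
q, p = rng.normal(size=3), rng.normal(size=3)
A, B = 1.3, -0.7
Gamma = A * q + B * p          # eqn (ghost_cont)

# Projectors of eqn (ghost_project):
den = q.dot(p)**2 - q.dot(q) * p.dot(p)
PA = (-p.dot(p) * q + q.dot(p) * p) / den
PB = (q.dot(p) * q - q.dot(q) * p) / den

assert abs(PA.dot(Gamma) - A) < 1e-10   # extracts A(q, p)
assert abs(PB.dot(Gamma) - B) < 1e-10   # extracts B(q, p)
```

The denominator vanishes only for collinear $q$ and $p$, which is the expected degenerate case for this two-element basis.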
The construction of the above projector functions is briefly discussed in Appendix \ref{sec: projectors}.~The other vertex decomposition to be studied here is (we will only
consider a three-dimensional theory and hence three basis elements will suffice):
\begin{align}\label{eqn: ghost_lattice}
\Gamma_\mu(q, p) = E(q,p) \, q_\mu + F(q,p) \, p_\mu \, + G(q,p) \, q^{\,3}_\mu \, .
\end{align}
The explicit expressions for the projectors of \eqref{eqn: ghost_lattice} are also provided in Appendix \ref{sec: projectors}.~Reconstruction results for the correlator \eqref{eqn:
final_vertex} are given in Figure \ref{fig: ghost_recon}, for the two above-mentioned tensor representations.~A brief glance at the corresponding data reveals a somewhat surprising
fact:~namely, apart from a few ``critical points'' in Figure \ref{fig: ghost_recon}\,d), all the deviations pertaining to the continuum basis are within a twenty percent range, an
arguably small discrepancy.~In fact, for most of the examined momentum points the continuum decomposition can be said to represent the vertex exactly, within somewhat large statistical
uncertainties.~To the best of our knowledge, there is no \textit{a priori} reason that this should happen.~Unlike the case of the lattice gluon propagator, the tensor decomposition of
the ghost-gluon correlator is not determined (at least not completely) by gauge-fixing.~For the vertex, this means that there are no obvious constraints on possible deviations from
the continuum tensor forms, and it is not clear why the function would show relative restraint in this regard.~It would be interesting to see if other lattice vector-valued quantities
like e.\,g.~the quark-gluon vertex, display similar tendencies (this would however be unexpected since fermions are usually significantly affected by finite spacing artifacts, see
e.\,g.~\cite{August:2013jia}).
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.41\textwidth]{figures/vert_dress_e.pdf}\includegraphics[width = 0.41\textwidth]{figures/vert_dress_f.pdf}
\includegraphics[width = 0.41\textwidth]{figures/vert_dress_g.pdf}\includegraphics[width = 0.41\textwidth]{figures/real_e.pdf}
\caption{Plots a) to c)\,:~Imaginary parts of lattice-modified vertex form factors [\,equations \eqref{eqn: ghost_lattice}, \eqref{eqn: project_general}, \eqref{eqn: matrix_vertex} and \eqref{eqn:
determinant}\,] on a 32$^3$ lattice, as functions of $|q| = \sqrt{q^2}$ (in lattice units).~Labels ``permutations'' and ``inversions'' refer to results averaged (respectively) over all
possible permutations and inversions of momentum components, as opposed to ``no averaging'' data obtained for a single kinematic set $(q,p)$.~Plot d)\,:~the real part of the vertex dressing
$E(q,p)$ of \eqref{eqn: ghost_lattice}.~$\beta$ is the gauge coupling of \eqref{eqn: wilson_action}.}
\label{fig: ghost_dress}
\end{center}
\end{figure}
In the absence of large lattice-induced modifications, it is also a bit challenging to precisely identify the special kinematic configurations.~In other words, from numerical results
it is hard to see where the continuum basis should describe the vertex exactly, due to linear dependencies among available tensor elements.~Arguably the most clear-cut examples of the
continuum description being sufficient are those in plots a),\,e) and f).~All the points in graph a) correspond to a situation with one diagonal vector [\,in this case $p = (1,\,1,\,
1)$\,] and the other one being almost on-axis [\,with $q = (1,\,1,\,m)$\,], which is a kinematic choice that was discussed in section \ref{sec: ghost_latt}.~In the same vein, the
rightmost points in plots e) and f) feature a diagonal momentum $q$, with the other kinematic variable being equal or almost equal to the two special cases $p = (m,\,0,\,m)$ and $p = (m,
\,m,\,m)$:~hence the apparent applicability of the basis decomposition \eqref{eqn: ghost_cont}, see arguments of section \ref{sec: ghost_latt}.~Another set of interesting configurations
are the first and the last momentum points in Figure \ref{fig: ghost_recon}\,c).~We concentrate on the first one, where vector $p$ almost has the form $p \approx (\pi,\,0,\,0)$, and $q$
is close to the situation $q \approx (0,\,\pi,\,0)$.~One observes that with vectors $p = (\pi,\,0,\,0)$ and $q = (0,\,\pi,\,0)$, all of the mixed tensor structures of the kind $\tau^{\,
rs}_\mu = p_\mu^{\,r}\,q_\mu^{\,s}$, with appropriate non-zero integers $r$ and $s$ [\,see equation \eqref{eqn: taus_lattice_vertex} for details\,] will vanish, because $p_\mu \, q_\mu = 0$
(no summation implied), for all values of the index $\mu$.~Therefore only the lattice vectors $p_\mu^{\,2k + 1}$ and $q_\mu^{\,2k + 1}$ remain (with $k \in \mathbb{N}_0$), which for the considered
kinematics are proportional to the continuum terms:~for example, $p_\mu^{\,3}$ equals $\pi^2 \, p_\mu$ and so on.~This brings about the seeming near-completeness of the continuum
description in the first momentum point in Figure \ref{fig: ghost_recon}\,c), and the same explanation holds for the rightmost kinematic choice in the plot.
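To make the cancellation fully explicit, consider a worked check of the above argument at the exact values $p = (\pi,\,0,\,0)$ and $q = (0,\,\pi,\,0)$: since the component-wise product obeys $p_\mu \, q_\mu = 0$ for every $\mu$, all mixed structures $\tau^{\,rs}_\mu$ with $r,s \geq 1$ vanish identically, while the surviving odd powers collapse onto the continuum vectors,
\begin{align}
p_\mu^{\,2k+1} = (\pi^{2k+1},\,0,\,0) = \pi^{2k}\,p_\mu \,, \qquad
q_\mu^{\,2k+1} = (0,\,\pi^{2k+1},\,0) = \pi^{2k}\,q_\mu \,,
\nonumber
\end{align}
for all $k \in \mathbb{N}_0$, so that no tensor structure beyond the continuum basis survives for this kinematic choice.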
The above interesting cases notwithstanding, most of the results in Figure \ref{fig: ghost_recon} are somewhat trivial, since they amount to a claim that a three-dimensional vector will
be described fully by a set of three linearly independent elements with an open vector index $\mu$.~However, what is not trivial is the claim that the form factors of the vertex basis
\eqref{eqn: ghost_lattice} will be hypercubic invariants.~In Figure \ref{fig: ghost_dress} we provide the results of a hypercubic symmetry test, similar to the one we went over for the
gluon propagator in Figure \ref{fig: aver_test_glue}.~Before we discuss the data points themselves, we need to clarify two things about the overall setup in the Figure.~First, instead of
the dressing functions of equation \eqref{eqn: ghost_lattice}, we plot the modified quantities $E', \, F'$ and $G'$, where $E'_{q,\,p} = |q|^3 \cdot E_{q,\,p}$, $F'_{q,\,p} = |q|^3 \cdot F_{
q,\,p}$, and $G'_{q,\,p} = |q|^5 \cdot G_{q,\,p}$\,:~here $|q|$ stands for $\sqrt{q^2}$.~This momentum-dependent alteration was done for presentation purposes, as without it the functions
$F$ and $G$ would feature very different scales in the IR and UV energy regions, making it hard to distinguish any details at relatively high momentum $q$.~The form factors $F$ and $G$
seemingly diverge in the IR because we work with an un-amputated Green's function:~the amputated version should have dressing functions which are far flatter at low momenta, see
e.\,g.~\cite{Cucchieri:2004sq, Cucchieri:2006tf, Maas:2007uv, Cucchieri:2008qm}.~The second important thing concerning Figure \ref{fig: ghost_dress} is that we mostly examine the imaginary
parts of (generally complex-valued) vertex dressings.~This is because the corresponding real parts anyway vanish upon averaging over inversions of momentum components, as illustrated in
plot \ref{fig: ghost_dress}\,d).~The reason that the real components are nullified upon parity-averaging is explained in Appendix \ref{sec: projectors}.
\begin{figure}[!t]
\begin{center}
\includegraphics[width = 0.39\textwidth]{figures/low_dress.pdf}\includegraphics[width = 0.39\textwidth]{figures/high_dress.pdf}
\includegraphics[width = 0.39\textwidth]{figures/ratio_low.pdf}\includegraphics[width = 0.39\textwidth]{figures/ratio_high.pdf}
\caption{Top panel:~Vertex form factors $F(q,p)$ and $q^2 \cdot G(q,p)$ [\,see equations \eqref{eqn: ghost_lattice}, \eqref{eqn: project_general}, \eqref{eqn: matrix_vertex} and \eqref{eqn:
determinant}\,], as evaluated on a $32^3$ lattice.~Bottom panel:~absolute value of the ratio $q^2 \cdot G(q,p)/F(q,p)$, for the same kinematics as in the upper two plots.~All points are
given as functions of $|q| = \sqrt{q^2}$, in lattice units.~$\beta$ is the gauge coupling of \eqref{eqn: wilson_action}.}
\label{fig: ratio_vertex}
\end{center}
\end{figure}
First and foremost, the plots from a) to c) in Figure \ref{fig: ghost_dress} confirm that (imaginary parts of) the dressing functions of \eqref{eqn: ghost_lattice} are invariant with respect
to permutations and inversions of momentum components, albeit within rather ``generous'' error bars.~The precision of the hypercubic test can be improved with better statistics, but since
the evaluations of the lattice ghost-gluon correlator are computationally far more expensive than those of the gluon propagator (or three-gluon vertex), for now we've decided to stay with
a relatively modest sample of 480 gauge-fixed field configurations.~Going back to Fig.~\ref{fig: ghost_dress} itself, one may also note that the function $G'_{q,\,p} = |q|^5 \cdot G_{q,\,
p}$ (which has the same mass dimension as the dressings $E'$ and $F'$), is substantially suppressed in the IR region, compared to its continuum counterparts.~This is in accordance with the
expectation that the vertex should be dominated by the continuum tensor structures, as one goes to lower values for both momenta $p$ and $q$.~However, what is arguably surprising about Figure
\ref{fig: ghost_dress}\,c) is that the values of $G'_{q,\,p}$ are consistent with zero (within big error bars) even when one of the components of $q$ is made to be relatively large.~This
is probably more a sign of insufficient statistics than an actual indication that $G'_{q,\,p}$ should be negligibly small in the `UV' region for momentum $q$.~Again, improved statistics
could lead to more accurate conclusions.~In this regard it ought to be mentioned that extracting a good-quality signal for lattice operators with high(er) mass dimensions is a non-trivial
task even with comparatively large configuration samples:~as an example of this, one may consult Figure 12 in \cite{Vujinovic:2018nqc}.
Even though the signal for the form factor $G_{q,\,p}$ is not particularly good in our current setup, we decided to check if changing the gauge coupling $\beta$ has any appreciable
impact on the relative size of this function, compared to the continuum dressing $F$ (which seems to be the dominant contribution, for kinematics in Fig.~\ref{fig: ghost_dress}).~The
results are shown in Figure \ref{fig: ratio_vertex}, with two choices for the $\beta$ coupling and two kinematic configurations, one with small and the other with relatively large values
for components of vectors $q$ and $p$.~In the Figure the functions $F_{q,p}$ and $q^2 \cdot G_{q,p}$, which have the same mass dimension, are compared both directly (upper panel) and as a
ratio $|q^2\cdot G/F|$ (lower panel), where $|.|$ denotes an absolute value.~As expected, the ratios $|q^2\cdot G/F|$ are considerably larger when both $q^2$ and $p^2$ have comparatively
big values, signalling that the lattice corrections to continuum basis decompositions become more prominent in the UV energy region.~Nonetheless, even in the UV the dressing $q^2 \cdot
G_{q,p}$ appears to be substantially smaller than $F_{q,p}$, which is a non-trivial result.~Concerning the lattice interaction parameter $\beta$, within statistical uncertainties it
has little to no impact on the relative sizes of the two form factors.~Again, this is probably more a consequence of modest statistics than a sign that the gauge coupling has no
influence on the ratios akin to $|q^2\cdot G/F|$.
Based on the data in Figures \ref{fig: ghost_recon} and \ref{fig: ratio_vertex}, it could be said that for most kinematic configurations on the lattice one may neglect the corrections
to continuum tensor bases, if a quantitatively semi-accurate description of the vertex is desired.~More precisely, the continuum tensor decomposition should arguably be sufficient if
one finds an uncertainty on the order of five to twenty percent tolerable in a given study.~Investigations where a more precise representation is desired ought to either consider only
special kinematic situations, or use the lattice-modified bases for the correlator $\Gamma_\mu$.~Besides improving the accuracy of the basis decomposition itself, an explicit evaluation
of the lattice-induced dressing functions like $G_{q,p}$ can be useful for testing the continuum extrapolation methods.~Namely, in the continuum a form factor like $G_{q,p}$ ought to
vanish, and so one expects the ratios akin to those in the bottom panel of Fig.~\ref{fig: ratio_vertex} to go to zero as the continuum limit of the theory is approached.~Vanishing of
these ratios (within error bars) can be used as one of the indicators that the said extrapolations were successful.~On this note, we want to point out that these procedures for the
ghost-gluon vertex (or indeed any functions beyond the propagators) are quite involved.~Since vertices generally feature multiple momentum variables, there are many more non-continuum
hypercubic invariants than just those shown in \eqref{eqn: hyper_scalars}, see e.\,g.~equations \eqref{eqn: more_invariants} and \eqref{eqn: matrix_vertex} or section 5 in \cite{deSoto:2007ht}.
This means that, in order to eliminate all of the non-continuum scalars of a given mass dimension, many data points are needed to perform the extrapolations with reasonable precision.~In
case of the ghost-gluon correlator the problem is further aggravated by the fact that the lattice Monte Carlo calculations of this function are numerically quite expensive, as mentioned
before.~Attempts to improve the situation might constitute an interesting, albeit difficult, research topic for future studies.
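As a compact way of phrasing this extrapolation diagnostic (the symbol $R$ is introduced here only for illustration and is not used elsewhere in this work), one may monitor the quantity
\begin{align}
R(q,p) \equiv \left| \frac{q^2 \, G_{q,p}}{F_{q,p}} \right|
\nonumber
\end{align}
at fixed physical momenta: since $G_{q,p}$ has no continuum counterpart, a successful continuum extrapolation should drive $R$ to zero within statistical uncertainties.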
\section{Conclusions and outlook}\label{sec: conclude}
In this paper we have presented a way of deriving the tensor bases for lattice vertex functions, such that the corresponding form factors are invariant under the hypercubic symmetry
transformations.~We've used the method to derive the most general possible (barring the finite volume artifacts) basis structures for lattice tensors of first and second rank, with
up to two independent momenta in the former, and a single kinematic vector-like variable in the latter case.~The lowest-order non-continuum variants of these decompositions were
applied to the ghost-gluon vertex and gluon propagator of lattice Monte Carlo simulations, resulting in a few interesting observations.~First, it was shown analytically and confirmed
numerically that there exist special momentum configurations wherein the tensor structures of both correlators reduce to their continuum form.~For the gluon propagator $D_{\mu\nu}(p)$
in three dimensions, special kinematic situations correspond to the on-axis momentum $p = (m, 0, 0)$, the diagonal vector $p = (m,m,m)$, and the `in-between' points $p = (m,m,0)$:~any
non-equivalent permutations of $p$ components are also allowed.~For the ghost-gluon vertex $\Gamma_\mu(q,\,p)$, all of the possible special combinations will not be given here, but we
merely state that one of these is the fully diagonal kinematic choice with $p = (m,m,m)$ and $q = (n,n,n)$.~The second notable result is that the rate at which the gluon propagator
approaches its continuum form in the infrared is dictated solely by the numerical gauge-fixing algorithm:~it is however questionable whether it is worthwhile to invest effort in improving
this situation, since for momenta $\sqrt{p^2} \leq 1$ (in lattice units), the finite spacing incurs a quantitative effect below five percent, see Figure \ref{fig: beta_glue}.~We also
commented on how this reflects on the lattice investigations of the anomalous magnetic moment of the muon and argued that discretisation artifacts are negligible at the relevant energy
scales.
As possible future applications of our framework, we've already discussed how it can be used to directly test some of the continuum extrapolation methods, but no actual results of this
kind were provided.~We leave such endeavours for future investigations.~It also remains to be seen how the symmetry-based lattice modifications may affect some other correlators of
interest, like the three-gluon or quark-gluon vertices.~While the three-gluon interaction kernel was briefly considered in section \ref{sec: gluon_lattice}, for now we've completely
ignored the spinor fields and related $n$-point functions.~This is because we are not yet certain about all of the possible generalisations in this regard, when going from continuum to
discretised spacetimes:~for single-momentum functions we expect hypercubic symmetry to allow for additional couplings apart from $\gamma \cdot p$, where $\gamma_\mu$ are the Euclidean
Dirac matrices and $p_\mu$ is the appropriate momentum vector.~This too constitutes an interesting research topic for the future, especially as one expects finite spacing artifacts
to be more pronounced for fermions than for bosons, see \cite{August:2013jia} as an example.~Finally, we are yet to check if the low-momentum discrepancy seen in Figure \ref{fig: beta_glue}
for the gluon propagator is a finite volume effect:~if this turns out to be true, then perhaps our formalism may allow one to quantify such deviations as well, or indeed any (un)expected
alterations with respect to the continuum tensor form, for lattice correlators of interest.
\section*{ACKNOWLEDGMENTS}
We gratefully acknowledge the support of the Austrian science fund FWF, under Schr\"odinger grant J3854-N36.~Parts of the numerical simulations were done on HPC clusters of the
University of Graz.
\section{INTRODUCTION}
Recently, we have shown that layered sine-Gordon type models
are probably not suitable for the description of Josephson-coupled
layered superconductors, because the linear, confining potential
that binds the vortices together cannot be obtained from the
interaction of the topological excitations of the model, no matter
how the interlayer interaction term is chosen~\cite{NaEtAl2007jpc}.
On the other hand, vortex dominated properties of high $T_{\rm c}$
layered superconductors and
other types of layered materials, e.g. superconducting
sandwiches, have already received a considerable amount of
attention (see, e.g., Refs.~\cite{Pe1964,dG1966,Ef1979,
ArKr1990,BuFe1990,FeGeLa1990,Cl1991,Fi1991,KoVi1991,Pu1993,
BlEtAl1994,MiKoCl2000,ClemPancake,GoHo2005,CoGeBl2005}), and the
intuitively obvious connection of sine-Gordon models to these
materials makes one wonder if at least, a restricted applicability
of the layered, field-theoretical model persists.
We also observe that there has recently been an increasing interest
in the literature
\cite{BeCaGi2007,BeCaGi2007magnetic,Ar2007}
in constructing sine--Gordon type field theoretical models in
order to understand better the vortex dynamics in layered
superconducting systems. Our aim in this paper is to follow this
route by constructing a two-dimensional multi-layer sine-Gordon
type model which can be used to describe the vortex behaviour of
magnetically as opposed to Josephson-coupled layered superconductors,
and to contrast and enhance our recent investigation~\cite{NaEtAl2007jpc}.
In a two-dimensional (2D) isolated superconducting thin film, the
Pearl vortices~\cite{Pe1964,dG1966} are naturally identified as
the topological excitations and can be considered as the charged
analogues of the vortices in a 2D superfluid which generate the
Kosterlitz--Thouless--Berezinski (KTB) phase transition~\cite{KTBPhase}.
The logarithmic interaction between the vortices of the superfluid
extends to infinity and as a consequence they remain bound below
the finite KTB transition temperature ($T^{\star}_{\rm KTB}$) and
dissociate above it~\cite{KTBPhase}. Since the Pearl vortices carry
electric charge, they always remain unbound due to the screening length
$\lambda_{\rm eff}$ generated by the electromagnetic field which cuts off
the logarithmic interaction \cite{PiVa2000,BlEtAl1994,ReEtAl1996} and
leads to the absence of any KTB phase transition. However, for realistic
finite 2D superconducting films where the lateral dimension of the
film can be smaller than the screening length $R_0 < \lambda_{\rm eff}$
the KTB transition can be restored \cite{PiVa2000,BlEtAl1994}.
This constitutes an intrinsic finite size effect.
In layered materials, the interlayer coupling modifies the 2D picture
and leads to new types of topological defects. If the layers are
coupled by Josephson coupling (like for many HTSC materials), the
vortex-antivortex pairs on the same layer interact with each other
via a logarithmic term for small distances but they feel a linear
confining potential for large distances (see e.g. \cite{BlEtAl1994}
and references therein). The vortices in neighboring layers always
interact via a linear potential which can couple them by forming
vortex loops, rings, or vortex ``strings'' piercing all layers.
If the layers are coupled by purely magnetic interaction (e.g. in
artificially produced superlattices where the Cooper pair tunneling
between the superconducting layers is suppressed by relatively large
insulating layers) the topological defects for a system which consists
of infinitely many layers are pancake vortices~\cite{Cl1991,ClemPancake}
which undergo a KTB phase transition at $T^{\star}_{\rm KTB}$.
As explained e.g.~in Ref.~\cite{CoGeBl2005}, the
Josephson coupling can be essentially neglected when
the confinement length, i.e.~the length scale at
which the linear confining potential due to the
Josephson coupling dominates over the logarithmic interaction
due to magnetic effects, is pushed beyond the effective
screening length for the logarithmic interaction among vortices.
This situation is present when the tunneling between the
superconducting layers is suppressed by relatively large insulating
layers, and a proposal for an experimental
realization has recently been given~\cite{CoGeBl2005}.
For a finite number $N$ of magnetically coupled layers, the Pearl type
vortex stack~\cite{Pe1964} is broken up into a number of coupled pancake
vortices of fractional flux~\cite{Pu1993,MiKoCl2000,ClemPancake}, and this
configuration undergoes a KTB-type phase transition at a layer-dependent
temperature $T^{(N)}_{\rm KTB}=T^{\star}_{\rm KTB}(1-N^{-1})$ which is
connected with the dissociation of the stack. This result has been
obtained on the basis of the entropy method first introduced in the
ground-breaking work~\cite{Pu1993}. Recently, a real space
renormalization-group (RG) analysis of the case $N=2$ has been performed
in Ref.~\cite{CoGeBl2005} using the dilute gas approximation.
A priori, it appears to be rather difficult to generalize this RG
analysis for $N>2$ layers.
In general, the Ginzburg--Landau (GL) theory \cite{Gi1952} provides
us with a good theoretical framework in which to investigate the
vortex dynamics in
thin films and in layered materials. Several equivalent models, like
field-theoretical, statistical spin models and a gas of topological
defects have also been used to consider the vortex properties
of films and layered systems. The 2D-GL, 2D-XY and the 2D Coulomb
gas models (see e.g. \cite{BlEtAl1994,NaEtAl2007jpc} and references
therein) are considered as the appropriate theoretical background for
the vortex dynamics of superfluid films. The field theoretical
counterpart is the 2D sine-Gordon (SG) model \cite{SG2D}. Both kinds
of these models belong to the same universality class and produce
the KTB phase transition. For superconducting films one has to
consider the 2D-GL model in the presence of electromagnetic
interactions \cite{BlEtAl1994} or the equivalent gas of topological
excitations, the 2D Yukawa gas \cite{PiVa2000}. The corresponding
field theory is the 2D-SG model with an explicit mass term, the
massive 2D-SG model \cite{PiVa2000}.
For Josephson-coupled layered superconductors in the case of very
large anisotropy one should investigate the layered GL model
including the Josephson coupling between the layers~\cite{BlEtAl1994}
(i.e. the Lawrence-Doniach model~\cite{LaDo1971}). In case of not
too large anisotropy one can use the anisotropic, continuous GL
theory~\cite{Gi1952,BlEtAl1994,ChDuGu1995} which can be mapped
onto the isotropic GL model by an appropriate rescaling
method~\cite{RESCALE}. The corresponding spin model is the
3D-XY model~\cite{3DXY} and the equivalent gases of topological
excitations are the layered vortex~\cite{Pi1995prb} or
vortex-loop~\cite{3DXY} gases. There are attempts in the literature
to construct the field theoretical counterpart of the isotropic
model~\cite{Sa1978}. In case of strong anisotropy, the layered
sine--Gordon (SG) model~\cite{LSGPierson} has been proposed as a
candidate model where the interlayer interaction between the
topological defects has been described by a mass matrix
which couples the SG fields
\begin{align}
\hf \underline{\varphi}^{\rm T} \, \underline{\underline m}^2
\, \underline{\varphi} \equiv
\sum_{n = 1}^{N-1} \frac{J}{2} (\varphi_{n+1}-\varphi_n)^2
\nonumber
\end{align}
where $\underline{\varphi}=\left(\varphi_1, \dots, \varphi_{N}\right)$
and $\varphi_n$ $(n=1,\ldots,N)$ are one-component scalar fields.
Recently, we showed in Ref.~\cite{NaEtAl2007jpc} that the layered SG
model with the above mass matrix is not appropriate for the description
of vortex dynamics of Josephson coupled layered superconductors.
In case of purely magnetically coupled layered systems, the
layered GL model has to be used but excluding the Josephson
coupling. Although the interaction potentials between the topological
defects of magnetically coupled layered systems are given in
Refs.~\cite{CoGeBl2005,ClemPancake,BuFe1990,KoVi1991}, no field
theoretical model has been proposed for the description of vortex
dynamics in a finite system of magnetically coupled superconductors.
Here, our aim is to open a new platform for considering the vortex
dynamics of magnetically coupled layered systems by constructing a
multi-layer sine--Gordon (MLSG) type field theoretical model where the
two-dimensional sine--Gordon (2DSG) fields characterizing the layers
are coupled by an appropriate general mass matrix,
\begin{align}
\hf \underline{\varphi}^{\rm T} \, {\underline{\underline M}}^2
\underline{\varphi} \equiv
\hf G \left(\sum_{n=1}^N \varphi_n \right)^2 \,.
\nonumber
\end{align}
By the exact mapping of the MLSG model onto an equivalent gas of
topological defects, we recover the interaction potential given in
Refs.~\cite{BuFe1990,KoVi1991,ClemPancake,CoGeBl2005}
and, hence, prove the applicability of the model. We analyse the phase
structure of the MLSG model by a differential renormalization group
(RG) method performed in momentum space, which is in general easier
to perform than that in real space, and determine the layer-dependence of
$T^{(N)}_{\rm KTB}$. In our field theoretical RG approach, the RG
flow can be calculated in one step for an arbitrary number of layers,
and the study of the intrinsic finite size effect of thin film
superconductors \cite{BlEtAl1994,PiVa2000} and of finite layered
systems is facilitated.
This paper is organized as follows. In Sec.~\ref{sec2}, we define
the multi-layer sine--Gordon model and show by its exact mapping
onto the equivalent gas of topological excitations that it is
suitable to describe the vortex dominated properties of magnetically
coupled layered superconductors. In Sec.~\ref{sec3}, a renormalization
group analysis of the multi-layer sine--Gordon model is
performed within the framework of the Wegner--Houghton renormalization
group method, in momentum space for general $N$,
and with a solution that spans the entire domain from the
ultraviolet (UV) to the infrared (IR).
The layer-number dependence of the critical temperature
of the multi-layer sine--Gordon model is determined by using the
mass-corrected linearized RG flow. Conclusions are reserved
for Sec.~\ref{sec4}.
\section{Multi-Layer Sine--Gordon Model}
\label{sec2}
The multi-layer sine--Gordon (MLSG) model consists of $N$ coupled
two-dimensional sine--Gordon (2D-SG) models of identical ``frequency''
$b$, each of which
corresponds to a single layer described by the scalar fields
$\varphi_n$ $(n=1,2,\ldots,N)$. Its Euclidean bare action
(we imply here the sum over $\mu = 1,2$)
\begin{align}
\label{mlsg}
S[\underline{\varphi}] = \int {\rm d}^2 r
\biggl[ \hf (\partial_{\mu} \underline\varphi)^{\rm T}
(\partial_{\mu} \underline\varphi) +
V( \underline\varphi) \biggr]
\end{align}
contains the interaction terms
\begin{align}
V(\underline{\varphi}) =
\hf \underline\varphi^{\rm T} \, {\underline{\underline M}}^2
\underline\varphi - \sum_{n=1}^N y_n \cos (b \, \varphi_n)
\label{pepot}
\end{align}
with the $O(N)$ multiplet $\underline{\varphi}=
\left(\varphi_{1}, \dots, \varphi_{N}\right)$.
We can choose the fugacities $y_n > 0$ without loss
of generality, ensuring that the
zero-field configuration is a local minimum of the action
(see Chap.~31 of Ref.~\cite{ZJ1996}).
The mass-matrix describes the interaction between the layers
and is chosen here to be of the form
\begin{align}
\label{mass_matrix}
\underline{\varphi}^{\rm T} \, {\underline{\underline M}}^2
\underline{\varphi}
= G \left(\sum_{n=1}^N a_n \varphi_n \right)^2 \,,
\end{align}
where $G$ is the strength of the interlayer interactions,
and the $a_n$ are free parameters.
As will be explained below, any choice with $a^2_n = 1$ for all $n=1,\ldots,N$
reproduces exactly the same layer-dependence of $T^{(N)}_{\rm KTB}$
as found in Refs.~\cite{Pu1993,CoGeBl2005}.
In this case, the layers can be assumed
to be equivalent and, as a consequence, the fugacity $y_n \equiv y$
for $n=1,2,\ldots,N$.
The most obvious choice fulfilling $a^2_n = 1$,
namely $a_n =1$ for all $n=1,\ldots,N$,
reproduces the interlayer interaction between
pancake vortices given, e.g.,
in Eq.~(89) of Ref.~\cite{ClemPancake},
and we will restrict our attention to this choice in the
following.
The MLSG model
has a discrete symmetry under the shift of the field variable
$\underline{\varphi} \to \underline{\varphi} + \underline{\Delta}$
with $\underline{\Delta} = \left( l_1 2\pi/b, \dots, l_N 2\pi/b \right)$
where the ``last'' integer $l_N = -\sum_{n=1}^{N-1} l_n$ is fixed
but all the other
integers $l_n$ ($n=1,\ldots,N-1$) can be chosen freely (to see this,
one just diagonalizes the mass-matrix).
The single non-vanishing mass eigenvalue is
$M_N = \sqrt{N G}$, and hence the model
possesses $N-1$ massless 2D-SG fields and a single massive 2D-SG
field. After the diagonalization of the mass matrix by a
suitable rotation of the fields,
the model thus is invariant under the independent separate shifts of
$N-1$ massless fields, but the explicit mass term of the single
massive mode breaks the periodicity in the ``massive'' direction
of the $N$-dimensional internal space.
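The spectrum quoted above can be verified directly. With $a_n = 1$, the mass matrix of Eq.~(\ref{mass_matrix}) has the constant entries $({\underline{\underline M}}^2)_{nm} = G$ for all $n,m = 1,\ldots,N$. Acting on the uniform unit vector $\underline{u} = (1,\ldots,1)/\sqrt{N}$ one finds
\begin{align}
\left( {\underline{\underline M}}^2 \, \underline{u} \right)_n
= G \sum_{m=1}^{N} u_m = N G \, u_n \,,
\nonumber
\end{align}
so the massive mode indeed has $M_N = \sqrt{N G}$, while any vector $\underline{v}$ with $\sum_{n} v_n = 0$ is annihilated by ${\underline{\underline M}}^2$, which yields the $(N-1)$-fold degenerate massless modes.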
One crucial observation is that the partition function of the MLSG model,
whose path-integral formulation reads
\begin{equation}
\label{Z_mlsg}
{\cal Z} = {\mathcal N} \int {\mathcal D} [\underline{\varphi}]
\exp{\left(-S[\underline{\varphi}]\right)},
\end{equation}
can be identically rewritten in terms of an equivalent gas of
topological excitations (vortices), whose interaction potentials
are exactly equivalent to those of
Refs.~\cite{BuFe1990,KoVi1991,CoGeBl2005}.
This finding constitutes a generalization of known connections of the
$d$-dimensional globally neutral Coulomb gas and the $d$-dimensional
sine--Gordon model, as discussed in Chap.~32 of Ref.~\cite{ZJ1996},
and can be seen as follows. In Eq.~(\ref{mlsg}), one artificially introduces
the vectors $\underline{f}_n \equiv \left( \delta_{1n}, \dots, \delta_{Nn} \right)$
as projection operators to rewrite $\sum_{n=1}^N \cos(b \, \varphi_n)
= \sum_{n=1}^N \cos(b \, \underline{f}_n^{\rm T} \underline{\varphi})$,
one expands the periodic piece of the partition function
(\ref{Z_mlsg}) in a Taylor series, and one introduces the integer-valued
charges $\sigma_\alpha = \pm 1$ of the topological defects which are subject
to the neutrality condition $\sum_{\alpha=1}^{2\nu} \sigma_\alpha = 0$.
This leads to the intermediate result,
\begin{align}
\label{step1}
& {\cal Z} = {\mathcal N} \sum_{\nu =0}^\infty \frac{(y/2)^{2\nu}}{(2\nu)!}
\prod_{i=1}^{2 \nu} \left( \sum_{n_i = 1}^N \int {\rm d}^2 r_i \right)
\sum_{\begin{array}{c}
\scriptstyle \sigma_1, \dots, \sigma_{\nu} = \pm 1 \\
\scriptstyle \sigma_{\nu+\gamma} = -\sigma_\gamma , \;
\gamma \in \{ 1, \dots \nu \} \end{array}}
\\
& \times
\int {\mathcal D}[\underline{\varphi}]
\exp{\left[-\int\!\! {\rm d}^2 r \, \frac{1}{2}
\underline\varphi^{\rm T} \,
(-\partial^2 + {\underline{\underline M}}^2)
\underline\varphi
+ {\rm i} \, b \, {\underline\rho}^{\rm T} \, \underline\varphi \right]},
\nonumber
\end{align}
where $\partial^2 \equiv \partial_\mu \partial_\mu$,
and
\begin{equation}
{\underline \rho}(r) = \sum_{\alpha =1}^{2\nu}
\sigma_{\alpha} \delta(r - r_{\alpha})
\underline{f}_{n_{\alpha}}\,.
\end{equation}
We have thus placed the $2 \nu$ vortices, labeled by the index $i$,
onto the $N$ layers, with vortex $i$ being placed onto the layer $n_i$.
The Gaussian integration in Eq.~(\ref{step1}) can now be performed
easily, and the inversion of the matrix
$-\partial^2 + {\underline{\underline M}}^2$
can be accomplished by going to momentum space. Via a subsequent
back-transformation to coordinate space, we finally arrive at the
result
\begin{align}
\label{gte}
& {\cal Z} =
\sum_{\nu =0}^\infty \frac{(y/2)^{2\nu}}{(2\nu)!}
\left(\prod_{i=1}^{2\nu}
\sum_{n_{i}=1}^N \int {\rm d}^2 r_i \right)
\sum_{\begin{array}{c}
\scriptstyle \sigma_1, \dots, \sigma_{\nu} = \pm 1 \\
\scriptstyle \sigma_{\nu+\gamma} = -\sigma_\gamma , \;
\gamma \in \{ 1, \dots \nu \} \end{array}}
\\
& \exp{\left[-\frac{b^2}{2}
\sum_{\alpha,\gamma=1}^{2\nu}
\sigma_{\alpha} \sigma_{\gamma}
\left( \delta_{n_{\alpha}n_{\gamma}} A_{\alpha \gamma} +
(1 - \delta_{n_{\alpha}n_{\gamma}}) B_{\alpha \gamma}\right)
\right]} \,,
\nonumber
\end{align}
where $\delta_{nm}$ represents the Kronecker delta.
Equation (\ref{gte}) implies that
the parameter $b^2$ in Eq.~(\ref{pepot}) can naturally be identified
as being proportional to the inverse of the temperature of the gas,
$b^2 \propto T^{-1}$. The potentials
$A_{\alpha \gamma} \equiv A(\vec{r}_\alpha, \vec{r}_\gamma)$ and
$B_{\alpha \gamma} \equiv B(\vec{r}_\alpha, \vec{r}_\gamma)$
are the intralayer and interlayer
interaction potentials, respectively. They read
\begin{subequations}
\label{pot_limit}
\begin{align}
\label{A}
A_{\alpha \, \gamma} =& -\frac{1}{2\pi} \frac{N-1}{N}
\ln{\left(\frac{r_{\alpha \gamma}}{a}\right)}
+ \frac{1}{2\pi} \frac{1}{N}
\left[K_0\left(\frac{r_{\alpha \gamma}}{\lambda_{\rm eff}}\right)
- K_0\left(\frac{a}{\lambda_{\rm eff}}\right)\right]
\nonumber\\[2ex]
=& \left\{ \begin{array}{cc}
-\frac{1}{2\pi} \ln\left(\frac{r_{\alpha \gamma}}{a}\right) &
\quad (r_{\alpha \gamma} \ll \lambda_{\rm eff}) \\[2ex]
-\frac{1}{2\pi} \left[\frac{N-1}{N}
\ln\left(\frac{r_{\alpha \gamma}}{\lambda_{\rm eff}}\right)
-\ln\left(\frac{\lambda_{\rm eff}}{a}\right)\right] &
\quad (r_{\alpha \gamma} \gg \lambda_{\rm eff})
\end{array} \right.
\end{align}
where $r_{\alpha \gamma} = \vert \vec{r}_{\alpha} - \vec{r}_{\gamma}\vert$,
and
\begin{align}
\label{B}
B_{\alpha \, \gamma} =& \frac{1}{2\pi}
\frac{1}{N}
\left(\ln{\left(\frac{r_{\alpha \gamma}}{a}\right)}
+ \left[K_0\left(\frac{r_{\alpha \gamma}}{\lambda_{\rm eff}}\right)
- K_0\left(\frac{a}{\lambda_{\rm eff}}\right)\right]
\right)
\nonumber\\[2ex]
=& \left\{ \begin{array}{cc}
0 & \quad (r_{\alpha \gamma} \ll \lambda_{\rm eff})
\\[2ex]
\frac{1}{2\pi} \frac{1}{N}
\ln\left(\frac{r_{\alpha \gamma}}{\lambda_{\rm eff}}\right) &
\quad (r_{\alpha \gamma} \gg \lambda_{\rm eff})
\end{array} \right. .
\end{align}
\end{subequations}
$K_0(r)$ stands for the modified Bessel function of the
second kind, $a$ is the lattice spacing, which serves as a UV
cutoff, and $\lambda_{\rm eff}$ is an effective screening length,
given by the inverse of the non-zero
eigenvalue of the mass matrix (\ref{mass_matrix}),
$\lambda^{-1}_{\rm eff} = M_N = \sqrt{N G}$.
The relation $K_0(r) = -\ln(r) + \ln 2 -\gamma_{\rm E} + {\cal O}(r)$
has been used in the derivation of the asymptotic short- and
long-range forms in Eqs.~(\ref{A}) and~(\ref{B}), and only
the leading logarithmic terms are indicated
($\gamma_{\rm E} = 0.577216\dots$ is Euler's constant).
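As a numerical cross-check of these limits, the potentials of Eqs.~(\ref{A}) and (\ref{B}) can be evaluated directly. The following is a sketch with illustrative parameter values; the helper names and the quadrature-based evaluation of $K_0$ are ours, not part of the model:

```python
import numpy as np

def bessel_k0(x):
    # K_0(x) = integral_0^infty exp(-x cosh t) dt (integral representation)
    t = np.linspace(0.0, 30.0, 300001)
    return np.trapz(np.exp(-x * np.cosh(t)), t)

def A_intra(r, a, lam, N):
    """Intralayer potential of Eq. (A)."""
    return (-(N - 1) / N * np.log(r / a)
            + (bessel_k0(r / lam) - bessel_k0(a / lam)) / N) / (2 * np.pi)

def B_inter(r, a, lam, N):
    """Interlayer potential of Eq. (B)."""
    return (np.log(r / a)
            + bessel_k0(r / lam) - bessel_k0(a / lam)) / (2 * np.pi * N)

# short-distance regime r << lambda_eff: A -> -ln(r/a)/(2 pi) and B -> 0,
# i.e. full-flux 2D-SG behaviour and effectively decoupled layers
a, lam, N, r = 1e-4, 1.0, 4, 1e-3
print(A_intra(r, a, lam, N), -np.log(r / a) / (2 * np.pi))
print(B_inter(r, a, lam, N))
```

Note that for $r$ and $a$ both far below $\lambda_{\rm eff}$ the constants $\ln 2$ and $\gamma_{\rm E}$ cancel between the two $K_0$ terms, which is why the short-distance limits hold without additive constants.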
The interaction potentials (\ref{pot_limit}) have the same asymptotic
behavior as the vortices of magnetically coupled superconducting
layers \cite{BuFe1990,KoVi1991,ClemPancake,CoGeBl2005}
[for the intralayer and interlayer interactions see
Eqs.~(86) and~(89) of Ref.~\cite{ClemPancake}, under the
substitution $\Lambda_D =\Lambda_s /N$].
This observation shows
that the MLSG field theory is suitable to describe the vortex dynamics
in magnetically coupled layered systems. A few remarks are now in order.
(i) The prefactor $(N-1)/N$ appearing in the intralayer interaction
indicates the existence of vortices with fractional flux in the MLSG
model.
(ii) For small distances $r\ll\lambda_{\rm eff}$, the interlayer
interaction $B$ disappears and the intralayer potential $A$ has the
same logarithmic behaviour with full flux as that of the pure 2D-SG
model (which belongs to the same universality class as the
2D-XY model and the 2D Coulomb gas). Therefore, the MLSG model for
small distances behaves as an uncoupled system of 2D-SG models.
(iii) For the case $N=1$, there exists no interlayer interaction,
and the intralayer potential is logarithmic for small distances and
vanishes for large distances.
Consequently, there are always free, non-interacting vortices
in the model which push the KTB transition temperature to zero.
The MLSG model for a single layer reduces to the massive 2D-SG model
discussed in
Refs.~\cite{BlEtAl1994,PiVa2000,ReEtAl1996,BeCaGi2007magnetic}
where
the periodicity in the internal space is broken and the KTB transition
is absent.
(iv) In the bulk limit $N\to\infty$, the effective screening length
and the interlayer interaction disappear ($\lambda_{\rm eff}\to 0$,
$B_{\alpha \gamma}\to 0$), and the intralayer potential has a
logarithmic behaviour with full flux, thus the MLSG model
predicts the same behaviour as that of the pure 2DSG model
with $T^{(\infty)}_{\rm KTB} = T^{\star}_{\rm KTB}$.
Alternatively, one may observe that for $N\to \infty$,
the effect of the infinitely many zero-mass modes dominates over
the effect of the single remaining massive mode entirely,
leading to a constant limit for the transition temperature
as $N \to \infty$.
For $N=2$ layers, the MLSG model [with the choice $a_n = (-1)^{n+1}$]
has been proposed to describe the
vortex properties of Josephson coupled layered superconductors
\cite{LSGPierson}. However, the mapping discussed above indicates that
any layered sine--Gordon model, whatever the mass matrix, can be
mapped onto an equivalent gas of topological excitations, whose
interaction potentials are determined by the inversion of a two-dimensional
propagator of the form $-\partial^2 + \underline{\underline{M^2}}$.
Any such propagator, upon backtransformation to coordinate space, can
only lead to a logarithmic behaviour for the vortex interactions at
small and large distances, and consequently, cannot possibly reproduce
the confining linear long-range intralayer interaction given in Eq.~(8.42)
of Ref.~\cite{BlEtAl1994} and in Ref.~\cite{LSGPierson}. The
candidate~\cite{LSGPierson} for a mass matrix
$\underline{\varphi}^{\rm T} \, {\underline{\underline m}}^2
\underline{\varphi} = J\,\sum_{i=1}^{N-1} (\varphi_i - \varphi_{i+1})^2 $
has also been discussed in
Refs.~\cite{JeNaZJ2006,Na2006,NaEtAl2007jpc}. This candidate
interaction is inspired by a discretization
of the anisotropic 3D-SG model~\cite{LSG3D},
but it cannot reproduce the linear
confining potential needed for the description of the Josephson-coupled
case~\cite{NaEtAl2007jpc}. The layer-dependent transition temperature of
this model is $T_{\rm c} \propto N^{-1}$ and decreases with the number of
layers, and for general $N$, the mass matrix ${\underline{\underline m}}^2$
also leads to different short- and long-range intralayer potentials as
compared to Eq.~(\ref{pot_limit}) and cannot be used for the description
of magnetically coupled $N$-layer systems, either~\cite{NaEtAl2007jpc}.
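The logarithmic nature of these interactions can be made explicit: in two dimensions, inverting a propagator of the above form for a single mass eigenvalue $M$ is a standard Fourier integral (quoted here for illustration),

```latex
\int \frac{d^2 k}{(2\pi)^2}\,
\frac{e^{i \vec{k} \cdot \vec{r}}}{k^2 + M^2}
= \frac{1}{2\pi}\, K_0(M r)
\simeq -\frac{1}{2\pi}
\left[\ln\left(\frac{M r}{2}\right) + \gamma_{\rm E}\right]
\qquad (M r \ll 1),
```

so each eigenvalue of the mass matrix contributes either a short-ranged $K_0$ or, for a vanishing eigenvalue, a logarithm at all distances; no combination of such terms can generate a linearly confining potential.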
Finally, let us note that a suitable model for the Josephson-coupled
layered system could probably be constructed if the interlayer interaction
term were represented by a compact variable, i.e., if one coupled the phase
(compact) fields between the 2D planes~\cite{BeCaGi2007} rather than the
dual fields.
\section{RG Analysis of the Multi-Layer Sine--Gordon Model}
\label{sec3}
The above statements on the MLSG model are based on the bare action
where the coupling parameters of the theory are fixed. However, only
a rigorous RG analysis enables one to construct the phase diagram in
a reliable manner.
For $N=2$ layers, the phase structure and the vortex properties of
the magnetically coupled layered system have already been considered
with a real space RG approach \cite{CoGeBl2005} using a two-stage
procedure, and a momentum space RG method \cite{LSGPierson}
on the basis of the dilute gas approximation has also been used.
\begin{figure}[htb]
\begin{center}
\begin{minipage}{14cm}
\begin{center}
\epsfig{file=fig1.eps,width=0.6\linewidth}
\caption{\label{fig1}In the left panels, the
mass-corrected scaling [see Eq.~(\ref{sol})] of the
dimensionless Fourier amplitude $\tilde y$ of the MLSG
model for $N=1$ (top) and for $N=2$ (bottom)
layers is represented graphically for
$b^2 = 4\pi, 8\pi, 12\pi, 16\pi, 20\pi$ (from top to bottom
in each panel, see the dashed curves).
We use $G=0.0001$ in order to
have the UV and IR regimes conveniently located on the
plots, which start at the UV scale $\Lambda = 1$. The dotted line
is the extrapolation of the UV ($k\gg M_N$) scaling to the
IR ($k\ll M_N$) region.
For $N=1$ layer, $\tilde y$ is always
relevant ($\sim k^{-2}$) in the IR. For $N=2$ layers,
$\tilde y$ is relevant for $b^2 < 16\pi$ in the IR
and irrelevant for $b^2 > 16\pi$. Thus, the 2-layer MLSG
model undergoes a KTB type phase transition at $b_c^2 = 16\pi$.
In general, the KTB transition temperature of the MLSG model
is layer-dependent $T^{(N)}_{\rm KTB} = (1-N^{-1})T^{\star}_{\rm KTB}$.
If the system has a finite volume ($R<\infty$), the
thermodynamic limit cannot be taken automatically and, as a
simple realization of the finite size effect, a momentum scale
$k_{\rm min}\sim 1/R$ appears in the model. For $R<\lambda_{\rm eff}$
(i.e. $k_{\rm min} > \sqrt{N G}=M_N$), the phase structure of the
MLSG model is determined by the UV scaling which predicts a KTB
type phase transition at $b_c^2 = 8\pi$ for any number of layers.
}
\end{center}
\end{minipage}
\end{center}
\end{figure}
Here, we apply a generalized multi-layer,
multi-field Wegner--Houghton (WH) RG
analysis developed by us for the layered SG type models
\cite{NaNaSaJe2005,JeNaZJ2006,NaSa2006,Na2006,NaEtAl2007jpc}
to the MLSG model with an arbitrary
number of layers. In the construction of the WH--RG equation,
the blocking transformations~\cite{Wi1971} are realized by
a successive elimination of the field fluctuations in the
direction of decreasing momenta, in infinitesimal momentum
shells, about the moving sharp momentum cutoff $k$
(see Ref.~\cite{WeHo1973}).
The physical effects of the eliminated modes are transferred
to the scale-dependences of the coupling constants [e.g.,
$y \equiv y(k)$]. The WH-RG equation in the local potential
approximation (LPA) for the MLSG model with $N$ layers reads
\begin{equation}
\label{wh_rg_N}
(2+k \, \partial_k) \,\, \tilde V_k = - \frac{1}{4\pi}
\ln \left[ \mr{det}
\left(\delta_{ij} + \partial_{\varphi_i}\partial_{\varphi_j}
\tilde V_k \right) \right],
\end{equation}
where we have defined the dimensionless blocked potential as
$\tilde V_k \equiv k^{-2} \, V_k$. We make the following
ansatz for the blocked potential,
\begin{align}
\label{ansatz}
\tilde V_k =
\hf {\tilde G}_{k}
\left(\sum_{n=1}^N \varphi_n \right)^2
+ {\tilde U}_{k}(\varphi_1, \ldots, \varphi_N),
\end{align}
where the scale-dependence is encoded in the dimensionless
coupling constants $\tilde y(k)$ and $\tilde G(k)$ which are all
related to their dimensionful (no tilde) counterparts by a
relative factor $k^{-2}$. Inserting the ansatz (\ref{ansatz})
into Eq.~(\ref{wh_rg_N}), the right hand side becomes periodic,
while the left-hand side contains both periodic and non-periodic
parts~\cite{NaNaSaJe2005,Na2006}.
In order to go beyond the dilute-gas approximation,
we calculate a mass-corrected
UV approximation of Eq.~(\ref{wh_rg_N})
by expanding the logarithm of
the determinant in the right hand side of Eq.~(\ref{wh_rg_N})
in powers of the periodic part of the blocked potential.
Because this procedure has been discussed at length in
Refs.~\cite{NaNaSaJe2005,JeNaZJ2006,Na2006},
we immediately state the result [cf.~Eq.~(43) of
Ref.~\cite{NaNaSaJe2005}],
\begin{equation}
\label{sol}
{\tilde y(k)} = {\tilde y}(\Lambda)
\left(\frac{k^2 + N \, G}{\Lambda^2 + N \, G}\right)^{\frac{b^2}{8\pi N}}
\left(\frac{k}{\Lambda}\right)^{\frac{(N-1)b^2}{4\pi N} - 2},
\end{equation}
with the initial value ${\tilde y}(\Lambda)$ at the UV
cutoff $k = \Lambda$. Let us note that in our RG approach the
dimensionful $G$ and $b^2$ are scale-independent
constants. We can immediately read off from Eq.~(\ref{sol})
the critical value
$b^2_{c} =8\pi/(1-N^{-1})$ and the corresponding KTB temperature
$T^{(N)}_{\rm KTB} \sim b^{-2}_c = T^{\star}_{\rm KTB} (1-N^{-1})$.
The fugacity $\tilde y$ is irrelevant (decreasing) for
$b^2>b^2_{c}$ and relevant (increasing) for $b^2<b^2_{c}$
for decreasing scale $k$ (see Fig.~\ref{fig1}).
Our RG approach provides a consistent scheme to calculate higher
order corrections to the linearization in
the periodic part of the blocked potential, which is
equivalent to higher-order corrections to the dilute-gas
approximation. For $N=1$, the mass-corrected UV scaling law~(\ref{sol}),
obtained for the massive SG model, recovers the scaling
obtained in Refs.~\cite{PiVa2000,IcMu1994} (no phase transition).
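The scaling law~(\ref{sol}) and the resulting critical coupling can be reproduced numerically. The following sketch uses illustrative function names and parameter values:

```python
import numpy as np

def y_tilde(k, N, b2, G=1e-4, Lam=1.0, y0=1.0):
    """Mass-corrected UV scaling of the dimensionless fugacity, Eq. (sol)."""
    return (y0 * ((k**2 + N * G) / (Lam**2 + N * G)) ** (b2 / (8 * np.pi * N))
            * (k / Lam) ** ((N - 1) * b2 / (4 * np.pi * N) - 2.0))

def b2_critical(N):
    """Critical coupling separating the two phases (diverges for N = 1)."""
    return 8 * np.pi / (1.0 - 1.0 / N)

# N = 2: b_c^2 = 16 pi; in the IR (k << M_N = sqrt(N G)) the fugacity grows
# for b^2 < b_c^2 (relevant) and decays for b^2 > b_c^2 (irrelevant)
print(b2_critical(2) / np.pi)   # -> 16.0
print(y_tilde(1e-4, 2, 8 * np.pi), y_tilde(1e-4, 2, 20 * np.pi))
```

For $N=1$ the exponent of the second factor is $-2$ for any $b^2$, so the fugacity is always relevant in the IR, in line with the absence of a phase transition for the massive SG model.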
\section{Conclusion and Summary}
\label{sec4}
In conclusion, we propose the multi-layer
sine--Gordon (MLSG) Lagrangian as a quantum field theoretical model
for the vortex properties of magnetically coupled
layered superconductors. Note that the MLSG model cannot be
assumed to belong to the same universality class as the
layered Ginzburg--Landau model~\cite{NaEtAl2007jpc},
which entails a discretization of the Ginzburg--Landau model
in one of the spatial directions.
The mapping of the MLSG model onto the gas
of topological defects is used to clarify the suitability of the
MLSG model to magnetically coupled layered systems. We investigate
the scaling laws for the MLSG model using a functional formulation of
the Wegner-Houghton RG approach in the local potential approximation.
The linearization of the RG flow in the periodic part of the blocked
potential (and not in the full potential) enables us to incorporate
the effect of the interlayer interaction into the mass-corrected
UV scaling laws, which improve the dilute gas approximation.
The mass-corrected Wegner--Houghton UV scaling laws indicate
that for general interlayer interactions of the type of
Eqs.~(\ref{mass_matrix}), one finds two phases separated by the
critical value $b_c^2 = 8 \pi/(1 - N^{-1})$, where $N$ is the
number of layers. This determines the layer-dependence of the KTB
transition temperature
$T^{(N)}_{\rm KTB} = T^{\star}_{\rm KTB} \, (1 - N^{-1})$
in full agreement with Refs.~\cite{Pu1993,CoGeBl2005}.
Further investigations of the MLSG model (e.g., beyond
the local potential approximation) and other
generalizations of the momentum-space RG studies presented here
could perhaps enrich our understanding of layered structures.
\section*{Acknowledgments}
The authors acknowledge insightful discussions
with Professor J. Zinn--Justin and thank Professor
J. R. Clem for valuable remarks.
I.N.~would like to acknowledge the kind hospitality of
Max--Planck--Institut, Heidelberg on the occasion of a number of
guest researcher appointments, and U.D.J.~acknowledges support from
the Deutsche Forschungsgemeinschaft with the
Heisenberg program (contract JE285/3--1).
S.N. acknowledges support via the Oveges program of the National
Office for Research and Technology of Hungary and support by
the Universitas Foundation, Debrecen.
The epoch of reionization was a landmark event in the history of the Universe when the cumulative number of ionizing photons escaping from the first stars, galaxies, and quasars surpassed the number of hydrogen atoms in the intergalactic medium (IGM). Our knowledge of reionization is bounded by the presence of transmission in the Ly$\alpha$ forest at $z\la6$ \citep{Fan06}, and an integral constraint from the electron scattering optical depth of the cosmic microwave background (CMB) which constrains the volume of ionized IGM between the present day and $z\sim1100$ \citep{Planck16a} that suggests a characteristic reionization redshift of $z_{\rm re}=6.4$--$9.7$ (95\% credible interval, \citealt{Planck16b}). With only these constraints, the detailed reionization history -- reflecting the nature and evolution of sources of ionizing photons -- is still highly uncertain and model-dependent.
The discovery and deep follow-up spectroscopy of quasars with redshifts greater than six
provided the first look at the IGM approaching the epoch of reionization (e.g. \citealt{Fan01,Fan03,Becker01,White03}). While Gunn-Peterson troughs \citep{GP65} in the Ly$\alpha$ and Ly$\beta$ forests of these quasars due to the presence of neutral hydrogen in the IGM may be signatures of ongoing reionization, they can only place lower limits on the volume-averaged hydrogen neutral fraction of
$\langle x_\mathrm{H\,I}\rangle \ga 10^{-4}$ (e.g. \citealt{Fan06}).
The sizes of the transparent proximity zones of these quasars have also been analyzed in the context of expanding Str{\"{o}}mgren spheres in a \mbox{(partially-)neutral} IGM \citep{CH00,Wyithe05,MH07,Schroeder13}, but as recently demonstrated by \citet{Eilers17}, the sizes alone may be
insensitive to the ionization state of the IGM. A more sensitive, and perhaps definitive, probe of neutral gas in the IGM is the Ly$\alpha$ damping wing \citep{ME98} which suppresses the quasar continuum redward of rest-frame Ly$\alpha$.
The first
quasar with a claimed damping wing signal was ULAS J1120+0641 \citep{Mortlock11} at $z=7.09$, but the inferred constraints on $\langle x_\mathrm{H\,I}\rangle$ vary between different analyses. These differences are in part due to differences in physical models for the proximity zone and/or damping wing.
One approach to constrain $\langle x_{\rm HI}\rangle$ is to fit the Ly$\alpha$ transmission spectrum with the analytic model of \citet{ME98}, as performed by \citet{Mortlock11}, but this formula does not include the substantial resonant Ly$\alpha$ absorption by residual \ion{H}{1} inside of the proximity zone (see, e.g., \citealt{Bolton11,Keating15}), and the IGM outside the proximity zone is typically assumed to have a completely uniform ionization state instead of a more realistic patchy topology of ionized bubbles \citep{Furlanetto04}. \citet{Greig17b} constrained $\langle x_{\rm HI}\rangle$ from the damping wing of ULAS J1120+0641 using large-volume semi-numerical simulations of the reionization topology \citep{Mesinger16} to predict the distribution of damping wing strengths as a function of $\langle x_{\rm HI}\rangle$, but they only considered wavelengths redward of Ly$\alpha$. The size of the proximity zone, and the strength of the damping wing, are sensitive to the quasar lifetime \citep{Bolton11,Keating15} which has an uncertainty of several orders of magnitude \citep{Martini04,Eilers17}.
Another complication is that uncertainties and differences between methodologies for estimating the intrinsic quasar continuum \citep{KH09,Greig17a} can be similar in strength to the damping wing signal itself. Further exacerbating this challenge is the fact that the spectral properties of quasars at $z\ga6.5$, in particular their \ion{C}{4} emission line blueshifts, are often extreme outliers of the distribution of lower redshift quasars \citep{Mazzucchelli17}. In light of its large \ion{C}{4} blueshift, \citet{Mortlock11} estimated the intrinsic spectrum of ULAS J1120+0641 via stacking a sample of SDSS quasar spectra with matched \ion{C}{4} emission line properties, but did not test the accuracy of this continuum estimation method on quasars with known continua. \citet{BB15} found that a more closely matched sample of lower-redshift quasars (i.e., without any damping wing signal) with carefully selected \ion{C}{4} emission line strengths and blueshifts had Ly$\alpha$ spectral shapes consistent with the observed ULAS J1120+0641 spectrum. In contrast, the predictive continuum model of \citet{Greig17a}, as demonstrated by \citet{Greig17b}, appears to prefer a much stronger intrinsic Ly$\alpha$ profile, suggesting instead that the damping wing signal is quite strong.
Analysis of the recently discovered quasar ULAS J1342+0928 \citep{Banados18} at $z=7.54$ suggests that it exhibits a much more prominent damping wing absorption signal than ULAS J1120+0641, consistent with a predominantly neutral IGM. However, its \ion{C}{4} line exhibits a blueshift more than twice that of ULAS J1120+0641, and so only a very small number of similar quasars exist in lower redshift samples. \citet{Banados18} estimated the intrinsic spectrum of ULAS J1342+0928 by constructing a composite spectrum from 46 SDSS/BOSS quasars with similar \ion{C}{4} emission line properties and estimated the uncertainty via measuring the residuals between the composite and its constituent quasar spectra. They then derived their fiducial constraints on $\langle x_{\rm HI}\rangle$ in the surrounding IGM using the \citet{ME98} model for the damping wing shape. In this work we describe one of the alternative models (``Model B") from \citet{Banados18} in more detail.
A complete model of the proximity zone and damping wing region of quasar spectra requires an estimate of the intrinsic quasar continuum, the uncertainty in the quasar continuum model, a model for the small-scale density fluctuations in the IGM, a realistic description of patchy reionization topology surrounding the massive dark matter halos that host luminous quasars, and time-dependent radiative transfer of ionizing photons from the quasar along the line of sight. The goal of this work is to put all of these pieces together for the first time to forward model mock quasar spectra and develop a statistical method to constrain the volume-averaged IGM neutral fraction and quasar lifetime from individual quasar spectra.
In \citet[][henceforth Paper I]{Davies18a}, we developed a Principal Component Analysis (PCA)-based approach with a training set of $>10,000$ quasar spectra from the SDSS/BOSS DR12Q catalog \citep{Paris17} to predict the ``blue-side" quasar continuum, at rest-frame wavelengths $1175 < \lambda_{\rm rest}<1280$ {\AA}, from the ``red-side" spectrum, covering $1280 < \lambda_{\rm rest} < 2850$ {\AA}. We quantified the covariant uncertainties by testing the method on the training set, finding that for a typical quasar the relative error of our predicted continua is $\sim6$--$12\%$ at
rest-frame wavelengths most sensitive to damping wing absorption. Finally, we demonstrated the applicability of our method on the two known $z>7$ quasars: ULAS J1120+0641, and ULAS J1342+0928. While these quasars represent outliers from the distribution of typical quasars in SDSS/BOSS,
we have calibrated the uncertainty on the blue-side predicted continua from custom subsets of ``nearest-neighbor" quasars in the training set that have similar red-side spectra to each quasar separately.
In this work, we present a hybrid model for quasar proximity zone and damping wing structures during reionization, and a statistical method to perform Bayesian parameter inference on high-redshift quasar spectra in conjunction with the PCA quasar continuum model from Paper I. We apply these new methods to constrain $\langle x_{\rm HI}\rangle$ from the spectra of ULAS J1342+0928 at $z=7.54$ and ULAS J1120+0641 at $z=7.09$. We find strong evidence for a substantially neutral IGM at $z>7$, especially at $z=7.54$, consistent with the latest constraints from the CMB \citep{Planck16b}.
The rest of the paper is structured as follows. In \S~2, we briefly summarize the Principal Component Analysis (PCA) method for predicting the intrinsic blue-side quasar continuum from the red-side spectrum from Paper I. In \S~3 we describe our hybrid model for quasar proximity zones and the Ly$\alpha$ damping wing, combining ionizing radiative transfer simulations \citep{Davies16} through density field skewers from high-resolution hydrodynamical simulations with semi-numerical simulations of the inside-out reionization topology around massive halos. In \S~4 we describe our methodology for performing Bayesian parameter inference
from millions of forward-modeled mock spectra. In \S~5 we show the results of our analysis on the two quasars known at $z>7$: ULAS J1342+0928 and ULAS J1120+0641. Finally, in \S~6 we conclude with a discussion of the implications of the neutral fraction constraints from the two
quasars on the reionization history of the Universe, and describe avenues for future investigation of existing quasar samples.
In this work we assume a flat $\Lambda$CDM cosmology with $h=0.685$, $\Omega_b=0.047$, $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, and $\sigma_8=0.8$.
\section{PCA Continuum Model}\label{sec:pca}
We adopt the method for predicting the intrinsic blue-side quasar continuum ($1175 < \lambda_{\rm rest} < 1280$ {\AA}) from the observed red-side spectrum ($1280 < \lambda_{\rm rest} < 2850$ {\AA}) from Paper I, which we briefly summarize below.
To construct the PCA model, we selected a sample of $12,764$ quasars from the BOSS DR12Q catalog \citep{Paris17} at $2.09 < z_{\rm pipe} < 2.51$ with ${\rm S/N}>7$
at $\lambda_{\rm rest}=1290$ {\AA}, and fit each spectrum with an automated, piecewise spline fitting method designed to recover smooth quasar continua in the presence of absorption lines \citep{Young79,Carswell82,Dall'Aglio08}. In this redshift range, the BOSS spectra cover the entire spectral range from Ly$\alpha$ to \ion{Mg}{2}.
We further processed the splined spectra by median stacking each one with its 40 nearest neighbors to clean up residual artifacts such as strong associated absorption. We then computed principal component spectra (or ``basis spectra'') from these median stacks
of the spline-fit spectra with the standard PCA approach using \texttt{scikit-learn} \citep{scikit-learn}, albeit in \emph{log-space}. That is, the logarithm of each quasar spectrum is represented by a sum of basis spectra ${\bf A}_i$ with corresponding weights $a_i$,
\begin{equation}
\log{\bf F} \approx \langle\log{\bf F}\rangle + \sum_{i=1}^{n} a_i {\bf A}_i,
\end{equation}
which in linear space becomes a \emph{product} of basis spectra raised to powers,
\begin{equation}\label{eqn:pca}
{\bf F} \approx {\rm e}^{\langle\log{\bf F}\rangle} \prod_{i=1}^{n} {\rm e}^{a_i {\bf A}_i}.
\end{equation}
This log-space decomposition naturally accounts for the continuum slope variations between quasars, which dominates the total variance in flux space. For our analysis, we decomposed the red-side and blue-side spectra independently, keeping 10 red-side (${\bf R}_i$) and 6 blue-side (${\bf B}_i$) basis spectra.
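The log-space decomposition can be sketched with a plain SVD on synthetic data (the actual analysis uses \texttt{scikit-learn} on BOSS spectra; the variable names and the toy rank-3 spectra below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_qso, n_pix, n_comp = 200, 50, 3
# synthetic rank-3 log-flux "spectra": rows = quasars, columns = pixels
logF = rng.standard_normal((n_qso, n_comp)) @ rng.standard_normal((n_comp, n_pix))

mean_logF = logF.mean(axis=0)                  # <log F>
U, S, Vt = np.linalg.svd(logF - mean_logF, full_matrices=False)
A = Vt[:n_comp]                                # basis spectra A_i
a = (logF - mean_logF) @ A.T                   # coefficients a_i per quasar

logF_rec = mean_logF + a @ A                   # <log F> + sum_i a_i A_i
F_rec = np.exp(logF_rec)                       # product of exponentiated bases
```

Because the toy data have exactly rank 3 after centering, three components reconstruct the log-spectra to machine precision; for real spectra the truncation at 10 red-side and 6 blue-side components retains only the dominant modes of variation.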
We then found the best-fit red-side coefficients $r_i$ for each (original, not spline fit)
quasar spectrum in the training set via $\chi^2$ minimization while simultaneously fitting for a \emph{template redshift} $z_{\rm temp}$, allowing us to place each quasar onto a consistently defined rest-frame. The blue-side coefficients $b_i$ for each training set quasar were then found by fitting the blue-side (in the $z_{\rm temp}$ frame) spline fit continua assuming constant noise. From the sets of $r_i$ and $b_i$ for all training set quasars, ${\bf r}$ and ${\bf b}$, we follow \citet{Suzuki05} and \citet{Paris11} and compute the \emph{projection matrix} ${\bf X}$ by finding the least-squares solution to the linear equation,
\begin{equation}
{\bf b} = {\bf r} \cdot {\bf X}.
\end{equation}
After fitting the 10 $r_i$ of an arbitrary quasar spectrum, we can ``project" to the corresponding 6 $b_i$ (and thus reconstruct the blue-side spectrum) via a dot product with the $10 \times 6$ projection matrix ${\bf X}$.
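The projection step amounts to a linear least-squares fit; here is a minimal sketch with synthetic coefficients (the $10\times6$ shapes match the text, everything else is toy data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_qso = 500
r = rng.standard_normal((n_qso, 10))            # red-side coefficients
X_true = rng.standard_normal((10, 6))           # "true" mapping (toy)
b = r @ X_true + 0.01 * rng.standard_normal((n_qso, 6))  # blue-side coeffs

# least-squares solution of b = r . X over the training set
X, *_ = np.linalg.lstsq(r, b, rcond=None)

b_pred = r[0] @ X   # predicted blue-side coefficients for one quasar
```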
By testing our PCA procedure on the training set, we found that the relative error in the projected blue-side continua (which we refer to as the ``blue-side prediction") is
$\sim6$--$12\%$ in the region of the spectrum most useful for proximity zone and damping wing analyses ($1210 < \lambda_{\rm rest} < 1240$), with a mean bias $\la1\%$. The continuum error was found to be highly covariant across wide regions of the spectrum corresponding to regions associated with broad emission lines. However, as mentioned above, the spectra of $z\ga6.5$ quasars are known to be irregular compared to typical quasar spectra at lower redshift -- in particular, they exhibit large \ion{C}{4} blueshifts relative to lower ionization lines in the spectrum such as \ion{Mg}{2} \citep{Mortlock11,Mazzucchelli17,Banados18}. The uncertainty of the predicted continua for these atypical spectra may not be properly represented by the average uncertainty for all spectra.
For individual quasars, we can estimate a more accurate uncertainty by measuring the continuum errors for quasars with similar red-side spectra. We defined a distance $D_r$ in the space of red-side PCA coefficients $r_i$ by
\begin{equation}\label{eqn:dist}
D_r \equiv \sqrt{\sum_{i=1}^{N_{\rm PCA,r}}\left(\frac{{\Delta}r_i}{\sigma(r_i)}\right)^2},
\end{equation}
where $N_{\rm PCA,r}$ is the number of red-side PCA basis vectors, ${\Delta}r_i$ is the difference between $r_i$ values, and $\sigma(r_i)$ is the standard deviation of $r_i$ values in the training set. In Paper I, we measured the predicted continuum errors for the $1\%$ of training set spectra with the lowest $D_r$ to each of the $z>7$ quasars, allowing us to estimate a custom continuum uncertainty for each $z>7$ quasar.
The predictions for these ``similar" quasars tended to be slightly less uncertain and somewhat more biased than the training set as a whole.
For the statistical analysis of quasar spectra that follows, we require the ability to generate mock realizations of the continuum prediction error, which we denote as $\epsilon_C$ following Paper I. We assume a multivariate Gaussian distribution for the relative continuum error, with the mean and covariance determined from the prediction errors measured for the similar quasars, i.e. the $1\%$ nearest neighbors,
described above. We then use draws from these custom error distributions to generate forward-modeled mock spectra, described in more detail in \S~\ref{sec:stats}.
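Drawing mock realizations of $\epsilon_C$ is then a multivariate Gaussian sample; in the sketch below, the mean and covariance are toy stand-ins for the values measured from the nearest-neighbor quasars:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix = 40
mean_eps = np.full(n_pix, 0.01)   # toy mean relative continuum bias
# toy covariance: ~8% errors, correlated over ~10-pixel (emission-line) scales
pix = np.arange(n_pix)
cov = 0.08 ** 2 * np.exp(-np.abs(pix[:, None] - pix[None, :]) / 10.0)
eps_C = rng.multivariate_normal(mean_eps, cov, size=1000)

# each draw perturbs a predicted continuum: F_mock = F_pred * (1 + eps_C)
```

The exponential kernel here simply mimics the smooth, highly covariant structure of the measured errors across broad emission-line regions.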
\section{Hybrid Model of Quasar Proximity Zones and Damping Wings During Reionization}
With our predictive model for intrinsic quasar continua and their errors
in place, we now must develop a physical model for quasar proximity zones and damping wings. We construct a hybrid model with three parts:
\begin{enumerate}
\item High-resolution density field from a large-volume hydrodynamical simulation \citep{Lukic15}.
\item Semi-numerical simulations of reionization morphology (\citealt{Mesinger11}; Davies \& Furlanetto, in prep.).
\item One-dimensional ionizing radiative transfer of hydrogen- and helium-ionizing photons emitted by the quasar \citep{Davies16}.
\end{enumerate}
In this section, we describe these model components in detail.
\subsection{Nyx Hydrodynamical Simulation}
The first ingredient of our hybrid model is the small-scale structure of the IGM, which determines the absorption features inside the quasar proximity zone. We use density, velocity, and temperature fields from the $z=7.0$ output of a \texttt{Nyx} hydrodynamical simulation \citep{Almgren13}, 100 Mpc$/h$ (comoving) on a side with $4096^3$ dark matter particles and $4096^3$ baryon grid cells (see also \citealt{Lukic15}). Dark matter halos were selected via an algorithm that finds topologically-connected regions above 138 times the mean density (Luki\`{c} et al., in prep.), which is described in \citet{Sorini17}.
We extract 1200 axis-aligned skewers from the centers of the 200 most massive halos, corresponding to halo masses $M_{\rm h}\ga2\times10^{11}$ M$_\odot$.
The simulation, optimized for studying the Ly$\alpha$ forest, was run on a fixed, Eulerian grid, and lacks prescriptions for star formation or feedback \citep{Lukic15} which are required to characterize the circumgalactic medium of massive dark matter halos.
Nevertheless, the simulation should be adequate for our purposes, because our primary goal is to capture the larger-scale overdensity surrounding these halos on the relatively large $\sim1$--$2$ proper Mpc scales covered by the proximity zones of the $z>7$ quasars in our analysis (compared to the halo virial radius, $\sim50$ proper kpc).
We re-scale the gas density of the skewers by $(1+z)^3$ depending on which quasar we are simulating. We leave the computation of custom-redshift outputs (i.e. matched to the quasar redshifts) of large hydrodynamical simulations to future work, but note that between $z=7$ and the redshifts of the two quasars we focus on here ($z=7.09,7.54$) the evolution of the overdensity field should be relatively unimportant.
\subsection{Semi-Numerical Reionization Simulations with \texttt{21cmFAST}}
The second ingredient of our hybrid model is the large-scale morphology of reionization around massive quasar-hosting halos. To compute realistic ionization fields on large scales, we adopt a modified version of the semi-numerical reionization code \texttt{21cmFAST}\footnote{\url{https://github.com/andreimesinger/21cmFAST}} \citep{Mesinger11}, to be presented in further detail in Davies \& Furlanetto (in prep.).
The \texttt{21cmFAST} code computes the fraction of material that has collapsed into dark matter halos, $f_{\rm coll}$, following conditional Press-Schechter \citep{LC93} applied to a non-linear density
field computed using the Zel'dovich approximation \citep{Zel'dovich70}. A region is considered ionized if $f_{\rm coll} > \zeta^{-1}$ on \emph{any} scale, where $\zeta$ is the ``ionizing efficiency," combining a series of assumptions about the efficiency of star formation and the production (and escape) of ionizing photons from galaxies into a single parameter that corresponds to the total number of ionizing photons emitted per collapsed baryon. In standard \texttt{21cmFAST}, this criterion is assessed by filtering the density field from large to small scales, re-computing $f_{\rm coll}$ at each filter scale. Our modified algorithm assigns collapsed mass to each cell according to the non-linear density field, and it is this collapsed mass field which is filtered to determine whether a given region is ionized, similar to \texttt{DexM} \citep{MF07} but without explicitly generating a distribution of halos. In this way, the small-scale clustering of halos is better reflected on large scales, and the new algorithm produces ionization fields that are very similar to \texttt{DexM} at a very small fraction of the computation time. We have also implemented a novel approach to treat the mean free path of ionizing photons as a smooth attenuation rather than a sharp cutoff.
As shown by \citet{Greig17b}, we do not expect the exact choice of model for reionization topology to make a substantial difference in our inference of $\langle x_{\rm HI} \rangle$, so we leave an exploration of different model assumptions to future work.
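The excursion-set ionization criterion can be illustrated in one dimension (a toy sketch, not the actual \texttt{21cmFAST} implementation; the cell-based top-hat filtering and function name are ours):

```python
import numpy as np

def ionization_field(f_coll, zeta, radii):
    """A cell is flagged ionized if the filtered collapsed fraction
    exceeds 1/zeta on any smoothing scale (1D toy excursion set)."""
    ionized = np.zeros_like(f_coll, dtype=bool)
    for R in radii:                               # radii in cells, large to small
        kern = np.ones(2 * R + 1) / (2 * R + 1)   # top-hat filter
        ionized |= np.convolve(f_coll, kern, mode="same") >= 1.0 / zeta
    return ionized
```

A single collapsed peak then carves out an ionized bubble whose size grows with $\zeta$, mimicking the inside-out topology around biased sources.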
The hydrodynamical simulation is likely to be too small to fully characterize the distribution of ionized regions around rare, massive halos, so we compute the ionization fields in an independent larger volume, 400 comoving Mpc (cMpc) on a side. The resolution of the cosmological initial conditions was $2048^3$, while the evolved density field and ionization fields were output at a lower resolution, $512^3$. We assume a mass-independent ionizing efficiency $\zeta$, a minimum halo mass of $M_{\rm min}=10^8$ M$_\odot$, and a mean free path of ionizing photons of $\lambda_{\rm mfp}=60$ cMpc. We tuned $\zeta$ to produce ionization fields with global volume-averaged neutral fractions of $\langle x_{\rm HI} \rangle=0.05$--$0.95$ in steps of ${\Delta}x_{\rm HI}=0.05$.
Massive dark matter halos reside within larger-scale overdensities which are reionized early \citep{AA07}, leading to an important bias in the distribution of distances to the nearest patch of neutral gas \citep{Lidz07,MF08}. We constructed dark matter halos directly from the initial conditions following the method of \citet{MF07} as now implemented in the public release of \texttt{21cmFAST}. In Figure~\ref{fig:ionfield}, we show a 0.78 Mpc-thick slice
through an ionization field at $z=7.5$ with
$\langle{x_{\rm HI}}\rangle=0.5$ and the locations of massive halos within $\pm2$ Mpc of the slice. As expected for the inside-out progression of reionization \citep{Furlanetto04}, halos preferentially (in fact, exclusively, for the massive halos shown here) lie inside of ionized regions (black).
As seen by the subtle grey shading,
the ionization field is not entirely a binary neutral (white) vs. ionized (black) field -- \texttt{21cmFAST} by default includes a prescription for a (typically small) degree of partial ionization due to ``unresolved" ionized bubbles within each cell, but this is unlikely to have any impact on our results.
We then extracted randomly-oriented sightlines of $x_{\rm HI}$ from the locations of the 500 most massive halos, corresponding to $M_{\rm h}\ga3\times10^{11}$ M$_\odot$. We show the distribution of distances from these massive halos
to the nearest patch of neutral hydrogen ($x_{\rm HI} > 0.01$) as a function of $\langle{x_{\rm HI}}\rangle$ in Figure~\ref{fig:bubbledist}.
This distance is what determines the initial strength of the damping wing feature when the quasar first turns on, and the distribution of distances leads to large variations in the damping wing profiles between different sightlines at the same global neutral fraction.
More massive halos tend to sit in larger ionized bubbles \citep{AA07,Lidz07,MF08}, so assuming a lower (higher) halo mass
than the actual quasar host halo would result in shorter (longer) distances to the nearest neutral patch at fixed $\langle{x_{\rm HI}}\rangle$, leading to an overestimate (underestimate) of the damping wing strength. In this work we ignore this potential source of bias, and note that it is likely degenerate with other assumptions in our model for the reionization topology (e.g. $M_{\rm min}$).
\begin{figure}[htb]
\begin{center}
\resizebox{8.50cm}{!}{\includegraphics[trim={1em 0.5em 0.5em 0.5em},clip]{f1.pdf}}\\
\end{center}
\caption{One pixel ($\sim800$ comoving kpc) slice through the semi-numerical ionization field at $z=7.5$ with $\langle x_{\rm HI}\rangle=0.5$ (greyscale; a linear stretch with white corresponding to neutral and black corresponding to fully ionized). The locations of $M_{\rm h}>10^{11}$ M$_\odot$ halos are shown from a 4 comoving Mpc-thick slice centered on the ionization field slice, color- and size-coded by halo mass. Massive halos tend to lie inside of large regions that have already been ionized.}
\label{fig:ionfield}
\end{figure}
\begin{figure}[htb]
\begin{center}
\resizebox{8.50cm}{!}{\includegraphics[trim={2.0em 0em 2.5em 3em},clip]{f2.pdf}}\\
\end{center}
\caption{Distribution of distances from massive halos ($M_{\rm halo}\ga3\times10^{11}$ M$_\odot$) to the first patch of neutral gas in our semi-numerical simulations with global neutral fractions $\langle x_{\rm HI}\rangle=0.1,0.25,0.5,0.75,0.9$ from right to left. The vertical dashed lines correspond to the median of the correspondingly-colored distribution.}
\label{fig:bubbledist}
\end{figure}
\subsection{Ionizing radiative transfer}
We use an updated version of the one-dimensional radiative transfer implementation of \citet{Davies16} to compute the effect of quasar radiation on the surrounding IGM, which we briefly summarize below. The radiative transfer code computes the time-dependent evolution of six species (e$^{-}$, \ion{H}{1}, \ion{H}{2}, \ion{He}{1}, \ion{He}{2}, \ion{He}{3}) and the gas temperature, following the method described in the Appendix of \citet{BH07}. The abundances of ionic species are computed by integrating the following
coupled system of equations,
\begin{eqnarray}
\frac{dn_\mathrm{H\,II}}{dt} &=& n_\mathrm{H\,I} (\Gamma^\gamma_{\mathrm{H\,I}} + n_e \Gamma^\mathrm{e}_{\mathrm{H\,I}}) - n_\mathrm{H\,II} n_e \alpha^A_\mathrm{H\,II}, \\
\frac{dn_\mathrm{He\,II}}{dt} &=& n_\mathrm{He\,I} (\Gamma^\gamma_{\mathrm{He\,I}} + n_e \Gamma^\mathrm{e}_{\mathrm{He\,I}}) + n_\mathrm{He\,III} n_e \alpha^A_\mathrm{He\,III} \nonumber \\
&& - n_\mathrm{He\,II} (\Gamma^\gamma_{\mathrm{He\,II}} + n_e \Gamma^\mathrm{e}_\mathrm{He\,II} + n_e \alpha^A_\mathrm{He\,II}), \\
\frac{dn_\mathrm{He\,III}}{dt} &=& n_\mathrm{He\,II} (\Gamma^\gamma_{\mathrm{He\,II}} + n_e \Gamma^\mathrm{e}_\mathrm{He\,II}) - n_\mathrm{He\,III} n_e \alpha^A_\mathrm{He\,III},
\end{eqnarray}
where $n_i$ are the number densities for each species, $\Gamma^\gamma_i$ are the photoionization rates, $\Gamma^\mathrm{e}_i$ are the collisional ionization rates, and $\alpha^A_i$ are the Case A recombination rate coefficients. In $\Gamma^{\gamma}_i$ we include the effect of \emph{secondary} ionizations, as tabulated by \citet{FJS10}, whereby energetic photoelectrons (with kinetic energy greater than the ionization potential) lose energy by ionizing additional atoms rather than simply dumping the excess photoionization energy into the gas as heat. The remaining species are then solved for via the closure
conditions
\begin{eqnarray}
n_\mathrm{H\,I} &=& n_\mathrm{H} - n_\mathrm{H\,II}, \\
n_\mathrm{He\,I} &=& \frac{Y}{4(1-Y)}n_\mathrm{H} - n_\mathrm{He\,II} - n_\mathrm{He\,III}, \\
n_e &=& n_\mathrm{H\,II} + n_\mathrm{He\,II} + 2n_\mathrm{He\,III},
\end{eqnarray}
where we have assumed $Y=0.24$ for the mass fraction of helium.
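To make the time integration above concrete, the sketch below advances a reduced version of this network with explicit Euler steps, applying the closure conditions at each step. The rate coefficients are illustrative constants and collisional ionizations are omitted; the actual code integrates frequency-dependent, time-varying rates (with secondary ionizations) along each skewer:

```python
# Illustrative constant rates (cgs units); placeholders, not the
# frequency-integrated rates used in the actual radiative transfer code.
Y = 0.24                                      # helium mass fraction
n_H = 2e-4                                    # hydrogen density [cm^-3]
n_He = Y / (4.0 * (1.0 - Y)) * n_H            # helium density [cm^-3]
G_HI, G_HeI, G_HeII = 1e-12, 5e-13, 1e-14     # photoionization [s^-1]
a_HII, a_HeII, a_HeIII = 4e-13, 4e-13, 2e-12  # case A recomb. [cm^3 s^-1]

def evolve(t_end, dt=1e10):
    """Explicit-Euler integration of the reduced ionization network
    (collisional ionization omitted for brevity)."""
    nHII = nHeII = nHeIII = 0.0
    t = 0.0
    while t < t_end:
        # closure conditions for the remaining species
        nHI = n_H - nHII
        nHeI = n_He - nHeII - nHeIII
        ne = nHII + nHeII + 2.0 * nHeIII
        dHII = nHI * G_HI - nHII * ne * a_HII
        dHeII = (nHeI * G_HeI + nHeIII * ne * a_HeIII
                 - nHeII * (G_HeII + ne * a_HeII))
        dHeIII = nHeII * G_HeII - nHeIII * ne * a_HeIII
        nHII += dt * dHII
        nHeII += dt * dHeII
        nHeIII += dt * dHeIII
        t += dt
    return nHII, nHeII, nHeIII

nHII, nHeII, nHeIII = evolve(1e6 * 3.156e7)  # t_q = 10^6 yr
x_HI = 1.0 - nHII / n_H                      # residual neutral fraction
```

At these rates the hydrogen settles into photoionization equilibrium, $x_{\rm HI}\approx n_e\alpha^A_{\rm HII}/\Gamma^\gamma_{\rm HI}\sim10^{-4}$, well within $t_{\rm q}=10^6$~yr, while \ion{He}{3} is still growing.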
The gas temperature is evolved taking into account photoionization heating, as well as cooling\footnote{We assume that the gas is of primordial composition, i.e. there is no cooling due to elements heavier than helium.} from recombinations, collisional excitation, the expansion of the Universe, and inverse Compton scattering off of
CMB photons (see \citealt{Davies16} for more details). Adding to the model presented in \citet{Davies16}, we now include the prescription from \citet{Rahmati13} to approximate self-shielding of the ionizing background in dense gas, and update this self-shielding at each time step to take into account the ionization of dense absorbers by the quasar.
For the ionizing spectrum of each quasar, we first use the \citet{Lusso15} template to convert from the measured $M_{1450}$ to $L_\nu$ at the ionizing edge of hydrogen ($E_{\rm HI}\approx13.6$ eV), and then extrapolate to higher frequencies by assuming $L_\nu \propto \nu^{-1.7}$, in agreement with the best-fit power-law spectrum from \citet{Lusso15}. Assuming a different average quasar template (e.g. \citealt{Telfer02,Stevans14}) could change the output of ionizing photons by tens of percent, which would have implications for the shape of the proximity zone transmission profile (Davies et al., in prep.) and strength of the damping wing. However, the size of the ionized bubble around the quasar $R_{\rm ion}$ (assuming a fully neutral universe) is only weakly dependent on the ionizing photon output $\dot{N}_{\rm ion}$ ($R_{\rm ion} \propto \dot{N}_{\rm ion}^{1/3}$; \citealt{CH00}), and this dependence is completely degenerate with the lifetime of the quasar.
When computing the photoionization and photoheating rates, we integrate the ionizing spectrum over frequencies from the ionizing frequency $\nu_i$ to $40\nu_i$ for each (partially-)neutral species $i$, separately, with 25 logarithmic frequency bins.
We assume a ``lightbulb" model for quasar emission: the quasar turned on at some point $t_{\rm q}$ in the past,
and has been shining at a constant luminosity since then. In the rest of the paper we will refer to $t_{\rm q}$ as the ``quasar lifetime."
\subsection{Hybrid model}\label{sec:model}
\begin{figure*}[htb]
\begin{center}
\resizebox{17.6cm}{!}{\includegraphics[trim={7.0em 2em 6.5em 2.5em},clip]{f3.pdf}}\\
\end{center}
\caption{Example outputs from the hybrid model of quasar proximity zones at $\log{t_{\rm q}}=$ 4.5 (orange), 6.5 (green), and 8.0 (purple) for two skewers (left and right) through the $\langle x_{\rm HI}\rangle=0.5$ simulation at $z=7.54$. The black dotted curves show the initial state of the skewer prior to the quasar turning on. The top panels show the Ly$\alpha$ transmission, the middle panels show $x_{\rm HI}$, and the bottom panels show the gas temperature.}
\label{fig:model_ex}
\end{figure*}
\begin{figure}[htb]
\begin{center}
\resizebox{8.50cm}{!}{\includegraphics[trim={1.3em 2em 4.0em 2.5em},clip]{f4.pdf}}\\
\end{center}
\caption{Mean Ly$\alpha$ transmission profiles from the 1D radiative transfer simulations. The top panel shows the variation in the mean profile for varying quasar lifetime ($\log{[t_{\rm q}/{\rm yr}]}$=3.0--8.0, $\Delta\log{t_{\rm q}}$=1.0) at a fixed global neutral fraction $\langle x_{\rm HI}\rangle$=0.5. The bottom panel shows the variation in the mean profile for varying neutral fraction ($\langle x_{\rm HI}\rangle$=0.0--1.0, ${\Delta}x_{\rm HI}$=0.2) at a fixed quasar lifetime of $\log{[t_{\rm q}/{\rm yr}]}=6.0$.}
\label{fig:model_stacks}
\end{figure}
We synthesize these three model components by computing ionizing radiative transfer along hydrodynamical simulation skewers, with the initial neutral fraction along each sightline set by skewers from the semi-numerical reionization simulations. For regions with $x_{\rm HI}=0$ in the reionization simulations, we assume a uniform ionizing background such that $\langle x_{\rm HI}\rangle$ inside of ionized regions is $\sim10^{-3}$ (corresponding to a hydrogen photoionization rate $\Gamma_{\rm HI}\sim6\times10^{-14}$~s$^{-1}$), although we find that our results are insensitive to this choice.
We initialize the IGM temperature in ionized regions to the values from the hydrodynamical simulation skewer, and assume that the IGM is initially cold (2000 K) inside of neutral regions (e.g. \citealt{Furlanetto06}).
The density field from the hydrodynamical simulation does not reflect the additional clumpiness of such cold gas, i.e. the gas in the simulation has been ``pressure smoothed"
to some extent \citep{GH98,Rorai13,Kulkarni15}, but we do not expect this to have a large effect on the transmission profile.
In the left column of Figure~\ref{fig:model_ex}, we show a typical example of the output from our full model, a simulated sightline assuming the luminosity and redshift of J1342+0928 ($M_{1450}=-26.76, z_{\rm q}=7.5413$) using a skewer from the hydrodynamical simulation with initial $x_{\rm HI}$ given by a skewer from the $\langle x_{\rm HI}\rangle=0.5$ semi-numerical reionization simulation.
The top panel shows the Ly$\alpha$ transmission in the quasar spectrum, the middle panel shows the neutral fraction, and the bottom panel shows the gas temperature. The different colors correspond to quasar lifetimes of $10^{4.5}$ (orange), $10^{6.5}$ (green), and $10^8$ (purple) years. The damping wing signal, shown by the absorption at negative distance ($\lambda_{\rm rest}>\lambda_{{\rm Ly}\alpha}$), is very strong soon after the quasar turns on (orange), with the first patch of neutral gas encountered at $\sim1.3$ proper Mpc along the line of sight.
This first neutral patch is ionized within a few million years (green), weakening the damping wing considerably and photoheating the gas to $T\sim3$--$4\times10^4$~K.
After 100 million years (purple), the quasar has carved out a large enough
ionized region ($\sim7$ proper Mpc) to completely wipe out the damping wing signal, and initially-neutral photoheated regions near the quasar have cooled substantially.
At this stage, the proximity zone is no longer cut off by the onset of fully neutral gas in the IGM; instead, the Ly$\alpha$ forest absorption becomes too strong as the ionizing flux from the quasar decreases (e.g. \citealt{BH07,Eilers17}) and
$x_{\rm HI}$ reaches $\sim10^{-4}$, as shown by the complete disappearance of the purple transmission curve at a shorter distance ($\la 6$ proper Mpc)
than the location
of the
ionization front (6.7 proper Mpc).
Cosmic variance in the density field and in the reionization morphology leads to a wide variety of proximity zone and damping wing spectra at the same neutral fraction -- we show another example sightline in the right column of Figure~\ref{fig:model_ex} which initially resides in a very large ionized region, and thus \emph{never} shows a strong damping wing signal.
For the longest lifetime model (purple), a modest amount of extra transmission appears at
$R\sim5$ proper Mpc
due to heating from the reionization of \ion{He}{2} by the quasar (the thermal proximity effect, e.g. \citealt{Bolton12,Khrykin17}).
For each semi-numerical reionization box, corresponding to $0 \leq \langle x_{\rm HI}\rangle \leq 1$ in
21 steps of $\Delta x_{\rm HI}=0.05$, we ran 2400 radiative transfer simulations on 2400 different random skewers, using each of our 1200 hydrodynamical simulation skewers twice. From these simulations, we computed transmission spectra every
$\Delta \log{[t_{\rm q}/{\rm yr}]} = 0.5$
in 11 steps from $10^3$ to $10^8$ years covering a velocity range $-10,000 \leq v-v_{\rm sys} \leq +10,000$ km/s, where $v_{\rm sys}$ is the systemic velocity of the halo center. Our final set of transmission spectrum models is then $21 \times 11$, with 2400 spectra for each point in the coarse 2D grid. In Figure~\ref{fig:model_stacks} we show the mean transmission profiles from our simulations as a function of $t_{\rm q}$ at fixed $\langle x_{\rm HI}\rangle=0.5$ (top), and as a function of $\langle x_{\rm HI}\rangle$ at fixed $t_{\rm q}=10^6$ years (bottom).
There is a clear trend towards stronger damping wings and smaller proximity zones for high neutral fractions and short quasar lifetimes.
As noted by \citet{Bolton11}, a degeneracy exists between IGM neutral fraction and quasar lifetime in determining the shape of the proximity zone and damping wing profile, wherein short quasar lifetime and small neutral fraction appears similar to long quasar lifetime and large neutral fraction (
although these models assumed a constant $x_{\rm HI}$ in the IGM instead of our
more realistic patchy topology).
A similar degeneracy arises in our hybrid model because even at large neutral fraction, the
quasar can carve out a large ionized region that greatly increases the distance to the nearest neutral patch,
decreasing the strength of the damping wing feature and increasing the size of the proximity zone (see the purple curves in Figure~\ref{fig:model_ex}). Another consequence of this is that at relatively long quasar lifetimes, $t_{\rm q}\ga10^8$ years, the damping wing almost entirely disappears, even for $\langle x_{\rm HI}\rangle\sim1$.
At even shorter timescales, $t_{\rm q}\la10^5$ years, the inner parts of the proximity zone start to disappear entirely (see the orange curves in Figure~\ref{fig:model_ex}).
This occurs because the gas has not been illuminated long enough
to respond to the increased ionizing flux from the quasar (e.g. \citealt{Khrykin16}), and such short lifetimes may explain the handful of very small proximity zones observed at $z\sim6$ (\citealt{Eilers17}, Davies et al. in prep.).
\section{Statistical Method for Jointly Inferring the Neutral Fraction and Quasar Lifetime}\label{sec:stats}
The measured quasar proximity zone and damping wing signals arise from a highly covariant, heteroskedastic process, with large sightline-to-sightline variance for any particular set of parameters ($\langle x_{\rm HI}\rangle,t_{\rm q}$).
In addition to the uncorrelated photon noise in the spectrum, further sources of variance are the IGM density field, which gives rise to the absorption inside the proximity zone, and the distance to the nearest neutral patch of the IGM, which has strong covariant effects across the whole spectrum. The uncertainty in our prediction for the quasar continuum (\S~2) introduces an additional multiplicative error which is strongly covariant. The combination of these processes cannot be simply described by a multivariate Gaussian likelihood, suggesting that inference via standard likelihood-based methods (e.g. Markov Chain Monte Carlo) may be difficult to interpret correctly.
Instead, we adopt an approach following principles of Bayesian indirect inference \citep{Gourieroux93,Drovandi15}, wherein the likelihood for auxiliary parameters or an auxiliary likelihood of the true parameters is used in place of an intractable true likelihood for the true parameters.
We define a ``pseudo-likelihood" $\tilde{L}$ as the product
of flux
probability distribution functions (PDFs)
$P(F_i)$ of 500 km/s binned pixels,
\begin{equation}\label{eqn:pseudo}
\tilde{L}(\theta) = \prod_i P(F_i|\theta),
\end{equation}
which is equivalent to the likelihood function of the (500 km/s binned) transmission spectrum in the absence of correlations between pixels.
For computational simplicity, and to limit the impact of our finite number of simulated sightlines, we approximate the flux PDFs $P(F_i|\theta)$ of each bin $i$ by fitting them with mixtures of three Gaussians\footnote{The exact form of the approximation to the individual flux PDFs appears to have only a minor effect on our analysis -- similar, albeit somewhat less constraining, posterior PDFs can be obtained with single Gaussian fits.}. While direct parameter inference from this likelihood would be formally incorrect due to the neglected correlations, one can still determine a set of maximum pseudo-likelihood model parameters, $\theta_{\rm M\tilde{L}E}$, which should be closely related to the true maximum likelihood parameters. This procedure
reduces the dimensionality of our data from the number of transmitted flux bins in the spectrum to the number of model parameters, allowing for a full Bayesian treatment with modest computational expense (albeit likely with slightly
less constraining power than the original data due to information lost in this compression).
We treat $\theta_{\rm M\tilde{L}E}$ as a summary statistic and compute the posterior PDF of the ``true" model parameters $\theta$ following Bayes' theorem,
\begin{equation}
p(\theta|\theta_{\rm M\tilde{L}E}) = \frac{p(\theta_{\rm M\tilde{L}E}|\theta)p(\theta)}{p(\theta_{\rm M\tilde{L}E})},
\end{equation}
where $p(\theta|\theta_{\rm M\tilde{L}E})$ is the posterior PDF of $\theta$, $p(\theta_{\rm M\tilde{L}E}|\theta)$ is the likelihood of $\theta_{\rm M\tilde{L}E}$ given the model $\theta$,
$p(\theta)$ is the prior on $\theta$, and $p(\theta_{\rm M\tilde{L}E})$ is the evidence.
We compute the likelihood function \emph{directly} by measuring the distribution of
$\theta_{\rm M\tilde{L}E}$ for forward-modeled mock data on a coarse grid of $\theta=(\langle x_{\rm HI}\rangle,t_{\rm q})$ and explicitly computing the evidence,
\begin{equation}
p(\theta_{\rm M\tilde{L}E})=\int p(\theta_{\rm M\tilde{L}E}|\theta) p(\theta) d\theta.
\end{equation}
We denote as $\hat{F}$ a forward-modeled transmission spectrum, constructed by taking a draw $F$ from the set of 2400 transmission spectra computed for the parameter set $\theta$,
multiplying by a random draw from a multivariate Gaussian distribution describing the relative continuum error ($\epsilon_C$; Paper I), and then adding a draw of Gaussian noise following the continuum-normalized noise vector of the spectrum ($N$):
\begin{equation}
\hat{F} = F\times(1+\epsilon_C)+N.
\end{equation}
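A minimal sketch of this forward-modeling step; the continuum covariance and noise vector below are placeholders for the quantities calibrated in Paper~I:

```python
import numpy as np

def forward_model(F, cont_cov, noise_sigma, rng):
    """Mock transmission spectrum F_hat = F * (1 + eps_C) + N:
    a correlated relative continuum-error draw eps_C, followed by
    additive noise drawn per pixel from the (continuum-normalized)
    noise vector of the observed spectrum."""
    eps_C = rng.multivariate_normal(np.zeros(F.size), cont_cov)
    N = rng.normal(0.0, noise_sigma)
    return F * (1.0 + eps_C) + N
```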
We find the $\theta_{\rm M\tilde{L}E}$ for each mock spectrum via a simple brute force approach, computing $\tilde{L}$ (equation \ref{eqn:pseudo})
for each of the $21\times11$ models in our coarse grid.
For priors, we assume a flat \emph{linear} prior on $\langle x_{\rm HI}\rangle$ from 0 to 1, and a flat \emph{log} prior on $t_{\rm q}$ from $10^{3}$ to $10^{8}$ years. The linear prior on $\langle x_{\rm HI}\rangle$ reflects our expectation that $z>7$ is in the midst of the reionization epoch, while the log prior on $t_{\rm q}$ reflects our broad uncertainty on the lifetime of the luminous quasar phase, incorporating recently discovered evidence for very short lifetimes \citep{Eilers17}. We will also quote constraints for a stronger prior on $t_{\rm q}$ which excludes lifetimes shorter than $10^5$ years. We will discuss the impact of this choice of priors in \S~\ref{sec:priors}.
In this method, the data have been compressed into the measurement of $\theta_{\rm M\tilde{L}E}$, so any spectrum with a particular $\theta_{\rm M\tilde{L}E}$ will result in an identical 2D posterior PDF. Given our coarse grid in parameter space, there are only $21\times11=231$ possible posterior PDFs for any given quasar spectrum.
\section{Results}
In Paper I, we estimated the intrinsic continua (and their uncertainties) of the two quasars known at $z>7$: ULAS J1120+0641 ($z=7.0851$; \citealt{Mortlock11,Venemans17}) and ULAS J1342+0928 ($z=7.5413$; \citealt{Banados18,Venemans17b}). Here we apply the statistical method from the previous section to jointly constrain the neutral fraction at $z>7$ and the quasar lifetimes by comparing the resulting transmission profiles to our simulated spectra.
For the purposes of modeling the physical state of the IGM along the line of sight, we adopt the precise systemic redshifts above as the true locations of the quasar host halos. However, these systemic redshifts have little relevance to the PCA continua, given that the PCA model was trained on quasars with imprecise redshifts. In Paper I we resolved this ambiguity by fitting for a ``template redshift" simultaneously with the red-side PCA coefficients, resulting in an independent (but physically irrelevant) redshift estimate that can be applied to any quasar spectrum. This template redshift is what then defines the rest-frame wavelengths for the continuum prediction.
\subsection{ULAS J1120+0641}
\begin{figure*}[ht]
\begin{center}
\resizebox{17.6cm}{!}{\includegraphics[trim={6.5em 0em 8.0em 4.0em},clip]{f5.pdf}}\\
\end{center}
\caption{Top: VLT/FORS2 + Gemini/GNIRS spectrum of ULAS J1120+0641 (\citealt{Mortlock11}, black) and its noise vector (red). The red-side PCA fit and blue-side prediction are shown as the orange and blue curves, respectively. Bottom: Zoom in of the Ly$\alpha$ region of the spectrum, where the vertical dashed line shows rest-frame Ly$\alpha$ ($\lambda_{\rm rest}=1215.67$ {\AA}). The transparent curves show 100 draws from the covariant blue-side prediction error calibrated from the 1\% most similar quasars in the training set. This quasar shows modest evidence for a damping wing and has a relatively small proximity zone.}
\label{fig:mortlock_pca}
\end{figure*}
\begin{figure}[htb]
\begin{center}
\resizebox{8.5cm}{!}{\includegraphics[trim={1.2em 1.2em 1.2em 1em},clip]{f6.pdf}}\\
\end{center}
\caption{Continuum-divided spectrum of ULAS J1120+0641 (grey) and its noise vector (red). The black histogram shows the spectrum rebinned to $\sim500$ km/s in the region we use for model comparison. The blue solid curve shows the median $\sim500$ km/s-binned transmission spectrum of mock spectra with the M\~{L}E parameter values $\theta_{\rm M\tilde{L}E}=(\langle x_{\rm HI}\rangle=0.65,\log{t_{\rm q}}=6.5)$, while the associated blue shaded region shows the 16th--84th percentile range for mock spectra with $\theta=\theta_{\rm M\tilde{L}E}$.}
\label{fig:mortlock_fit}
\end{figure}
\begin{figure}[htb]
\begin{center}
\resizebox{8.50cm}{!}{\includegraphics[trim={6.0em 0.2em 4.0em 3.5em},clip]{f7.pdf}}\\
\end{center}
\caption{2D posterior PDF of $\langle x_{\rm HI}\rangle$ and $\log{t_{\rm q}}$ resulting from the M\~{L}E parameter values $\theta_{\rm M\tilde{L}E}=(\langle x_{\rm HI}\rangle=0.65,\log{t_{\rm q}}=6.5)$ derived from the ULAS J1120+0641 spectrum. The contours enclose $68\%$ and $95\%$ of the total probability.}
\label{fig:mortlock_post}
\end{figure}
In Figure~\ref{fig:mortlock_pca}, we show the red-side fit to the VLT/FORS2 + Gemini/GNIRS spectrum of ULAS J1120+0641 (\citealt{Mortlock11}; top panel) and the predicted blue-side
continuum (bottom panel) from Paper I. The predicted continuum has been corrected for the mean bias of predicted continua for similar quasars in the training set, as discussed in \S\ref{sec:pca}
and shown in Figure 12 of Paper I. We find a best-fit template redshift of $z=7.0834$,
a very small blueshift of ${\Delta}v=63$ km/s
from the systemic
frame defined by the centroid of the [CII] emission line of the host galaxy ($z=7.0851$, \citealt{Venemans17}).
The blue-side profile shows a hint of absorption redward of Ly$\alpha$ and a relatively small proximity zone ($R_{\rm p}=1.72$ proper Mpc,
following the definition in \citealt{Eilers17}) compared to the trend seen at $z\sim5.7$--$6.5$ \citep{Eilers17,Mazzucchelli17}.
While the damping wing signal appears to be fairly strong at rest-frame Ly$\alpha$, similar
to
what was found by previous works \citep{Mortlock11,Simcoe12,Greig17b},
when our large, covariant continuum uncertainty is taken into account the spectrum does not appear to definitively indicate a neutral IGM.
We show the resulting transmission spectrum (i.e. observed spectrum divided by the continuum model) as the grey curve in Figure~\ref{fig:mortlock_fit} and the 500 km/s-binned spectrum in black. For the statistical analysis,
we only use pixels at $v-v_{\rm sys} > -4,400$ km/s ($\lambda_{\rm rest}\la1233$ {\AA})
to avoid the strong (and unresolved) associated \ion{N}{5} absorption, and we choose to end the blue-side coverage at $v-v_{\rm sys} = +6,400$ km/s ($\lambda_{\rm rest}\sim1190$ {\AA})
because all of the proximity zone models have no detectable signal beyond that distance. We find maximum pseudo-likelihood parameter values of $\theta_{\rm M\tilde{L}{E}} = (\langle x_{\rm HI}\rangle=0.65, \log{t_{\rm q}}=6.5)$, and we show the median transmission profile of the M\~{L}E model (blue solid) and the expected 16--84th percentile scatter (blue shaded)
from forward modeled spectra with $\theta=\theta_{\rm M\tilde{L}E}$ in Figure~\ref{fig:mortlock_fit}.
From the M\~{L}E parameter values we infer the 2D posterior PDF $p(\theta|\theta_{\rm M\tilde{L}E})$ shown in Figure~\ref{fig:mortlock_post}. The posterior PDF is relatively flat across a wide swathe of $(\langle x_{\rm HI} \rangle,\log{t_{\rm q}})$ parameter space, with a trend towards higher $\langle x_{\rm HI} \rangle$ for longer $t_{\rm q}$, reflecting the degeneracy between these two parameters discussed in \S~\ref{sec:model} and shown in Figure~\ref{fig:model_stacks}.
The non-zero size of the proximity zone rules out quasar lifetimes shorter than $\sim10^{4.5}$ years, while the combination of damping wing strength and small proximity zone rule out quasar lifetimes longer than $\sim10^{7}$ years.
\subsection{ULAS J1342+0928}
\begin{figure*}[htb]
\begin{center}
\resizebox{17.6cm}{!}{\includegraphics[trim={6.2em 0em 8.0em 4.0em},clip]{f8.pdf}}\\
\end{center}
\caption{Similar to Figure~\ref{fig:mortlock_pca} but for the Magellan/FIRE + Gemini/GNIRS spectrum of ULAS J1342+0928 (black), including its noise vector (red), red-side PCA fit (orange), and blue-side prediction (blue). The FIRE spectrum in the top panel has been re-binned to match the resolution of the GNIRS data used in the K-band, while the bottom panel is shown at the higher FIRE resolution. This quasar shows strong evidence for a damping wing and has a very small proximity zone.}
\label{fig:pisco_pca}
\end{figure*}
\begin{figure}[htb]
\begin{center}
\resizebox{8.5cm}{!}{\includegraphics[trim={1.2em 1.2em 1.2em 0.2em},clip]{f9.pdf}}\\
\end{center}
\caption{Similar to Figure~\ref{fig:mortlock_fit} but for the continuum-divided spectrum of ULAS J1342+0928. The purple solid curve shows the median binned transmission spectrum in the mock spectra assuming the M\~{L}E parameter values $\theta_{\rm M\tilde{L}E}=(\langle x_{\rm HI}\rangle=0.8,\log{t_{\rm q}}=6.0)$, while the associated purple shaded region shows the 16th--84th percentile range for mock spectra with $\theta=\theta_{\rm M\tilde{L}E}$. The orange shaded regions highlight identified metal absorption systems that we have masked in our analysis.}
\label{fig:pisco_fit}
\end{figure}
\begin{figure}[htb]
\begin{center}
\resizebox{8.50cm}{!}{\includegraphics[trim={6.0em 0.0em 4.0em 3.5em},clip]{f10.pdf}}\\
\end{center}
\caption{2D posterior PDF of $\langle x_{\rm HI}\rangle$ and $\log{t_{\rm q}}$ resulting from the M\~{L}E parameter values $\theta_{\rm M\tilde{L}E}=(\langle x_{\rm HI}\rangle=0.8,\log{t_{\rm q}}=6.0)$ derived from the ULAS J1342+0928 spectrum. The contours enclose $68\%$ and $95\%$ of the total probability.}
\label{fig:pisco_post}
\end{figure}
In Figure~\ref{fig:pisco_pca}, we show the red-side fit to the Magellan/FIRE + Gemini/GNIRS spectrum of ULAS J1342+0928 and the predicted blue-side (bias-corrected) continuum from Paper I. We find a best-fit template redshift of $z=7.4438$, a blueshift of ${\Delta}v=3422$ km/s from the systemic frame ($z=7.5413$, \citealt{Venemans17b}).
The red-side spectrum is very different from that of a typical quasar; however, similar examples do exist in our PCA training set (Paper I), and the PCA model is capable of broadly reproducing the spectrum. In fact, the uncertainty in the continuum derived from nearest-neighbor quasars in the training set is somewhat lower than for typical quasars due to the relatively weak broad emission lines.
The blue-side profile shows a strong damping wing redward of Ly$\alpha$, and a very small proximity zone ($R_p=1.20$ pMpc).
The damping wing signal is clearly stronger and the proximity zone even smaller than for ULAS J1120+0641, despite the slightly higher luminosity of ULAS J1342+0928. Both of these properties point towards a substantially neutral IGM surrounding ULAS J1342+0928.
We show the resulting transmission spectrum as the grey curve in Figure~\ref{fig:pisco_fit} and the 500 km/s-binned spectrum in black. No strong associated absorption is visible in the spectrum, so we include all pixels redward of Ly$\alpha$ that are covered by our modeled transmission spectra ($v-v_{\rm sys}<10,000$ km/s, $\lambda_{\rm rest}\la1255$ {\AA}).
We cut off the blue-side coverage at $+4000$ km/s ($\sim1200$ {\AA})
due to the presence of non-Gaussian noise features (both positive and negative spikes) in the spectrum that are not included in our noise model, which could spuriously impact the statistical analysis\footnote{This non-Gaussianity is likely also affecting the region of spectrum that we do analyze; however, we expect that even a modestly enhanced noise in the spectrum will still be subdominant compared to the other sources of variance that we consider (cosmic variance, continuum error).}. The maximum pseudo-likelihood parameter values are $\theta_{\rm M\tilde{L}{E}}= (\langle x_{\rm HI}\rangle=0.8, \log{t_{\rm q}}=6.0)$, and we compare the binned transmission spectrum to the median transmission profile of the M\~{L}E model (solid purple) and the expected 16--84th percentile scatter (purple shaded) from our forward modeling in Figure~\ref{fig:pisco_fit}.
From the M\~{L}E parameter values we infer the 2D posterior PDF $p(\theta|\theta_{\rm M\tilde{L}E})$
shown in Figure~\ref{fig:pisco_post}. Due in large part to the strong damping wing, the posterior PDF has a clear preference for a significantly neutral universe, even for short quasar lifetimes, although there is still some degeneracy between $\langle x_{\rm HI}\rangle$ and $t_{\rm q}$.
\section{Discussion}
In the preceding sections, we have demonstrated a method for jointly constraining the global neutral fraction $\langle x_{\rm HI}\rangle$ and quasar lifetime $t_{\rm q}$ from analysis of the proximity zone and damping wing (or absence thereof),
and applied it to the two quasars known at $z>7$: ULAS J1120+0641 at $z=7.09$, and ULAS J1342+0928 at $z=7.54$.
Here we present our constraints on $\langle x_{\rm HI}\rangle$ marginalized over quasar lifetime,
compare our analysis of ULAS J1120+0641 to
those from previous works, and discuss our choice of $\langle x_{\rm HI}\rangle$ and $t_{\rm q}$ priors.
\subsection{The History of Reionization: $\langle x_{\rm HI}\rangle(z)$}
\begin{figure}
\begin{center}
\resizebox{8.50cm}{!}{\includegraphics[trim={1.5em 1em 4.0em 5.5em},clip]{f11.pdf}}\\
\end{center}
\caption{Posterior PDFs of $\langle x_{\rm HI}\rangle$ for ULAS J1120+0641 (blue) and ULAS J1342+0928 (purple) marginalized over quasar lifetime assuming a flat prior covering our entire model grid ($3.0 \leq \log{t_{\rm q}/{\rm yr}} \leq 8.0$; solid curves) or adopting a prior that excludes extremely short lifetimes ($5.0 \leq \log{t_{\rm q}/{\rm yr}} \leq 8.0$; dashed curves).
The posterior PDF from the damping wing analysis of ULAS J1120+0641 in \citet{Greig17b} is shown as the dotted grey curve.}
\label{fig:xhi_pdf}
\end{figure}
In Figure~\ref{fig:xhi_pdf}, we show the posterior PDFs for $\langle x_{\rm HI}\rangle$ from each quasar, marginalized over quasar lifetime with a flat prior in log space from $10^3$ to $10^8$ years (solid curves) and a more restrictive prior from $10^5$ to $10^8$ years (dashed curves), and provide the 68\% and 95\% credible intervals in Table~\ref{tab:xhi}. We chose to include extremely short quasar lifetimes $\la10^5$ years in our fiducial analysis due to the existence of a surprisingly large fraction
of small proximity zones at $z\sim6$ ($\sim10\%$, \citealt{Eilers17}) which imply lifetimes shorter than $10^5$ years.
Including the possibility of the shortest quasar lifetimes shifts the $\langle x_{\rm HI}\rangle$ posterior PDF
to slightly lower values, consistent with the degeneracy between $\langle x_{\rm HI}\rangle$ and $t_{\rm q}$ shown in Figure~\ref{fig:model_stacks},
but in general does not have a large effect. While our analysis of ULAS J1120+0641 suggests a neutral fraction of $\langle x_{\rm HI} \rangle\sim0.5$, the posterior PDF is not particularly constraining, with significant probability density at the $\langle x_{\rm HI} \rangle=0$ and $\langle x_{\rm HI} \rangle=1$ boundaries. In contrast, the posterior PDF for ULAS J1342+0928 strongly indicates a significantly neutral IGM, ruling out $\langle x_{\rm HI} \rangle < 0.08\ (0.14)$ at 99\% probability marginalized over quasar lifetime for our prior covering $10^3 \leq t_{\rm q} \leq 10^8$ ($10^5 \leq t_{\rm q} \leq 10^8$) years.
\begin{figure}
\begin{center}
\resizebox{8.50cm}{!}{\includegraphics[trim={1.0em 1em 4.0em 5.0em},clip]{f12.pdf}}
\end{center}
\caption{Violin plot comparing the posterior PDFs from our analysis with the reionization history constraints from \citet{Planck16b}, with the dark and light grey shaded regions corresponding to the 68\% and 95\% credible intervals, respectively. Also shown are the Ly$\alpha$+Ly$\beta$ forest dark pixel constraints from \citet{McGreer15} (red crosses) and the damping wing analysis of ULAS J1120+0641 from \citet{Greig17b} (orange square).}
\label{fig:planck}
\end{figure}
In Figure~\ref{fig:planck}, we compare the posterior PDFs for $\langle x_{\rm HI}\rangle$ from each quasar to the broad swathe of reionization histories consistent with the measured electron-scattering optical depth of the
CMB \citep{Planck16b}\footnote{We compare to the combined Planck + ACT/SPT constraints on the reionization history that take into account upper limits on the strength of the kinetic Sunyaev-Zel'dovich effect and a prior from Ly$\alpha$ forest measurements that the end of reionization occurred before $z=6$, see \S~5.3 of \citet{Planck16b}}.
Under our most conservative prior, allowing quasar lifetimes from $10^3$ to $10^8$ years, we find 68\% (95\%) credible intervals of $\langle x_{\rm HI}\rangle(z=7.09)=0.48^{+0.26}_{-0.26}(^{+0.47}_{-0.46})$ and $\langle x_{\rm HI}\rangle(z=7.54)=0.60^{+0.20}_{-0.23}(^{+0.36}_{-0.45})$. These constraints are consistent with the CMB and are in broad agreement with recent calculations of the reionization history (e.g. \citealt{Robertson15,Bouwens15,Khaire16}).
The large cosmic variance between the damping wing profiles at fixed $\langle x_{\rm HI}\rangle$ (\S~\ref{sec:model}), the strong degeneracy with quasar lifetime,
and the limited precision of our continuum reconstructions
greatly limit the constraining power of any single $z>7$ quasar. However, with only a handful of additional quasars at $z>7$, it may be possible to constrain $\langle x_{\rm HI} \rangle(z)$
to $\sim10\%$. That said, despite the substantial uncertainties, our analysis of two $z>7$ quasars already constrains the reionization history more tightly than the integral constraint from the CMB.
\begin{table*}[tb]
\begin{center}
\caption{Neutral fraction constraints from the proximity zone and damping wing}
\label{tab:xhi}
\begin{tabular}{c c c c}
\hline \noalign {\smallskip}
Quasar & $z$ & $\langle x_{\rm HI}\rangle$ $(10^3 \leq t_{\rm q}/{\rm yr} \leq 10^8)$ & $\langle x_{\rm HI}\rangle$ $(10^5 \leq t_{\rm q}/{\rm yr} \leq 10^8)$ \\
\hline \noalign {\smallskip}
ULAS J1120+0641 & 7.0851 & $0.48^{+0.26}_{-0.26}(^{+0.47}_{-0.46})$ & $0.52^{+0.25}_{-0.25}(^{+0.44}_{-0.46})$ \\
ULAS J1342+0928 & 7.5413 & $0.60^{+0.20}_{-0.23}(^{+0.36}_{-0.45})$ & $0.67^{+0.19}_{-0.23}(^{+0.31}_{-0.45})$ \\
\hline \noalign {\smallskip}
\end{tabular}
\end{center}
The tabulated constraints represent the median and 68\% (95\%) credible intervals obtained via linear interpolation of the $t_{\rm q}$-marginalized posterior PDFs in Figure~\ref{fig:xhi_pdf}.
\end{table*}
\subsection{Previous studies of ULAS J1120+0641}
\begin{figure}
\begin{center}
\resizebox{8.50cm}{!}{\includegraphics[trim={1.0em 0em 3.5em 3em},clip]{f13.pdf}}
\end{center}
\caption{Comparison between our blue-side prediction for ULAS J1120+0641 (blue), the SDSS matched composite spectrum from \citet{Mortlock11} (orange), and the model from \citet{Greig17b}. The \citet{Greig17b} model has been renormalized to our blue-side prediction at $\lambda_{\rm rest}=1245$ {\AA} to correct for the different flux calibration of the \citet{Simcoe12} spectrum used in their analysis.}
\label{fig:greig}
\end{figure}
In the original discovery paper for ULAS J1120+0641, \citet{Mortlock11} suggested that the spectrum showed signs of an IGM damping wing. They selected a sample of lower-redshift quasars with similar \ion{C}{4} blueshifts (relative to \ion{Mg}{2}) and equivalent widths, and stacked their spectra to predict the intrinsic spectrum of ULAS J1120+0641. The resulting composite spectrum was somewhat above the observed spectrum at wavelengths at and just redward of rest-frame Ly$\alpha$, with a shape resembling the characteristic damping wing profile. However, the uncertainty in the stacked composite was not fully quantified, and the physical model was limited to the \citet{ME98} expression for the damping wing. A follow-up work by \citet{Bolton11} expanded upon the physical model with 1D radiative transfer simulations, and found that the combination of absorption at rest-frame Ly$\alpha$ and the small proximity zone were suggestive of neutral gas close to the quasar ($x_{\rm HI}\sim0.1$), although they noted that an identical signal could potentially come from small-scale optically thick gas along the line of sight instead of a neutral IGM (see also \citealt{Keating15}). With a higher-resolution FIRE spectrum, \citet{Simcoe12} found that any such dense gas would have to be extremely metal-poor ($[Z/H]<-4$), which would seem to favor the IGM interpretation.
The accuracy of the \citet{Mortlock11} composite spectrum as a prediction for ULAS J1120+0641 was called into question by \citet{BB15}, because the composite spectrum fails to match the \ion{C}{4} line and this may lead to an overestimate of the Ly$\alpha$ emission. \citet{BB15} selected a comparison sample of low-redshift quasars with more precisely-matched \ion{C}{4} emission line profiles. They found that the shape of the Ly$\alpha$+\ion{N}{5} region of these spectra was nearly identical to ULAS J1120+0641, suggesting that there may not be any damped absorption at all.
The most recent analysis of the ULAS J1120+0641 damping wing profile was undertaken by \citet{Greig17b}. Similar to this work, they trained a predictive model for the intrinsic blue-side continuum from a large sample of BOSS quasar spectra \citep{Greig17a}. Their parametric model predicts Gaussian emission line parameters for Ly$\alpha$ (line width, amplitude, and velocity shift of two components) from fits to several broad emission lines on the red side of the spectrum. In addition, they separately fit the \ion{N}{5}+\ion{Si}{2} complex at $1230 < \lambda_{\rm rest} < 1275$ {\AA} and introduce this fit as a strict prior to the model for the Ly$\alpha$ damping wing region ($1218 < \lambda_{\rm rest} < 1230$ {\AA}). Again similar to our analysis, \citet{Greig17b} employed a large-volume semi-numerical simulation of reionization (the Evolution Of 21 cm Structure simulation, \citealt{Mesinger16}) to characterize the large-scale distribution of neutral gas around massive halos during the reionization epoch. By restricting their analysis to wavelengths redward of Ly$\alpha$, they did not need to explicitly model the proximity zone of the quasar.
To approximate the effect of the quasar ionizing radiation on neutral gas along the line of sight, they ionize the first 16 comoving Mpc ($\sim2$ proper Mpc) of every sightline to be consistent with the observed profile,
but beyond this distance the quasar does not affect the ionization topology.
Their final statistical constraints on $\langle x_{\rm HI}\rangle$ were derived from a $\chi^2$-based likelihood analysis of intrinsic Ly$\alpha$ profiles drawn from their predictive
continuum model multiplied by the damping wing absorption from sightlines through their reionization simulation. While \citet{Greig17b} presented the most sophisticated study of a quasar damping wing at the time, their method has a handful of potential shortcomings which we describe below.
First, their fit of the \ion{N}{5} and \ion{Si}{2} complex and subsequent prior on the spectrum begins at $\lambda_{\rm rest}=1230$ {\AA} under the assumption that the damping wing absorption there is minimal. However, the absorption at $\lambda_{\rm rest}\sim1230$ {\AA} can still be significant at large $\langle x_{\rm HI}\rangle$, a fact which can be readily seen in the inset panel of Figure 2 in \citet{Greig17b} where their best-fit damping wing model still shows $>2\sigma$
absorption at $\lambda_{\rm rest}\sim1230$ {\AA} (see also Figure~\ref{fig:model_stacks}).
This prior may then result in a bias towards lower $\langle x_{\rm HI}\rangle$, as smooth absorption redward of $\lambda_{\rm rest}\sim1230$ {\AA} may be fitted out, although it is unclear what the magnitude of this effect would be in practice.
Additionally, while they account for scatter in the Gaussian parameters for the Ly$\alpha$ line, they do not appear to account for any error in the Gaussian fits themselves. Indeed, as described in \citet{Greig17a}, they remove quasars from the training set whose spectra are ``not well fit or characterized" by their double-Gaussian model for the Ly$\alpha$ line. While we have also excluded some discrepant quasars from our analysis, they were exclusively
the most extreme cases of BALs and associated absorption which would be readily apparent in the spectra of $z>7$ quasars. The distribution of possible continua for ULAS J1120+0641 used in \citet{Greig17b} thus represents an underestimate of the true continuum error, lacking the additional error resulting from any deviations in the true continuum from the multiple Gaussian model.
Finally, by always treating the first $\sim2$ proper Mpc of every sightline as ionized, \citet{Greig17b} introduce a complicated prior on the quasar lifetime. For sightlines which originally intersect neutral gas within 2 proper Mpc, the quasar must have been on long enough to ionize material out to that distance. However, for sightlines where the first 2 proper Mpc are already ionized, the quasar then has no effect at all on neutral gas along the line of sight, implying a very short lifetime. As such the \citet{Greig17b} posterior PDF cannot be considered fully marginalized over quasar lifetime, making a direct comparison between our $t_{\rm q}$-marginalized $\langle x_{\rm HI}\rangle$ constraints (Figure~\ref{fig:xhi_pdf}) and the results of the \citet{Greig17b} analysis very difficult.
In Figure~\ref{fig:greig}, we compare our blue-side prediction for ULAS J1120+0641 to the \citet{Mortlock11} composite spectrum and the \citet{Greig17b} model, where the latter has been renormalized to match our prediction at $\lambda_{\rm rest}=1245$ {\AA}. While our method predicts a very similar continuum to \citet{Mortlock11}, the intrinsic Ly$\alpha$ emission line strength predicted by \citet{Greig17b} is dramatically higher (albeit with nearly identical Ly$\alpha$ centroids). Most importantly, however, we predict a substantially lower continuum (i.e. much closer to the observed spectrum) at $\lambda_{\rm rest}\sim1225$ {\AA}, the spectral region which contributed the most to the damping wing detection in \citet{Greig17b}. As a result, our measurement does not rule out $\langle x_{\rm HI}\rangle\sim0$, although we nevertheless prefer somewhat higher $\langle x_{\rm HI}\rangle$ than \citet{Greig17b}
(see Figure~\ref{fig:xhi_pdf}), in large part due to the small proximity zone. While it is currently unclear why our two methods predict substantially different continua, we note that our predicted continuum is more consistent with the weak Ly$\alpha$ lines of the ULAS J1120+0641-analogs discussed in \citet{BB15}, and with the nearest-neighbor quasars we used to calibrate the continuum uncertainty in Paper I.
\subsection{Choice of $\langle x_{\rm HI}\rangle$ and $t_q$ Priors}\label{sec:priors}
As mentioned above, we assume a flat prior on $\langle x_{\rm HI}\rangle$ from ``0" (in truth, a model where reionization has finished with residual $\langle x_{\rm HI}\rangle\sim10^{-3}$) to 1.0. If we were to instead assume a flat logarithmic prior (i.e. $p(\langle x_{\rm HI} \rangle) \propto 1/\langle x_{\rm HI} \rangle$) that extended down to $\langle x_{\rm HI}\rangle\sim10^{-4}$, which would still be consistent with a completely opaque Ly$\alpha$ forest at
$z\ga7$,
our constraints on $\langle x_{\rm HI}\rangle$ would be dragged down to $\langle x_{\rm HI}\rangle\sim0$, and there would be little evidence at all for ongoing reionization -- the posterior PDF at $\langle x_{\rm HI}\rangle\sim0$, which is small but non-negligible (Figure~\ref{fig:xhi_pdf}), would be boosted by a factor of $>10^3$ relative to $\langle x_{\rm HI}\rangle>0.1$. Because the damping wing signal is only detected at modest
statistical significance in the context of the covariant continuum errors in our
PCA method (even for ULAS J1342+0928), switching to a
prior on $\langle x_{\rm HI}\rangle$ that is instead uniform in log space would shift the posterior PDF to peak close to $\langle x_{\rm HI}\rangle\sim0$ for essentially \emph{any} realistic quasar damping wing signal unless $\langle x_{\rm HI}\rangle\sim1$ and $t_{\rm q}$ is very short.
One could argue that our prior knowledge of $\langle x_{\rm HI} \rangle$ is not simply log-uniform, however, but can instead be thought of as bimodal.
If reionization is complete, the Universe is ``highly ionized," with $\langle x_{\rm HI}\rangle\la10^{-3}$ set by photoionization equilibrium with a metagalactic ionizing radiation field (e.g. \citealt{HM12}). If the Universe is undergoing the reionization phase transition, we instead have $\langle x_{\rm HI} \rangle$ of order unity.
While we are still starved for $z>7$ quasars, we will remain in a regime where quantitative constraints on $\langle x_{\rm HI} \rangle$ depend strongly on our choice of priors, but larger samples will greatly reduce this dependence: the prior enters the posterior PDF only once while each additional quasar contributes to the likelihood function. Assuming a Gaussian likelihood with $\langle x_{\rm HI}\rangle=0.5$ and $\sigma_{\langle x_{\rm HI}\rangle}\sim0.3$, similar to our constraint from the ULAS J1120+0641 spectrum, to counteract a factor of $10^4$ prior advantage at $\langle x_{\rm HI}\rangle\sim10^{-4}$ would require $\ga7$ quasars. Such a population of $z\ga7$ quasars is within reach of current programs exploiting wide-field optical and near-infrared surveys (e.g. \citealt{Wang17,Banados18}).
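The quoted quasar count can be reproduced with a short calculation (a sketch; the likelihood center, width, and $10^4$ prior factor are taken from the text above):

```python
import math

# Per-quasar log10 likelihood ratio between <x_HI> = 0.5 and <x_HI> ~ 0,
# assuming a Gaussian likelihood centered at 0.5 with sigma = 0.3 (from the text).
center, sigma = 0.5, 0.3
log10_ratio_per_quasar = (center ** 2 / (2.0 * sigma ** 2)) / math.log(10.0)

# Number of quasars needed to overcome a 10^4 prior advantage at <x_HI> ~ 1e-4.
prior_advantage_dex = 4.0
n_quasars = math.ceil(prior_advantage_dex / log10_ratio_per_quasar)
print(n_quasars)  # -> 7
```

Each quasar contributes $\approx0.6$ dex of evidence, so seven are needed to overcome 4 dex of prior advantage, matching the $\ga7$ figure.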
Our fiducial prior on $t_q$ is a flat, logarithmic prior from $10^3$ to $10^8$ years. The lower limit is motivated by the extremely small proximity zones in \citet{Eilers17} whose sizes are consistent with such a short lifetime ($t_{\rm q}<10^5$ years).
The upper limit comes from the fact that, assuming accretion at the Eddington limit with $10\%$ radiative efficiency, $10^8$ years ago the quasar would have been roughly an order of magnitude fainter than currently observed. The quasar would then be effectively shut off at early times, so longer lifetimes would be largely irrelevant to the proximity zone and damping wing structure and can thus be excluded. Other estimates from the thermal proximity effect \citep{Bolton12} and \ion{He}{2} transverse proximity effect \citep{Schmidt17a,Schmidt17b} suggest lifetimes of at least $10^7$ years.
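The order-of-magnitude fading over $10^8$ years can be checked against the Eddington-limited growth timescale (a rough sketch; the $\sim$450 Myr Eddington timescale and the $L \propto M$ assumption are standard values not stated in the text):

```python
import math

# Eddington-limited growth: the black hole mass (and luminosity, assuming L
# tracks M) grows on an e-folding timescale t_fold = eps / (1 - eps) * t_Edd,
# with t_Edd ~ 450 Myr (assumed standard value).
eps = 0.10           # radiative efficiency (from the text)
t_edd_myr = 450.0
t_fold_myr = eps / (1.0 - eps) * t_edd_myr   # = 50 Myr

# Fading factor looking back 10^8 yr = 100 Myr.
lookback_myr = 100.0
fading = math.exp(lookback_myr / t_fold_myr)
print(round(fading, 1))  # -> 7.4, i.e. roughly an order of magnitude
```

A factor of $e^2 \approx 7$ over $10^8$ years is indeed "roughly an order of magnitude" of fading.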
The spectra of ULAS J1120+0641 and ULAS J1342+0928 both appear to exclude lifetimes at the upper and lower ends of our fiducial prior range (Figures \ref{fig:mortlock_post} and \ref{fig:pisco_post}), however, so expanding the bounds in either direction would make little difference to our posterior PDFs.
The choice of flat prior in linear space vs. log space for $t_{\rm q}$ is more subtle.
If one assumes that all quasars live for a fixed amount of time $t_{\rm q,max}$, then a random quasar will have been shining continuously for a time $t_{\rm q}$
(which is what we have defined as ``lifetime" in this work) drawn from a uniform distribution between $0 < t_{\rm q} < t_{\rm q,max}$, and so a flat prior in linear space would be appropriate. However, the uncertainty on this maximum lifetime spans multiple orders of magnitude (e.g. \citealt{Martini04}), so we believe that a flat prior in log space is reasonably well justified.
\section{Conclusion}
In this work we have used the intrinsic quasar continuum models of ULAS J1120+0641 and ULAS J1342+0928 from Paper I, in combination with extensive forward modeling of the proximity zone and damping wing features in the context of patchy reionization, to jointly constrain the lifetimes of the two quasars and the volume-averaged neutral fraction of the Universe at $z>7$.
Our hybrid model of quasar spectra combines large-scale semi-numerical reionization simulations, hydrodynamical simulations, and 1D radiative transfer of ionizing photons from the quasars. We computed 2400 transmission spectra covering the proximity zone and damping wing for each pair of $\langle x_{\rm HI}\rangle$ and $\log{t_{\rm q}}$ on a coarse $21\times11$ grid for both quasars. Accounting for the covariant intrinsic quasar continuum uncertainty from Paper I, we can then construct realistic forward modeled representations of quasar transmission spectra. Based
on these mock spectra we developed a Bayesian statistical method for recovering the joint posterior PDF of $\langle x_{\rm HI}\rangle$ and $\log{t_{\rm q}}$ from an observed quasar spectrum.
Applying our statistical methodology to the spectra of ULAS J1120+0641 at $z=7.09$ \citep{Mortlock11} and ULAS J1342+0928 at $z=7.54$ \citep{Banados18}, we found that both quasars are consistent with an ongoing epoch of reionization at $z>7$. When marginalized over quasar lifetimes from $10^3$ to $10^8$ years, the resulting medians and 68\% credible intervals of the posterior PDFs are $\langle x_{\rm HI}\rangle(z=7.09)=0.48^{+0.26}_{-0.26}$ and $\langle x_{\rm HI}\rangle(z=7.54)=0.60^{+0.20}_{-0.23}$.
Using our method it should be possible to constrain $\langle x_{\rm HI}\rangle$ at lower redshifts $z\sim6$--$7$ where there are far more quasars known (e.g. \citealt{Venemans13,Venemans15,Banados16,Reed17,Wang17,Mazzucchelli17}). The most constraining measurements at lower redshift to date are the model-independent upper limits from \citet{McGreer15} who measured the fraction of dark pixels in the co-spatial Ly$\alpha$ and Ly$\beta$ forests, shown as red points in Figure~\ref{fig:planck}, but this method becomes less constraining as the Ly-series forests become almost entirely opaque at $z\ga6$, which may simply result from density evolution in the IGM or a mild decrease in the ionizing background towards higher redshift (e.g. \citealt{Davies17}).
We predict that we will be able to obtain stronger constraints than the $z\sim6.1$ upper limit of \citet{McGreer15} from the multitude of existing spectra of quasars at $z\sim6-7$, an endeavor we leave for future work.
Large samples of $z\ga7$ quasars to be discovered in further follow-up of quasar candidates from ground-based surveys (e.g. ULAS, \citealt{Lawrence07}; VIKING, \citealt{Arnaboldi07}; VHS, \citealt{McMahon13}; DECaLS\footnote{\url{http://legacysurvey.org/}}; UHS, \citealt{Dye18}) and in future wide-field near-infrared surveys by Euclid and WFIRST, together with high signal-to-noise spectra from JWST, will allow for exquisitely precise constraints on $\langle x_{\rm HI}\rangle(z)$.
\section*{Acknowledgements}
We would like to thank A. Price-Whelan and D. Hogg for consultation on statistical methods, and D. Stern for supporting the discovery of ULAS J1342+0928.
BPV and F. Walter acknowledge funding through the ERC grants ``Cosmic Dawn" and ``Cosmic Gas".
\newcommand{\noop}[1]{}
\begin{abstract}
The seminal work of \cite{jacot2018neural} demonstrated that training a neural network under a certain parameterization is equivalent to performing a certain kernel method as width goes to infinity.
This equivalence opened a promising direction for applying the rich literature on kernel methods to neural nets, which were much harder to tackle directly.
The present survey covers key results on kernel convergence as width goes to infinity, finite-width corrections, applications, and a discussion of the limitations of the corresponding methods.
\end{abstract}
\section{Definition and the explicit solution for square loss}
\label{sec:definition}
Consider a generic parametric model $f(x; \theta): \, \mathcal{X} \times \mathbb{R}^N \to \mathbb{R}$ differentiable with respect to weights $\theta$.
We aim to minimize square loss over a dataset $(\vec x, \vec y)$ of size $m$: $\frac{1}{2} \sum_{j=1}^m (y_j - f(x_j; \theta))^2 \to \min_\theta$.
A continuous-time gradient descent dynamics (gradient flow) corresponds to the following ordinary differential equation (ODE):
\begin{equation}
\dot\theta_t
= -\nabla_\theta\left(\frac{1}{2} \sum_{j=1}^m (y_j - f(x_j; \theta_t))^2\right)
= \sum_{j=1}^m (y_j - f(x_j; \theta_t)) \nabla_\theta f(x_j; \theta_t).
\end{equation}
Let us abbreviate the prediction at a given data point $x$ at time $t$, $f(x; \theta_t)$, as $f_t(x)$.
Under the dynamics above, this quantity evolves as
\begin{equation}
\dot f_t(x)
= \dot\theta_t^T \nabla f_t(x)
= \sum_{j=1}^m (y_j - f_t(x_j)) \nabla_\theta^T f_t(x_j) \nabla_\theta f_t(x).
\label{eq:f_t_dynamics}
\end{equation}
If we view $\nabla_\theta f_t(x)$ as a feature map $\Phi_t: \, \mathcal{X} \to \mathbb{R}^N$, the scalar product above becomes a kernel evaluated at the pair $(x_j,x)$.
This kernel is called an empirical neural tangent kernel (NTK) and is denoted by $\hat\Theta_t$:
\begin{equation}
\hat\Theta_t(x,x')
= \nabla_\theta^T f_t(x) \nabla_\theta f_t(x').
\end{equation}
This definition allows for a shorter representation of the prediction dynamics (\ref{eq:f_t_dynamics}):
\begin{equation}
\dot f_t(x)
= \hat\Theta_t(x,\vec x) (\vec y - f_t(\vec x)),
\label{eq:f_t_dynamics_emp_ntk}
\end{equation}
where by convention, $\hat\Theta_t(x,\vec x) \in \mathbb{R}^{1 \times m}$.
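Concretely, the empirical NTK is the Gram matrix of the parameter Jacobian. A minimal NumPy sketch with finite-difference gradients (the two-parameter model below is purely illustrative):

```python
import numpy as np

def empirical_ntk(f, theta, xs, eps=1e-6):
    """Empirical NTK hat-Theta(x, x') = grad_theta f(x)^T grad_theta f(x'),
    with parameter gradients taken by central finite differences."""
    jac = np.zeros((len(xs), len(theta)))
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        jac[:, i] = (f(xs, theta + e) - f(xs, theta - e)) / (2.0 * eps)
    return jac @ jac.T  # (m, m) Gram matrix of the Jacobian

# Purely illustrative model: f(x; theta) = theta_0 x + theta_1 x^2,
# so grad_theta f(x) = (x, x^2) and hat-Theta(x, x') = x x' + x^2 x'^2.
f = lambda xs, th: th[0] * xs + th[1] * xs ** 2
theta = np.array([1.0, -0.5])
xs = np.array([0.5, 1.0, 2.0])
K = empirical_ntk(f, theta, xs)
```

In practice one would use automatic differentiation instead of finite differences, but the Gram-matrix structure is the same.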
Assume that the empirical NTK does not evolve with time, i.e. $\hat\Theta_t(x,x') = \hat\Theta_0(x,x')$ $\forall x,x' \in \mathcal{X}$.
This assumption is equivalent to assuming that the model $f(x;\theta)$ is linear as a function of its weights:
\begin{equation}
f(x; \theta)
= f(x; \theta_0) + \nabla_\theta^T f(x; \theta_0) (\theta - \theta_0).
\end{equation}
When the kernel is constant, Eq.(\ref{eq:f_t_dynamics_emp_ntk}) is easily integrable.
Indeed, on the train dataset,
\begin{equation}
\dot f_t(\vec x)
= \hat\Theta_0(\vec x,\vec x) (\vec y - f_t(\vec x)),
\label{eq:f_t_dynamics_train}
\end{equation}
which gives
\begin{equation}
f_t(\vec x)
= f_0(\vec x) - \left(I - e^{-\hat\Theta_0(\vec x, \vec x) t}\right) (f_0(\vec x) - \vec y).
\end{equation}
Plugging it back to Eq.(\ref{eq:f_t_dynamics_emp_ntk}) gives
\begin{equation}
\dot f_t(x)
= \hat\Theta_0(x,\vec x) e^{-\hat\Theta_0(\vec x, \vec x) t} (\vec y - f_0(\vec x)),
\end{equation}
and finally,
\begin{equation}
f_t(x)
= f_0(x) - \hat\Theta_0(x, \vec x) \hat\Theta_0^{-1}(\vec x, \vec x) \left(I - e^{-\hat\Theta_0(\vec x, \vec x) t}\right) (f_0(\vec x) - \vec y).
\label{eq:lin_solution_square_loss}
\end{equation}
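The closed-form solution above is straightforward to evaluate numerically. A sketch using an eigendecomposition for the matrix exponential and inverse (the Laplace-type kernel and data are illustrative placeholders, not from the text):

```python
import numpy as np

def linearized_predictions(K_tx, K_xx, f0_t, f0_x, y, t):
    """Constant-kernel solution
    f_t(x) = f_0(x) - Theta(x, X) Theta(X, X)^{-1} (I - exp(-Theta(X, X) t)) (f_0(X) - y)."""
    lam, U = np.linalg.eigh(K_xx)                 # K_xx = U diag(lam) U^T
    expm = U @ np.diag(np.exp(-lam * t)) @ U.T    # matrix exponential of -K_xx t
    K_inv = U @ np.diag(1.0 / lam) @ U.T
    return f0_t - K_tx @ K_inv @ (np.eye(len(y)) - expm) @ (f0_x - y)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 1))
K = np.exp(-np.abs(X - X.T))     # illustrative positive-definite (Laplace) kernel
y = rng.normal(size=5)
f0 = np.zeros(5)
f_inf = linearized_predictions(K, K, f0, f0, y, t=1e6)
# As t -> infinity, the train predictions converge to the labels y.
```

Evaluating at the train points with $t \to \infty$ recovers $f_\infty(\vec x) = \vec y$, i.e. exact interpolation, as the exponential term vanishes.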
While the exact solution above is based on the constant kernel assumption, one can prove that the kernel is indeed nearly constant in certain settings, see \cref{sec:convergence}.
This allows one to transfer results that hold for linearized models to original ones.
For example, $f_t(\vec x)$ converges to $\vec y$ (i.e. the model learns the dataset) as long as the Gram matrix is positive definite: $\hat\Theta_0(\vec x,\vec x) \geq \lambda_0$ for some $\lambda_0 > 0$, see Eq.(\ref{eq:f_t_dynamics_train}).
The same result holds without the constant kernel assumption, as long as $\hat\Theta_t(\vec x,\vec x)$ stays sufficiently close to $\hat\Theta_0(\vec x,\vec x)$, and therefore, say, $\hat\Theta_t(\vec x,\vec x) \geq \lambda_0/2$.
Indeed,
\begin{equation}
\frac{d}{dt}\left(\frac{1}{2} \| \vec y - f_t(\vec x) \|_2^2\right)
= -(\vec y - f_t(\vec x))^T \hat\Theta_t(\vec x, \vec x) (\vec y - f_t(\vec x))
\leq -\frac{\lambda_0}{2} \| \vec y - f_t(\vec x) \|_2^2,
\end{equation}
which gives
\begin{equation}
\| \vec y - f_t(\vec x) \|_2^2
\leq e^{-\lambda_0 t} \| \vec y - f_0(\vec x) \|_2^2
\to 0 \quad \text{as $t \to \infty$};
\end{equation}
see \cite{du2018gradient} for the formal result.
This result is not trivial, since loss surfaces of generic neural nets are non-convex, and therefore any local optimization method (e.g. the gradient flow) may get stuck in a spurious local minimum.
See \cite{arora2019fine} for other results of a similar kind.
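The exponential decay bound above is easy to verify numerically. A sketch integrating the train-set dynamics with an explicit Euler scheme, using a constant kernel bounded below by $\lambda_0$ (for which the same differential inequality gives $\| \vec y - f_t(\vec x) \|_2^2 \leq e^{-\lambda_0 t} \| \vec y - f_0(\vec x) \|_2^2$ a fortiori); the random Gram matrix, step size, and horizon are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6
A = rng.normal(size=(m, m))
K = A @ A.T + 0.1 * np.eye(m)        # constant PSD Gram matrix, lambda_min >= 0.1
lam0 = np.linalg.eigvalsh(K).min()
y = rng.normal(size=m)
f = np.zeros(m)
r0_sq = np.sum((y - f) ** 2)

dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):          # explicit Euler for f' = K (y - f)
    f = f + dt * K @ (y - f)

res_sq = np.sum((y - f) ** 2)
bound = np.exp(-lam0 * T) * r0_sq     # e^{-lambda_0 t} ||y - f_0||_2^2
# res_sq stays below the exponential bound.
```

With the step size well below $2/\lambda_{\max}$, the discretized residual decays at least as fast as the continuous-time bound.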
Also, if one assumes the kernel to be nearly constant, one can identify certain pathologies affecting the learning process by analyzing the initial kernel: see \cite{martens2021rapid} discussing trainability of very deep nets and \cite{dupuis2021dnn,tancik2020fourier} fixing blurry results of image regression.
Finally, the exact solution (\ref{eq:lin_solution_square_loss}) can be used as a substitute for the usual gradient descent training routine.
A naive approach for evaluating Eq.(\ref{eq:lin_solution_square_loss}) would be to compute the initial kernel $\hat\Theta_0(\vec x, \vec x)$ and then to invert it.
Naively computing the kernel requires $O(N m^2)$ time and $O(m^2)$ memory, while inverting it takes $O(m^3)$ more time.
Such an approach is infeasible for datasets of realistic sizes (i.e. $m \gtrsim 10^5$), calling for major optimizations, see \cite{novak2019neural,novakfast,meanti2020kernel}.
Nevertheless, for $m \lesssim 10^4$, the direct approach is feasible and gives promising results, see \cite{arora2019harnessing}.
Also, in certain scenarios, the kernel can be efficiently scaled from small $m$ to larger ones, see \cite{radhakrishnan2021simple}.
\section{Kernel convergence}
\label{sec:convergence}
The goal of this section is to validate the constant kernel assumption: $\hat\Theta_t(x,x') = \hat\Theta_0(x,x')$ $\forall x,x' \in \mathcal{X}$.
The main result is: under a certain parameterization, the empirical NTK of a neural network becomes constant as width goes to infinity.
Before stating this result formally, we provide an illustrative example.
Consider a neural network with one hidden layer, scalar input, and Gaussian-initialized weights:
\begin{equation}
f(x; a_{1:n}, w_{1:n})
= \sum_{i=1}^n a_i \phi(w_i x),
\quad
a_{1:n} \sim \mathcal{N}(0, n^{-1} I),
\quad
w_{1:n} \sim \mathcal{N}(0, I).
\label{eq:1_hid_net_standard}
\end{equation}
Here $n$ is width of the hidden layer; following a standard initialization scheme \cite{he2015delving}, initialization variance of each layer is inversely proportional to the number of input neurons.
The above parameterization of the network is the one typically used in practice; we shall refer to it as the standard parameterization.
However, the parameterization we need is a different one:
\begin{equation}
f(x; a_{1:n}, w_{1:n})
= \frac{1}{\sqrt{n}} \sum_{i=1}^n a_i \phi(w_i x),
\quad
a_{1:n} \sim \mathcal{N}(0, I),
\quad
w_{1:n} \sim \mathcal{N}(0, I).
\label{eq:1_hid_net_ntk}
\end{equation}
We shall refer to it as the NTK parameterization.
Note that it does not alter the distribution of the neurons, both hidden and output, at initialization, but it does alter the gradient flow:
\begin{equation}
\dot a_k
= \frac{1}{\sqrt{n}} \sum_{j=1}^m (y_j - f_t(x_j)) \phi(w_k x_j),
\quad
\dot w_k
= \frac{1}{\sqrt{n}} \sum_{j=1}^m (y_j - f_t(x_j)) a_k \phi'(w_k x_j) x_j.
\end{equation}
Here input and output weights receive $O(n^{-1/2})$ increments, while both of them are $O(1)$ at initialization.
Hence $a_k(t) \to a_k(0)$ and $w_k(t) \to w_k(0)$ as $n \to \infty$ for any fixed $k \in \mathbb{N}$ and $t \in \mathbb{R}_+$.
Compare with gradient flow under standard parameterization:
\begin{equation}
\dot a_k
= \sum_{j=1}^m (y_j - f_t(x_j)) \phi(w_k x_j),
\quad
\dot w_k
= \sum_{j=1}^m (y_j - f_t(x_j)) a_k \phi'(w_k x_j) x_j.
\end{equation}
Here the output weights are $O(n^{-1/2})$ at initialization but receive $O(1)$ increments at $t=0$, while the input weights are $O(1)$ at initialization but receive $O(n^{-1/2})$ increments at $t=0$.
Let us write the NTK under NTK parameterization:
\begin{multline}
\hat\Theta_t(x,x')
= \sum_{i=1}^n \left(\partial_{a_i} f(x) \partial_{a_i} f(x') + \partial_{w_i} f(x) \partial_{w_i} f(x')\right)
=\\= \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(t) x) \phi(w_i(t) x') + a_i^2(t) \phi'(w_i(t) x) \phi'(w_i(t) x') x x'\right).
\end{multline}
Since $a_k(t) \to a_k(0)$ and $w_k(t) \to w_k(0)$ as $n \to \infty$ for any fixed $k \in \mathbb{N}$ and $t \in \mathbb{R}_+$, the above expression is asymptotically equivalent to
\begin{equation}
\hat\Theta_0(x,x')
= \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(0) x) \phi(w_i(0) x') + a_i^2(0) \phi'(w_i(0) x) \phi'(w_i(0) x') x x'\right),
\end{equation}
which converges (almost surely) to
\begin{equation}
\Theta(x,x')
= \mathbb{E}\,_{a,w \sim \mathcal{N}(0,1)} \left(\phi(w x) \phi(w x') + a^2 \phi'(w x) \phi'(w x') x x'\right)
\end{equation}
as $n \to \infty$ due to the (strong) Law of Large Numbers.
The limit kernel $\Theta(x,x')$ depends neither on the timestep $t$ nor on the initialization.
This kernel is typically referred to as the NTK, in contrast to the empirical NTK $\hat\Theta_t$.
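The Law-of-Large-Numbers convergence can be checked by comparing the initial width-$n$ kernel against a large-sample Monte Carlo estimate of $\Theta$ (here $\phi = \tanh$ and the input pair are illustrative choices):

```python
import numpy as np

x, xp = 0.7, -1.2           # an illustrative pair of inputs
phi = np.tanh
dphi = lambda z: 1.0 - np.tanh(z) ** 2
rng = np.random.default_rng(0)

def initial_kernel(n):
    """Width-n empirical NTK at initialization: the average over n neurons of
    phi(w x) phi(w x') + a^2 phi'(w x) phi'(w x') x x', with a, w ~ N(0, 1)."""
    a, w = rng.normal(size=n), rng.normal(size=n)
    return np.mean(phi(w * x) * phi(w * xp)
                   + a ** 2 * dphi(w * x) * dphi(w * xp) * x * xp)

theta_mc = initial_kernel(2_000_000)   # Monte Carlo proxy for the limit Theta(x, x')
k_small = initial_kernel(100)
k_large = initial_kernel(1_000_000)
# By the LLN, k_large is typically much closer to theta_mc than k_small is.
```

At width $10^6$ the empirical kernel agrees with the limit to a few parts in a thousand, while at width $100$ the fluctuations are of order $10^{-1}$.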
Since under standard parameterization the weights receive increments asymptotically at least comparable to their initialization scale, one cannot expect the empirical NTK to stop evolving as $n \to \infty$ in this setting.
Moreover, the initial empirical NTK diverges with width:
\begin{multline}
\hat\Theta_0(x,x')
= \sum_{i=1}^n \left(\phi(w_i(0) x) \phi(w_i(0) x') + a_i^2(0) \phi'(w_i(0) x) \phi'(w_i(0) x') x x'\right)
\sim\\\sim n \times \mathbb{E}\,_{w \sim \mathcal{N}(0,1)} \phi(w x) \phi(w x').
\end{multline}
The above kernel convergence result holds in more general settings.
Consider a fully-connected network with $L$ layers under NTK parameterization:
\begin{equation}
f(x) = h_L(x),
\quad
h_l(x) = \frac{1}{\sqrt{n_{l-1}}} W_l x_{l-1}(x),
\quad
x_{l-1}(x) = \phi(h_{l-1}(x)),
\quad
x_0(x) = x,
\end{equation}
where $W_1 \in \mathbb{R}^{n_1 \times n_0}$, $W_L \in \mathbb{R}^{1 \times n_{L-1}}$, and $W_l \in \mathbb{R}^{n_l \times n_{l-1}}$ for all other $l$.
Here all weights are initialized with independent standard Gaussians.
Suppose we aim to optimize a generic differentiable loss $\ell$ instead of the quadratic one:
\begin{equation}
\dot\theta_t
= -\nabla_\theta\left(\sum_{j=1}^m \ell(y_j, f(x_j; \theta_t))\right)
= -\sum_{j=1}^m \left.\frac{\partial \ell(y_j, z)}{\partial z}\right|_{z=f(x_j; \theta_t)} \nabla_\theta f(x_j; \theta_t),
\end{equation}
where $\theta$ now is a concatenation of all weights $W_{1:L}$.
The seminal work of \cite{jacot2018neural} proves the following:
\begin{theorem}[\cite{jacot2018neural}]
Under the conditions above, for $\phi$ being $C^2$ and Lipschitz and $\ell$ being $C^1$ and Lipschitz, $\hat\Theta_t(x,x') \to \Theta(x,x')$ in probability as $n_{1:L-1} \to \infty$ sequentially $\forall x,x' \in \mathcal{X}$ $\forall t \geq 0$.
\end{theorem}
In fact, the theorem above can be generalized far beyond fully-connected nets with smooth activation functions.
Define a tensor program as a set of initial variables of certain types and a sequence of operations.
Each of the operations generates a new variable by acting on previously generated ones.
The variable types are
\begin{enumerate}
\item $\mathsf{A}$: $n \times n$ matrices with iid $\mathcal{N}(0,1)$ entries;
\item $\mathsf{G}$: vectors of size $n$ with asymptotically iid Gaussian entries;
\item $\mathsf{H}$: images of $\mathsf{G}$-vars by coordinatewise nonlinearities.
\end{enumerate}
The operations are
\begin{enumerate}
\item $\mathrm{Trsp}$: $W: \mathsf{A} \to W^\top: \mathsf{A}$;
\item $\mathrm{MatMul}$: $(W: \mathsf{A}, \; x: \mathsf{H}) \to \frac{1}{\sqrt{n}} W x: \mathsf{G}$;
\item {$\mathrm{LinComb}$: $(\{x_i: \mathsf{G}, \; a_i \in \mathbb{R}\}_{i=1}^k) \to \sum_{i=1}^k a_i x_i: \mathsf{G}$;}
\item $\mathrm{Nonlin}$: $(\{x_i: \mathsf{G}\}_{i=1}^k, \; \phi: \mathbb{R}^k \to \mathbb{R}) \to \phi(x_{1:k}): \mathsf{H}$.
\end{enumerate}
The set of initial variables consists of variables of $\mathsf{A}$-type and $\mathsf{G}$-type.
As for input $\mathsf{G}$-vars, we sample $\{x_\alpha: \text{$x$ is an input G-var}\} \sim \mathcal{N}(\mu^{in}, \Sigma^{in})$ $\forall \alpha \in [n]$.
The above formalism allows one to express forward and backward passes of a very wide class of neural nets (including RNNs, ResNets, and Transformers).
Since none of the operations above generates new $\mathsf{A}$-vars (new weights), the whole gradient descent training process can be expressed as a single tensor program by backtracking the gradient steps.
The real power of tensor programs comes from the following theorem:
\begin{theorem}["Master theorem", \cite{yang2020tensor_iii}]
\label{thm:master_theorem}
Consider a tensor program with $M$ $\mathsf{G}$-vars, under above assumptions.
Suppose all the nonlinearities $\phi$ and a function $\psi: \, \mathbb{R}^M \to \mathbb{R}$ are polynomially bounded.
Then the following holds:
\begin{equation}
\frac{1}{n} \sum_{\alpha=1}^n \psi(g^1_\alpha,\ldots,g^M_\alpha)
\to \mathbb{E}\,_{Z \sim \mathcal{N}(\mu,\Sigma)} \psi(Z)
\end{equation}
a.s. as $n \to \infty$, where $\mu$ and $\Sigma$ can be computed using certain recurrent rules.
\end{theorem}
It is possible to define the empirical NTK of a tensor program and express it in the form $\frac{1}{n} \sum_{\alpha=1}^n \psi(g^1_\alpha,\ldots,g^M_\alpha)$ for a certain function $\psi$.
Then the kernel converges by virtue of the above theorem.
See \cite{yang2020tensor_ii} for the proof of initial kernel convergence and \cite{yang2021tensor_iib} for the proof of kernel convergence for any timestep.
As an illustration, recall the two-layered net considered at the beginning of the present section.
Its empirical NTK is given by
\begin{equation}
\hat\Theta_0(x,x')
= \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(0) x) \phi(w_i(0) x') + a_i^2(0) \phi'(w_i(0) x) \phi'(w_i(0) x') x x'\right).
\end{equation}
Here $\mathsf{G}$-vars are $g^1 = w(0) x$, $g^2 = w(0) x'$, $g^3 = a(0) x$, $g^4 = a(0) x'$.
Taking $\psi(g^1_\alpha,\ldots,g^4_\alpha) = \phi(g^1_\alpha) \phi(g^2_\alpha) + \phi'(g^1_\alpha) \phi'(g^2_\alpha) g^3_\alpha g^4_\alpha$ allows for explicit application of Theorem \ref{thm:master_theorem}.
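The Master theorem itself is easy to probe numerically on a minimal program (a sketch with assumed choices: $\phi = \mathrm{ReLU}$ and $\psi(z) = z^2$, so that the predicted limit is $\mathbb{E}\,\mathrm{ReLU}(Z_0)^2 = 1/2$ for $Z_0 \sim \mathcal{N}(0,1)$):

```python
import numpy as np

# A minimal tensor program (an illustrative sketch, not the general machinery):
#   input G-var: x with iid N(0,1) entries,
#   Nonlin:      h = phi(x)            (H-var),
#   MatMul:      g = (1/sqrt(n)) W h   (G-var), W an A-var with iid N(0,1) entries.
# The Master theorem predicts (1/n) sum_a psi(g_a) -> E psi(Z), Z ~ N(0, E phi(Z0)^2).
# With phi = ReLU and psi(z) = z^2, the limit is E ReLU(Z0)^2 = 1/2.

rng = np.random.default_rng(1)
n = 3000
x = rng.standard_normal(n)           # input G-var
W = rng.standard_normal((n, n))      # A-var
h = np.maximum(x, 0.0)               # Nonlin (ReLU)
g = W @ h / np.sqrt(n)               # MatMul
psi_avg = np.mean(g**2)              # (1/n) sum_a psi(g_a) with psi(z) = z^2
# psi_avg is close to 0.5 for large n
```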
\section{Finite-width corrections}
\label{sec:finite_width}
While the results discussed in \cref{sec:convergence} hold in the limit of infinite width, they are not directly applicable to real-life finite-width nets for obvious reasons.
This motivates one to introduce finite-width corrections for the limit NTK.
First, define a higher-order kernel:
\begin{equation}
O_{s,t}(x_{1:s})
= \nabla^T_\theta O_{s-1,t}(x_{1:s-1}) \nabla_\theta f_t(x_s).
\end{equation}
Put $O_{1,t}(x_1) = f_t(x_1)$; this gives $O_{2,t}(x_1,x_2) = \hat\Theta_t(x_1,x_2)$.
Consider a gradient flow optimization process under square loss:
\begin{equation}
\dot\theta_t
= \sum_{j=1}^m (y_j - f_t(x_j)) \nabla_\theta f_t(x_j).
\end{equation}
Under this process, the $s$-order kernel evolves as
\begin{equation}
\dot O_{s,t}(x_{1:s})
= \nabla^T_\theta O_{s,t}(x_{1:s}) \dot\theta
= O_{s+1,t}(x_{1:s},\vec x) (\vec y - f_t(\vec x)).
\end{equation}
This gives an infinite system of ODEs governing the evolution of the kernels.
If our goal is to obtain a solution only up to the order of $n^{-1}$, can we truncate the initially infinite system?
How many equations should we keep?
In order to answer these questions, let us estimate the order of growth for $O_{s,t}$.
Following \cite{Dyer2020Asymptotics}, we start with a definition of a correlation function.
Let us fix $t=0$ and omit the corresponding subscript for now.
Define a rank-$k$ derivative tensor $T_{\mu_1 \ldots \mu_k}$ as follows:
\begin{equation}
T_{\mu_1 \ldots \mu_k}(x; f) =
\frac{\partial^k f(x)}{\partial \theta^{\mu_1} \ldots \partial \theta^{\mu_k}}.
\end{equation}
For $k=0$ we define $T(x; f) = f(x)$.
We are now ready to define a correlation function $C$:
\begin{equation}
C(x_1,\ldots,x_m) =
\sum_{\mu_1,\ldots,\mu_{k_m}} \Delta_{\mu_1 \ldots \mu_{k_m}}^{(\pi)} \mathbb{E}\,_\theta \left(
T_{\mu_1 \ldots \mu_{k_1}}(x_1) T_{\mu_{k_1+1} \ldots \mu_{k_2}}(x_2) \ldots T_{\mu_{k_{m-1}+1} \ldots \mu_{k_m}}(x_m)
\right).
\end{equation}
Here $0 \leq k_1 \leq \ldots \leq k_m$, $k_m$ is even, $\pi \in S_{k_m}$ is a permutation, and $\Delta_{\mu_1 \ldots \mu_{k_m}}^{(\pi)} = \delta_{\mu_{\pi(1)} \mu_{\pi(2)}} \ldots \delta_{\mu_{\pi(k_m-1)} \mu_{\pi(k_m)}}$.
For example,
\begin{multline}
\mathbb{E}\,_\theta (f(x) \nabla^T_\theta f(x) \nabla_\theta \nabla^T_\theta f(x_1) \nabla_\theta f(x_2)) =
\sum_{\mu,\nu} \mathbb{E}\,_\theta (f(x) \partial_\mu f(x) \partial^2_{\mu,\nu} f(x_1) \partial_\nu f(x_2)) =\\=
\sum_{\mu_1,\mu_2,\mu_3,\mu_4} \delta_{\mu_1 \mu_2} \delta_{\mu_3 \mu_4} \mathbb{E}\,_\theta (f(x) \partial_{\mu_1} f(x) \partial^2_{\mu_2,\mu_3} f(x_1) \partial_{\mu_4} f(x_2)) =
C(x,x,x_1,x_2)
\label{eq:dtheta_dt_as_corr_f}
\end{multline}
is a correlation function with $m=4$, $k_1=0$, $k_2=1$, $k_3=3$, $k_4=4$, and $\pi(j) = j$.
If two derivative tensors have two indices that are summed over, we say that they are contracted.
Formally, we say that $T_{\mu_{k_{i-1}+1} \ldots \mu_{k_i}}(x_i)$ is contracted with $T_{\mu_{k_{j-1}+1} \ldots \mu_{k_j}}(x_j)$ for $1 \leq i,j \leq m$ if there exists an even $s \leq k_m$ such that $k_{i-1} < \pi(s-1) \leq k_i$, while $k_{j-1} < \pi(s) \leq k_j$, or vice versa.
Define the cluster graph $G_C(V,E)$ as a non-oriented non-weighted graph with vertices $V = \{v_1, \ldots, v_m\}$ and edges $E = \{(v_i,v_j) \, | \, \text{$T(x_i)$ and $T(x_j)$ are contracted in $C$}\}$.
Let $n_e$ be the number of even-sized connected components of $G_C(V,E)$ and $n_o$ be the number of odd-sized components.
We are going to use the following conjecture, which is proven in certain scenarios:
\begin{conjecture}[\cite{Dyer2020Asymptotics}]
\label{conj:C_asymptotics}
If $m$ is even, $C(x_1,\ldots,x_m) = O_{n\to\infty}(n^{s_C})$, where $s_C = n_e + n_o / 2 - m / 2$.
If $m$ is odd, $C(x_1,\ldots,x_m) = 0$.
\end{conjecture}
We are also going to use the following lemma:
\begin{lemma}[\cite{Dyer2020Asymptotics}]
\label{lemma:derivative_asymptotics}
Suppose \cref{conj:C_asymptotics} holds.
Let $C(\vec x) = \mathbb{E}\,_\theta F(\vec x; \theta)$ be a correlation function and suppose $C(\vec x) = O(n^{s_C})$ for $s_C$ defined in \cref{conj:C_asymptotics}.
Then $\mathbb{E}\,_\theta d^k F(\vec x; \theta) / dt^k = O(n^{s_C})$ $\forall k \geq 1$.
\end{lemma}
\begin{proof}
Consider the first derivative:
\begin{multline}
\mathbb{E}\,_\theta \frac{dF(\vec x)}{dt} =
\mathbb{E}\,_\theta (\dot\theta^T \nabla_\theta F(\vec x)) =
\mathbb{E}\,_{x,y} \mathbb{E}\,_\theta (\eta (y - f(x)) \nabla^T_\theta f(x) \nabla_\theta F(\vec x)) =\\=
\eta \mathbb{E}\,_{x,y} \mathbb{E}\,_\theta (y \nabla^T_\theta f(x) \nabla_\theta F(\vec x)) -
\eta \mathbb{E}\,_{x,y} \mathbb{E}\,_\theta (f(x) \nabla^T_\theta f(x) \nabla_\theta F(\vec x)).
\end{multline}
Each of the two terms is a linear combination of correlation functions.
By \cref{conj:C_asymptotics}, the first term vanishes since it contains an odd number ($m+1$) of derivative tensors, while the second one has $m' = m+2$, $n_e'$ even clusters, and $n_o'$ odd clusters.
If $\nabla_\theta f(x)$ is contracted with an even cluster of $C$, we have $n_e' = n_e - 1$, $n_o' = n_o + 2$.
In contrast, if $\nabla_\theta f(x)$ is contracted with an odd cluster of $C$, we have $n_e' = n_e + 1$, $n_o' = n_o$.
In the first case, we have $s_C' = n_e' + n_o'/2 - m'/2 = s_C - 1$, while for the second $s_C' = s_C$.
In any case, the result is a linear combination of correlation functions with $s_C' \leq s_C$ for each.
\end{proof}
Let us now restore the $t$-subscript.
Since $O_s$ has $s$ derivative tensors and a single cluster, by virtue of \cref{conj:C_asymptotics}, $\mathbb{E}\,_\theta O_{s,0} = O(n^{1 - s/2})$ for even $s$ and $\mathbb{E}\,_\theta O_{s,0} = 0$ for odd $s$.
At the same time, $\mathbb{E}\,_\theta \dot O_{s,0} = O(n^{1 - (s+2)/2}) = O(n^{-s/2})$ for even $s$ and $\mathbb{E}\,_\theta \dot O_{s,0} = O(n^{1 - (s+1)/2}) = O(n^{1/2 - s/2})$ for odd $s$.
As for the second moments, we have $\mathbb{E}\,_\theta (O_{s,0})^2 = O(n^{2 - s})$ for even $s$ and $\mathbb{E}\,_\theta (O_{s,0})^2 = O(n^{1 - s})$ for odd $s$.
Similarly, we have $\mathbb{E}\,_\theta (\dot O_{s,0})^2 = O(n^{2/2 - (2s+2)/2}) = O(n^{-s})$ for even $s$ and $\mathbb{E}\,_\theta (\dot O_{s,0})^2 = O(n^{2 - (2s+2)/2}) = O(n^{1 - s})$ for odd $s$.
The asymptotics of the first two moments imply the asymptotics of the random variable itself:
\begin{equation}
O_{s,0}(x_{1:s}) =
\begin{cases}
O(n^{1 - s/2}) &\text{for even $s$;}
\\
O(n^{1/2 - s/2}) &\text{for odd $s$;}
\end{cases}
\qquad
\dot O_{s,0}(x_{1:s}) =
\begin{cases}
O(n^{-s/2}) &\text{for even $s$;}
\\
O(n^{1/2 - s/2}) &\text{for odd $s$.}
\end{cases}
\end{equation}
\cref{lemma:derivative_asymptotics} gives $\forall k \geq 1$:
\begin{equation}
\left.\frac{d^k O_{s,t}}{dt^k}(x_{1:s})\right|_{t=0} =
\begin{cases}
O(n^{-s/2}) &\text{for even $s$;}
\\
O(n^{1/2 - s/2}) &\text{for odd $s$.}
\end{cases}
\end{equation}
Then given an analytic activation function, we have $\forall t \geq 0$:
\begin{equation}
\dot O_{s,t}(x_{1:s}) =
\sum_{k=1}^\infty \left.\frac{d^k O_{s,t}}{dt^k}(x_{1:s})\right|_{t=0} \frac{t^{k-1}}{(k-1)!} =
\begin{cases}
O(n^{-s/2}) &\text{for even $s$;}
\\
O(n^{1/2 - s/2}) &\text{for odd $s$.}
\end{cases}
\end{equation}
This allows us to write a finite system of ODEs for the model evolution up to $O(n^{-1})$ terms:
\begin{equation}
\dot f_{t}(x_1) =
O_{2,t}(x_1, \vec x) (\vec y - f_t(\vec x)),
\qquad
f_0(x_1) =
f(x_1; \theta),
\quad
\theta \sim
\mathcal{N}(0, I),
\end{equation}
\begin{equation}
\dot O_{2,t}(x_1, x_2) =
O_{3,t}(x_1, x_2, \vec x) (\vec y - f_t(\vec x)),
\qquad
O_{2,0}(x_1, x_2) =
\nabla_\theta^T f_0(x_1) \nabla_\theta f_0(x_2),
\end{equation}
\begin{equation}
\dot O_{3,t}(x_1, x_2, x_3) =
O_{4,t}(x_1, x_2, x_3, \vec x) (\vec y - f_t(\vec x)),
\qquad
O_{3,0}(x_1, x_2, x_3) =
\nabla_\theta^T O_{2,0}(x_1, x_2) \nabla_\theta f_0(x_3),
\end{equation}
\begin{equation}
\dot O_{4,t}(x_1, x_2, x_3, x_4) =
O(n^{-2}),
\qquad
O_{4,0}(x_1, x_2, x_3, x_4) =
\nabla_\theta^T O_{3,0}(x_1, x_2, x_3) \nabla_\theta f_0(x_4).
\end{equation}
Let us expand all the quantities in powers of $n^{-1}$:
\begin{equation}
O_{s,t}(x_{1:s}) =
O_{s,t}^{(0)}(x_{1:s}) + n^{-1} O_{s,t}^{(1)}(x_{1:s}) + O(n^{-2}),
\end{equation}
where $O_{s,t}^{(k)}(x_{1:s}) = \Theta_{n\to\infty}(1)$.
Then the system above transforms into the following:
\begin{equation}
\dot f_{t}^{(0)}(x_1) =
O_{2,t}^{(0)}(x_1, \vec x) (\vec y - f_t^{(0)}(\vec x)),
\end{equation}
\begin{equation}
\dot f_{t}^{(1)}(x_1) =
O_{2,t}^{(1)}(x_1, \vec x) (\vec y - f_t^{(0)}(\vec x)) - O_{2,t}^{(0)}(x_1, \vec x) f_t^{(1)}(\vec x),
\end{equation}
\begin{equation}
O_{2,t}^{(0)}(x_1, x_2) =
\nabla_\theta^T f_0^{(0)}(x_1) \nabla_\theta f_0^{(0)}(x_2),
\end{equation}
\begin{equation}
\dot O_{2,t}^{(1)}(x_1, x_2) =
O_{3,t}^{(1)}(x_1, x_2, \vec x) (\vec y - f_t^{(0)}(\vec x)),
\end{equation}
\begin{equation}
\dot O_{3,t}^{(1)}(x_1, x_2, x_3) =
O_{4,t}^{(1)}(x_1, x_2, x_3, \vec x) (\vec y - f_t^{(0)}(\vec x)),
\end{equation}
\begin{equation}
O_{4,t}^{(1)}(x_1, x_2, x_3, x_4) =
\nabla_\theta^T O_{3,0}^{(0)}(x_1, x_2, x_3) \nabla_\theta f_0^{(0)}(x_4),
\end{equation}
where we have ignored the initial conditions for the time being.
Integrating this system is straightforward:
\begin{equation}
f_{t}^{(0)}(\vec x) =
\vec y + e^{-O_{2,0}^{(0)}(\vec x, \vec x) t} (f_0^{(0)}(\vec x) - \vec y).
\end{equation}
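Before proceeding, the leading-order solution can be sanity-checked numerically; in the sketch below $\Theta$ is a random PSD matrix standing in for $O_{2,0}^{(0)}(\vec x, \vec x)$, and the matrix exponential is evaluated by eigendecomposition:

```python
import numpy as np

# Check that f_t = y + expm(-Theta t) (f_0 - y) solves df/dt = Theta (y - f_t).

rng = np.random.default_rng(2)
m = 5
A = rng.standard_normal((m, m))
theta = A @ A.T / m                          # random PSD "kernel matrix"
f0, y = rng.standard_normal(m), rng.standard_normal(m)

lam, V = np.linalg.eigh(theta)

def f(t):
    expm = V @ np.diag(np.exp(-lam * t)) @ V.T   # e^{-Theta t}
    return y + expm @ (f0 - y)

t, eps = 1.3, 1e-6
lhs = (f(t + eps) - f(t - eps)) / (2 * eps)  # numerical df/dt
rhs = theta @ (y - f(t))
# lhs and rhs agree up to finite-difference error
```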
For brevity, let us introduce the following definition:
\begin{equation}
\Delta f_t^{(0)}(\vec x) =
e^{-O_{2,0}^{(0)}(\vec x, \vec x) t} (f_0^{(0)}(\vec x) - \vec y).
\end{equation}
This gives:
\begin{equation}
O_{3,t}^{(1)}(x_1, x_2, x_3) =
O_{3,0}^{(1)}(x_1, x_2, x_3) -
\int_{0}^t O_{4,0}^{(1)}(x_1, x_2, x_3, \vec x) \Delta f_{t'}^{(0)}(\vec x) \, dt'.
\end{equation}
\begin{multline}
O_{2,t}^{(1)}(x_1, x_2)
= O_{2,0}^{(1)}(x_1, x_2) - \int_{0}^t O_{3,0}^{(1)}(x_1, x_2, \vec x) \Delta f_{t'}^{(0)}(\vec x) \, dt'
+\\+
\int_{0}^{t} \int_{0}^{t''} \Delta f_{t''}^{(0),T}(\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') \Delta f_{t'}^{(0)}(\vec x') \, dt' \, dt''.
\end{multline}
Let us evaluate the terms one by one:
\begin{equation}
\int_{0}^t O_{3,0}^{(1)}(x_1, x_2, \vec x) \Delta f_{t'}^{(0)}(\vec x) \, dt'
= O_{3,0}^{(1)}(x_1, x_2, \vec x) \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} \left(I - e^{-O_{2,0}^{(0)}(\vec x, \vec x) t}\right) (f_0^{(0)}(\vec x) - \vec y).
\end{equation}
\begin{multline}
\int_{0}^{t} \int_{0}^{t''} \Delta f_{t''}^{(0),T}(\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') \Delta f_{t'}^{(0)}(\vec x') \, dt' \, dt''
=\\= \int_{0}^{t} (f_0^{(0)}(\vec x) - \vec y)^T \left(I - e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'}\right) \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} (f_0^{(0)}(\vec x) - \vec y) \, dt'
=\\= (f_0^{(0)}(\vec x) - \vec y)^T \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} \left(I - e^{-O_{2,0}^{(0)}(\vec x, \vec x) t}\right) (f_0^{(0)}(\vec x) - \vec y)
-\\-
\int_{0}^{t} (f_0^{(0)}(\vec x) - \vec y)^T e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} (f_0^{(0)}(\vec x) - \vec y) \, dt'.
\end{multline}
Consider the eigenvalue-eigenvector decomposition of $O_{2,0}^{(0)}(\vec x, \vec x)$: $O_{2,0}^{(0)}(\vec x, \vec x) = \sum_{k=1}^m \lambda_k v_k v_k^T$.
This helps us integrate the last term:
\begin{multline}
\int_{0}^{t} (f_0^{(0)}(\vec x) - \vec y)^T e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') e^{-O_{2,0}^{(0)}(\vec x, \vec x) t'} (f_0^{(0)}(\vec x) - \vec y) \, dt'
=\\= \sum_{k,l=1}^m \int_{0}^{t} e^{-(\lambda_k+\lambda_l) t'} (f_0^{(0)}(\vec x) - \vec y)^T v_k v_k^T \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') v_l v_l^T (f_0^{(0)}(\vec x) - \vec y) \, dt'
=\\= \sum_{k,l=1}^m \frac{1}{\lambda_k+\lambda_l} \left(1 - e^{-(\lambda_k+\lambda_l) t}\right) (f_0^{(0)}(\vec x) - \vec y)^T v_k v_k^T \left(O_{2,0}^{(0)}(\vec x,\vec x)\right)^{-1} O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x') v_l v_l^T (f_0^{(0)}(\vec x) - \vec y).
\end{multline}
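The scalar integral behind this eigen-sum is elementary, and the whole identity is easy to verify numerically; in the sketch below $\Theta$, $B$, and $u$ are random stand-ins for $O_{2,0}^{(0)}(\vec x,\vec x)$, the corresponding slice of $O_{4,0}^{(1)}$, and $f_0^{(0)}(\vec x) - \vec y$:

```python
import numpy as np

# Check: int_0^t u^T e^{-Theta s} Theta^{-1} B e^{-Theta s} u ds equals the
# eigen-sum  sum_{k,l} (1 - e^{-(lam_k+lam_l) t}) / (lam_k + lam_l)
#            * (u.v_k) (v_k^T Theta^{-1} B v_l) (v_l.u).

rng = np.random.default_rng(3)
m = 4
A = rng.standard_normal((m, m))
theta = A @ A.T / m + 0.1 * np.eye(m)        # PSD and safely invertible
B = rng.standard_normal((m, m))
u = rng.standard_normal(m)
t = 0.7

lam, V = np.linalg.eigh(theta)
theta_inv = V @ np.diag(1.0 / lam) @ V.T

def integrand(s):
    E = V @ np.diag(np.exp(-lam * s)) @ V.T  # e^{-Theta s}
    return u @ E @ theta_inv @ B @ E @ u

# left-hand side: trapezoidal quadrature of the integral
ss = np.linspace(0.0, t, 5001)
vals = np.array([integrand(s) for s in ss])
lhs = (ss[1] - ss[0]) * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# right-hand side: the eigen-sum
rhs = sum((1 - np.exp(-(lam[k] + lam[l]) * t)) / (lam[k] + lam[l])
          * (u @ V[:, k]) * (V[:, k] @ theta_inv @ B @ V[:, l]) * (V[:, l] @ u)
          for k in range(m) for l in range(m))
# lhs and rhs agree up to quadrature error
```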
Recall $\hat\Theta_t(x_1,x_2) = O_{2,t}(x_1,x_2) = O_{2,t}^{(0)}(x_1,x_2) + n^{-1} O_{2,t}^{(1)}(x_1,x_2) + O(n^{-2})$.
The first term (the limit NTK) does not depend on $t$, $O_{2,t}^{(0)}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) = \Theta(x_1,x_2)$, while the second one (the correction) does.
Note that computing the second term invokes $O_{4,0}^{(1)}$, the fourth-order tensor, therefore approaching it directly requires $O(m^4)$ memory.
Integrating the above system further gives the first-order correction for the limit model $f_t^{(1)}$.
As we shall see in \cref{sec:beyond}, the kernel $\Theta^{NTH}(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} \mathbb{E}\, O_{2,\infty}^{(1)}(x_1,x_2)$ can be considered as a label-aware alternative to the usual NTK $\Theta(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2)$.
Let us write out its explicit definition here, to be referred to later in \cref{sec:beyond}:
\begin{multline}
\Theta^{NTH}(x_1,x_2)
= O_{2,0}^{(0)}(x_1,x_2) + n^{-1} \mathbb{E}\, O_{2,\infty}^{(1)}(x_1,x_2)
=\\= \Theta(x_1,x_2) + n^{-1} \mathbb{E}\,\left[O_{2,0}^{(1)}(x_1,x_2)\right] - n^{-1} \mathbb{E}\,\left[O_{3,0}^{(1)}(x_1, x_2, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right]
+\\+ n^{-1} \vec y^T \Theta^{-1}(\vec x,\vec x) \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \Theta^{-1}(\vec x,\vec x) \vec y
+\\+ n^{-1} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \Theta^{-1}(\vec x,\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right]
-\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \vec y^T \vec v_k \vec v_k^T \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \vec v_l \vec v_l^T \vec y
-\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \vec v_k \vec v_k^T O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \vec v_l \vec v_l^T f_0^{(0)}(\vec x)\right].
\label{eq:lantk_nth}
\end{multline}
While the above result relies on a conjecture, \cref{conj:C_asymptotics}, it can also be proven rigorously; see \cite{huang2019dynamics}.
\section{Computing the limit kernel}
\label{sec:limit}
It is not obvious how to compute the limit kernel $\Theta$ predicted by the theorems discussed in \cref{sec:convergence}.
Fortunately, one can compute the limit kernel exactly for certain classes of models.
\subsection{Fully-connected nets}
\label{sec:limit_fc_nets}
Consider an $L$-layer fully-connected network under NTK parameterization:
\begin{equation}
f(x) = h_L(x),
\quad
h_l(x) = \frac{1}{\sqrt{n_{l-1}}} W_l x_{l-1}(x),
\quad
x_{l-1}(x) = \phi(h_{l-1}(x)),
\quad
x_0(x) = x,
\end{equation}
where $W_l \in \mathbb{R}^{n_l \times n_{l-1}}$ $\forall l \in [L]$.
For simplicity, we assume $n_L=1$, i.e. the output is scalar.
Since we already know (see \cref{sec:convergence}) that the kernel does not depend on $t$ under NTK parameterization, we consider the case $t=0$ only and omit the $t$-subscript.
The empirical NTK is given by
\begin{equation}
\hat\Theta(x,x')
= \nabla^T_\theta f(x;\theta) \nabla_\theta f(x';\theta)
= \sum_{l=1}^L \tr\left(\nabla^T_{W_l} f(x;W_{1:L}) \nabla_{W_l} f(x;W_{1:L})\right).
\end{equation}
By chain rule,
\begin{equation}
\nabla_{W_l} f(x)
= \sum_{i=1}^{n_l} \partial_{h_l^i} f(x) \nabla_{W_l} h_l^i(x)
= \frac{1}{\sqrt{n_{l-1}}} \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \partial_{h_l^i} f(x) E_{ij} x_{l-1}^j(x)
= \frac{1}{\sqrt{n_{l-1}}} \nabla_{h_l} f(x) x_{l-1}^T(x).
\end{equation}
Therefore,
\begin{equation}
\hat\Theta(x,x')
= \sum_{l=1}^L \tr\left(\nabla^T_{W_l} f(x) \nabla_{W_l} f(x)\right)
= \sum_{l=1}^L \frac{1}{n_{l-1}} \left(\nabla^T_{h_l} f(x') \nabla_{h_l} f(x)\right) \times \left(x_{l-1}^T(x) x_{l-1}(x')\right).
\end{equation}
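This layer-wise decomposition is an exact algebraic identity at any finite width, which makes it easy to verify directly (a small numpy sketch with $\phi = \tanh$; the widths and inputs are arbitrary):

```python
import numpy as np

# Direct check of the layer-wise NTK decomposition for a small MLP under
# NTK parameterization, f = h_L, x_l = tanh(h_l); exact up to float roundoff.

rng = np.random.default_rng(4)
widths = [3, 8, 8, 1]                        # n_0, n_1, n_2, n_3 (scalar output)
Ws = [rng.standard_normal((widths[l + 1], widths[l])) for l in range(3)]

def forward(x):
    xs, hs = [x], []
    for W in Ws:
        h = W @ xs[-1] / np.sqrt(len(xs[-1]))
        hs.append(h)
        xs.append(np.tanh(h))
    return hs, xs

def grads(x):
    """Return d f / d h_l for each layer, plus the stored activations."""
    hs, xs = forward(x)
    ghs = [None] * len(Ws)
    ghs[-1] = np.ones(1)                     # f = h_L, scalar output
    for l in range(len(Ws) - 2, -1, -1):
        ghs[l] = (Ws[l + 1].T @ ghs[l + 1]) / np.sqrt(widths[l + 1]) \
                 * (1 - np.tanh(hs[l])**2)
    return ghs, xs

x, xp = rng.standard_normal(3), rng.standard_normal(3)
ghs, xs = grads(x)
ghs_p, xs_p = grads(xp)

# layer-wise formula: sum_l (1/n_{l-1}) <dh_l f(x), dh_l f(x')> <x_{l-1}, x'_{l-1}>
ntk_layerwise = sum((ghs[l] @ ghs_p[l]) * (xs[l] @ xs_p[l]) / widths[l]
                    for l in range(len(Ws)))

# reference: explicit parameter gradients dW_l f = (1/sqrt(n_{l-1})) dh_l f x_{l-1}^T
ntk_direct = sum(np.sum((np.outer(ghs[l], xs[l]) / np.sqrt(widths[l]))
                        * (np.outer(ghs_p[l], xs_p[l]) / np.sqrt(widths[l])))
                 for l in range(len(Ws)))
# ntk_layerwise equals ntk_direct
```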
If $x_{l-1}$ had iid components with zero mean, $\frac{1}{n_{l-1}} x_{l-1}^T(x) x_{l-1}(x')$ would be an empirical covariance estimated with $n_{l-1}$ samples.
In fact, when all weights are iid standard Gaussians, components of $h_{l-1}$ become iid Gaussian with zero mean as $n_{1:l-2} \to \infty$ sequentially.
Hence their images under elementwise maps $\phi$ are also iid.
We prove this by induction.
$h_1(x) = \frac{1}{\sqrt{n_0}} W_1 x$ has iid Gaussian components with zero mean and variance $q_1(x) = \frac{1}{n_0} x^T x$.
Suppose components of $h_{l-1}(x)$ become iid Gaussian with zero mean and $q_{l-1}(x)$ variance as $n_{1:l-2} \to \infty$ sequentially.
Then $h_l(x) = \frac{1}{\sqrt{n_{l-1}}} W_l \phi(h_{l-1}(x))$ converges (in distribution) to a vector of Gaussians with zero mean and variance $q_l(x) = \mathbb{E}\,_{z \sim \mathcal{N}(0,q_{l-1}(x))} \phi^2(z)$ as $n_{1:l-1} \to \infty$ sequentially by the Central Limit Theorem (CLT).
One can easily generalize the above proof to any finite set of inputs.
In particular, $[h_l^i(x),h_l^i(x')]^T$ converges to a Gaussian with zero mean and covariance $\Sigma_l(x,x') = \begin{pmatrix} q_l(x) & q_l(x,x')\\q_l(x,x') & q_l(x') \end{pmatrix}$, where $q_l(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi(z) \phi(z')$.
Hence as $n_{1:l-1} \to \infty$ sequentially, $\frac{1}{n_{l-1}} x_{l-1}^T(x) x_{l-1}(x')$ converges to $q_l(x,x')$.
Let $g_l(x) = \sqrt{n_l} \nabla_{h_l} f(x)$.
Since
\begin{equation}
\nabla_{h_l^j} f(x)
= \sum_{i=1}^{n_{l+1}} \nabla_{h_{l+1}^i} f(x) \nabla_{h_l^j} h_{l+1}^i(x)
= \frac{1}{\sqrt{n_l}} \sum_{i=1}^{n_{l+1}} \nabla_{h_{l+1}^i} f(x) W_{l+1}^{ij} \phi'(h_l^j(x)),
\end{equation}
we have $g_l(x) = \frac{1}{\sqrt{n_{l+1}}} D_l(x) W_{l+1}^T g_{l+1}(x)$, where $D_l(x) = \diag(\phi'(h_l(x)))$.
There are two obstacles that prevent us from following the same lines for $g_l$ as for $h_l$.
First, $g_{l+1}$ depends on $D_{l+1}$, which depends on $h_{l+1}$, which in turn depends on $W_{l+1}$.
Since $W_{l+1}$ and $g_{l+1}$ are dependent, we cannot guarantee that components of $g_l$ become iid.
Second, we know the distribution of $h_l$ as all the layers from the input side become infinitely wide sequentially, while induction for $g_l$ should be performed starting from the head.
Nevertheless, it can be proven rigorously that ignoring these two obstacles still leads to a correct result \cite{yang2020tensor_ii}: $g_l(x)$ converges to a vector of iid Gaussians with zero mean and variance $\dot q_l(x) = \dot q_{l+1}(x) \mathbb{E}\,_{z \sim \mathcal{N}(0,q_l(x))} (\phi')^2(z)$ as $n_{1:L-1} \to \infty$.
A similar result holds for a pair of inputs: $[g_l^i(x),g_l^i(x')]^T$ converges to a Gaussian with zero mean and covariance $\dot\Sigma_l(x,x') = \begin{pmatrix} \dot q_l(x) & \dot q_l(x,x')\\\dot q_l(x,x') & \dot q_l(x') \end{pmatrix}$, where $\dot q_l(x,x') = \dot q_{l+1}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z) \phi'(z')$.
Hence $\nabla^T_{h_l} f(x') \nabla_{h_l} f(x) = \frac{1}{n_l} g_l^T(x') g_l(x)$ converges to $\dot q_l(x,x')$.
Putting all together, $\hat\Theta(x,x')$ converges to $\Theta(x,x') = \sum_{l=1}^L \dot q_l(x,x') q_l(x,x')$, where
\begin{equation}
q_1(x,x') = \frac{1}{n_0} x^T x',
\quad
q_l(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi(z) \phi(z'),
\end{equation}
\begin{equation}
\dot q_L(x,x') = 1,
\quad
\dot q_l(x,x') = \dot q_{l+1}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z) \phi'(z'),
\end{equation}
and $\Sigma_l(x,x') = \begin{pmatrix} q_l(x,x) & q_l(x,x')\\q_l(x,x') & q_l(x',x') \end{pmatrix}$.
Note that the Master theorem of \cite{yang2020tensor_ii} gives similar recurrent formulas for NTK of any architecture expressible by a tensor program and makes them mathematically rigorous.
In fact, computing the NTK can be performed in a convenient sequential layer-wise manner, as implemented in Neural Tangents\footnote{\url{https://github.com/google/neural-tangents}} \cite{novak2019neural}.
Define the NTK for the first $l$ layers as $\Theta_{:l}(x,x') = \sum_{l'=1}^l \tr(\nabla_{W_{l'}}^T h_l^i(x) \nabla_{W_{l'}} h_l^i(x'))$; in this case $\Theta_{:L}(x,x') = \Theta(x,x')$.
Suppose $\Theta_{:l-1}(x,x')$ and $q_{l-1}(x,x')$ are already computed.
Adding a nonlinearity and a linear layer with weights $W_l$ gives $q_l$ as listed above:
\begin{equation}
q_l(x,x')
= \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi(z) \phi(z'),
\quad
\text{where $\Sigma_{l-1}(x,x')
= \begin{pmatrix} q_{l-1}(x,x) & q_{l-1}(x,x')\\q_{l-1}(x,x') & q_{l-1}(x',x') \end{pmatrix}$.}
\label{eq:q_iteration}
\end{equation}
However, according to the formulas above, $\dot q_l$ is computed from $\dot q_{l+1}$, which seemingly requires a sequential layer-wise "forward pass" to compute all $q_l$ followed by a "backward pass" to compute all $\dot q_l$.
In fact, one forward pass is enough:
\begin{multline}
\Theta_{:l}(x,x')
= \sum_{l'=1}^l \tr(\nabla_{W_{l'}}^T h_l^i(x) \nabla_{W_{l'}} h_l^i(x'))
= q_l(x,x') + \sum_{l'=1}^{l-1} \tr(\nabla_{W_{l'}}^T h_l^i(x) \nabla_{W_{l'}} h_l^i(x'))
=\\= q_l(x,x') + \Theta_{:l-1}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi'(z) \phi'(z').
\label{eq:Theta_iteration}
\end{multline}
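For $\phi = \mathrm{ReLU}$, the Gaussian expectations entering the recursion have arc-cosine closed forms, so both accumulation orders can be implemented and compared in a few lines (a sketch; the normalization $q_1(x,x') = x^T x' / n_0$ matches the $1/\sqrt{n_0}$ factor of the first layer):

```python
import numpy as np

# Limit NTK of a depth-L ReLU MLP via the recursions above, using the
# arc-cosine closed forms (with cos g = q(x,x') / sqrt(q(x,x) q(x',x'))):
#   E[phi(z)phi(z')]   = sqrt(q q') (sin g + (pi - g) cos g) / (2 pi),
#   E[phi'(z)phi'(z')] = (pi - g) / (2 pi).

def relu_exps(qxx, qxpxp, qxxp):
    rho = np.clip(qxxp / np.sqrt(qxx * qxpxp), -1.0, 1.0)
    g = np.arccos(rho)
    e_phi = np.sqrt(qxx * qxpxp) * (np.sin(g) + (np.pi - g) * np.cos(g)) / (2 * np.pi)
    e_dphi = (np.pi - g) / (2 * np.pi)
    return e_phi, e_dphi

def limit_ntk(x, xp, L):
    n0 = len(x)
    qxx, qxpxp, qxxp = x @ x / n0, xp @ xp / n0, x @ xp / n0
    qs, lams = [qxxp], []                 # q_l(x,x') and the E[phi'phi'] factors
    for _ in range(2, L + 1):
        q_cross, lam = relu_exps(qxx, qxpxp, qxxp)
        qxx, qxpxp = qxx / 2, qxpxp / 2   # E[relu(z)^2] = q/2 for z ~ N(0,q)
        qxxp = q_cross
        qs.append(q_cross)
        lams.append(lam)
    # backward form: Theta = sum_l qdot_l q_l with qdot_L = 1
    qdot, theta_bwd = 1.0, 0.0
    for l in range(L - 1, -1, -1):
        theta_bwd += qdot * qs[l]
        if l > 0:
            qdot *= lams[l - 1]
    # forward form (single pass): Theta_{:l} = q_l + Theta_{:l-1} * E[phi'phi']
    theta_fwd = qs[0]
    for l in range(1, L):
        theta_fwd = qs[l] + theta_fwd * lams[l - 1]
    return theta_bwd, theta_fwd

rng = np.random.default_rng(5)
x, xp = rng.standard_normal(10), rng.standard_normal(10)
theta_bwd, theta_fwd = limit_ntk(x, xp, L=4)
# the two accumulation orders give the same kernel value
```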
In Neural Tangents, each operation in a neural network is mapped to a corresponding kernel transform.
\subsection{Convolutional nets}
The same idea can be applied to convolutional nets as well.
Consider 1d-convolutions for simplicity.
In this case, we are dealing with 1d "images" with $d$ pixels: $x \in \mathbb{R}^{n_0 \times d}$.
Consider a network with $L$ convolutions under NTK parameterization and an average pooling at the end:
\begin{equation}
f^i = \frac{1}{d} \sum_{s=1}^d x_L^{i,s},
\quad
h_l^{i,s} = \frac{1}{\sqrt{n_{l-1}}} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} W_l^{ijr} x_{l-1}^{j,s+r},
\quad
x_{l-1}^{i,s} = \phi(h_{l-1}^{i,s}),
\quad
x_0^{i,s} = x^{i,s},
\end{equation}
where we omitted the argument $x$ for brevity, $W_l \in \mathbb{R}^{n_l \times n_{l-1} \times |\ker|}$ with $W_l^{ijr} \sim \mathcal{N}(0,1)$ iid $\forall l \in [L]$, and $\ker$ denotes the convolution filter; e.g. $\ker = [-1,0,1]$ for a convolution of size $3$.
For simplicity, we assume $n_L=1$, i.e. the output is scalar.
As before, the empirical NTK is given as
\begin{equation}
\hat\Theta(x,x')
= \nabla^T_\theta f(x;\theta) \nabla_\theta f(x';\theta)
= \sum_{l=1}^L \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} \partial_{W_l^{ijr}} f(x) \partial_{W_l^{ijr}} f(x').
\end{equation}
By chain rule,
\begin{equation}
\partial_{W_l^{ijr}} f
= \sum_{s=1}^d \partial_{h_l^{i,s}} f \partial_{W_l^{ijr}} h_l^{i,s}
= \frac{1}{\sqrt{n_{l-1}}} \sum_{s=1}^{d} \partial_{h_l^{i,s}} f x_{l-1}^{j,s+r}.
\end{equation}
Therefore,
\begin{equation}
\hat\Theta(x,x')
= \sum_{l=1}^L \frac{1}{n_{l-1}} \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} \sum_{s,s'=1}^{d} \partial_{h_l^{i,s}} f(x) \partial_{h_l^{i,s'}} f(x') x_{l-1}^{j,s+r}(x) x_{l-1}^{j,s'+r}(x').
\end{equation}
As in the fully-connected case, we are going to show that $h_l^{i,s}$ become Gaussian with zero mean and covariance given by a certain recurrent formula as $n_{1:l-1} \to \infty$ sequentially.
However, in the convolutional case not all $h_l^{i,s}$ become independent: they become independent for different $i$'s but not for different $s$'s.
Let us induct on $l$.
$h_1^{i,s} = \frac{1}{\sqrt{n_0}} \sum_{j=1}^{n_0} \sum_{r \in \ker} W_1^{ijr} x^{j,s+r}$ are independent for any two different $i$'s.
For a fixed $i$, $h_1^{i,\cdot}$ is a Gaussian vector with zero mean and covariance $q_1^{s,s'} = \frac{1}{n_0} \sum_{j=1}^{n_0} \sum_{r \in \ker} x^{j,s+r} x^{j,s'+r}$.
Suppose $h_{l-1}^{i,s}$ becomes Gaussian with zero mean, independent for any two different $i$'s, and $q_{l-1}^{s,s'}$ is its covariance as $n_{1:l-2} \to \infty$ sequentially.
Then $h_l^{i,s} = \frac{1}{\sqrt{n_{l-1}}} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} W_l^{ijr} x_{l-1}^{j,s+r}$ converges (in distribution) to a random variable with similar properties but with covariance $q_l^{s,s'} = \mathbb{E}\,_{z \sim \mathcal{N}(0,q_{l-1})} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{s'+r})$ as $n_{1:l-1} \to \infty$ sequentially by the Central Limit Theorem (CLT).
One can easily generalize the above proof to any finite set of inputs.
In particular, $[h_l^{i,\cdot}(x),h_l^{i,\cdot}(x')]^T \in \mathbb{R}^{2d}$ converges to a Gaussian with zero mean and covariance $\Sigma_l(x,x') = \begin{pmatrix} q_l(x) & q_l(x,x')\\q_l(x,x') & q_l(x') \end{pmatrix} \in \mathbb{R}^{2d \times 2d}$, where $q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{\prime,s'+r})$.
Hence as $n_{1:l-1} \to \infty$ sequentially, $\frac{1}{n_{l-1}} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} x_{l-1}^{j,s+r}(x) x_{l-1}^{j,s'+r}(x')$ converges to $q_l^{s,s'}(x,x')$.
Let $g_l^{j,p} = \sqrt{n_l} \nabla_{h_l^{j,p}} f$.
Since
\begin{multline}
\partial_{h_l^{j,p}} f
= \sum_{i=1}^{n_{l+1}} \sum_{s=1}^d \partial_{h_{l+1}^{i,s}} f \partial_{h_l^{j,p}} h_{l+1}^{i,s}
=\\= \frac{1}{\sqrt{n_l}} \sum_{i=1}^{n_{l+1}} \sum_{s=1}^d \partial_{h_{l+1}^{i,s}} f \sum_{r \in \ker} W_{l+1}^{ijr} 1_{s+r=p} \phi'(h_l^{j,p})
= \frac{1}{\sqrt{n_l}} \sum_{i=1}^{n_{l+1}} \sum_{r \in \ker} \partial_{h_{l+1}^{i,p-r}} f W_{l+1}^{ijr} \phi'(h_l^{j,p}),
\end{multline}
$\partial_{h_L^{j,p}} f = \frac{1}{d} \phi'(h_L^{j,p})$, and $n_L=1$, we have
\begin{equation}
g_L^{j,p}
= \frac{1}{d} \phi'(h_L^{j,p}),
\quad
g_l^{j,p}
= \frac{1}{\sqrt{n_{l+1}}} \sum_{i=1}^{n_{l+1}} \sum_{r \in \ker} g_{l+1}^{i,p-r} W_{l+1}^{ijr} \phi'(h_l^{j,p}).
\end{equation}
With the same correctness remark as for fully-connected nets, it is possible to show that $g_l^{j,p}$ become independent for different $j$'s and $g_l^{j,\cdot}$ become Gaussian with covariance $\dot q_l^{p,p'}$ as $n_{1:L-1} \to \infty$.
The covariance is given by the following recurrence: $\dot q_L^{p,p'} = \frac{1}{d^2} \mathbb{E}\,_{z \sim \mathcal{N}(0,q_L)} \phi'(z^p) \phi'(z^{p'})$, $\dot q_l^{p,p'} = \mathbb{E}\,_{z \sim \mathcal{N}(0,q_l)} \phi'(z^{p}) \phi'(z^{p'}) \sum_{r \in \ker} \dot q_{l+1}^{p-r,p'-r}$.
A similar result holds for a pair of inputs: $[g_l^{i,\cdot}(x),g_l^{i,\cdot}(x')]^T \in \mathbb{R}^{2d}$ converges to a Gaussian with zero mean and covariance $\dot\Sigma_l(x,x') = \begin{pmatrix} \dot q_l(x) & \dot q_l(x,x')\\\dot q_l(x,x') & \dot q_l(x') \end{pmatrix} \in \mathbb{R}^{2d \times 2d}$, where $\dot q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z^{s}) \phi'(z^{\prime,s'}) \sum_{r \in \ker} \dot q_{l+1}^{s-r,s'-r}(x,x')$.
Hence
\begin{equation}
\sum_{i=1}^{n_l} \partial_{h_l^{i,s}} f(x) \partial_{h_l^{i,s'}} f(x')
= \frac{1}{n_l} \sum_{i=1}^{n_l} g_l^{i,s}(x) g_l^{i,s'}(x')
\to \dot q_l^{s,s'}(x,x').
\end{equation}
Putting all together, $\hat\Theta(x,x')$ converges to $\Theta(x,x') = \sum_{l=1}^L \sum_{s,s'=1}^d \dot q_l^{s,s'}(x,x') q_l^{s,s'}(x,x')$, where
\begin{equation}
q_1^{s,s'}(x,x') = \frac{1}{n_0} \sum_{j=1}^{n_0} \sum_{r \in \ker} x^{j,s+r} x^{\prime,j,s'+r},
\end{equation}
\begin{equation}
q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{\prime,s'+r}),
\end{equation}
\begin{equation}
\dot q_L^{s,s'}(x,x') = \frac{1}{d^2} \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_L(x,x'))} \phi'(z^s) \phi'(z^{\prime,s'}),
\end{equation}
\begin{equation}
\dot q_l^{s,s'}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi'(z^{s}) \phi'(z^{\prime,s'}) \sum_{r \in \ker} \dot q_{l+1}^{s-r,s'-r}(x,x'),
\end{equation}
and $\Sigma_l(x,x') = \begin{pmatrix} q_l(x,x) & q_l(x,x')\\q_l(x,x') & q_l(x',x') \end{pmatrix}$.
Same as for fully-connected nets, computing the NTK can be performed in a convenient sequential layer-wise manner.
Define the empirical NTK for the first $l$ layers as
\begin{equation}
\hat\Theta_{:l}^{s,s'}(x,x')
= \sum_{l'=1}^l \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{r \in \ker} \partial_{W_{l'}^{ijr}} h_l^{1,s}(x) \partial_{W_{l'}^{ijr}} h_l^{1,s'}(x');
\end{equation}
in this case, by chain rule,
\begin{multline}
\hat\Theta(x,x')
= \sum_{l=1}^L \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{r \in \ker} \partial_{W_l^{ijr}} f(x) \partial_{W_l^{ijr}} f(x')
=\\= \sum_{l=1}^L \sum_{i=1}^{n_l} \sum_{j=1}^{n_{l-1}} \sum_{s,s'=1}^d \sum_{r \in \ker} \partial_{W_l^{ijr}} h_L^{1,s}(x) \partial_{W_l^{ijr}} h_L^{1,s'}(x') \partial_{h_L^{1,s}} f(x) \partial_{h_L^{1,s'}} f(x')
=\\= \frac{1}{d^2} \sum_{s,s'=1}^d \phi'(h_L^{1,s}(x)) \phi'(h_L^{1,s'}(x')) \hat\Theta_{:L}^{s,s'}(x,x'),
\end{multline}
and therefore,
\begin{equation}
\Theta(x,x')
= \sum_{s,s'=1}^d \dot q_L^{s,s'}(x,x') \Theta_{:L}^{s,s'}(x,x'),
\end{equation}
since the factor $\frac{1}{d^2}$ is already included in the definition of $\dot q_L^{s,s'}$.
Suppose $\hat\Theta_{:l-1}(x,x')$ and $q_{l-1}(x,x')$ are already computed.
Adding a nonlinearity and a convolutional layer with weights $W_l$ gives $q_l$ as listed above:
\begin{equation}
q_l^{s,s'}(x,x')
= \mathbb{E}\,_{[z,z'] \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \sum_{r \in \ker} \phi(z^{s+r}) \phi(z^{\prime,s'+r}),
\label{eq:q_iteration_conv}
\end{equation}
where $\Sigma_{l-1}(x,x') = \begin{pmatrix} q_{l-1}(x,x) & q_{l-1}(x,x')\\q_{l-1}(x,x') & q_{l-1}(x',x') \end{pmatrix}$.
We can compute $\hat\Theta_{:L}$ in a single forward pass using the following recurrence:
\begin{multline}
\hat\Theta_{:l}^{s,s'}(x,x')
= \sum_{l'=1}^l \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{\tilde r \in \ker} \partial_{W_{l'}^{ij\tilde r}} h_l^{1,s}(x) \partial_{W_{l'}^{ij\tilde r}} h_l^{1,s'}(x')
=\\= \frac{1}{n_{l-1}} \sum_{j=1}^{n_{l-1}} \sum_{\tilde r \in \ker} \phi(h_{l-1}^{j,s+\tilde r}(x)) \phi(h_{l-1}^{j,s'+\tilde r}(x'))
+\\+ \sum_{l'=1}^{l-1} \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{\tilde r \in \ker} \sum_{k,k'=1}^{n_{l-1}} \sum_{p,p'=1}^d \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k,p}(x) \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k',p'}(x') \partial_{h_{l-1}^{k,p}} h_l^{1,s}(x) \partial_{h_{l-1}^{k',p'}} h_l^{1,s'}(x')
=\\= \frac{1}{n_{l-1}} \sum_{j=1}^{n_{l-1}} \sum_{\tilde r \in \ker} \phi(h_{l-1}^{j,s+\tilde r}(x)) \phi(h_{l-1}^{j,s'+\tilde r}(x'))
+\\+ \frac{1}{n_{l-1}} \sum_{l'=1}^{l-1} \sum_{i=1}^{n_{l'}} \sum_{j=1}^{n_{l'-1}} \sum_{\tilde r,r,r' \in \ker} \sum_{k,k'=1}^{n_{l-1}} \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k,s+r}(x) \partial_{W_{l'}^{ij\tilde r}} h_{l-1}^{k',s'+r'}(x') \times\\\times W_l^{1kr} \phi'(h_{l-1}^{k,s+r}(x)) W_l^{1k'r'} \phi'(h_{l-1}^{k',s'+r'}(x')).
\label{eq:Theta_iteration_conv}
\end{multline}
Taking the limit then gives
\begin{equation}
\Theta_{:l}^{s,s'}(x,x')
= q_l^{s,s'}(x,x') + \sum_{r,r' \in \ker} \Theta_{:l-1}^{s+r,s'+r'}(x,x') \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_{l-1}(x,x'))} \phi'(z^{s+r}) \phi'(z^{\prime,s'+r'}),
\end{equation}
which resembles the corresponding result for fully-connected nets when $\ker = [0]$.
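To make the recurrences above concrete, the following sketch evaluates the limit NTK of a toy 1D convolutional ReLU network with circular padding and a mean-pooled linear readout. The architectural choices (circular padding, the default kernel offsets, no biases) and all function names are assumptions of this illustration; the Gaussian expectations are evaluated with the closed-form ReLU formulas derived in the next subsection.

```python
import numpy as np

def relu_prod(q11, q12, q22):
    # E[[u]_+ [v]_+] for (u, v) ~ N(0, [[q11, q12], [q12, q22]])
    s = np.sqrt(q11 * q22)
    lam = np.clip(q12 / s, -1.0, 1.0)
    return s * (lam * (np.pi - np.arccos(lam)) + np.sqrt(1.0 - lam * lam)) / (2.0 * np.pi)

def relu_der_prod(q11, q12, q22):
    # E[1_{u>0} 1_{v>0}] for the same Gaussian
    lam = np.clip(q12 / np.sqrt(q11 * q22), -1.0, 1.0)
    return (np.pi - np.arccos(lam)) / (2.0 * np.pi)

def cntk(x, xp, L, ker=(-1, 0, 1)):
    # Limit NTK of an L-layer circular 1D ReLU CNN with a mean-pooled linear readout.
    d = x.shape[1]

    def q1(a, b):
        # q_1^{s,s'} = (1/n_0) sum_j sum_r x^{j,s+r} x'^{j,s'+r}, circular indices
        q = np.zeros((d, d))
        for s in range(d):
            for sp in range(d):
                q[s, sp] = sum(a[:, (s + r) % d] @ b[:, (sp + r) % d] for r in ker)
        return q / a.shape[0]

    qxx, qxxp, qpp = q1(x, x), q1(x, xp), q1(xp, xp)
    theta = qxxp.copy()  # Theta_{:1}
    for _ in range(L - 1):
        nxx, nxxp, npp = np.zeros((d, d)), np.zeros((d, d)), np.zeros((d, d))
        ntheta = np.zeros((d, d))
        for s in range(d):
            for sp in range(d):
                for r in ker:
                    a, b = (s + r) % d, (sp + r) % d
                    nxx[s, sp] += relu_prod(qxx[a, a], qxx[a, b], qxx[b, b])
                    nxxp[s, sp] += relu_prod(qxx[a, a], qxxp[a, b], qpp[b, b])
                    npp[s, sp] += relu_prod(qpp[a, a], qpp[a, b], qpp[b, b])
                for r in ker:
                    for rp in ker:
                        a, b = (s + r) % d, (sp + rp) % d
                        ntheta[s, sp] += theta[a, b] * relu_der_prod(
                            qxx[a, a], qxxp[a, b], qpp[b, b])
        theta = nxxp + ntheta            # Theta_{:l} = q_l + backprop term
        qxx, qxxp, qpp = nxx, nxxp, npp
    # readout: Theta = sum_{s,s'} dot q_L^{s,s'} Theta_{:L}^{s,s'}, dot q_L = E[phi' phi'] / d^2
    out = sum(relu_der_prod(qxx[s, s], qxxp[s, sp], qpp[sp, sp]) * theta[s, sp]
              for s in range(d) for sp in range(d))
    return out / d ** 2
```

With $\ker = [0]$ and $d = 1$ the recursion collapses to the fully-connected one: for a unit-norm input, $q_l = 2^{-(l-1)}$, $\Theta_{:l} = l \cdot 2^{-(l-1)}$, and $\Theta = L \cdot 2^{-L}$.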
\subsection{Computing the expectations}
The only obstacle that prevents explicit computation here is the expectations over $[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))$.
Fortunately, these expectations can be computed analytically for certain $\phi$: in particular, for ReLU and the error function.
We cover only the case of ReLU here as it is more widely used in practice.
Let us omit the $l$-subscript and the arguments $(x,x')$ for brevity: $\Sigma = \begin{pmatrix} q_{11} & q_{12}\\q_{12} & q_{22} \end{pmatrix}$, and we are interested in $\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+$ and $\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} 1_{u>0} 1_{v>0}$.
Following \cite{arora2019exact}, we start with assuming $q_{11} = q_{22} = 1$ and $q_{12} = \lambda$; $\Sigma \geq 0$ implies $|\lambda| \leq 1$.
Then
\begin{multline}
\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+
= \mathbb{E}\,_{[u,\tilde v]^T \sim \mathcal{N}(0,I)} [u]_+ \left[\lambda u + \sqrt{1-\lambda^2} \tilde v\right]_+
=\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left([u]_+ \int_{-\frac{\lambda}{\sqrt{1-\lambda^2}} u}^\infty \left(\lambda u + \sqrt{1-\lambda^2} \tilde v\right) \frac{1}{\sqrt{2\pi}} e^{-\tilde v^2/2} \, d\tilde v \right)
=\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left(
[u]_+ \left(
\lambda u \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) + \sqrt{\frac{1-\lambda^2}{2\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2}
\right)
\right)
=\\= \int_0^\infty u \left(\lambda u \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) + \sqrt{\frac{1-\lambda^2}{2\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2}\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du
=\\= \frac{\lambda}{4} + \int_0^\infty u \left(\lambda u \frac{1}{2} \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) + \sqrt{\frac{1-\lambda^2}{2\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2}\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du
=\\= \frac{\lambda}{4} + \frac{\lambda}{2} A + \sqrt{\frac{1-\lambda^2}{2\pi}} B.
\end{multline}
\begin{multline}
A
= \int_0^\infty u^2 \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du
= -\int_0^\infty u \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} \, d\left(e^{-u^2/2}\right)
=\\= \int_0^\infty \left(\erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) + u \frac{\lambda}{\sqrt{2-2\lambda^2}} \frac{2}{\sqrt{\pi}} e^{-\frac{\lambda^2}{2-2\lambda^2} u^2} \right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du
= C + \frac{\lambda}{\sqrt{2-2\lambda^2}} \frac{2}{\sqrt{\pi}} B.
\end{multline}
\begin{equation}
C
= \int_0^\infty \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du
= \frac{1}{\pi} \arctan\left(\frac{\lambda}{\sqrt{1-\lambda^2}}\right)
= \frac{1}{\pi} \arcsin\lambda.
\end{equation}
\begin{equation}
B
= \int_0^\infty u e^{-\frac{\lambda^2}{2-2\lambda^2} u^2} \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du
= \frac{1}{\sqrt{2\pi}}\int_0^\infty u e^{-\frac{1}{2-2\lambda^2} u^2} \, du
= \frac{1-\lambda^2}{\sqrt{2\pi}}.
\end{equation}
Putting all together,
\begin{multline}
\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+
= \frac{\lambda}{4} + \frac{\lambda}{2} A + \sqrt{\frac{1-\lambda^2}{2\pi}} B
= \frac{\lambda}{4} + \frac{\lambda}{2} C + \frac{\lambda^2}{\sqrt{1-\lambda^2}} \frac{1}{\sqrt{2\pi}} B + \sqrt{\frac{1-\lambda^2}{2\pi}} B
=\\= \frac{\lambda}{4} + \frac{\lambda}{2} C + \frac{1}{\sqrt{1-\lambda^2}} \frac{1}{\sqrt{2\pi}} B
= \frac{\lambda}{4} + \frac{\lambda}{2\pi} \arcsin\lambda + \frac{\sqrt{1-\lambda^2}}{2\pi}
=\\= \frac{\lambda\left(\frac{\pi}{2} + \arcsin\lambda\right) + \sqrt{1-\lambda^2}}{2\pi}
= \frac{\lambda\left(\pi - \arccos\lambda\right) + \sqrt{1-\lambda^2}}{2\pi}.
\end{multline}
And for the second quantity,
\begin{multline}
\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} 1_{u>0} 1_{v>0}
= \mathbb{E}\,_{[u,\tilde v]^T \sim \mathcal{N}(0,I)} 1_{u>0} 1_{\lambda u + \sqrt{1-\lambda^2} \tilde v > 0}
=\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left(1_{u>0} \int_{-\frac{\lambda}{\sqrt{1-\lambda^2}} u}^\infty \frac{1}{\sqrt{2\pi}} e^{-\tilde v^2/2} \, d\tilde v \right)
=\\= \mathbb{E}\,_{u \sim \mathcal{N}(0,1)} \left(
1_{u>0} \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right)
\right)
=\\= \int_0^\infty \frac{1}{2} \left(1 - \erf\left(-\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right)\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du
=\\= \frac{1}{4} + \int_0^\infty \frac{1}{2} \erf\left(\frac{\lambda}{\sqrt{2-2\lambda^2}} u\right) \frac{1}{\sqrt{2\pi}} e^{-u^2/2} \, du
=\\= \frac{1}{4} + \frac{1}{2} C
= \frac{\frac{\pi}{2} + \arcsin\lambda}{2\pi}
= \frac{\pi - \arccos\lambda}{2\pi}.
\end{multline}
A general positive semi-definite matrix $\Sigma$ can be expressed as $\Sigma = D \Lambda D$, where $\Lambda = \begin{pmatrix} 1 & \lambda\\\lambda & 1 \end{pmatrix}$, $D = \begin{pmatrix} \sqrt{q_{11}} & 0\\0 & \sqrt{q_{22}} \end{pmatrix}$, and $\lambda = \frac{q_{12}}{\sqrt{q_{11} q_{22}}}$.
Then, using homogeneity of ReLU,
\begin{multline}
\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} [u]_+ [v]_+
= \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,D \Lambda D)} [u]_+ [v]_+
= \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Lambda)} [\sqrt{q_{11}} u]_+ [\sqrt{q_{22}} v]_+
=\\= \sqrt{q_{11} q_{22}} \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Lambda)} [u]_+ [v]_+
= \sqrt{q_{11} q_{22}} \frac{\lambda\left(\pi - \arccos\left(\frac{q_{12}}{\sqrt{q_{11} q_{22}}}\right)\right) + \sqrt{1-\frac{q_{12}^2}{q_{11} q_{22}}}}{2\pi}
=\\= \frac{\lambda \sqrt{q_{11} q_{22}} \left(\pi - \arccos\left(\frac{q_{12}}{\sqrt{q_{11} q_{22}}}\right)\right) + \sqrt{q_{11} q_{22} - q_{12}^2}}{2\pi}.
\end{multline}
\begin{multline}
\mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Sigma)} 1_{u>0} 1_{v>0}
= \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,D \Lambda D)} 1_{u>0} 1_{v>0}
=\\= \mathbb{E}\,_{[u,v]^T \sim \mathcal{N}(0,\Lambda)} 1_{u>0} 1_{v>0}
= \frac{\pi - \arccos\left(\frac{q_{12}}{\sqrt{q_{11} q_{22}}}\right)}{2\pi}.
\end{multline}
Similar explicit computations are available for convolutional networks \cite{arora2019exact}, as well as for generic tensor programs, as long as the nonlinearities used belong to a certain list (which includes e.g. ReLU and the error function, see \cite{novak2019neural} for a concrete implementation and \cite{yang2020tensor_ii} for generic recurrent formulas in terms of expectations).
However, a typical convolutional network also uses max poolings and other nonlinear maps for which explicit formulas for expectations are not available at the moment.
In this case, one can rely on a finite-width Monte-Carlo estimate of $\Theta(x,x')$, i.e. $\hat\Theta^{(M)}(x,x') = \frac{1}{M} \sum_{k=1}^M \hat\Theta_k(x,x')$, where $\hat\Theta_1, \ldots, \hat\Theta_M$ are empirical kernels for $M$ independent initializations of a network of width $n$.
According to the convergence results, $\hat\Theta^{(M)}(x,x') \to \Theta(x,x')$ as $n \to \infty$ for any $M \geq 1$.
Also, $\hat\Theta^{(M)}(x,x') \to \mathbb{E}\, \hat\Theta(x,x')$ as $M \to \infty$ for any fixed $n$.
Unfortunately, one cannot guarantee that $\mathbb{E}\, \hat\Theta(x,x') = \Theta(x,x')$; therefore, $\hat\Theta^{(M)}(x,x')$ can be a biased estimate.
However, according to the experiments of \cite{novak2019neural}, the discrepancy between $\hat\Theta^{(M)}$ and $\Theta$ decreases as $M$ grows for any finite $n$.
This suggests that the main component of this discrepancy is not bias but variance, which is reduced by adding more Monte-Carlo samples.
We also have to note that \cite{arora2019exact} report significant accuracy drops on a CNN of width $n=512$ when using a single-sample Monte-Carlo estimate of the NTK instead of the exact limit NTK.
However, they did not provide any results for $M > 1$; therefore, this accuracy drop could be caused by the large variance of $\hat\Theta$.
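As a toy illustration of the Monte-Carlo estimate $\hat\Theta^{(M)}$, one can compare it against the known limit kernel of a one-hidden-layer ReLU net in NTK parameterization. The architecture $f(x) = \frac{1}{\sqrt{n}} a^T \phi(W x / \sqrt{n_0})$ and all names below are our choice for illustration, not the CNN setup of \cite{arora2019exact}.

```python
import numpy as np

def empirical_ntk(x, xp, n, rng):
    """Empirical NTK of f(x) = a^T relu(W x / sqrt(n0)) / sqrt(n) at one random init."""
    n0 = x.size
    W = rng.standard_normal((n, n0))
    a = rng.standard_normal(n)
    h, hp = W @ x / np.sqrt(n0), W @ xp / np.sqrt(n0)
    k_a = np.maximum(h, 0.0) @ np.maximum(hp, 0.0) / n               # gradients w.r.t. a
    k_W = (x @ xp / n0) * np.sum(a ** 2 * (h > 0) * (hp > 0)) / n    # gradients w.r.t. W
    return k_a + k_W

def mc_ntk(x, xp, n, M, seed=0):
    """Average the empirical NTK over M independent initializations."""
    rng = np.random.default_rng(seed)
    return np.mean([empirical_ntk(x, xp, n, rng) for _ in range(M)])

def limit_ntk(x, xp):
    """Closed-form limit NTK of the same one-hidden-layer ReLU net."""
    n0 = x.size
    q11, q12, q22 = x @ x / n0, x @ xp / n0, xp @ xp / n0
    lam = np.clip(q12 / np.sqrt(q11 * q22), -1.0, 1.0)
    ang = np.pi - np.arccos(lam)
    e_phi = (q12 * ang + np.sqrt(q11 * q22 - q12 ** 2)) / (2.0 * np.pi)
    e_dphi = ang / (2.0 * np.pi)
    return e_phi + q12 * e_dphi
```

For moderate $n$ and $M$, the averaged estimate is already close to the limit kernel, in line with the discussion above.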
\subsection{NTK for attention layers}
A neural tangent kernel is typically considered for architectures for which analytical computation is available, e.g. fully-connected and convolutional ReLU nets, see \cref{sec:limit}.
One of the necessary conditions for exact computations to be possible is the fact that the output of each individual pre-activation neuron becomes a Gaussian process in the limit of large width.
This allows one to apply the Master theorem (\cref{thm:master_theorem}), and express the NTK as an expectation over certain Gaussian variables.
However, there exist layers which do not enjoy Gaussian behavior even in the limit of large width.
The attention layer is one example:
\begin{equation}
f(x)
= \mathrm{Softmax}\left(G(x)\right) V(x),
\qquad
G(x)
= \frac{1}{\sqrt{n}} Q^T(x) K(x),
\end{equation}
where we define queries $Q(x) = x W_Q$, keys $K(x) = x W_K$, and values $V(x) = x W_V$.
Dimensions of the corresponding matrices are: $W_Q \in \mathbb{R}^{n_0 \times n}$, $W_K \in \mathbb{R}^{n_0 \times n}$, and $W_V \in \mathbb{R}^{n_0 \times n_H}$, and $x \in \mathbb{R}^{d \times n_0}$.
If $W_Q$ and $W_K$ are independent with iid zero mean unit variance entries then $G_{\alpha\beta}(x) = n^{-1/2} \sum_{i=1}^n \sum_{j,k=1}^{n_0} x_{\alpha,j} x_{\beta,k} W_Q^{ji} W_K^{ki}$ converges by CLT to a Gaussian variable.
The resulting limit matrix is therefore a $d \times d$ matrix with (non-degenerate) Gaussian entries.
Since $d$ stays fixed as $n \to \infty$, we cannot apply any limit theorem to reason about the distribution of $f_i(x)$ for some $i \in [n_H]$.
\cite{hron2020infinite} consider a multi-head attention layer and show that it does enjoy Gaussian process behavior as width and number of heads go to infinity simultaneously:
\begin{equation}
f(x)
= [f^1(x), \ldots, f^n(x)] W_O,
\qquad
f_i(x)
= \mathrm{Softmax}\left(G_i(x)\right) V_i(x),
\qquad
G_i(x)
= \frac{1}{\sqrt{n}} Q_i^T(x) K_i(x),
\end{equation}
where $W_O \in \mathbb{R}^{n_H n \times n_H}$ and all $Q_i$, $K_i$, and $V_i$ are iid for different $i \in [n]$.
To gain some intuition about the result of \cite{hron2020infinite}, consider $n_H=1$, i.e. outputs of all individual heads are scalars and the final output is also a scalar.
In this case, $f(x)$ is a product of a vector with $n$ iid entries and a matrix with iid $\mathcal{N}(0,n^{-1})$ entries.
This product tends to a Gaussian as $n \to \infty$ by CLT.
Considering a set of inputs gives a random Gaussian vector similar to the fully-connected case, see \cref{sec:limit_fc_nets}.
\cite{hron2020infinite} gives exact formulas for covariances $q(x,x')$ and the kernel $\Theta(x,x')$; they are implemented as layers in NeuralTangents \cite{novak2019neural}.
\section{Computational aspects}
\label{sec:computations}
\subsection{Inference optimizations}
Suppose one is able to compute (or approximate) the limit kernel, $\Theta(x,x')$, on any pair of points $(x,x')$.
The result of kernel regression at convergence ($t \to \infty$) in the limit of infinite width is then given by (see Eq.~(\ref{eq:lin_solution_square_loss})):
\begin{equation}
f_\infty(x)
= f_0(x) - \Theta(x, \vec x) \Theta^{-1}(\vec x, \vec x) (f_0(\vec x) - \vec y),
\label{eq:inf_wide_solution_square_loss}
\end{equation}
where $\Theta(\vec x, \vec x) \in \mathbb{R}^{m \times m}$ and $\Theta(x, \vec x) \in \mathbb{R}^{1 \times m}$.
For multi-class problems, $f(x) \in \mathbb{R}^k$, where $k$ is the number of classes, and the kernel evaluated at two points becomes a $k \times k$ matrix:
\begin{equation}
\hat\Theta_{jj'}(x,x')
= \nabla_\theta^T f^j(x) \nabla_\theta f^{j'}(x').
\end{equation}
Define a Gram matrix as $\hat\Theta_{ik+j,i'k+j'}(\vec x, \vec x) = \hat\Theta_{jj'}(x_i,x_{i'})$ and its limit counterpart $\Theta(\vec x, \vec x) \in \mathbb{R}^{mk \times mk}$ accordingly; similarly for $\Theta(x, \vec x) \in \mathbb{R}^{k \times mk}$.
If one defines $f_0^{ik+j}(\vec x) = f_0^j(x_i)$, the corresponding solution takes the same form as Eq.~(\ref{eq:inf_wide_solution_square_loss}).
Evaluating this quantity naively requires storing and inverting the kernel Gram matrix $\Theta(\vec x, \vec x) \in \mathbb{R}^{mk \times mk}$.
Storing it requires $O(m^2 k^2)$ memory, while inverting it takes $O(m^3 k^3)$ time, making such a naive approach computationally infeasible for datasets with $m k \gtrsim 10^4$ (nevertheless, for small datasets, the naive approach for computing the NTK estimator (\ref{eq:inf_wide_solution_square_loss}) is feasible and may provide an advantage over traditional SGD training, see \cite{arora2019harnessing}).
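For small datasets, the naive estimator can be written in a few lines; below is a sketch with $f_0 \equiv 0$ (so the prediction reduces to $\Theta(x, \vec x) \Theta^{-1}(\vec x, \vec x) \vec y$) and a generic kernel function passed in. The Gaussian kernel is a stand-in positive-definite kernel, not an actual NTK evaluation routine, and the names are ours.

```python
import numpy as np

def ntk_predict(kernel, X_tr, y_tr, X_te):
    """Naive mean prediction f(x) = Theta(x, X) Theta(X, X)^{-1} y (f_0 = 0)."""
    K_tt = np.array([[kernel(a, b) for b in X_tr] for a in X_tr])  # m x m Gram matrix
    K_st = np.array([[kernel(a, b) for b in X_tr] for a in X_te])  # test-train block
    return K_st @ np.linalg.solve(K_tt, y_tr)

# stand-in positive-definite kernel (an assumption of this sketch, not the NTK)
gauss = lambda a, b: np.exp(-np.sum((a - b) ** 2))
```

With an invertible Gram matrix, the predictor interpolates the training targets, which is the expected behavior of ridgeless kernel regression.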
Let us start with discussing two important optimizations implemented in Neural Tangents \cite{novak2019neural}.
Note that as discussed in \cref{sec:limit}, for a fully-connected net (and, in fact, for any tensor program, see \cite{yang2019tensor_i}) preactivations of different neurons on a given layer become iid as width goes to infinity.
This implies $\Theta_{jj'}(x,x') = \Theta_{11}(x,x') 1_{j=j'}$.
Therefore the kernel Gram matrix has a block structure: $\Theta(\vec x, \vec x) = \Theta|_{k=1}(\vec x, \vec x) \otimes I_{k \times k}$.
This reduces memory footprint to $O(m^2)$ and the time requirement to $O(m^3)$.
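The block structure can be exploited directly when solving the linear system: instead of inverting the full $mk \times mk$ Kronecker product, one solves a single $m \times m$ system against $k$ right-hand sides. A small numerical check (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 50, 10
A = rng.standard_normal((m, m))
theta1 = A @ A.T + np.eye(m)          # stands in for Theta|_{k=1}(x, x), SPD
y = rng.standard_normal(m * k)

# naive: invert the full mk x mk matrix Theta = theta1 (x) I_k
alpha_naive = np.linalg.solve(np.kron(theta1, np.eye(k)), y)

# block trick: one m x m solve with k right-hand sides, O(m^3 + m^2 k) instead of O(m^3 k^3)
alpha_block = np.linalg.solve(theta1, y.reshape(m, k)).reshape(-1)
```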
The second optimization deals with convolutional networks.
Note that computing $\Theta(x,x')$ requires computing all intermediate covariances $q_l(x,x')$.
These covariances were scalars for fully-connected nets since different neurons of a given layer became iid as width went to infinity.
However, for an image with $d$ pixels, different pixels of a given layer are dependent since their preactivations are computed using the same weight matrices.
This is why, for convolutional nets, one has to construct intermediate covariance matrices of size $d \times d$; storing and computing them for each pair of points requires $O(m^2 d^2)$ memory and time, even surpassing the time required for Gram matrix inversion when $d^2 > m$ (this happens e.g. for CIFAR10, for which $d = 32 \times 32 = 1024$, $m = 50\,000$, $k = 10$).
However, as was noted e.g. in \cite{xiao2018dynamical}, if no pooling is used in the network, it suffices to compute and store $d$ independent $m \times m$ blocks of this covariance matrix, reducing the time requirement to $O(m^2 d)$, which is usually not greater than the $O(m^3)$ time required for inversion.
So far, the main computational bottleneck was the time required for inverting the kernel Gram matrix.
This problem is not specific for NTK; it appears for any regularized kernel regression problem:
\begin{equation}
\hat f_\lambda
= \argmin_{f \in \mathcal{H}} \sum_{j=1}^m \ell(y_j, f(x_j)) + \lambda \| f \|_\mathcal{H}^2.
\label{eq:kernel_regression}
\end{equation}
Here $\mathcal{H}$ is a Hilbert space of functions of the form $f(x) = \Phi^T(x) \theta$; the corresponding scalar product is $\langle \Phi^T(x) \theta, \Phi^T(x) \theta' \rangle = \theta^T \theta'$.
Hence $\| f \|_\mathcal{H}^2 = \langle f, f \rangle = \|\theta\|_2^2$ for $f(x) = \Phi^T(x) \theta$.
Problem~(\ref{eq:kernel_regression}) has an associated kernel, which we denote with the same letter as NTK: $\Theta(x,x') = \Phi^T(x) \Phi(x')$.
Due to the representer theorem \cite{kimeldorf1970correspondence}, any solution of Problem~(\ref{eq:kernel_regression}) has the form $f(x) = \sum_{j=1}^m \alpha_j \Theta(x,x_j)$.
For now, consider quadratic loss: $\ell(y,z) = \frac{1}{2} \| y - z \|_2^2$.
The problem above becomes:
\begin{equation}
\vec\alpha
= \argmin_{\vec\alpha \in \mathbb{R}^m} \frac{1}{2} \sum_{j=1}^m \left( \sum_{j'=1}^m \alpha_{j'} \Theta(x_j,x_{j'}) - y_j \right)^2 + \lambda \left\| \sum_{j=1}^m \alpha_j \Phi(x_j) \right\|_2^2.
\end{equation}
This problem is convex, therefore any critical point of the corresponding functional is a solution:
\begin{equation}
(\Theta(\vec x, \vec x) + \lambda I) \vec\alpha
= \vec y.
\end{equation}
As long as $\Theta(\vec x, \vec x) + \lambda I$ is invertible, the solution is $\vec\alpha = (\Theta(\vec x, \vec x) + \lambda I)^{-1} \vec y$.
Setting $\lambda = 0$, we recover Eq.~(\ref{eq:inf_wide_solution_square_loss}), as expected (since $\mathbb{E}\, f_0(x) = 0$).
While the representer theorem guarantees that it suffices to look for solutions only of the form $f(x) = \sum_{j=1}^m \alpha_j \Theta(x,x_j)$ instead of inspecting the whole $\mathcal{H}$, we, following \cite{meanti2020kernel}, consider further contracting the search space by sampling $m'$ points $(\tilde x_1, \ldots, \tilde x_{m'})$ uniformly out of $m$ and looking for solutions of the form $f(x) = \sum_{j=1}^{m'} \tilde\alpha_j \Theta(x,\tilde x_j)$.
This is known as Nystr\"om approximation.
The minimization problem then becomes:
\begin{equation}
\vec{\tilde\alpha}
= \argmin_{\vec{\tilde\alpha} \in \mathbb{R}^{m'}} \frac{1}{2} \sum_{j=1}^m \left( \sum_{j'=1}^{m'} \tilde\alpha_{j'} \Theta(x_j,\tilde x_{j'}) - y_j \right)^2 + \lambda \left\| \sum_{j=1}^{m'} \tilde\alpha_j \Phi(\tilde x_j) \right\|_2^2.
\end{equation}
This problem is again convex and its critical points satisfy the following:
\begin{equation}
\left(\Theta\left(\vec{\tilde x}, \vec x\right) \Theta\left(\vec x, \vec{\tilde x}\right) + \lambda \Theta\left(\vec{\tilde x}, \vec{\tilde x}\right)\right) \vec{\tilde \alpha}
= \Theta\left(\vec{\tilde x}, \vec x\right) \vec y.
\label{eq:critical_points_nystrom}
\end{equation}
Computing the kernel-kernel product takes $O(m {m'}^2)$ time and solving the above system directly takes $O({m'}^3)$ time.
The space requirement can be reduced to $O({m'}^2)$, as the ``rectangular'' Gram matrix can be computed in $m' \times m'$ blocks.
Conjugate gradient methods are iterative methods designed for approximately solving linear systems of the form $A \vec z = \vec b$ without explicitly inverting the matrix $A$.
The main operation used by these methods on each iteration is a matrix-vector product.
In our case, the matrix-vector product requires $O(mm' + {m'}^2)$ time; note that this allows one to avoid computing the kernel-kernel product explicitly, by computing two matrix-vector products instead, costing $O(mm')$ time each.
Putting all together, solving system~(\ref{eq:critical_points_nystrom}) with $s$ iterations of a conjugate gradient method requires $O(s(mm' + {m'}^2))$ time and $O({m'}^2)$ space.
Based on certain theoretical results, \cite{meanti2020kernel} suggest taking $m' = O(\sqrt{m})$ and $s = O(\log m)$.
The resulting $O(m \sqrt{m} \log m)$ time and $O(m)$ space allows for applying their method to datasets of size up to $m \sim 10^6$ (the size of ImageNet).
\cite{meanti2020kernel} also discuss several optimizations aiming for improving GPU-efficiency of the method.
While their method is publicly available as an open-source library\footnote{\url{https://github.com/FalkonML/falkon}}, we are not aware of any of its applications to NTK.
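A minimal sketch of this pipeline, with a hand-rolled conjugate gradient solver and synthetic well-conditioned matrices standing in for the actual kernel blocks (all names are ours), illustrates that only matrix-vector products with $\Theta(\vec x, \vec{\tilde x})$ and $\Theta(\vec{\tilde x}, \vec{\tilde x})$ are needed:

```python
import numpy as np

def cg(matvec, b, iters=200, tol=1e-10):
    """Conjugate gradients for SPD systems, using only matrix-vector products."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def nystrom_solve(K_nm, K_mm, y, lam, iters=200):
    """Solve (K_nm^T K_nm + lam * K_mm) a = K_nm^T y without forming K_nm^T K_nm."""
    # each matvec costs O(m m' + m'^2): two products with K_nm plus one with K_mm
    matvec = lambda v: K_nm.T @ (K_nm @ v) + lam * (K_mm @ v)
    return cg(matvec, K_nm.T @ y, iters=iters)
```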
\subsection{Computing the empirical kernel}
All of the preceding discussion in the current section assumed that the kernel, $\Theta$, can be computed efficiently.
This is the case for certain models for which analytic computations are available.
Indeed, for $L$-layer fully-connected nets, the limit Gram matrix $\Theta(\vec x, \vec x)$ can be computed in $O(m^2 L)$ time while storing it requires $O(m^2)$ space, see Eqs. (\ref{eq:q_iteration}) and (\ref{eq:Theta_iteration}).
For more complex models, e.g. for those including max-poolings, closed-form analytic expressions for the limit kernel are not currently available.
However, the empirical kernel, $\hat\Theta$, can always be computed explicitly and is close to $\Theta$ for sufficiently large width (see convergence theorems in \cref{sec:convergence}).
For this reason, we are looking for ways to compute $\hat\Theta$ efficiently.
In order to simplify the illustration, we will discuss only time requirements in the sequel.
Recall that the empirical kernel is a product of two jacobians: $\hat\Theta_{jj'}(x,x') = \nabla^T_\theta f^j(x) \nabla_\theta f^{j'}(x')$.
Therefore the time cost for computing the kernel consists of the time required to compute the jacobian and the time required for jacobian contraction.
Denote $[FP]$ the cost of a single forward pass for our network; a single backward pass has approximately the same cost.
Then computing a jacobian for a given point $x$ takes $O(k [FP])$ time.
Contracting two jacobians for fixed $j$ and $j'$ takes $O(N)$ time, where $N$ is the total number of parameters: $\theta \in \mathbb{R}^N$.
Putting all together, computing the full $mk \times mk$ Gram matrix takes $O(m k [FP] + m^2 k^2 N)$ time.
\cite{novakfast} propose a method for computing the NTK-vector product.
It can be directly embedded into the method of \cite{meanti2020kernel} using conjugate gradients, or used for computing the kernel explicitly by applying it to columns of the $k \times k$ identity matrix.
Their method boils down to casting a matrix-vector product, where the matrix is the empirical NTK, into a vector-jacobian product followed by a jacobian-vector product: $\sum_{j'=1}^k \hat\Theta_{jj'}(x,x') v_{j'} = \nabla^T_\theta f^j(x) \sum_{j'=1}^k \nabla_\theta f^{j'}(x') v_{j'}$.
Both of these products can be computed in $O([FP])$ time.
Therefore this method allows one to compute the full $mk \times mk$ Gram matrix in $O(m^2 k [FP])$ time, which improves over the jacobian contraction method as long as $[FP] < C k N$ for a certain constant $C$.
Memory requirements, which we do not show here, are, in fact, the same for both methods, see \cite{novakfast}.
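The heart of the trick is reassociating the product $J(x) J(x')^T v$ as $J(x)\,(J(x')^T v)$, which never materializes the $k \times k$ kernel block. In the real implementation the two factors are an autodiff vjp and jvp rather than explicit jacobians; the explicit matrices below are only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
k, N = 10, 1000
Jx = rng.standard_normal((k, N))    # stands in for the jacobian of f at x,  k x N
Jxp = rng.standard_normal((k, N))   # stands in for the jacobian of f at x', k x N
v = rng.standard_normal(k)

kernel_then_product = (Jx @ Jxp.T) @ v   # forms Theta(x, x') first: O(k^2 N)
vjp_then_jvp = Jx @ (Jxp.T @ v)          # two matrix-vector products: O(k N)
```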
\cite{novakfast} also propose another optimization exploiting certain structure of the function $f$: e.g. weights of a fully-connected net are aligned sequentially, while weights of a convolutional layer are arranged in blocks.
We do not discuss it in the present survey.
Both optimizations are publicly available as JAX \cite{jax2018github} function transformations.\footnote{\url{https://github.com/iclr2022anon/fast_finite_width_ntk}}
\section{Applications}
\subsection{A kernel method}
\subsubsection{Supervised learning on small datasets}
The NTK is a kernel; therefore it can be used in any kernel method, e.g. kernel ridge regression or kernel SVM.
However, computing the kernel Gram matrix on a dataset of size $m$ requires $O(m^2)$ time, which is infeasible for large datasets.
One can either rely on certain approximations, e.g. Nystr\"om approximation, see \cref{sec:computations}, or restrict oneself to small datasets.
One possible advantage of kernel methods over neural nets is lower variance.
Indeed, the only variance of a kernel method is induced by sampling the dataset, while a neural network has several more sources of variance; e.g. initialization randomness and batch sampling.
It is likely that this difference in variances is especially important when the dataset is small.
The other advantage of kernel methods is having a smaller number of hyperparameters compared to neural nets.
This makes kernel methods useful as robust baseline methods that may outperform large neural nets when there is no budget for careful hyperparameter tuning.
As an illustration, \cite{arora2019harnessing} demonstrated that kernel regression with 14-layer CNTK consistently outperforms ResNet-34 trained with standard hyperparameters on a random subset of CIFAR-10 with $\leq 640$ samples.
\subsubsection{Neural architecture search using NTK conditional number}
There are other setups where computing the Gram matrix on a small dataset is sufficient.
For example, \cite{chen2021neural} propose the condition number of the NTK Gram matrix as a proxy-measure of a given architecture's performance; this proxy-measure is then used to guide neural architecture search (NAS).
In this case, we do not need the Gram matrix itself but only the condition number, which motivates computing the matrix on a small subset of examples.
While the condition number of a Gram matrix computed on a random subset provides only a random, possibly noisy and biased, estimate of the true condition number, the way we use it does not require exact estimates.
Indeed, a performance measure in NAS algorithms is mainly used to cut off pathological, low-performing models from a population, rather than to find the best one.
Therefore any measure that correlates positively with performance suffices.
The use of condition number as a proxy-measure of performance relies on two hypotheses: (1) performance correlates with trainability, and (2) trainability correlates with NTK condition number.
The first hypothesis is mainly motivated by a natural implication: ``bad trainability implies low performance''.
To motivate the second hypothesis, let us consider kernel ridge regression trained with usual discrete-time gradient descent:
\begin{equation}
f_{t+1}(\vec x)
= f_t(\vec x) + \eta \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)),
\end{equation}
where now $t$ is a discrete time-step and $\eta$ is a learning rate.
Consider the eigenvalue decomposition of the kernel: $\Theta(\vec x, \vec x) = \sum_{k=1}^m \lambda_k \vec v_k \vec v_k^T$, where $\lambda_1 \geq \ldots \geq \lambda_m \geq 0$, and $(\vec v_k)_{k=1}^m$ forms an orthonormal basis.
Let us decompose our model's predictions as $f_t(\vec x) = \sum_{k=1}^m u_{t,k} \vec v_k$.
Then the dynamics above decomposes as
\begin{equation}
u_{t+1,k}
= u_{t,k} + \eta \lambda_k (\vec y^T \vec v_k - u_{t,k}).
\end{equation}
This gives
\begin{equation}
u_{t+1,k} - \vec y^T \vec v_k
= (1 - \eta \lambda_k) (u_{t,k} - \vec y^T \vec v_k),
\end{equation}
and the solution is therefore
\begin{equation}
u_{t,k}
= \vec y^T \vec v_k + (1 - \eta \lambda_k)^t (u_{0,k} - \vec y^T \vec v_k).
\end{equation}
The dynamics above converges as $t \to \infty$ for any $u_{0,k}$ if and only if $\eta < 2 / \lambda_k$.
Since this should hold for all $k \in [m]$ and the maximal $\lambda$ is $\lambda_1$, we need to have $\eta < 2 / \lambda_1$.
Therefore the $m$-th principal component converges at rate $\eta \lambda_m < 2 \lambda_m / \lambda_1$.
$\kappa = \lambda_m / \lambda_1$ is our condition number (note that it is the reciprocal of the usual one, so small $\kappa$ means poor conditioning).
We see that small condition number implies low trainability and thus, by the first hypothesis, low performance.
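The closed-form eigen-dynamics above is easy to verify numerically against an actual gradient-descent simulation; a synthetic SPD matrix stands in for the NTK Gram matrix, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8
A = rng.standard_normal((m, m))
K = A @ A.T                      # synthetic SPD stand-in for Theta(x, x)
y = rng.standard_normal(m)

lams, V = np.linalg.eigh(K)      # eigenvalues ascending, columns of V orthonormal
eta = 1.0 / lams[-1]             # satisfies the stability condition eta < 2 / lambda_1

# simulate discrete-time gradient descent on predictions, starting from f_0 = 0
f = np.zeros(m)
T = 100
for _ in range(T):
    f = f + eta * K @ (y - f)

# closed form per eigen-direction: u_t = y^T v + (1 - eta * lam)^t (u_0 - y^T v)
u_closed = V.T @ y + (1.0 - eta * lams) ** T * (0.0 - V.T @ y)
```

Projecting the simulated predictions onto the eigenbasis reproduces the closed form up to rounding; components with small $\eta \lambda_k$ are visibly the slowest to converge.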
Using a combination of two proxy-measures, the condition number and the number of linear regions (which we do not discuss here), \cite{chen2021neural} constructed a NAS method that provided state-of-the-art performance on NAS-Bench-201 \cite{dong2020bench}, while requiring much less time than most of the other methods.
\cite{chen2021neural} tested their method on CIFAR10 and ImageNet as well.
In both cases, their method demonstrated competitive performance while using orders of magnitude less time.
\subsubsection{Matrix completion and image inpainting}
In some cases, posing the problem as kernel regression allows for certain optimizations.
In particular, \cite{radhakrishnan2021simple} proposed approaching the problem of matrix completion by minimizing the following loss:
\begin{equation}
\mathcal{L}(\theta)
= \sum_{(i,j) \in S} (Y_{ij} - \tr(f(Z;\theta) M^{(ij)}))^2,
\end{equation}
where $S \subset [k] \times [d]$ is a set of coordinates of known entries of the target matrix $Y \in \mathbb{R}^{k \times d}$, $M^{(ij)} \in \mathbb{R}^{k \times d}$ has $1$ at position $(i,j)$ and $0$ elsewhere, $f(\cdot;\theta)$ is a neural network with parameters $\theta$, $n_0$ inputs and $k$ outputs, and $Z \in \mathbb{R}^{n_0 \times d}$ is an a-priori given matrix.
The model $f$ is applied to each column of $Z$ separately; therefore $f(Z;\theta)$ is a $k \times d$ matrix.
The above setup can be treated as a usual $l_2$ regression problem on a dataset $(Y_{ij}, M^{(ij)})_{(i,j) \in S}$.
The corresponding empirical NTK is defined as $\hat K(M^{(ij)}, M^{(i'j')}) = \nabla^T_\theta \tr(f(Z;\theta) M^{(ij)}) \nabla_\theta \tr(f(Z;\theta) M^{(i'j')})$.
Naturally, it does not depend on the target matrix entries $Y$, and since there is only a finite set of possible inputs $M^{(ij)}$ (namely, $kd$ of them), the resulting $kd \times kd$ Gram matrix will be the same for all matrix completion problems with given target matrix dimensions.
In other words, one can precompute the Gram matrix once and use it for all matrix completion problems of given dimensions.
In contrast, the original neural network formulation would require training a new network for each dataset $(Y_{ij}, M^{(ij)})_{(i,j) \in S}$.
When $f(\cdot;\theta)$ is given by a fully-connected network with $L$ layers, \cite{radhakrishnan2021simple} provide a closed-form formula for its limit NTK: $K(M^{(ij)}, M^{(i'j')}) = \kappa_L\left(z_{\cdot,j}^T z_{\cdot,j'}\right) 1_{i=i'}$, where $\kappa_L$ is given by a certain recurrent relation.
As we see, according to this kernel, elements of different rows of $Y$ are orthogonal (they do not affect each other), while the similarity of elements of the same row is given by the scalar product of the corresponding columns of $Z$.
Therefore the columns of $Z$ encode a priori similarities between the columns of $Y$.
The matrix $Z$ is called a feature-prior matrix.
The ideal feature-prior matrix would be the target matrix $Y$ itself.
Since one does not have access to it, \cite{radhakrishnan2021simple} suggest using the output $\hat Y$ of a separate matrix completion method instead.
The resulting joint method performs better than the backbone one on popular collaborative filtering and virtual drug screening datasets.
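Since the kernel decouples rows of $Y$ (the $1_{i=i'}$ factor), kernel regression with a precomputed Gram matrix can be sketched as follows; the scalar function `kappa` below is a stand-in for the recurrently defined $\kappa_L$, which we leave abstract, and the code is an illustration rather than the implementation of \cite{radhakrishnan2021simple}:

```python
import numpy as np

def ntk_matrix_completion(Y, mask, Z, kappa):
    """Kernel regression for matrix completion with the row-decoupled NTK
    K(M^{ij}, M^{i'j'}) = kappa(z_j^T z_{j'}) * 1[i == i'].

    Y    : (k, d) target matrix; only entries with mask == True are used
    mask : (k, d) boolean array of observed entries
    Z    : (n0, d) feature-prior matrix; column j encodes column j of Y
    kappa: scalar function applied elementwise (stand-in for kappa_L)
    """
    k, d = Y.shape
    G = kappa(Z.T @ Z)              # (d, d): similarities between columns of Y
    Y_hat = np.zeros((k, d))
    for i in range(k):              # rows decouple due to the 1[i == i'] factor
        obs = np.flatnonzero(mask[i])
        if obs.size == 0:
            continue
        # ridgeless kernel regression on row i's observed columns
        alpha = np.linalg.solve(G[np.ix_(obs, obs)], Y[i, obs])
        Y_hat[i] = G[:, obs] @ alpha
    Y_hat[mask] = Y[mask]           # keep known entries exactly
    return Y_hat
```

Since $G$ depends only on $Z$ and the problem dimensions, it can be computed once and reused for any set $S$ of observed entries, as discussed above.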
Image inpainting can be viewed as a special case of matrix completion.
Apart from using the same Gram matrix for all problems of a given size, image inpainting with convolutional networks allows for one more optimization.
When $f$ is a convolutional network, we pose the problem a bit differently from the above.
Suppose $f$ has $n_0$ input channels, $1$ output channel, and it maps an image to an image of the same size.
Suppose $Z \in \mathbb{R}^{n_0 \times 2^p \times 2^q}$ and it is treated as a $2^p \times 2^q$ image with $n_0$ channels.
This in contrast to the previous considerations, where $Z$ was a matrix with columns treated as different inputs to a vector-valued model.
Similar to the above, $Y \in \mathbb{R}^{2^p \times 2^q}$ is a target image, and $M^{(ij)}$ of the same size has $1$ at $(i,j)$ and zero elsewhere.
Note that $f$ applied to the "image" $Z$ has $2^p \times 2^q$ output and therefore its NTK $\Theta$ is a $2^p \times 2^q \times 2^p \times 2^q$ tensor.
Suppose $f$ has no downsampling or upsampling layers.
\cite{radhakrishnan2021simple} provide an exact formula for the corresponding limit NTK in terms of the limit NTK $\Theta$ of the model $f$ in this case: $K(M^{(ij)}, M^{(i'j')}) = \Theta(Z,Z)_{i,j,i',j'}$.
Now suppose $f$ has $s$ downsampling and $s$ upsampling layers.
Computing the Gram matrix for its NTK requires $O(2^{2p+2q})$ memory and $O(L 2^{2p+2q})$ time, where $L$ is the number of convolutions in $f$.
This is already prohibitive for moderate-size images, i.e.\ when $p, q \approx 10$.
\cite{radhakrishnan2021simple} propose a way to reconstruct the $2^p \times 2^q \times 2^p \times 2^q$ Gram matrix from a smaller Gram matrix of size $2^{2s+p+q}$.
Moreover, this smaller Gram matrix requires computing the "usual" Gram matrices only for images of size $2^{s+1} \times 2^{s+1}$ which requires only $O(L 2^{4s})$ time.
\subsubsection{Approximate integration with application to federated learning}
Even in the case when the NTK Gram matrix can be computed and stored, the exact solution (\ref{eq:inf_wide_solution_square_loss}) requires inverting the kernel Gram matrix, which costs $O(m^3)$ when performed naively.
Fortunately, mixing continuous-time and discrete-time formulations allows one to avoid computing the inverse explicitly.
Denote $H_{t,ij} = \hat\Theta_t(x_i,x_j)$, $Z_{t,ik} = \partial_{\theta_i} f(x_k;\theta_t)$, and $u_{t,k} = f_t(x_k)$.
Note that $H_t = Z_t^T Z_t$.
Discrete-time weight evolution with learning rate $\eta$ is given by
\begin{equation}
\theta_{t+1}
= \theta_t + \eta Z_t (\vec y - \vec u_t).
\end{equation}
Recall that the assumption of a stationary kernel $H_t = H_0$ corresponds to the assumption of a stationary Jacobian $Z_t = Z_0$.
With this assumption, the dynamics above is solved as
\begin{equation}
\theta_t
= \theta_0 + \eta Z_0 \sum_{s=0}^{t-1} (\vec y - \vec u_s).
\end{equation}
Recall that integrating continuous-time gradient descent dynamics under assumption $H_t = H_0$ gives
\begin{equation}
\vec u_s
= \vec y + e^{-\eta s H_0} (\vec u_0 - \vec y).
\end{equation}
Combining the two latter equations, we get the weights at any time-step $t$:
\begin{equation}
\theta_t
= \theta_0 + \eta Z_0 \sum_{s=0}^{t-1} e^{-\eta s H_0} (\vec y - \vec u_0).
\end{equation}
The continuous analogue of the above evolution is obtained by replacing the sum with an integral:
\begin{equation}
\theta_t
= \theta_0 + \eta Z_0 \int_0^t e^{-\eta s H_0} (\vec y - \vec u_0) \, ds
= \theta_0 + Z_0 H_0^{-1} \left(I - e^{-\eta t H_0}\right) (\vec y - \vec u_0).
\end{equation}
Here we get the inverse, as expected.
Note that in this approach we do not assume the network to be infinitely wide; we only assume it to be linear in its weights.
This allows us to reason in terms of the network weight vector $\theta_t$ instead of reasoning in terms of some abstract feature space associated to the kernel.
This aspect gives us one additional advantage: we can integrate the dynamics up to some time $t_1$ and, since we know the weights $\theta_{t_1}$, compute $Z_{t_1}$ and $H_{t_1}$.
We can then proceed with the integration using these updated matrices.
This method lies in between the usual gradient descent training and kernel gradient descent with constant kernel.
The latter never updates the kernel, while the former updates the kernel at each timestep.
In contrast, the method we discuss updates the kernel only at given timesteps.
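A minimal sketch of this scheme, under the assumption that the model is approximately linear in its weights between refreshes; `model_fn` and `jac_fn` are user-supplied prediction and Jacobian routines (our own naming, not from \cite{yue2021neural}):

```python
import numpy as np

def advance_linearized(theta, u, y, Z, eta, t):
    """Closed-form gradient flow over a time interval of length t, assuming
    the Jacobian Z (shape (N, m)) stays constant over the interval."""
    H = Z.T @ Z                                   # empirical NTK Gram matrix
    lam, V = np.linalg.eigh(H)
    # coefficients (1 - exp(-eta*t*lam)) / lam, with the lam -> 0 limit eta*t
    coef = np.where(lam > 1e-12,
                    (1.0 - np.exp(-eta * t * lam)) / np.maximum(lam, 1e-12),
                    eta * t)
    return theta + Z @ (V @ (coef * (V.T @ (y - u))))

def train_with_kernel_refreshes(theta0, x, y, model_fn, jac_fn, eta, refresh_times):
    """Integrate the linearized dynamics in closed form between refreshes;
    at each refresh, re-evaluate the true model and its Jacobian."""
    theta = theta0
    t_prev = 0.0
    for t in refresh_times:
        u = model_fn(theta, x)                    # current train predictions
        Z = jac_fn(theta, x)                      # current Jacobian, (N, m)
        theta = advance_linearized(theta, u, y, Z, eta, t - t_prev)
        t_prev = t
    return theta, model_fn(theta, x)
```

Taking a single refresh time recovers kernel gradient descent with a constant kernel, while refreshing at every small step approaches ordinary gradient flow.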
The approach under discussion requires computing and storing $Z$ of size $N \times m$, which is an obvious disadvantage.
As a remedy, \cite{yue2021neural} propose splitting the job of computing $Z$ between several workers.
A server joins the parts together, integrates the dynamics up to some timestep $t$, and sends $\theta_t$ to all of the workers, starting a new iteration.
Tuning the timesteps of the kernel updates may help balance the load between the server and the workers.
The data used to compute $Z$ is never stored on the server, making this approach promising for federated learning.
However, since the server may attempt reconstructing the data from $Z$, one has to ensure each worker's privacy cannot be compromised; see \cite{yue2021neural} for further details.
\subsection{Pathology analysis}
\begin{figure}[t]
\centering
\subfigure[Ground truth]{\includegraphics[width=0.18\textwidth]{images/div2k_8_gt.jpeg}}
\subfigure[No mapping]{\includegraphics[width=0.18\textwidth]{images/div2k_8_no_enc.jpeg}}
\subfigure[Basic]{\includegraphics[width=0.18\textwidth]{images/div2k_8_basic.jpeg}}
\subfigure[Positional enc.]{\includegraphics[width=0.18\textwidth]{images/div2k_8_posenc.jpeg}}
\subfigure[Gaussian]{\includegraphics[width=0.18\textwidth]{images/div2k_8_rff.jpeg}}
\caption{Image regression with different input encodings; images are borrowed from \cite{tancik2020fourier}.}
\label{fig:image_regression}
\end{figure}
\begin{figure}
\centering
\subfigure{\includegraphics[width=0.2\textwidth]{images/dragon_crop_0.png}}
\subfigure{\includegraphics[width=0.2\textwidth]{images/3D_MRI_no_encoding.png}}
\subfigure{\includegraphics[width=0.2\textwidth]{images/nerf_no_encoding.png}}
\\
\subfigure[3D shape regression]{\includegraphics[width=0.2\textwidth]{images/dragon_crop_1.png}}
\subfigure[MRI reconstruction]{\includegraphics[width=0.2\textwidth]{images/3D_MRI_gaussian_encoding.png}}
\subfigure[Inverse rendering]{\includegraphics[width=0.2\textwidth]{images/nerf_gaussian_18.png}}
\caption{Low-dimensional regression without input encoding (top row) and with Gaussian encoding (bottom row); images are borrowed from \cite{tancik2020fourier}.}
\label{fig:low_dim_regression}
\end{figure}
While the empirical NTK of a neural network is not the same as its limit NTK, they may have certain properties in common.
In particular, certain issues of a finite-width network may be reflected in issues of its limit NTK, and fixing these issues in the limit NTK may fix them in the finite-width net as well.
As an example where this approach has proven to work, consider image regression.
In this task, input samples are image coordinates, $x \in [0,1]^d$ for $d=2$, and targets are pixel colors; we assume grey-scale images with $y \in [0,1]$.
The task is therefore to regress the full image given a set of pixels.
Let us consider applying a fully-connected network for this task.
As we have already observed in \cref{sec:limit_fc_nets}, the limit NTK $\Theta(x,x')$ of a fully-connected network depends only on $x^T x$, $x^{\prime,T} x'$, and $x^T x'$.
All of these terms are rotation-invariant, hence the kernel itself is rotation-invariant.
However, none of these terms is translation-invariant, hence the kernel cannot be translation-invariant (otherwise it would have to be constant).
Therefore it is quite unlikely that the empirical kernel will be invariant to translations.
On the other hand, both translation and rotation invariance are desirable for a kernel used for image regression.
Indeed, this means that applying these transformations to the train set of pixels results in the same image as without them, up to translation and rotation.
In order to achieve this property, one may look for translationally invariant embeddings of image coordinates.
The simplest non-trivial embedding of this kind is $z(x) = [\cos(2\pi x), \sin(2\pi x)]^T$, where $\cos$ and $\sin$ are applied elementwise.
Following \cite{tancik2020fourier}, we shall refer to it as "basic".
Comparing (b) and (c) in Figure~\ref{fig:image_regression}, we see that this embedding indeed results in better perceived quality.
However, the regressed image is still blurry: see Figure~\ref{fig:image_regression} (c).
As we shall see shortly, NTK kernel regression learns low-frequency components of the image before its high-frequency ones.
If we assume that the same property holds for the corresponding finite-width net then achieving sharp images may be impossible for a given number of gradient steps.
Recall the training dynamics of a kernel regression with kernel $\Theta$ trained to minimize square loss on a training dataset $(\vec x, \vec y)$:
\begin{equation}
\dot f_t(\vec x)
= \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)).
\end{equation}
$\Theta$ is a kernel, therefore its Gram matrix is positive-semidefinite.
Consider its eigenvalue decomposition: $\Theta(\vec x, \vec x) = \sum_{k=1}^m \lambda_k \vec v_k \vec v_k^T$, where $\lambda_1 \geq \ldots \geq \lambda_m \geq 0$, and $(\vec v_k)_{k=1}^m$ forms an orthonormal basis.
Let us decompose our model's predictions as $f_t(\vec x) = \sum_{k=1}^m u_{t,k} \vec v_k$.
Then the dynamics above decomposes as
\begin{equation}
\dot u_{t,k}
= \lambda_k (\vec v_k^T \vec y - u_{t,k}),
\end{equation}
which solves as
\begin{equation}
u_{t,k}
= \vec v_k^T \vec y - e^{-\lambda_k t} (\vec v_k^T \vec y - u_{0,k}).
\end{equation}
As one clearly sees, the time required to learn the $k$-th principal component of the target is inversely proportional to its strength $\lambda_k$.
In other words, strong components are learned before weak ones.
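The spectral bias above is easy to verify numerically; the matrix $H$ below is a generic positive-semidefinite stand-in for an NTK Gram matrix:

```python
import numpy as np

# Spectral bias of kernel gradient flow: the error along the k-th
# eigenvector of the Gram matrix decays as exp(-lambda_k * t).
rng = np.random.default_rng(0)
m = 50
A = rng.standard_normal((m, m))
H = A @ A.T / m                       # a generic PSD stand-in for the Gram matrix
lam, V = np.linalg.eigh(H)            # eigenvalues in ascending order
y = rng.standard_normal(m)
u0 = np.zeros(m)

t = 3.0
u_t = y + V @ (np.exp(-lam * t) * (V.T @ (u0 - y)))   # closed-form solution

err0 = V.T @ (y - u0)                 # initial error along each eigendirection
errt = V.T @ (y - u_t)                # error at time t
ratio = np.abs(errt) / np.abs(err0)
assert np.allclose(ratio, np.exp(-lam * t), atol=1e-6)
# the strongest component has decayed much more than the weakest one
assert ratio[-1] < ratio[0]
```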
The question is: what are the eigenvectors of the NTK Gram matrix?
It is hard to answer this question in general since a Gram matrix depends on the dataset.
However, for a kernel, there is an analogue of eigenvalue decomposition called Mercer's representation.
Let $X$ be a compact metric space and let $\mu$ be a sigma-additive measure on $X$ with $\supp \mu = X$.
Suppose $K: \; X \times X \to \mathbb{R}$ is continuous, symmetric, and positive semidefinite: $\int_X \int_X K(x,x') f(x) f(x') \, d\mu(x) \, d\mu(x') \geq 0$ $\forall f \in L^2_\mu(X)$.
Define the integral operator $T_K: L^2_\mu(X) \to L^2_\mu(X)$ as $T_K[f](x) = \int_X K(x,x') f(x') \, d\mu(x')$.
Then the above operator admits an eigenvalue decomposition with eigenfunctions $(\psi_k)_{k=1}^\infty$ and corresponding eigenvalues $(\lambda_k)_{k=1}^\infty$, and the set of eigenfunctions forms an orthonormal basis in $L^2_\mu(X)$.
Mercer's representation is the corresponding decomposition of the kernel:
\begin{equation}
K(x,x')
= \sum_{k=1}^\infty \lambda_k \psi_k(x) \psi_k(x').
\end{equation}
The series converges uniformly in $X \times X$.
From the above, we have $\int_X \int_X K(x,x') \psi_k(x) \psi_k(x') \, d\mu(x) \, d\mu(x') = \lambda_k$ $\forall k \geq 1$.
Hence if $\vec x = (x_k)_{k=1}^m$ and $\vec x' = (x'_k)_{k=1}^m$ are sampled iid from $\mu$ then
\begin{multline}
\frac{1}{m^2} \psi_k^T(\vec x) K(\vec x, \vec x') \psi_k(\vec x')
=\\= \frac{1}{m^2} \sum_{i,j=1}^m K(x_i, x'_j) \psi_k(x_i) \psi_k(x'_j)
\to \int_X \int_X K(x,x') \psi_k(x) \psi_k(x') \, d\mu(x) \, d\mu(x')
= \lambda_k
\end{multline}
a.s. as $m \to \infty$ by the Law of Large Numbers (LLN).
Note that considering $\psi_k^T(\vec x) K(\vec x, \vec x) \psi_k(\vec x)$ instead of $\psi_k^T(\vec x) K(\vec x, \vec x') \psi_k(\vec x')$ may result in a different limit because the diagonal terms of $K$ are now evaluated at coinciding arguments.
Nevertheless, there are only $m$ elements on the diagonal, which results in an $O(m^{-1})$ error vanishing in the limit.
Hence
\begin{equation}
\frac{1}{m^2} \psi_k^T(\vec x) K(\vec x, \vec x) \psi_k(\vec x)
\to \lambda_k
\end{equation}
a.s. as $m \to \infty$.
In other words, given $\vec x$ sampled iid from $\mu$, $(\psi_k(\vec x))_{k=1}^m$ are approximately eigenvectors of $K(\vec x, \vec x)$ with eigenvalues $(m \lambda_k)_{k=1}^m$, since $\|\psi_k(\vec x)\|_2^2 \approx m$.
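A quick Monte-Carlo sanity check of this limit, for a kernel on the circle whose Mercer decomposition is known in closed form (our own toy example, not one from the literature):

```python
import numpy as np

# Monte-Carlo check of (1/m^2) psi^T K(X, X) psi -> lambda for a kernel on
# the circle with known Mercer decomposition: under the uniform measure on
# [0, 2*pi), K(phi, phi') = cos(k0 * (phi - phi')) has eigenfunctions
# sqrt(2)*cos(k0*phi) and sqrt(2)*sin(k0*phi), both with eigenvalue 1/2.
rng = np.random.default_rng(0)
k0, m = 3, 3000
phi = rng.uniform(0.0, 2.0 * np.pi, size=m)
K = np.cos(k0 * (phi[:, None] - phi[None, :]))
psi = np.sqrt(2.0) * np.cos(k0 * phi)
estimate = psi @ K @ psi / m**2
assert abs(estimate - 0.5) < 0.05     # converges to the eigenvalue 1/2
```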
Recall that, as was noted above, the limit NTK of a fully-connected net $\Theta(z,z')$ depends only on $z^T z'$, $\|z\|_2$, and $\|z'\|_2$.
Recall also that we have decided to embed inputs with $z(x) = [\cos(2\pi x), \sin(2\pi x)]^T$.
This embedding maps $[0,1]^d$ onto a $d$-dimensional torus that lies inside a $(2d-1)$-dimensional sphere.
In this case, $\Theta(x,x') = \Theta(z(x),z(x'))$ depends only on $z^T(x) z(x')$, since $\|z(x)\|_2 = \sqrt{d}$ is constant.
Kernels with this property are called zonal.
Any zonal kernel $K: S^{p-1} \times S^{p-1} \to \mathbb{R}$ admits the following Mercer's decomposition with respect to the uniform measure on $S^{p-1}$:
\begin{equation}
K(z^T z')
= \sum_{k=0}^\infty \lambda_k \sum_{j=1}^{N(p,k)} Y_{k,j}(z) Y_{k,j}(z'),
\end{equation}
where $N(p,k)$ is the number of spherical harmonics of degree $k$ in dimension $p$, and $Y_{k,j}$ are the spherical harmonics.
For $p=2$, this decomposition gets a simpler form:
\begin{equation}
K(z^T z')
= \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(k \arccos(z^T z')).
\label{eq:mercer_zonal_2d}
\end{equation}
As we see, large $k$'s correspond to high-frequency harmonics, while small $k$'s correspond to low-frequency ones.
A recent result of \cite{chen2020deep} states that the NTK of a fully-connected net with inputs lying on $S^{p-1}$ has eigenvalues decaying as a power-law: $\lambda_k \sim k^{-p}$ as $k \to \infty$; see also \cite{geifman2020similarity} for an earlier result for shallow nets and \cite{bietti2019inductive} for an even earlier result for bias-free shallow nets.
This means that learning the $k$-th harmonic of the input image requires $O(k^p)$ time.
Hence for a finite number of training steps, high-frequency components remain unlearned, which results in blurry images similar to Figure~\ref{fig:image_regression} (c).
A possible remedy would be to increase $\lambda_k$ for large $k$.
But how can this be achieved?
We illustrate the solution proposed in \cite{tancik2020fourier} in the following.
Consider the case $d=1$ for simplicity.
In this case, the embedding map $z(x) = [\cos(2\pi x), \sin(2\pi x)]^T$ traverses a circle.
Consider a modified embedding $\tilde z(x) = [\cos(2\pi b x), \sin(2\pi b x)]^T$ instead, where $b \in \mathbb{N}$ is a tunable parameter.
The corresponding kernel is then given as
\begin{multline}
K(\tilde z^T \tilde z')
= \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(k \arccos(\tilde z^T \tilde z'))
=\\= \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(2\pi k b (x-x'))
= \frac{1}{4\pi^2} + \frac{1}{\pi^2} \sum_{k=1}^\infty \lambda_k \cos(k b \arccos(z^T z')),
\end{multline}
which means that $\lambda_k$ becomes the $kb$-th eigenvalue in the original embedding space.
If $\lambda_k$ decreases monotonically, this means that each $kb$-th eigenvalue increases from $\lambda_{kb}$ to $\lambda_k$, implying faster convergence to the $kb$-th principal component.
The obvious downside of the method above is that in the new parameterization some of the eigenvalues become zero, and the corresponding components are therefore never learned.
A simple solution is to enlarge the embedding: $\tilde z(x) = [\cos(2\pi \sigma^{j/M} x), \sin(2\pi \sigma^{j/M} x)]_{j=0}^{M-1}$, where $M \in \mathbb{N}$ and $\sigma \in \mathbb{R}_+$ are tunable parameters; this is referred to as "positional encoding" in \cite{tancik2020fourier}.
Another solution proposed by \cite{tancik2020fourier} is random Gaussian projections: $\tilde z(x) = [\cos(2\pi B x), \sin(2\pi B x)]^T$, where $B \in \mathbb{R}^{M \times d}$, each element of $B$ is sampled independently from $\mathcal{N}(0,\sigma^2)$, and $M$ and $\sigma$ are tunable parameters.
Both solutions perform on par with each other and much better than the original embedding: compare (c), (d), and (e) in Figure~\ref{fig:image_regression}.
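The three encodings can be sketched as follows; shapes and naming are ours and may differ from the reference implementation of \cite{tancik2020fourier}:

```python
import numpy as np

def basic_encoding(x):
    """'Basic' embedding z(x) = [cos(2*pi*x), sin(2*pi*x)], elementwise."""
    return np.concatenate([np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)], axis=-1)

def positional_encoding(x, M, sigma):
    """Log-spaced frequencies sigma**(j/M), j = 0, ..., M-1, per coordinate."""
    freqs = sigma ** (np.arange(M) / M)                 # (M,)
    ang = (2 * np.pi * x[..., None] * freqs).reshape(*x.shape[:-1], -1)
    return np.concatenate([np.cos(ang), np.sin(ang)], axis=-1)

def gaussian_encoding(x, B):
    """Random Fourier features; B is (M, d) with i.i.d. N(0, sigma^2) entries."""
    ang = 2 * np.pi * x @ B.T                           # (..., M)
    return np.concatenate([np.cos(ang), np.sin(ang)], axis=-1)

# x holds pixel coordinates in [0, 1]^d, one row per pixel
x = np.array([[0.25, 0.5]])
assert basic_encoding(x).shape == (1, 4)
assert positional_encoding(x, M=4, sigma=8.0).shape == (1, 16)
B = np.random.default_rng(0).normal(0.0, 10.0, size=(16, 2))
assert gaussian_encoding(x, B).shape == (1, 32)
```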
The same method suits other low-dimensional regression problems as well; \cite{tancik2020fourier} provide examples of 3D shape regression, MRI reconstruction, and inverse rendering.
See Figure~\ref{fig:low_dim_regression} for a comparison of outputs of a neural net with no encoding of inputs (top row) and the proposed Gaussian encoding (bottom row).
One more notable example is Solid Isotropic Material Penalisation (SIMP), an instance of topology optimization.
The task here is to optimize over material density at $N$ points $y \in [0,1]^N$ to obtain a shape that can withstand forces applied at certain points.
Given a density $y$ and a force vector $F$, the SIMP method constructs a stiffness matrix $K(y)$, and derives a displacement vector $U(y)$ by solving a linear system $K(y) U(y) = F$.
The resulting construction is stable if the forces do not do any work, i.e. $U^T(y) F = 0$.
The density is therefore optimized to minimize the work $C(y) = F^T U(y) \to \min_y$ under a volume constraint $\sum_{i=1}^N y_i = V$; $C$ is usually called compliance.
We can cast the constrained optimization problem as an unconstrained one by introducing pre-density $x \in \mathbb{R}^N$ and constructing density as $y_i = \sigma(x_i + b(x))$, where $b$ is a function that ensures the volume constraint.
Denoting this operation as $y = \Sigma(x)$, we get a new unconstrained optimization problem in the space of pre-densities: $C(\Sigma(x)) \to \min_x$.
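The projection $\Sigma$ can be sketched with the shift $b(x)$ found by bisection, using that $\sum_i \sigma(x_i + b)$ is strictly increasing in $b$; this is an illustrative construction, and the original work may enforce the volume constraint differently:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -50.0, 50.0)))

def project_density(x, V, tol=1e-10):
    """Map pre-densities x in R^N to densities y = sigmoid(x + b) with
    sum(y) = V. Since sum(sigmoid(x + b)) is strictly increasing in b,
    the shift b can be found by bisection."""
    assert 0.0 < V < len(x)
    lo, hi = -60.0, 60.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sigmoid(x + mid).sum() < V:
            lo = mid
        else:
            hi = mid
    return sigmoid(x + 0.5 * (lo + hi))
```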
While the above problem is not a regression problem, we can still model $x$ as outputs of a neural net at the corresponding grid points.
However, the lack of translation invariance results in implausible patterns.
\cite{dupuis2021dnn} used a similar embedding scheme as \cite{tancik2020fourier} to control this issue.
On the other hand, in contrast to \cite{tancik2020fourier}, \cite{dupuis2021dnn} used $\sin(\omega x)$ as activation instead of ReLU, and used $\omega$ together with bias initialization variance to control sharpness of output shapes, instead of modifying the embedding.
Both methods aim to "widen" the spectrum of the limit NTK.
\subsection{A theoretical tool}
\label{sec:app_theory}
Apart from providing a meaningful kernel for kernel methods, NTK can be used as a concept useful for reasoning about neural nets of large width.
Indeed, as stated in \cref{sec:convergence}, NTK, while being random and evolving, converges to a constant deterministic limit as width goes to infinity.
One can hope that for large enough width, the NTK stays close to its limit with high probability.
Therefore, any result valid for kernel regression with the NTK taken as the kernel may also become valid, with high probability, for a wide enough net.
\subsubsection{Global GD convergence}
Let us start with the following result valid for kernel regression with a constant kernel:
when the kernel is positive-definite, kernel regression learns the dataset.
Indeed, recall the training dynamics of a kernel regression with kernel $\Theta$ trained to minimize square loss on a training dataset $(\vec x, \vec y)$:
\begin{equation}
\dot f_t(\vec x)
= \Theta(\vec x, \vec x) (\vec y - f_t(\vec x)).
\end{equation}
Assuming $\Theta(\vec x, \vec x) \geq \lambda$,
\begin{equation}
\frac{d}{dt}\left(\frac{1}{2} \| \vec y - f_t(\vec x) \|_2^2\right)
= -(\vec y - f_t(\vec x))^T \Theta(\vec x, \vec x) (\vec y - f_t(\vec x))
\leq -\lambda \| \vec y - f_t(\vec x) \|_2^2,
\end{equation}
which gives
\begin{equation}
\| \vec y - f_t(\vec x) \|_2^2
\leq e^{-2\lambda t} \| \vec y - f_0(\vec x) \|_2^2.
\end{equation}
Hence $\lambda > 0$ suffices to guarantee that $f_t(\vec x)$ converges to $\vec y$ as $t \to \infty$.
Suppose now our kernel regression uses a random time-dependent kernel $\hat\Theta_t$ instead of $\Theta$:
\begin{equation}
\dot f_t(\vec x)
= \hat\Theta_t(\vec x, \vec x) (\vec y - f_t(\vec x)).
\end{equation}
If we manage to guarantee that, with probability $\geq 1-\delta$, $\hat\Theta_t(\vec x, \vec x) \geq \lambda$ holds $\forall t \geq 0$, then $\lambda > 0$ suffices to guarantee that $f_t(\vec x)$ converges to $\vec y$ as $t \to \infty$ with probability $\geq 1-\delta$.
Indeed,
\begin{equation}
\frac{d}{dt}\left(\frac{1}{2} \| \vec y - f_t(\vec x) \|_2^2\right)
= -(\vec y - f_t(\vec x))^T \hat\Theta_t(\vec x, \vec x) (\vec y - f_t(\vec x))
\leq -\lambda \| \vec y - f_t(\vec x) \|_2^2
\quad
\text{w.p. $\geq 1-\delta$},
\end{equation}
which gives
\begin{equation}
\| \vec y - f_t(\vec x) \|_2^2
\leq e^{-2 \lambda t} \| \vec y - f_0(\vec x) \|_2^2
\quad
\text{w.p. $\geq 1-\delta$}.
\end{equation}
One of the first results of this kind concerns ReLU nets with one hidden layer under NTK parameterization:
\begin{equation}
f(x; a_{1:n}, w_{1:n})
= \frac{1}{\sqrt{n}} \sum_{i=1}^n a_i [w_i^T x]_+.
\label{eq:two_layered_ReLU_net_ntk}
\end{equation}
We aim to minimize square loss on a dataset $(\vec x, \vec y)$ of size $m$ with gradient flow on the input weights:
\begin{equation}
\dot w_i(t)
= \frac{1}{\sqrt{n}} \sum_{k=1}^m (y_k - f(x_k; a_{1:n}, w_{1:n}(t))) a_i [w_i^T(t) x_k > 0] x_k
\quad
\forall i \in [n].
\end{equation}
We sample $w_i \sim \mathcal{N}(0,I_{n_0})$ and $a_i \sim U(\{-1,1\})$ $\forall i \in [n]$ independently.
Sampling $a_i$ from this particular distribution serves merely to simplify matters: in this case $a_i^2 = 1$, which simplifies the NTK Gram matrix a little:
\begin{equation}
\hat\Theta_t(x_k, x_l) =
\frac{1}{n} \sum_{i=1}^n [w_i^T(t) x_k > 0] [w_i^T(t) x_l > 0] x_k^T x_l.
\end{equation}
However, it is possible to apply the same technique to any distribution of the output layer weights that does not depend on $n$.
Note that the Gram matrix depends merely on activation patterns of the hidden layer computed on the dataset.
The limit NTK is therefore given as:
\begin{equation}
\Theta(x_k, x_l) =
\mathbb{E}\,_{w \sim \mathcal{N}(0, I_{n_0})} [w^T x_k > 0] [w^T x_l > 0] x_k^T x_l.
\end{equation}
Note that in our two-layered case, $\Theta(x,x') = \lim_{n \to \infty} \hat\Theta_t(x,x') = \mathbb{E}\, \hat\Theta_0(x,x')$.
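This identity is easy to check numerically: a standard Gaussian orthant computation gives the closed form $\Theta(x_k, x_l) = \frac{\pi - \varphi}{2\pi} x_k^T x_l$, where $\varphi$ is the angle between $x_k$ and $x_l$, and the empirical NTK concentrates around it:

```python
import numpy as np

def empirical_ntk_entry(x1, x2, W):
    """Empirical NTK of the two-layer ReLU net w.r.t. the input weights:
    (1/n) * sum_i 1[w_i^T x1 > 0] 1[w_i^T x2 > 0] * x1^T x2."""
    both_active = (W @ x1 > 0) & (W @ x2 > 0)
    return both_active.mean() * (x1 @ x2)

def limit_ntk_entry(x1, x2):
    """Limit NTK: P_{w ~ N(0, I)}[w^T x1 > 0, w^T x2 > 0] * x1^T x2;
    the orthant probability equals (pi - angle(x1, x2)) / (2*pi)."""
    c = x1 @ x2 / (np.linalg.norm(x1) * np.linalg.norm(x2))
    phi = np.arccos(np.clip(c, -1.0, 1.0))
    return (np.pi - phi) / (2.0 * np.pi) * (x1 @ x2)

rng = np.random.default_rng(0)
n0, n = 5, 200000
W = rng.standard_normal((n, n0))          # rows are the hidden weights w_i
x1, x2 = rng.standard_normal(n0), rng.standard_normal(n0)
assert abs(empirical_ntk_entry(x1, x2, W) - limit_ntk_entry(x1, x2)) < 0.05
```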
In the sequel, we denote the Gram matrices $\hat\Theta_t(\vec x, \vec x)$ as $H(t)$ and $\Theta(\vec x, \vec x)$ as $H^\infty$.
Let $\lambda_0$ be the least eigenvalue of $H^\infty$.
\begin{theorem}[\cite{du2018gradient}]
Consider the setting discussed above and further assume $\|x_k\|_2 = 1$ and $|y_k| \leq 1$ $\forall k \in [m]$.
Then $\exists C, C_0 > 0$ such that $\forall \delta \in (0,1)$ taking
\begin{equation}
n >
\max\left(
C \frac{m^6}{\lambda_0^4 \delta^3}, \;
C_0 \frac{m^2}{\lambda_0^2} \log\left(\frac{2m}{\delta}\right)
\right)
\end{equation}
guarantees $H(t) \geq \lambda_0/2$ $\forall t \geq 0$ w.p. $\geq 1-\delta$.
\label{thm:convergence_2layer}
\end{theorem}
This result implies $\| \vec y - f_t(\vec x) \|_2^2 \leq e^{-\lambda_0 t} \| \vec y - f_0(\vec x) \|_2^2$ w.p. $\geq 1-\delta$, as discussed above.
For the full proof, see the original paper \cite{du2018gradient} or lecture notes \cite{golikov2020notes}.
We are going to discuss, very briefly, only crucial parts of the proof in the sequel.
The proof is based on four lemmas.
The first lemma states that as long as $n = \Omega(m^2 \lambda_0^{-2} \log(m/\delta))$, where $\Omega$ hides a certain constant, we have $\|H(0) - H^\infty\|_2 \leq \lambda_0/4$ w.p. $\geq 1-\delta$, where $\|\cdot\|_2$ denotes the spectral norm; this implies $H(0) \geq 3\lambda_0/4$ with the same probability.
As already noted above, $\mathbb{E}\, H(0) = H^\infty$.
This allows one to apply a concentration inequality to each element of $H(0)$.
Union bound then gives a bound that holds uniformly for all elements of $H(0)$.
This implies a bound on $\|H(0) - H^\infty\|_F$, hence on a singular norm as well.
The second lemma states that as long as $\forall i \in [n]$ $\|w_i - w_i(0)\|_2 \leq R$ for a certain $R = R(\delta,\lambda_0,m)$, we have $\| H - H(0) \|_2 \leq \lambda_0/4$ w.p. $\geq 1-\delta$.
In other words, as long as weights are close to initialization, the corresponding Gram matrix is close to the initial one too.
The idea is that as long as the weights are not far from their initialization, with certain probability, not many of the hidden neurons can alter their activation patterns on the train dataset.
Since, as already noted above, our Gram matrices depend only on activation patterns on the train dataset, this implies a tail bound on $|H_{kl} - H_{kl}(0)|$ $\forall k,l \in [m]$, which gives a tail bound on $\|H - H(0)\|_2$ with the same technique as used in the first lemma.
The third lemma states that as long as $H(s) \geq \lambda_0/2$ $\forall s \in [0,t]$ (which we have not proven yet), the weights indeed stay close to their initialization: $\forall i \in [n]$ $\|w_i(t) - w_i(0)\|_2 \leq R'$ for a certain $R' = R'(\lambda_0,m,n)$.
This can be proven by a very simple estimate:
\begin{multline}
\left\|\frac{dw_i(s)}{ds}\right\|_2 =
\left\|\frac{1}{\sqrt{n}} \sum_{k=1}^m (y_k - f_s(x_k)) a_i [w_i^T(s) x_k > 0] x_k\right\|_2 \leq
\\\leq
\frac{1}{\sqrt{n}} \sum_{k=1}^m |y_k - f_s(x_k)| \leq
\sqrt{\frac{m}{n}} \|\vec y - f_s(\vec x)\|_2 \leq
\sqrt{\frac{m}{n}} e^{-\lambda_0 s / 2} \|\vec y - f_0(\vec x)\|_2.
\end{multline}
This gives $\forall i \in [n]$:
\begin{multline}
\| w_i(t) - w_i(0) \|_2 =
\left\|\int_0^t \frac{dw_i(s)}{ds} \, ds\right\|_2 \leq
\int_0^t \left\|\frac{dw_i(s)}{ds}\right\|_2 \, ds \leq
\\\leq
\frac{2 \sqrt{m}}{\lambda_0 \sqrt{n}} \left(1 - e^{-\lambda_0 t / 2}\right) \|\vec y - f_0(\vec x)\|_2 \leq
\frac{2 \sqrt{m}}{\lambda_0 \sqrt{n}} \|\vec y - f_0(\vec x)\|_2.
\end{multline}
Finally, the fourth lemma states that as long as $R' < R$, $\| H(t) - H(0) \|_2 \leq \lambda_0/4$ $\forall t \geq 0$ w.p. $\geq 1-\Omega(\delta)$ where $\Omega$ hides a certain constant.
Combined with the first lemma, this implies $H(t) \geq \lambda_0/2$ $\forall t \geq 0$ w.p. $\geq 1-\Omega(\delta)$.
The condition $R'(\lambda_0,m,n) < R(\delta,\lambda_0,m)$ gives the second lower bound on $n$ (the first one is given by the first lemma).
By changing $\delta$, we get the desired result.
The fourth lemma is proven as follows.
Let $t_0$ be the first moment of time when the second lemma becomes no longer applicable, i.e. $t_0 = \inf\left\{t \geq 0: \; \max_{i \in [n]} \| w_i(t) - w_i(0) \|_2 > R\right\}$.
Assume it is finite.
Since weights are continuous functions of time, $\max_{i \in [n]} \| w_i(t_0) - w_i(0) \|_2 = R$.
Hence the second lemma holds for $w_{1:n} = w_{1:n}(t)$ $\forall t \in [0,t_0]$ and $\| H(t) - H(0) \|_2 \leq \lambda_0/4$ w.p. $\geq 1-\delta$ $\forall t \in [0,t_0]$, therefore $H(t) \geq \lambda_0/2$ w.p. $\geq 1-\Omega(\delta)$ $\forall t \in [0,t_0]$.
But then the third lemma holds as well: $\forall i \in [n]$ $\|w_i(t_0) - w_i(0)\|_2 \leq R' < R$; contradiction.
Hence $\forall t \geq 0$ $\max_{i \in [n]} \| w_i(t) - w_i(0) \|_2 \leq R$ and the second lemma gives the desired statement.
\cref{thm:convergence_2layer} requires the number of hidden units $n$ to grow as $m^6$ with the size of the train dataset and as $\delta^{-3}$ with the failure probability $\delta$.
This bound is way too loose for practical purposes: indeed, even very small datasets have $m \geq 100$, which results in a bound of order at least $10^8$.
If we want the bound to be valid with at least $90\%$ probability, we pay three orders of magnitude more.
Note that modern architectures designed to be trained on large datasets like ImageNet ($m=10^6$) have width barely exceeding $10^4$.
We state one of the existing improvements of \cref{thm:convergence_2layer} below:
\begin{theorem}[\cite{song2019quadratic}]
Under the same setting as \cref{thm:convergence_2layer}, $\exists C, C_0 > 0$ such that $\forall \delta \in (0,1)$ taking
\begin{equation}
n >
\max\left(
C \frac{m^4}{\lambda_0^4} \log^3\left(\frac{m}{\delta}\right), \;
C_0 \frac{m^2}{\lambda_0^2} \log\left(\frac{2m}{\delta}\right)
\right)
\end{equation}
guarantees $H(t) \geq \lambda_0/2$ $\forall t \geq 0$ w.p. $\geq 1-\delta$.
\label{thm:convergence_2layer_quartic}
\end{theorem}
This result decreases the exponent of $m$ from $6$ to $4$ and makes the $\delta$-dependence logarithmic.
The proof follows the same path as above.
Note however that the previous result aimed for elementwise tail bounds on $H(0) - H^\infty$ and $H - H(0)$, which lead to tail bounds on $\|H(0) - H^\infty\|_2$ and $\|H - H(0)\|_2$ by the union bound, which gives an $m^2$ factor.
One of the improvements proposed by \cite{song2019quadratic} is to replace these elementwise bounds with matrix Chernoff bounds, which avoid this $m^2$ factor, thus leading to better bounds.
The other improvement is to replace Markov inequalities, which result in $1/\delta$ factors, with Bernstein's inequality, which results only in $\log(1/\delta)$ factors.
The $m^4$ width bound is still far from being realistically tight.
We are not aware of any further improvements of the results discussed above that apply the idea of NTK stability.
Global gradient descent convergence can, however, be proved by first proving guarantees on convergence to local minima and then proving that all minima are global for wide enough nets.
See \cite{lee2016gradient,panageas2017gradient,mertikopoulos2020almost} for the first line of works and \cite{yu1995local,nguyen2017loss,nguyen2019connected,nguyen2021note} for the second.
None of the works in either line uses the idea of NTK stability, nor do they rely on the NTK parameterization.
\cite{nguyen2019connected} proves that $n = m$ is enough for leaky ReLU nets to have only global "local valleys" (a generalization of global minima to certain losses such as cross-entropy), and \cite{nguyen2021note} demonstrates that this bound cannot be improved for two-layered nets and general data.
\cite{du2019gradient} extends \cref{thm:convergence_2layer} to deep nets.
Their proof idea is the same: first show that $H(0)$ is close to $H^\infty$, then show that $H(t)$ stays close to $H(0)$.
However for the multilayer case, $H(0)$ cannot be proven to be close to $H^\infty$ just by concentration of measure.
When layers are many, perturbations caused by finite width result in deviations exponential with respect to the number of layers $L$.
For this reason, their bound grows exponentially with $L$.
See also \cite{allen2019convergence} for a similar result with a bound depending on $m$ only polynomially, proved using a different technique.
\subsubsection{Generalization guarantees}
Stability of NTK has another interesting consequence.
Suppose the empirical NTK is constant, i.e. $\hat\Theta_t = \hat\Theta_0$.
This is equivalent to saying that the corresponding model is linearized:
\begin{equation}
f(x; \theta)
= f(x; \theta_0) + \nabla_\theta^T f(x; \theta_0) (\theta - \theta_0).
\end{equation}
For brevity, denote $\vec u_t = f_t(\vec x)$ and $Z_t^{ik} = \partial_{\theta_i} f(x_k; \theta_t)$.
Hence $Z_t \in \mathbb{R}^{N \times m}$ where $N$ is the total number of parameters and $\vec u_t = \vec u_0 + Z_0^T (\theta_t - \theta_0)$.
Note that $H_t = Z_t^T Z_t$.
Recall the train set predictions for constant kernel:
\begin{equation}
\vec u_t
= \vec y + e^{-H_0 t} (\vec u_0 - \vec y).
\end{equation}
In our linearized dynamics, the weights evolve as follows:
\begin{equation}
\dot\theta_t
= Z_0 (\vec y - \vec u_t)
= Z_0 e^{-H_0 t} (\vec y - \vec u_0).
\end{equation}
Straightforward integration gives:
\begin{equation}
\theta_t
= \theta_0 + Z_0 H_0^{-1} \left(I - e^{-H_0 t}\right) (\vec y - \vec u_0).
\end{equation}
Recalling $H_0 = Z_0^T Z_0$, at the end of training ($t \to \infty$) we get
\begin{equation}
\|\theta_\infty - \theta_0\|_2^2
= (\theta_\infty - \theta_0)^T (\theta_\infty - \theta_0)
= (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0).
\end{equation}
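The closed-form expressions above are easy to check numerically. Below is a minimal sketch (a random matrix $Z_0$ stands in for the actual network Jacobian, so this is an illustration rather than a real net) that Euler-integrates the linearized gradient flow and verifies both interpolation and the displacement identity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 50, 5                       # number of parameters, dataset size
Z0 = rng.normal(size=(N, m))       # stand-in for Z0[i, k] = d f(x_k) / d theta_i
H0 = Z0.T @ Z0                     # empirical NTK Gram matrix
y = rng.normal(size=m)             # targets
u0 = rng.normal(size=m)            # initial predictions

# Euler-integrate the linearized gradient flow d(theta)/dt = Z0 (y - u_t),
# where u_t = u0 + Z0^T (theta_t - theta_0).
delta = np.zeros(N)                # theta_t - theta_0
dt = 1e-3
for _ in range(20000):
    u = u0 + Z0.T @ delta
    delta += dt * Z0 @ (y - u)

# Displacement identity at t -> infinity
disp_sq = (y - u0) @ np.linalg.solve(H0, y - u0)
assert np.allclose(delta @ delta, disp_sq)
assert np.allclose(u0 + Z0.T @ delta, y)   # interpolation: training loss -> 0
```

Since the dynamics is linear, the Euler iteration converges to the exact fixed point, so the identity holds up to floating-point error.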
Define $\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}$ as a set of models of the form (\ref{eq:two_layered_ReLU_net_ntk}) with output weights $a_{1:n}$ and input weights $w_{1:n}$ such that $\| W - W(0) \|_F \leq B$ for given $w_{1:n}(0)$.
The above considerations state that a trained model always lies in $\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}$ with $B = (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)$.
Hence our training procedure outputs models in a certain restricted set rather than arbitrary models of the form (\ref{eq:two_layered_ReLU_net_ntk}).
Upper-bounding Rademacher complexity of this model set will give us a generalization bound as we shall see below.
Let us upper-bound the Rademacher complexity conditioned on a dataset $(\vec x, \vec y)$ of size $m$:
\begin{multline}
\Rad{\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}}{\vec x, \vec y} =
\mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{f \in \mathcal{F}_B^{w_{1:n}(0), a_{1:n}}} \left(\frac{1}{m} \sum_{k=1}^m \sigma_k u_k\right) =
\\=
\frac{1}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{\| W - W(0) \|_F \leq B} \left(\sum_{k=1}^m \sigma_k \frac{1}{\sqrt{n}} \sum_{i=1}^n a_i [w_i^T(0) x_k \geq 0] w_i^{T} x_k\right) =
\\=
\frac{1}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{\| W - W(0) \|_F \leq B} \left( \vec\sigma^T Z^{T}(0) \theta \right) =
\\=
\frac{1}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \sup_{\| W - W(0) \|_F \leq B} \left( \vec\sigma^T Z^{T}(0) (\theta - \theta_0) \right) =
\\=
\frac{B}{m} \mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \| Z(0) \vec\sigma \|_2 \leq
\frac{B}{m} \sqrt{\mathbb{E}\,_{\sigma_{1:m} \sim \{-1,1\}^m} \| Z(0) \vec\sigma \|_2^2} =
\frac{B}{m} \| Z(0) \|_F.
\end{multline}
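The key step in the chain above is $\sup_{\|\theta - \theta_0\|_2 \leq B} \vec\sigma^T Z^T(0) (\theta - \theta_0) = B \| Z(0) \vec\sigma \|_2$, which is just Cauchy--Schwarz. A quick numerical sanity check, with a random matrix standing in for $Z(0)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, B = 20, 8, 3.0
Z = rng.normal(size=(N, m))                  # stand-in for Z(0)
sigma = rng.choice([-1.0, 1.0], size=m)      # one Rademacher draw

target = B * np.linalg.norm(Z @ sigma)       # claimed value of the supremum

# The maximizer v* = B (Z sigma) / ||Z sigma|| attains the claimed value ...
v_star = B * (Z @ sigma) / np.linalg.norm(Z @ sigma)
assert np.allclose(sigma @ Z.T @ v_star, target)

# ... and no other direction in the ball exceeds it (Cauchy-Schwarz)
for _ in range(1000):
    v = rng.normal(size=N)
    v = B * v / np.linalg.norm(v)
    assert sigma @ Z.T @ v <= target + 1e-9
```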
Note that
\begin{equation}
\| Z(0) \|_F^2 =
\frac{1}{n} \sum_{i=1}^n \sum_{k=1}^m [w_i^T(0) x_k \geq 0].
\end{equation}
It is an average over $i$ of i.i.d. random variables, each bounded in $[0, m]$, which allows for Hoeffding's inequality:
\begin{equation}
\mathcal{P}(\| Z(0) \|_F^2 - \frac{m}{2} \geq \epsilon) \leq
e^{-2n \epsilon^2 / m^2}.
\end{equation}
This gives w.p. $\geq 1-\delta$ over initialization,
\begin{equation}
\| Z(0) \|_F^2 \leq
\frac{m}{2} + \sqrt{\frac{m^2}{2n} \log\left(\frac{1}{\delta}\right)}.
\end{equation}
Finally, we get that w.p. $\geq 1-\delta$ over initialization,
\begin{equation}
\Rad{\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} \leq
\frac{B}{\sqrt{m}} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{2n} \log\left(\frac{1}{\delta}\right)}}.
\end{equation}
Consider the zero-one risk $r(y,z) = [y z < 0]$; we have $R(f) = \mathbb{E}\,_{x,y \sim \mathcal{D}} r(y,f(x))$ and $\hat R(f) = \mathbb{E}\,_{x,y \in S_m} r(y,f(x))$, respectively.
From the generalization theory, we know that for any $B$ and for any initialization $w_{1:n}(0), a_{1:n}$, w.p. $\geq 1-\tilde\delta$ over the training dataset, $\forall f \in \mathcal{F}_B^{w_{1:n}(0), a_{1:n}}$,
\begin{equation}
R(f) \leq
\hat R_m(f) + \mathbb{E}\,_{(\vec x, \vec y)} \Rad{\mathcal{F}_B^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} + \sqrt{\frac{1}{2m} \log \frac{1}{\tilde\delta}}
\quad
\text{w.p. $\geq 1 - \tilde\delta$ over $(\vec x, \vec y)$.}
\end{equation}
We want to take $B = (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)$ but it depends on the dataset $(\vec x, \vec y)$.
Take a sequence $\{B_j\}_{j=1}^\infty$ increasing monotonically to infinity and a sequence $\{\tilde\delta_j\}_{j=1}^\infty$ of values in $(0,1)$ that sum to $\tilde\delta$.
This allows us to apply a union bound: w.p. $\geq 1-\tilde\delta$ over the training dataset, for any initialization $w_{1:n}(0), a_{1:n}$, $\forall j \in \mathbb{N}$, $\forall f \in \mathcal{F}_{B_j}^{w_{1:n}(0), a_{1:n}}$,
\begin{equation}
R(f) \leq
\hat R_m(f) + \mathbb{E}\,_{(\vec x, \vec y)} \Rad{\mathcal{F}_{B_j}^{w_{1:n}(0), a_{1:n}}}{(\vec x, \vec y)} + \sqrt{\frac{1}{2m} \log \frac{1}{\tilde\delta_j}}.
\end{equation}
We are free to choose minimal $j$ such that $B_j \geq (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)$; denote it by $\hat j$.
Let for definiteness $B_j = j$.
Then $B_{\hat j} \leq 1 + (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)$.
Putting all together, we have w.p. $\geq 1-\tilde\delta$ over the training dataset, w.p. $\geq 1-\delta$ over initialization,
\begin{multline}
R(f(\theta_\infty)) \leq
\hat R_m(f(\theta_\infty)) +
\\+
\frac{1 + (\vec y - \vec u_0)^T H_0^{-1} (\vec y - \vec u_0)}{\sqrt{m}} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{2n} \log\left(\frac{1}{\delta}\right)}} + \sqrt{\frac{1}{2m} \log \frac{1}{\tilde\delta_{\hat j}}}.
\label{eq:generalization_bound_for_fixed_acts_model}
\end{multline}
Recall that the bound above was obtained under the assumption of constant NTK.
In order to relax this assumption, one has to show that, possibly for large enough width, $H_t^{-1}$ stays close to $H_0^{-1}$.
Note that when proving global GD convergence we had to prove that $H_t$ stays close to $H_0$, which is different.
The required closeness result is proven in \cite{arora2019fine}; it leads to the following theorem:
\begin{theorem}[\cite{arora2019fine}]
Under the same setting as \cref{thm:convergence_2layer}, $\exists p, C, C_0 > 0$ such that $\forall \delta \in (0,1)$ taking
\begin{equation}
n >
\max\left(
C \frac{m^7}{\lambda_0^4 \delta^p}, \;
C_0 \frac{m^2}{\lambda_0^2} \log\left(\frac{2m}{\delta}\right)
\right)
\end{equation}
guarantees w.p. $\geq 1-\delta$ over the training dataset of size $m$ and w.p. $\geq 1-\delta$ over initialization,
\begin{multline}
R(f(\theta_\infty)) \leq
\hat R_m(f(\theta_\infty)) +
\\+
\frac{1 + (\vec y - \vec u_0)^T \left(H^{\infty}\right)^{-1} (\vec y - \vec u_0)}{\sqrt{m}} \sqrt{\frac{1}{2} + \sqrt{\frac{1}{2n} \log\left(\frac{1}{\delta}\right)}} + \sqrt{\frac{1}{2m} \log \frac{1}{\delta}}.
\end{multline}
\label{thm:generalization_2layer}
\end{theorem}
\section{Standard parameterization and kernel evolution}
\label{sec:standard_param}
\begin{figure}
\includegraphics[width=0.9\textwidth]{images/kernel_velocity.pdf}
\caption{Kernel velocity and test error barrier during training; the figure is borrowed from \cite{fort2020deep}.}
\label{fig:kernel_velocity}
\end{figure}
As was noted in \cref{sec:convergence}, NTK diverges under standard parameterization.
Recall the example of a two-layered net:
\begin{equation}
f(x; a_{1:n}, w_{1:n})
= \sum_{i=1}^n a_i \phi(w_i x),
\quad
a_{1:n} \sim \mathcal{N}(0, n^{-1} I),
\quad
w_{1:n} \sim \mathcal{N}(0, I);
\end{equation}
\begin{equation}
\hat\Theta_t(x,x')
= \sum_{i=1}^n \left(\phi(w_i(t) x) \phi(w_i(t) x') + a_i^2(t) \phi'(w_i(t) x) \phi'(w_i(t) x') x x'\right).
\end{equation}
At $t=0$, since the $w_i$ are independent and of order $O(1)$, the sum diverges proportionally to $n$.
Since under square loss $\dot f_t(x) = \hat\Theta_t(x,\vec x) (\vec y - f_t(\vec x))$, the model prediction at any point $x$ receives an $O(n)$ increment at the very beginning of training.
In other words, model predictions diverge with width, making the model useless for regression.
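This divergence is easy to observe numerically. The sketch below evaluates the empirical NTK formula above for a two-layered ReLU net at a pair of scalar inputs (chosen positive so that both terms are non-trivial) and checks that the kernel grows roughly linearly with $n$ while the normalized kernel stays $O(1)$:

```python
import numpy as np

def empirical_ntk(x, xp, a, w):
    """Empirical NTK of f(x) = sum_i a_i relu(w_i x) at scalar inputs x, xp."""
    relu = lambda z: np.maximum(z, 0.0)
    drelu = lambda z: (z > 0).astype(float)
    return np.sum(relu(w * x) * relu(w * xp)
                  + a**2 * drelu(w * x) * drelu(w * xp) * x * xp)

rng = np.random.default_rng(0)
x, xp = 0.7, 1.3
ns = [100, 1000, 10000]
kernels = []
for n in ns:
    a = rng.normal(scale=n**-0.5, size=n)   # standard parameterization: a ~ N(0, 1/n)
    w = rng.normal(size=n)                  # w ~ N(0, 1)
    kernels.append(empirical_ntk(x, xp, a, w))

# The a-gradient term sums n terms of order O(1): the NTK grows ~ linearly in n ...
assert kernels[2] / kernels[0] > 50
# ... while the normalized kernel hat-Theta / n stays O(1)
normalized = [k / n for k, n in zip(kernels, ns)]
assert max(normalized) / min(normalized) < 2
```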
However, if the goal is classification, magnitude of predictions does not matter; what matters is their signs for binary classification, or indices of the largest logits when classes are multiple.
Therefore, in this case an infinite-width limit under standard parameterization may still make sense despite the divergent NTK; see \cite{golikov2020dynamically}.
In order to deal with the divergence, consider a normalized empirical NTK $\tilde\Theta_t(x,x') = \hat\Theta_t(x,x') / n$; its infinite-width limit at initialization is $\mathbb{E}\,_{w \sim \mathcal{N}(0,1)} \phi(w x) \phi(w x')$; we shall refer to it as the normalized NTK and denote it by $\tilde\Theta(x,x')$.
In contrast to NTK under NTK parameterization, normalized NTK under standard parameterization evolves with time \cite{golikov2020dynamically}:
\begin{multline}
\frac{d\tilde\Theta_t(x,x')}{dt}
= \frac{1}{n} \sum_{i=1}^n \left(\phi(w_i(t) x) \phi'(w_i(t) x') x' + \phi'(w_i(t) x) \phi(w_i(t) x') x\right) \frac{dw_i(t)}{dt}
+\\+ \frac{1}{n} \sum_{i=1}^n a_i^2(t) x x' \left(\phi'(w_i(t) x) \phi''(w_i(t) x') x' + \phi''(w_i(t) x) \phi'(w_i(t) x') x\right) \frac{dw_i(t)}{dt}
+\\+ \frac{1}{n} \sum_{i=1}^n 2 a_i(t) \phi'(w_i(t) x) \phi'(w_i(t) x') x x' \frac{da_i(t)}{dt}.
\end{multline}
Recall the gradient flow dynamics under standard parameterization:
\begin{equation}
\frac{da_k(t)}{dt}
= \sum_{j=1}^m \phi(w_k(t) x_j) (y_j - f_t(x_j)),
\quad
\frac{dw_k(t)}{dt}
= \sum_{j=1}^m a_k(t) \phi'(w_k(t) x_j) x_j (y_j - f_t(x_j)).
\end{equation}
At $t=0$, we have $\dot a_k = O(1)$, while $\dot w_k = O(n^{-1/2})$.
Since $a_k(0) = O(n^{-1/2})$ and $w_k(0) = O(1)$, this means that for any $t > 0$ independent of $n$, $a_k(t) = O(1)$, $\dot a_k(t) = O(1)$, $w_k(t) = O(1)$, and $\dot w_k(t) = O(1)$.
A naive estimate of the sums then gives $\frac{d\tilde\Theta_t(x,x')}{dt} = O(1) + O(1) + O(1) = O(1)$ for any $t > 0$ independent of $n$.
Therefore the normalized kernel keeps evolving with time even in the limit of infinite width.
This may be one reason for the superior performance of neural networks over conventional kernel methods, including NTK regression.
A kernel measures similarity between points in a feature space.
While for the NTK this feature space is fixed, a neural net varies its corresponding kernel feature space, hopefully making it better suited for the task at hand; moreover, under standard parameterization, this effect does not vanish at large width.
The way an empirical NTK varies with time can be measured by the kernel velocity, defined as the kernel distance between the kernels at two consecutive optimization steps.
The kernel distance is in turn defined as one minus the cosine similarity between the Gram matrices $H$ and $H'$ of the corresponding kernels:
\begin{equation}
\rho(H, H') = 1 - \frac{\tr(H H^{\prime,T})}{\sqrt{\tr(H H^T) \tr(H' H^{\prime,T})}}.
\end{equation}
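The kernel distance above is straightforward to implement; below is a minimal sketch with random PSD Gram matrices. Note that $\rho$ is invariant to rescaling a kernel, so it measures change of ``shape'' rather than of magnitude:

```python
import numpy as np

def kernel_distance(H, Hp):
    """One minus cosine similarity between two kernel Gram matrices."""
    num = np.trace(H @ Hp.T)
    den = np.sqrt(np.trace(H @ H.T) * np.trace(Hp @ Hp.T))
    return 1.0 - num / den

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = A @ A.T                          # a PSD Gram matrix

assert np.isclose(kernel_distance(H, H), 0.0)        # identical kernels
assert np.isclose(kernel_distance(H, 2.0 * H), 0.0)  # invariant to rescaling
B = rng.normal(size=(6, 6))
Hp = B @ B.T
# For PSD matrices the trace inner product is nonnegative, so rho lies in [0, 1]
assert 0.0 <= kernel_distance(H, Hp) <= 1.0
```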
After measuring kernel velocity for a realistic net under standard parameterization, \cite{fort2020deep} distinguished two phases of training: a phase of rapid kernel evolution, and a phase of almost constant NTK, see Figure~\ref{fig:kernel_velocity}.
The first phase is called \emph{chaotic}, while the second one is coined \emph{ordered}.
Curiously enough, these two phases can be distinguished not only by kernel velocity.
Suppose the network is trained up to time $T$, called the \emph{spawn epoch}.
Two independent copies of the same network are then trained further.
In other words, we train two networks which coincide up to time $T$ and may diverge afterwards due to the randomness of the training procedure.
We then measure the \emph{test error barrier} between these two networks, i.e. the height of the error ``hill'' on a straight segment between their corresponding weights.
A small error barrier would mean that training of the two networks ended up in the same valley of test error, which likely means that they are similar.
As one can see in Figure~\ref{fig:kernel_velocity}, the test error barrier drops dramatically as the spawn epoch grows.
Also, the two quantities under discussion, kernel velocity and error barrier, appear to be strongly correlated; see again Figure~\ref{fig:kernel_velocity}.
There are also other quantities that experience sharp transition on the border of the two phases: kernel distance between child networks as a function of spawn epoch, ReLU activation Hamming distance, and Hamming distance between responses on the test set; see \cite{fort2020deep} for details.
\section{Beyond NTK}
\label{sec:beyond}
While NTK kernel regression has a natural interpretation of training an infinitely wide neural network under certain parameterization with gradient flow (see \cref{sec:convergence}), NTK is not the only possible kernel that can be constructed using a neural net.
\subsection{NNGP kernel}
One of the other notable ``neural kernels'' is the NNGP kernel \cite{lee2018deep}, defined as $K(x,x') = \mathbb{E}\,_\theta f(x; \theta) f(x'; \theta)$, where $f(\cdot; \theta)$ is a parametric model with weights $\theta$ and scalar output.
Suppose $f$ is a neural network with an output layer of the form $f(x) = v^T h(x)$, where $h(x) \in \mathbb{R}^n$ is its last-layer representation and $v \sim \mathcal{N}(0, I_n / n)$ is independent of $h$.
Then $K(x,x') = \frac{1}{n} \mathbb{E}\, h^T(x) h(x')$.
As we have seen in \cref{sec:limit} on the example of fully-connected and convolutional nets, the last-layer representations tend to i.i.d. Gaussians as the width goes to infinity.
In other words, for all $i \in [n]$ the $h^i$ tend to identical and independent Gaussian processes with covariance $\mathbb{E}\, h^i(x) h^i(x') = \frac{1}{n} \mathbb{E}\, h^T(x) h(x')$, which is exactly $K(x,x')$.
This motivates the term ``NNGP'' --- \emph{Neural Network Gaussian Process}.
Note that we have already seen the object $\mathbb{E}\, h^i(x) h^i(x')$ in \cref{sec:limit}: when $h = h_l$, the $l$-th layer hidden representation of a fully-connected network, this object is the hidden layer covariance $q_l(x,x')$.
Therefore the NNGP kernel of this fully-connected network is nothing but $q_L(x,x')$.
This can be generalized to the whole class of architectures expressible by tensor programs: see the Master theorem of \cite{yang2019tensor_i} mentioned in \cref{sec:convergence}.
That is, any neuron of any hidden representation of a neural network expressible by a tensor program tends to a Gaussian process.
Learning a Gaussian process with zero mean and covariance $K(\cdot,\cdot)$ on a training dataset $(\vec x, \vec y)$ means computing its Bayesian posterior, which is again a Gaussian process with mean $\mu(\cdot \,|\, (\vec x, \vec y))$ and covariance $K(\cdot,\cdot \,|\, (\vec x, \vec y))$ given below:
\begin{equation}
\mu(x \,|\, (\vec x, \vec y)) = K(x,\vec x) K^{-1}(\vec x, \vec x) \vec y;
\end{equation}
\begin{equation}
K(x,x' \,|\, (\vec x, \vec y)) = K(x,x') - K(x,\vec x) K^{-1}(\vec x, \vec x) K(\vec x,x').
\end{equation}
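These posterior formulas take only a few lines to implement. The RBF kernel below is merely a stand-in for an NNGP kernel, used to check the two defining properties of the noiseless posterior:

```python
import numpy as np

def rbf(x, xp):
    """Squared-exponential kernel; any PSD kernel would do here."""
    return np.exp(-(x[:, None] - xp[None, :])**2 / 2)

def gp_posterior(x_test, x_train, y_train, kernel=rbf):
    K_tn = kernel(x_test, x_train)            # K(x, vec x)
    K_nn = kernel(x_train, x_train)           # K(vec x, vec x)
    alpha = np.linalg.solve(K_nn, y_train)
    mu = K_tn @ alpha                         # posterior mean
    cov = kernel(x_test, x_test) - K_tn @ np.linalg.solve(K_nn, K_tn.T)
    return mu, cov

x_train = np.array([-1.0, 0.0, 1.0])
y_train = np.sin(x_train)
mu, cov = gp_posterior(x_train, x_train, y_train)

assert np.allclose(mu, y_train)                    # a noiseless GP interpolates
assert np.allclose(np.diag(cov), 0.0, atol=1e-8)   # zero posterior variance at train points
```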
Interestingly, training the last layer of an infinitely wide network with NNGP $K(\cdot,\cdot)$ results in exactly the same Gaussian process.
When only the last layer is trained, the NNGP coincides with the NTK.
Indeed, an NTK-parameterized NN of width $n$ with readout weights $v$ can be expressed as $f(x) = \frac{1}{\sqrt{n}} v^T h(x)$ with $v \sim \mathcal{N}(0, I_n)$.
The empirical NTK is therefore given by $\hat\Theta_0(x,x') = \frac{1}{n} \nabla^T_v (v^T h(x)) \nabla_v (v^T h(x')) = \frac{1}{n} h^T(x) h(x')$, which converges to $\mathbb{E}\, h^i(x) h^i(x') = K(x,x')$ as $n \to \infty$; note that $h(\cdot)$ also depends on $n$.
Recall the model prediction dynamics under constant NTK which is $K$ in our case:
\begin{equation}
f_t(x) = f_0(x) - K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) (f_0(\vec x) - \vec y).
\end{equation}
Since $f_0(\cdot)$ is a Gaussian process as discussed before and $K(\vec x,\vec x)$ is deterministic, $f_t(\cdot)$ is a Gaussian process for any $t \geq 0$.
Its mean $\mu_t(\cdot)$ and covariance $K_t(\cdot,\cdot)$ are:
\begin{equation}
\mu_t^{NNGP}(x) = K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) \vec y;
\end{equation}
\begin{multline}
K_t^{NNGP}(x,x')
= K(x,x')
+\\+ K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K^{-1}(\vec x,\vec x) K(\vec x,x')
-\\- \left[K(x,\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K(\vec x,x') + K(x',\vec x) K^{-1}(\vec x,\vec x) \left(I - e^{-K(\vec x,\vec x) t}\right) K(\vec x,x)\right].
\end{multline}
It is easy to see that $\mu_t^{NNGP}(x) \to \mu(x \,|\, (\vec x, \vec y))$ and $K_t^{NNGP}(x,x') \to K(x,x' \,|\, (\vec x, \vec y))$ as $t \to \infty$ $\forall x,x'$.
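The limits $t \to \infty$ can also be verified numerically by plugging a concrete kernel into the finite-$t$ formulas above (again with an RBF stand-in for the NNGP kernel $K$; the matrix exponential $e^{-K(\vec x,\vec x) t}$ is computed by eigendecomposition):

```python
import numpy as np

def rbf(x, xp):
    return np.exp(-(x[:, None] - xp[None, :])**2 / 2)

xs = np.array([-1.0, 0.0, 1.0])           # train inputs
xt = np.array([0.5, 2.0])                 # test inputs
y = np.sin(xs)
Knn, Ktn, Ktt = rbf(xs, xs), rbf(xt, xs), rbf(xt, xt)
Kinv = np.linalg.inv(Knn)

lam, V = np.linalg.eigh(Knn)
t = 200.0
D = np.eye(len(xs)) - (V * np.exp(-lam * t)) @ V.T   # I - e^{-K(vec x, vec x) t}

# Finite-t mean and covariance from the formulas above
mu_t = Ktn @ Kinv @ D @ y
cross = Ktn @ Kinv @ D @ Ktn.T
K_t = Ktt + Ktn @ Kinv @ D @ Knn @ D @ Kinv @ Ktn.T - cross - cross.T

# Bayesian posterior mean and covariance
mu_post = Ktn @ Kinv @ y
K_post = Ktt - Ktn @ Kinv @ Ktn.T
assert np.allclose(mu_t, mu_post, atol=1e-8)
assert np.allclose(K_t, K_post, atol=1e-8)
```

At $t = 200$ the residual $e^{-\lambda_{\min} t}$ is already negligible, so the finite-$t$ quantities match the posterior up to floating-point error.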
If not only the last layer is trained, NNGP does not generally correspond to NTK.
The corresponding training dynamics is given by
\begin{equation}
f_t(x) = f_0(x) - \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) \left(I - e^{-\Theta(\vec x,\vec x) t}\right) (f_0(\vec x) - \vec y).
\end{equation}
While $f_t(\cdot)$ is again a Gaussian process for any $t \geq 0$, its mean and covariance are different.
In particular, as $t \to \infty$, they tend to
\begin{equation}
\mu_\infty^{NTK}(x) = \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) \vec y;
\end{equation}
\begin{multline}
K_\infty^{NTK}(x,x')
= K(x,x')
+ \Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) K(\vec x,\vec x) \Theta^{-1}(\vec x,\vec x) \Theta(\vec x,x')
-\\- \left[\Theta(x,\vec x) \Theta^{-1}(\vec x,\vec x) K(\vec x,x') + \Theta(x',\vec x) \Theta^{-1}(\vec x,\vec x) K(\vec x,x)\right].
\end{multline}
As was shown in \cite{lee2019wide}, there does not exist an initial covariance matrix (a ``prior'') such that this mean and covariance correspond to a Bayesian posterior given the training data.
The ``empirical'' counterpart of the NNGP kernel is $\hat K(x,x') = \frac{1}{n} h^T(x) h(x')$.
Compared to empirical NTKs, empirical NNGPs are easier to compute, as they do not require a backward pass.
The memory footprint is also lower for empirical NNGPs, as they do not require computing Jacobian matrices, which scale as $O(N)$, where $N$ is the number of weights.
This makes NNGPs more suitable for large models.
As an example, \cite{park2020towards} used performance of empirical NNGPs as a proxy measure for neural architecture search.
They argue that, first, empirical NTKs are too costly to compute, and second, they provide a worse learning signal for their task.
The NNGP kernel of a generic neural network can be computed in a recursive manner, as was demonstrated in \cref{sec:limit} on the example of fully-connected and convolutional nets: $q_{l+1}(x,x') = \mathbb{E}\,_{[z,z']^T \sim \mathcal{N}(0,\Sigma_l(x,x'))} \phi(z) \phi(z')$, where $\Sigma_l(x,x') = \begin{pmatrix} q_l(x,x) & q_l(x,x') \\ q_l(x',x) & q_l(x',x') \end{pmatrix}$; the Master theorem of \cite{yang2019tensor_i} gives similar formulas for a generic neural net.
In the above example, there is an operation that maps a kernel $q_l(x,x')$ to a subsequent kernel $q_{l+1}(x,x')$.
\cite{shankar2020neural} presents an algebra of operations on kernels.
While this algebra consists of operations of only three types, it is enough to express NNGP of a fully-connected or a convolutional network with any elementwise nonlinearities.
\subsection{Label-aware NTK}
One of the major problems of kernel methods is \emph{label agnosticism.}
Recall that a kernel evaluated at a pair of points is a scalar product of their mappings to some feature space: $K(x,x') = \langle \Phi(x), \Phi(x') \rangle$.
Therefore a kernel measures how similar the two points are, and a kernel method uses this information to derive responses on unseen data: $f(x) = K(x,\vec x) \vec\alpha$.
Intuitively, a kernel $K$ should result in a well-generalizing model if $K(x,x')$ is positive when $y=y'$ and negative otherwise.
Therefore the ``perfect'' kernel would be $K^*(x,x') = y y'$; the obvious problem is that it cannot be computed on unseen data.
A kernel that can be computed on unseen data cannot depend on labels.
Therefore, if the data admits several possible labelings, then for a pair of data points $(x,x')$ there could be a labeling with $y=y'$ and a labeling with $y\neq y'$.
At the same time, $K(x,x')$ stays the same in both cases; therefore, the corresponding kernel method cannot generalize well on both labelings.
As an example of several possible labelings of a single dataset, consider a dataset of pictures with two objects in each frame, and let the two objects belong to two disjoint sets of classes.
Then one labeling may consider only the objects from the first set of classes, while the other may consider the objects from the second set.
\cite{chen2020label} propose two ways of making a kernel \emph{label-aware.}
The first is mixing the kernel at hand with the perfect kernel $K^*(x,x') = y y'$: $K^{HR}(x,x') = (1-\lambda) K(x,x') + \lambda K^*(x,x')$ for $\lambda \in [0,1]$.
If the perfect kernel were available, the best choice would be to take $\lambda=1$.
Since it is not available, we have to approximate it somehow, which makes the optimal $\lambda$ less than one.
In order to approximate $K^*(x,x')$, we need a model that maps $(x,x')$ to $y y'$.
Since the training dataset for this model consists of $O(m^2)$ samples, and since the model itself has to be evaluated on $O(m)$ samples for each test point $x$, the model has to be relatively simple.
\cite{chen2020label} consider models of the form $Z(x,x') = \vec y^T M(x,x',\vec x) \vec y$, where $M \in \mathbb{R}^{m \times m}$.
One of the possible choices of $M$ is $M(x,x',\vec x)_{ij} = \psi(K(x,x'),K(x_i,x_j))$, where $\psi(z_1,z_2)$ measures similarity.
As one can see, this choice of $Z$ takes a linear combination of $y_i y_j$ with weights given by the similarities of $K(x,x')$ and $K(x_i,x_j)$.
Intuitively, this reads as ``$y y'$ and $y_i y_j$ are similar if $K(x,x')$ and $K(x_i,x_j)$ are close''.
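The mixing construction $K^{HR}$ itself is simple enough to sketch directly; the base Gram matrix below is random and purely illustrative:

```python
import numpy as np

def hr_kernel(K, y, lam):
    """K^{HR} = (1 - lam) K + lam K*, with K*(x_i, x_j) = y_i y_j,
    evaluated on points whose labels are known (e.g. the training set)."""
    return (1 - lam) * K + lam * np.outer(y, y)

rng = np.random.default_rng(0)
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
G = rng.normal(size=(6, 6))
K = G @ G.T / 6                          # some label-agnostic base Gram matrix

K_hr = hr_kernel(K, y, lam=0.5)
same = np.outer(y, y) > 0
assert np.allclose(K_hr[same], 0.5 * K[same] + 0.5)    # same-label pairs pulled up
assert np.allclose(K_hr[~same], 0.5 * K[~same] - 0.5)  # different-label pairs pushed down
```

On unseen data the $K^*$ term has to be replaced by an approximation such as the $Z$ model discussed above.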
While the above proposal can be applied to any kernel $K$, the second label-aware kernel of \cite{chen2020label} is a specific modification of NTK.
Let us recall the construction of $\Theta^{NTH}$ resulting from integrating the learning dynamics up to order $n^{-1}$, taking the limit $t \to \infty$, and taking the expectation (see \cref{sec:finite_width} and specifically Eq.~(\ref{eq:lantk_nth})):
\begin{multline}
\Theta^{NTH}(x_1,x_2)
= O_{2,0}^{(0)}(x_1,x_2) + n^{-1} \mathbb{E}\, O_{2,\infty}^{(1)}(x_1,x_2)
=\\= \Theta(x_1,x_2) + n^{-1} \mathbb{E}\,\left[O_{2,0}^{(1)}(x_1,x_2)\right] - n^{-1} \mathbb{E}\,\left[O_{3,0}^{(1)}(x_1, x_2, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right]
+\\+ n^{-1} \vec y^T \Theta^{-1}(\vec x,\vec x) \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \Theta^{-1}(\vec x,\vec x) \vec y
+\\+ n^{-1} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \Theta^{-1}(\vec x,\vec x) O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \Theta^{-1}(\vec x,\vec x) f_0^{(0)}(\vec x)\right]
-\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \vec y^T \vec v_k \vec v_k^T \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x)\right] \vec v_l \vec v_l^T \vec y
-\\- n^{-1} \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \mathbb{E}\,\left[f_0^{(0),T}(\vec x) \vec v_k \vec v_k^T O_{4,0}^{(1)}(x_1, x_2, \vec x, \vec x) \vec v_l \vec v_l^T f_0^{(0)}(\vec x)\right].
\end{multline}
Since $\hat\Theta_0(x_1,x_2) = O_{2,0}^{(0)}(x_1,x_2) + n^{-1} O_{2,0}^{(1)}(x_1,x_2) + O(n^{-2})$, we have $\Theta(x_1,x_2) + n^{-1} \mathbb{E}\,\left[O_{2,0}^{(1)}(x_1,x_2)\right] = \mathbb{E}\,\hat\Theta_0(x_1,x_2) + O(n^{-2})$ and $\Theta(x_1,x_2) = \mathbb{E}\,\hat\Theta_0(x_1,x_2) + O(n^{-1})$.
For the same reason, $\mathbb{E}\,\left[O_{4,0}(x_1, x_2, x_3, x_4)\right] = n^{-1} \mathbb{E}\,\left[O_{4,0}^{(1)}(x_1, x_2, x_3, x_4)\right] + O(n^{-2})$.
Suppose $f_0^{(0)}(\vec x) = 0$.
Given these approximations and up to terms of order $O(n^{-2})$,
\begin{multline}
\Theta^{NTH}(x_1,x_2)
\approx \mathbb{E}\,\hat\Theta_0(x_1,x_2) + \vec y^T \left(\mathbb{E}\,\hat\Theta_0(\vec x,\vec x)\right)^{-1} \mathbb{E}\,\left[O_{4,0}(x_1, x_2, \vec x, \vec x)\right] \left(\mathbb{E}\,\hat\Theta_0(\vec x,\vec x)\right)^{-1} \vec y
-\\- \sum_{k,l=1}^m \frac{1}{\lambda_k (\lambda_k+\lambda_l)} \vec y^T \vec v_k \vec v_k^T \mathbb{E}\,\left[O_{4,0}(x_1, x_2, \vec x, \vec x)\right] \vec v_l \vec v_l^T \vec y.
\end{multline}
As one can see, $\Theta^{NTH}(x_1,x_2)$ depends on train labels $\vec y$.
Roughly speaking, this kernel corresponds to the NTK of a network trained until convergence ($t\to\infty$); obviously, this kernel should depend on training data.
As an interesting observation, $\Theta^{NTH}(x_1,x_2) = \mathbb{E}\,\hat\Theta_0(x_1,x_2) + \vec y^T M(x_1,x_2,\vec x) \vec y$ for a certain matrix $M$ --- recall that $K^{HR}(x_1,x_2)$ considered previously has a similar form.
Note that computing the Gram matrix $\Theta^{NTH}(\vec x, \vec x)$ requires computing the Gram ``matrix'' of the expected 4-th order empirical kernel $\mathbb{E}\,\left[O_{4,0}(\vec x, \vec x, \vec x, \vec x)\right]$.
Instantiating this tensor requires $O(m^4)$ time and $O(m^4)$ memory, which is only possible for very small datasets.
\section{Limits of applicability}
\label{sec:experiments}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{images/experiments/myrtle_avg.png}
\caption{
\label{fig:myrtle}
Myrtle architecture.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{images/experiments/NTK_vs_ENTK_vs_NNGP_CIFAR2_Myrtle_avg_BCE_upto1e4samples.png}
\includegraphics[width=0.49\textwidth]{images/experiments/myrtle_avg_flops_CIFAR2.png}
\caption{
\label{fig:myrtle_bce_cifar2_time_flops}
Myrtle network trained on subsets of CIFAR2 of different sizes.
Different lines refer to different regimes of training (e.g. NTK, NNGP, etc.) and different stages of training (e.g. constructing the kernel, integrating the dynamics, etc.).
We use BCE loss, and integrate the dynamics numerically for $T=10^4$ steps.
We measure training time and number of FLOPS.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{images/experiments/NTK_vs_ENTK_vs_NNGP_CIFAR10_Myrtle_avg_BCE.png}
\includegraphics[width=0.49\textwidth]{images/experiments/myrtle_avg_BCE_acc_CIFAR10.png}
\caption{
\label{fig:myrtle_bce_cifar10_time_accuracy}
Myrtle network trained on subsets of CIFAR10 of different sizes.
Different lines refer to different regimes of training (e.g. NTK, NNGP, etc.) and different stages of training (e.g. constructing the kernel, integrating the dynamics, etc.).
We use cross-entropy loss, and integrate the dynamics numerically for $T=10^4$ steps.
We measure training time and accuracy.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{images/experiments/NTK_vs_ENTK_vs_NNGP_CIFAR10_Resnet50_NLL.png}
\includegraphics[width=0.49\textwidth]{images/experiments/Resnet50_NLL_acc_CIFAR10.png}
\caption{
\label{fig:resnet50_nll_cifar10_time_accuracy}
Resnet50 trained on subsets of CIFAR10 of different sizes.
Different lines refer to different regimes of training (e.g. NTK, NNGP, etc.) and different stages of training (e.g. constructing the kernel, integrating the dynamics, etc.).
We use cross-entropy loss, and integrate the dynamics numerically for $T=10^4$ steps.
We measure training time and accuracy.
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{images/experiments/ntk_myrtle_avg_time_vs_input_size_STL2_upto96x96.png}
\includegraphics[width=0.49\textwidth]{images/experiments/ntk_myrtle_avg_acc_vs_input_size_STL2_upto96x96.png}
\caption{
\label{fig:myrtle_bce_stl2_time_accuracy}
Myrtle network trained on a subset of STL2 of size 500 with images of different resolutions.
Different lines refer to different regimes of training (e.g. NTK, NNGP, etc.) and different stages of training (e.g. constructing the kernel, integrating the dynamics, etc.).
We use BCE loss, and integrate the dynamics numerically for $T=10^4$ steps.
We measure training time and accuracy.
}
\end{figure}
In this section, we present a small experimental study of the scope of applicability of NTK regression in realistic scenarios.
In particular, we would like to investigate, first, the maximal size $m$ of a training dataset of images of a given size that we can afford with limited computational resources.
Second, the maximal image resolution $d$ we can afford for a fixed dataset size.
We restrict ourselves to these two questions since, for practical purposes, the dependence of NTK regression complexity on these two parameters is the most worrying: it is $O(m^2 d^4)$ for constructing the Gram matrix, $O(m^3)$ for integrating the dynamics analytically, and $O(m^2 T)$ for integrating the dynamics numerically for $T$ steps; see \cref{sec:computations}.
We use NeuralTangents \cite{novak2019neural} and perform all our experiments on a single GTX 1080Ti GPU with 12 GiB of memory.
We consider a Myrtle network\footnote{\url{https://myrtle.ai/how-to-train-your-resnet-4-architecture/}} with 64 channels in all convolutional layers, see \cref{fig:myrtle}.
We pick this architecture because it is lightweight and uses only those layers for which NTK can be computed analytically.
For the first experiment, we consider two classes of CIFAR10 and refer to this dataset as CIFAR2.
We pick a subset of 1000 samples of the original test set of CIFAR2 and vary the size of the training subset.
We optimize binary cross-entropy (BCE) and integrate the dynamics numerically for $T = 10^4$.
We compute the Gram matrix of a kernel using batch size 4.
On \cref{fig:myrtle_bce_cifar2_time_flops}, we plot training time and the number of floating-point operations (FLOPS) for different stages (i.e. Gram matrix computation, integrating the dynamics, inference on a test set) and for different regimes of training (analytical NTK, analytical NNGP, and empirical NTK) versus size of training dataset.
As one can see, already for relatively small datasets ($m=10^4$), the most time-demanding stage is the construction of the Gram matrix $\Theta(\vec x, \vec x)$ (solid line), not the integration of the dynamics (dotted line), even though the latter also takes time quadratic in the dataset size.
Also, the time to compute the NNGP kernel is almost the same as that for the NTK, since both are computed analytically; see \cref{sec:limit}.
We could not obtain the point $m=10^4$ for the empirical NTK (ENTK) for numerical reasons.
If we extrapolate the solid line to $m=10^6$, the size of ImageNet, noting the quadratic growth, we get $5 \times 10^9$ seconds, which is around 160 years of computation.
While our time measurements are device-dependent, we also measure the number of FLOPS, which, while device-independent, grows in the same way as time and is also quite large.
This experiment demonstrates that the naive approach to integrating the NTK dynamics indeed falls short on datasets of realistic sizes, thus calling for major optimizations.
As mentioned in \cref{sec:computations}, a promising approach could be the one of \cite{meanti2020kernel}.
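For reference, the ``160 years'' figure is simple arithmetic on top of the quadratic growth of the Gram-matrix cost (the $5 \times 10^9$ seconds value is the one quoted above):

```python
# Quadratic extrapolation behind the "160 years" estimate.
m_measured, m_imagenet = 1e4, 1e6
scale = (m_imagenet / m_measured) ** 2     # Gram-matrix cost grows as O(m^2)
assert scale == 1e4                        # extrapolation multiplies the cost by 10^4

extrapolated_seconds = 5e9                 # value quoted in the text
years = extrapolated_seconds / (3600 * 24 * 365)
assert 150 < years < 170                   # "around 160 years"
```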
On \cref{fig:myrtle_bce_cifar10_time_accuracy}, we present the same experiment but with all 10 classes of CIFAR10.
We observe the same quadratic time growth issue for all three regimes of training (analytical NTK, analytical NNGP, and empirical NTK).
We also report accuracy for comparison with previous works on small-data training with kernel methods (e.g. \cite{arora2019harnessing}).
In addition to experiments with a small network, we experimented with a variant of Resnet50 \cite{he2016deep}.
We modify this architecture by removing batch normalizations and substituting max poolings with average poolings, so as to make analytical computations possible.
Results are shown in \cref{fig:resnet50_nll_cifar10_time_accuracy}.
Doing the same extrapolation to ImageNet size, we get $6.25 \times 10^{11}$ seconds, which is around $20000$ years.
Lastly, we consider two classes of STL10 and, similarly to CIFAR2, refer to this dataset as STL2.
We pick a subset of 100 samples of the original test set of STL2 and 500 samples of its original train set.
While STL10 has fewer labeled examples than CIFAR10, it has larger images: $96 \times 96$ for STL10 versus $32 \times 32$ for CIFAR10.
We vary the size of the input image and measure training time and accuracy, similarly to the first experiment.
As before, we optimize binary cross-entropy (BCE) and integrate the dynamics numerically for $T = 10^4$.
However, we use batch size 1 for computing the Gram matrix, since larger batch sizes do not fit in GPU memory for large image sizes.
Results are shown in \cref{fig:myrtle_bce_stl2_time_accuracy}.
As before, the most time-demanding part is kernel Gram matrix computation (blue line): it grows as $O(d^4)$, where $d$ is image resolution; see \cref{sec:limit}.
If we extrapolate this line to $d=224$, the resolution on which traditional ImageNet classification models operate, we will get around 150 days of computations.
This experiment therefore demonstrates that not only dataset size, but also image resolution complexity can also be a serious bottleneck in applying NTK approach in practice.
Also, while certain optimizations are available for dataset-size complexity (e.g. \cite{meanti2020kernel}), we are not aware of any optimizations aimed at reducing image-resolution complexity.
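The $O(d^4)$ extrapolations quoted above amount to scaling a single measured timing by the fourth power of the resolution ratio. A minimal sketch of this back-of-the-envelope calculation follows; the 100-second baseline is hypothetical, and the exponent is the $O(d^4)$ scaling from \cref{sec:limit}:

```python
def extrapolate_time(t_measured, d_measured, d_target, power=4):
    """Scale a measured wall-clock time by (d_target / d_measured) ** power."""
    return t_measured * (d_target / d_measured) ** power

# Hypothetical 100 s Gram-matrix computation at 32x32, extrapolated to 224x224:
t224 = extrapolate_time(100.0, 32, 224)   # 100 * 7**4 = 240100 s
print(t224 / 86400)                       # about 2.8 days
```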
\section{Conclusions}
The use of NTK theory is twofold: first, it relates neural networks to kernel methods, a far more well-developed class of models.
Second, it gives a machine learning practitioner a kernel that shares some properties with neural nets.
Recall what we have concerning the first application.
We have a theorem (\cref{thm:master_theorem}) that implies that a neural tangent kernel of a wide class of architectures is deterministic and does not evolve with time in the limit of infinite width, and provides a recurrent formula for the limit.
Therefore a network that is wide enough should share some properties with the corresponding kernel method, such as convergence and generalization; see \cref{sec:app_theory}.
However, the resulting width bounds are far from realistic.
Second, the limit kernel does not evolve with time only under a certain non-standard parameterization rarely used in practice.
In contrast, standard parameterization results in an evolving (normalized) kernel; see \cref{sec:standard_param}.
The fact that the kernel evolves may be the key to understanding the superior performance of neural nets over kernel methods.
Unfortunately, we have little understanding of this aspect at the moment.
Lastly, \cref{thm:master_theorem} requires Gaussian weight initialization, which is rarely used in practice.
Generalizing it to non-Gaussian weight distributions remains future work.
Let us discuss the second application.
At the moment of writing, computing the exact limit kernel is possible only for convolutional and fully-connected networks with average poolings and nonlinearities in a certain class; see \cref{sec:computations}.
For other architectures, one has to rely on empirical NTK which is a biased estimate of the limit one.
Computing the empirical NTK requires instantiating output-by-weight Jacobians at every pair of training points, which is especially memory-demanding for realistically large architectures.
Storing the Gram matrix of the kernel also requires $O(m^2)$ memory where $m$ is dataset size.
Even if the kernel is successfully computed on every pair of training points, integrating the training dynamics naively requires inverting the Gram matrix, which costs $O(m^3)$ time, while for datasets of size $10^6$ one can barely afford more than $O(m)$ time and memory.
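The naive pipeline can be sketched in a few lines; here \texttt{rbf\_kernel} is a simple stand-in rather than an NTK, but the $O(m^2)$ Gram-matrix memory and the $O(m^3)$ linear solve are the same for any kernel:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=1.0):
    """Stand-in kernel; in practice this would be the limit or empirical NTK."""
    d2 = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_regression(x_train, y_train, x_test, kernel_fn, ridge=1e-6):
    # O(m^2) memory for the Gram matrix, O(m^3) time for the solve --
    # the bottlenecks discussed above.
    K = kernel_fn(x_train, x_train)                    # (m, m) Gram matrix
    alpha = np.linalg.solve(K + ridge * np.eye(len(K)), y_train)
    return kernel_fn(x_test, x_train) @ alpha

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))
y = np.sin(x[:, 0])
preds = kernel_regression(x, y, x, rbf_kernel)
print(np.abs(preds - y).max())   # small: the ridge solve interpolates the train set
```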
We study applicability limits of this naive approach in \cref{sec:experiments}.
Still, certain optimizations are available; see \cref{sec:computations}.
Also concerning the second application, NTK is not the only kernel that can be constructed using a neural network; certain other kernels may have computational or performance gains compared to NTK, see \cref{sec:beyond}.
\section{Introduction}
We study the detection of almost monochromatic gravitational waves
emitted by known single pulsars in data collected by a detector.
Several such searches were already performed with data collected
by the LIGO and GEO600 detectors \cite{LSC04,LSC05,LSC07,LSC08,LSC10}.
We thus assume that the frequency of the wave (together with its
time derivatives, i.e.\ the spindown parameters)
and the position of the source in the sky are known.
The gravitational-wave signal we are looking for depends on at most
four (often called amplitude) parameters: overall amplitude, initial phase, polarization angle,
and inclination angle (of the pulsar's rotation axis with respect to the line of sight).
In Sec.\ 2 we introduce three statistics by means of which one can test
whether the data contains a gravitational-wave signal: the ${\mathcal H}$-statistic
for completely known signals, the ${\mathcal G}$-statistic for signals which depend
on only two unknown parameters (overall amplitude and initial phase),
and the ${\mathcal F}$-statistic suitable for signals depending on all four amplitude parameters.
Both statistics ${\mathcal G}$ and ${\mathcal F}$ are derived from the maximum likelihood (ML) principle,
and the statistic ${\mathcal G}$ is independently obtained using a Bayesian approach
and the composite hypothesis testing.
In Sec.\ 3 we study, by means of the Fisher matrix, the theoretical accuracy
of the ML estimators of the signal's parameters
and in Sec.\ 4 we present the results of the Monte Carlo simulations we performed
to test the accuracy of the ML estimators.
\section{Using the ${\mathcal F}$ and ${\mathcal G}$ statistics to perform targeted searches for gravitational waves from pulsars}
\label{sec:stat}
In the case when the signal $s(t)$ we are looking for is completely known,
the test that maximizes the probability of detection
subject to a certain false alarm probability is the likelihood-ratio test,
i.e.\ we accept the hypothesis that the signal is present in the detector's data $x$ if
\begin{equation}
\label{lr1}
\Lambda(x) := \frac{p_1(x)}{p_0(x)} \geq \lambda_0,
\end{equation}
where the likelihood function $\Lambda(x)$ is the ratio of probability densities
$p_1(x)$ and $p_0(x)$ of the data $x$ when the signal is respectively present or absent.
The parameter $\lambda_0$ is a threshold calculated from a chosen
false alarm probability. Assuming stationary and additive Gaussian noise with
one-sided spectral density constant (and equal to $S_0$) over the bandwidth of the signal,
the $\log$ likelihood function is approximately given by \cite{JKS98}
\begin{equation}
\label{eq:LFpuln}
\ln\Lambda[x(t)] \cong 2 \frac{T_{\text{o}}}{S_0}
\left( \av{x(t)s(t)} - \frac{1}{2} \av{s(t)^2} \right),
\end{equation}
where $T_{\text{o}}$ is the observation time and the time-averaging operator $\av{\cdot}$ is defined as
\begin{equation}
\label{eq:tav}
\av{g} := \frac{1}{T_{\text{o}}}\int^{T_{\text{o}}}_{0}g(t)\,\mathrm{d} t.
\end{equation}
Equation \eqref{eq:LFpuln} implies that the likelihood-ratio test \eqref{lr1}
can be replaced by the test
\begin{equation}
{\mathcal H}[x(t)] := \av{x(t)s(t)} \geq {\mathcal H}_0,
\end{equation}
where the optimal statistic ${\mathcal H}$ in this case is the {\em matched filter}
and ${\mathcal H}_0$ is the threshold for detection.
Suppose now that the signal $s(t;\boldsymbol{\theta})$ depends on a set of unknown parameters $\boldsymbol{\theta}$.
A suitable test can then be obtained using a Bayesian approach
and {\em composite hypothesis} testing. The composite hypothesis in this case
is the hypothesis that when a signal is present it can assume any values of the parameters.
Assuming that the cost functions are independent of the values of the parameters,
we obtain the following Bayesian decision rule to choose the hypothesis
that the signal is present (see e.g.\ \cite{W71}, Chapter 5.9):
\begin{equation}
\label{eq:BayesF}
\frac{1}{p_0(x)}\,{\int_\Theta}p_1(x;\boldsymbol{\theta})\pi(\boldsymbol{\theta})\,\mathrm{d}\boldsymbol{\theta} \geq \gamma_0,
\end{equation}
where $\Theta$ is the parameter space on which $\boldsymbol{\theta}$ is defined
and $\pi(\boldsymbol{\theta})$ is the joint a priori distribution of $\boldsymbol{\theta}$.
The expression on the left-hand side of Eq.\ (\ref{eq:BayesF}) is known as
the {\em Bayes factor}: it is the ratio of the signal model
{\em Bayesian evidence} (the likelihood marginalized over
the signal parameters) to the noise model {\em Bayesian evidence}
(the noise model has no free parameters).
As a template for the response of an interferometric detector
to the gravitational-wave signal from a rotating neutron star
we use the model derived in \cite{JKS98}.
This template depends on the following set of parameters:
$\boldsymbol{\theta}=(h_0,\phi_0,\psi,\iota,\mathbf{f},\delta,\alpha)$,
where $h_0$ is the dimensionless amplitude, $\phi_0$ is an initial phase,
$\psi$ is the polarization angle, $\iota$ is the inclination angle,
angles $\delta$ (declination) and $\alpha$ (right ascension) are equatorial coordinates
determining the position of the source in the sky,
and the `frequency vector' $\mathbf{f}:=(f_0,f_1,f_2,\dots)$
collects the frequency $f_0$ and the spindown parameters of the signal.
In the case of pulsars known from radio observations
we in general know the subset $\boldsymbol{\xi}=(\mathbf{f},\delta,\alpha)$ of the parameters $\boldsymbol{\theta}$.
Sometimes, as in the case of the Vela pulsar,
we also know from X-ray observations the values of the angles $\psi$ and $\iota$ (see \cite{NR04,NR08}
for observational results).
We then have only two unknown parameters: $h_0$ and $\phi_0$.
In this case the response $s(t)$ of the detector to the gravitational wave
can be written in the following form \cite{JKS98}:
\begin{equation}
\label{eq:sig2}
s(t) = h_0 \cos\phi_0 \, h_c(t) + h_0 \sin\phi_0 \, h_s(t),
\end{equation}
where $h_c$ and $ h_s$ are known functions of time,
\begin{equation}
\begin{array}{l}
h_c(t) := A_+ \big(\cos2\psi\,h_1(t)+\sin2\psi\,h_2(t)\big)
- A_\times \big(\sin2\psi\,h_3(t)-\cos2\psi\,h_4(t)\big),
\\[1ex]
h_s(t) := -A_\times \big(\sin2\psi\,h_1(t)-\cos2\psi\,h_2(t)\big)
- A_+ \big(\cos2\psi\,h_3(t)+\sin2\psi\,h_4(t)\big).
\end{array}
\end{equation}
Here the constants $A_+$ and $A_\times$ are
\begin{equation}
\label{aa}
A_{+} := \frac{1}{2} (1 + \cos^2\iota),\quad
A_{\times} := \cos\iota,
\end{equation}
and the four functions of time $h_k$ $(k=1,\ldots,4)$ depend only on parameters $\boldsymbol{\xi}$
and are defined as follows
\begin{equation}
\label{eq:amps}
\begin{array}{ll}
h_1(t;\boldsymbol{\xi}) := a(t;\delta,\alpha) \cos \phi(t;\mathbf{f},\delta,\alpha),
&
h_2(t;\boldsymbol{\xi}) := b(t;\delta,\alpha) \cos \phi(t;\mathbf{f},\delta,\alpha),
\\[1ex]
h_3(t;\boldsymbol{\xi}) := a(t;\delta,\alpha) \sin \phi(t;\mathbf{f},\delta,\alpha),
&
h_4(t;\boldsymbol{\xi}) := b(t;\delta,\alpha) \sin \phi(t;\mathbf{f},\delta,\alpha),
\end{array}
\end{equation}
where $a$, $b$ are the amplitude modulation functions and $\phi$ is
the phase modulation function. Their explicit forms are given in \cite{JKS98}.
Let us calculate the likelihood function for the signal (\ref{eq:sig2}).
Observing that the amplitude modulation functions $a$ and $b$
vary much more slowly than the phase $\phi$ of the signal
and assuming that the observation time is much longer
than the period of the signal we approximately have \cite{JKS98}
\begin{equation}
\label{app1}
\begin{array}{l}
\av{h_1\,h_3} \cong \av{h_1\,h_4} \cong \av{h_2\,h_3} \cong \av{h_2\,h_4} \cong 0,
\\[1ex]
\av{h_1\,h_1} \cong \av{h_3\,h_3} \cong \frac{1}{2} A, \quad
\av{h_2\,h_2} \cong \av{h_4\,h_4} \cong \frac{1}{2} B, \quad
\av{h_1\,h_2} \cong \av{h_3\,h_4} \cong \frac{1}{2} C,
\end{array}
\end{equation}
where we have introduced the time averages
\begin{equation}
\label{ABCdef}
A := \av{a^2}, \quad
B := \av{b^2}, \quad
C := \av{ab}.
\end{equation}
As a consequence of the above approximations we have the following
approximate expressions for the time averaged products of the functions
$h_c$ and $h_s$,
\begin{equation}
\label{eq:simpl}
\av{h_c^2} \cong \av{h_s^2} \cong N,
\quad \av{h_c h_s} \cong 0,
\end{equation}
where $N$ is a constant defined as
\begin{align}
N &:= \frac{1}{2} \Big( A( A_+^2\cos^2 2\psi + A_{\times}^2\sin^2 2\psi)
+ B( A_+^2\sin^2 2\psi + A_{\times}^2\cos^2 2\psi)
\nonumber\\ &\qquad
+ C( A_+^2 - A_{\times}^2) \sin 4\psi \Big).
\end{align}
With the above approximations the likelihood function $\Lambda$ for the signal \eqref{eq:sig2}
can be written as
\begin{equation}
\label{eq:LFpul}
\ln\Lambda[x(t);\phi_0,h_0] \cong 2 \frac{T_{\text{o}}}{S_0}
\left( h_0 \cos\phi_0 \av{x(t)h_c(t)} + h_0 \sin\phi_0 \av{x(t)h_s(t)} - \frac{1}{2} h_0^2 N \right).
\end{equation}
Let us also note that the optimal signal-to-noise ratio (SNR) $\rho$ for the signal \eqref{eq:sig2}
(see \cite{JKS98} for definition) can be approximately computed as
\begin{equation}
\label{snr}
\rho \cong \sqrt{\frac{2T_{\text{o}}}{S_0}\av{s(t)^2}} \cong \sqrt{\frac{2T_{\text{o}} N}{S_0}} h_0.
\end{equation}
It is natural to assume that the prior probability density of the phase parameter $\phi_0$
is uniform over the interval $[0,2\pi)$
and that it is independent of the distribution of the amplitude parameter $h_0$, i.e.
\begin{equation}
\pi(\phi_0) = \frac{1}{2\pi}, \quad \phi_0 \in [0,2\pi).
\end{equation}
With the above assumptions the integral $\int_0^{2\pi} p_1(x;\phi_0,h_0)\pi(\phi_0)\mathrm{d}\phi_0$
can be explicitly calculated (see \cite{W71}, Chapter 7.2)
and we obtain the following decision criterion
\begin{equation}
\label{eq:IGstat}
\exp\bigg(-\frac{h_0^2 N T_{\text{o}}}{S_0}\bigg) \,
I_0\bigg(2 h_0 \sqrt{\frac{T_{\text{o}} N}{S_0} {\mathcal G}[x(t)]}\bigg) \geq \gamma_0,
\end{equation}
where $I_0$ is the modified Bessel function of zero order
and the statistic ${\mathcal G}$ is defined as
\begin{equation}
\label{eq:Gstat}
{\mathcal G}[x(t)] := \frac{T_{\text{o}}}{N S_0} \Big( \av{x(t)h_c(t)}^2 + \av{x(t)h_s(t)}^2 \Big).
\end{equation}
The function on the left-hand side of Eq.\ (\ref{eq:IGstat}) is a monotonically increasing
function of ${\mathcal G}$ for any fixed value of $h_0$,
so the criterion \eqref{eq:IGstat} is equivalent to comparing ${\mathcal G}$ with a threshold. Thus the test
\begin{equation}
{\mathcal G}[x(t)] \geq {\mathcal G}_0,
\end{equation}
provides a uniformly most powerful test with respect to the amplitude $h_0$.
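As a sanity check: for noise-free data $x(t)=s(t)$ the approximations \eqref{eq:simpl} give $\av{x h_c} \cong h_0\cos\phi_0\,N$ and $\av{x h_s} \cong h_0\sin\phi_0\,N$, so that ${\mathcal G} \cong T_{\text{o}} N h_0^2/S_0 = \rho^2/2$. The short sketch below verifies this numerically, using simple sinusoidal stand-ins for $h_c$ and $h_s$ in place of the actual modulated templates:

```python
import numpy as np

T_o, n, S0 = 1.0, 8000, 2.0
t = np.linspace(0.0, T_o, n, endpoint=False)
h_c = np.cos(2 * np.pi * 100 * t)   # stand-ins with <h_c^2> = <h_s^2> = N, <h_c h_s> = 0
h_s = np.sin(2 * np.pi * 100 * t)
N = np.mean(h_c ** 2)               # = 0.5 here

h0, phi0 = 3.0, 0.8
x = h0 * np.cos(phi0) * h_c + h0 * np.sin(phi0) * h_s   # noise-free signal

G = (T_o / (N * S0)) * (np.mean(x * h_c) ** 2 + np.mean(x * h_s) ** 2)
rho2_half = T_o * N * h0 ** 2 / S0  # rho^2 / 2, from the SNR formula
print(G, rho2_half)                 # the two agree
```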
When we have no a priori information about the parameters a standard method
is the {\em maximum likelihood} (ML) detection which consists
of maximizing the likelihood function $\Lambda[x(t);\boldsymbol{\theta}]$ with respect to
the parameters of the signal. If the maximum of $\Lambda$ exceeds a
certain threshold we say that the signal is detected.
The values of the parameters that maximize $\Lambda$
are said to be the ML estimators of the parameters of the signal.
For the case of signal (\ref{eq:sig2}) it is convenient to
introduce new parameters
\begin{equation}
A_c := h_0 \cos\phi_0, \quad A_s := h_0 \sin\phi_0.
\end{equation}
Then one can find the ML estimators of the amplitudes
$A_c$ and $A_s$ in a closed analytic form,
\begin{equation}
\hat{A}_{c} \cong \frac{\av{x h_c}}{N},
\quad
\hat{A}_{s} \cong \frac{\av{x h_s}}{N}.
\end{equation}
It is easy to find that the estimators $\hat{A}_{c}$ and $\hat{A}_{s}$ are
unbiased and also that they are of minimum variance,
i.e.\ their variances attain the lower Cram\'er-Rao bound determined by the Fisher matrix.
The variances of both estimators are the same and equal to $1/N$.
Substituting the estimators $\hat{A}_{c}$ and $\hat{A}_{s}$ for the
parameters $A_c$ and $A_s$ in the likelihood function one
obtains a reduced likelihood function. This reduced likelihood function
is precisely equal to the ${\mathcal G}$-statistic given by Eq.\ (\ref{eq:Gstat}),
i.e.\ ${\mathcal G}[x(t)]=\ln\Lambda[x(t);\hat{A}_{c},\hat{A}_{s}]$.
The formula for the ${\mathcal G}$-statistic obtained without the simplifying assumptions
\eqref{app1} is given in Appendix A.
When all four parameters $(h_0,\phi_0,\psi,\iota)$ are unknown one can
introduce new parameters $A_k$ ($k = 1,\ldots,4$) that are functions
of $(h_0,\phi_0,\psi,\iota)$ such that the response $s(t)$ takes the form
\begin{equation}
\label{eq:sig}
s(t) = A_1 \, h_1(t) + A_2 \, h_2(t) + A_3 \, h_3(t) + A_4 \, h_4(t),
\end{equation}
where the functions $h_k$ are given by Eqs.\ (\ref{eq:amps})
and the parameters $A_k$ read
\begin{equation}
\label{eq:ampone}
\begin{array}{l}
A_1 := h_{0+}\cos2\psi\cos\phi_0 - h_{0\times}\sin2\psi\sin\phi_0,
\\[1ex]
A_2 := h_{0+}\sin2\psi\cos\phi_0 + h_{0\times}\cos2\psi\sin\phi_0,
\\[1ex]
A_3 := -h_{0+}\cos2\psi\sin\phi_0 - h_{0\times}\sin2\psi\cos\phi_0,
\\[1ex]
A_4 := -h_{0+}\sin2\psi\sin\phi_0 + h_{0\times}\cos2\psi\cos\phi_0;
\end{array}
\end{equation}
here $h_{0+}:=h_0\,A_+$ and $ h_{0\times}:=h_0\,A_\times$ [see Eq.\ \eqref{aa}].
The ML estimators of $A_k$ can again be obtained in an explicit analytic form
and the reduced likelihood function is the ${\mathcal F}$-statistic given by (see \cite{JKS98} for details)
\begin{align}
\label{eq:Fstat}
{\mathcal F}[x(t)] := \ln\Lambda[x(t);\hat{A}_1,\ldots,\hat{A}_4] \cong &\,\frac{2T_{\text{o}}}{S_0 D}
\Big( B\, (\av{x h_1}^2 + \av{x h_3}^2) + A\, (\av{x h_2}^2 + \av{x h_4}^2)
\nonumber\\&\qquad\quad
- 2C\, (\av{x h_1} \av{x h_2} + \av{x h_3} \av{x h_4}) \Big),
\end{align}
where $D:=AB-C^2$. The test
\begin{equation}
{\mathcal F}[x(t)] \geq {\mathcal F}_0
\end{equation}
is not a uniformly most powerful test with respect to unknown parameters $(h_0,\phi_0,\psi,\iota)$.
It was recently shown that uniform a priori distributions of $(h_0,\phi_0,\psi,\cos\iota)$
lead to a statistic that can be more powerful than ${\mathcal F}$ \cite{PK09}.
In Fig.\ \ref{fig:roc} we have plotted the receiver operating characteristics (ROC)
for the three statistics ${\mathcal H}$, ${\mathcal G}$, and ${\mathcal F}$ considered in the present section.
\begin{figure}
\begin{center}
\scalebox{0.5}{\includegraphics{fig1.eps}}
\caption{\label{fig:roc}
Receiver operating characteristic (ROC) for the statistics ${\mathcal H}$, ${\mathcal G}$, and ${\mathcal F}$
for the optimal signal-to-noise ratio $\rho=2$.}
\end{center}
\end{figure}
\section{The Fisher matrix}
Using the Fisher matrix we can assess the accuracy of the parameter estimators.
We have two theorems that can loosely be stated as follows.
\newtheorem{theorem}{Theorem}
\begin{theorem}[Cram\'er-Rao bound]
The diagonal elements of the inverse of the Fisher matrix
are lower bounds on the variances of unbiased estimators of the parameters.
\end{theorem}
\begin{theorem}
Asymptotically (i.e.\ when the SNR tends to infinity)
the ML estimators are unbiased and their covariance matrix is equal to the inverse of the Fisher matrix.
\end{theorem}
For an almost monochromatic signal $s=s(t;\boldsymbol{\theta})$,
which depends on the parameters $\boldsymbol{\theta}=(\theta_1,\ldots,\theta_m)$,
the elements of the Fisher matrix $\Gamma$ can be approximately calculated
from the formula
\begin{equation}
\label{mFisher}
\Gamma_{{\theta_i}{\theta_j}} \cong \frac{2T_{\text{o}}}{S_0}
\av{\frac{\partial s}{\partial\theta_i}\frac{\partial s}{\partial\theta_j}},
\quad i,j=1,\ldots,m.
\end{equation}
In the case when only the parameters $h_0$ and $\phi_0$ are unknown (${\mathcal G}$-statistic search),
the Fisher matrix can be computed easily from Eqs. \eqref{eq:sig2} and \eqref{mFisher}.
It is diagonal and the standard deviations of the parameters
defined as the square roots of the diagonal elements of the inverse of the Fisher matrix read:
\begin{equation}
\frac{\sigma_{h_0}}{h_0} = \frac{1}{\rho}, \quad
\sigma_{\phi_0} = \frac{1}{\rho},
\end{equation}
where $\rho$ is the optimal SNR [given in Eq.\ \eqref{snr}].
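These two values follow directly from Eqs.\ \eqref{eq:sig2}, \eqref{eq:simpl}, and \eqref{mFisher}. The partial derivatives of the signal read
\begin{equation*}
\frac{\partial s}{\partial h_0} = \cos\phi_0\,h_c(t) + \sin\phi_0\,h_s(t),
\qquad
\frac{\partial s}{\partial \phi_0} = h_0\big(\cos\phi_0\,h_s(t) - \sin\phi_0\,h_c(t)\big),
\end{equation*}
so that $\Gamma_{h_0\phi_0}\cong0$, while
\begin{equation*}
\Gamma_{h_0 h_0} \cong \frac{2T_{\text{o}}N}{S_0}, \qquad
\Gamma_{\phi_0\phi_0} \cong \frac{2T_{\text{o}}N h_0^2}{S_0};
\end{equation*}
inverting this diagonal matrix and using Eq.\ \eqref{snr} reproduces the standard deviations quoted above.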
\begin{figure}
\begin{center}
\scalebox{0.75}{\includegraphics{fig2.eps}}
\caption{\label{fig:pulsar2}
Dependence of standard deviations (calculated from the Fisher matrix)
of the parameters $h_0$, $\phi_0$, $\psi$, and $\cos\iota$
on the cosine of the inclination angle $\iota$.
We have taken $\phi_0=4.03$ and $\psi=-0.22$
(values of other parameters needed to perform the computation of the Fisher matrix
are listed in the text of Sec.\ 3).}
\end{center}
\end{figure}
When all the four amplitude parameters $h_0$, $\phi_0$, $\psi$, and $\iota$ are unknown
(${\mathcal F}$-statistic search), the Fisher matrix can be computed by means of formulas given in Appendix B.
In this case it is not diagonal, indicating that the amplitude parameters are correlated.
The quantities $\sigma_{h_0}/h_0$, $\sigma_{\phi_0}$, $\sigma_{\psi}$, $\sigma_{\iota}$
(where the standard deviations again are defined as square roots of diagonal elements of the inverse of the Fisher matrix)
have rather complicated analytical form but they possess a number of simple properties.
They are inversely proportional to the overall amplitude $h_0$,
independent of the initial phase $\phi_0$, and depend very weakly on $\psi$;
there is, however, a strong dependence on $\iota$.
In Fig.\ \ref{fig:pulsar2} we have shown the dependence of the standard deviations
on the cosine of the inclination angle $\iota$.
The time averages from Eqs.\ \eqref{ABCdef} (needed to compute the Fisher matrix)
were computed here for the location of the Virgo detector \cite{Virgo08}
and for a randomly chosen position of the source in the sky.
We have also taken $h_0=6.0948\times10^{-2}$, $T_{\text{o}}=441610$~s, and $S_0=2$~Hz$^{-1}$,
which corresponds to the SNR $\rho\cong28.64\sqrt{2N}$ [see Eq.\ \eqref{snr}].
The same time averages and the values of $T_{\text{o}}$, $h_0$, $S_0$ were used
in the Monte Carlo simulations described in Sec.\ 4.
We see in Fig.\ \ref{fig:pulsar2} that the standard deviations become singular when $\cos\iota=\pm1$.
This singularity originates from the degeneracy of the amplitude parameters for $\cos\iota=\pm1$.
In this case the amplitude parameters from Eqs.\ \eqref{eq:ampone} become
\begin{equation}
\label{eq:ampone1}
A_{1} = h_0 \cos(2\psi\pm\phi_0), \quad
A_{2} = h_0 \sin(2\psi\pm\phi_0), \quad
A_{3} = \mp A_{2}, \quad
A_{4} = \pm A_{1}.
\end{equation}
Thus only two of them are independent. Therefore the determinant of
the 4-dimensional Fisher matrix is equal to zero at $\cos\iota=\pm1$
and consequently its inverse does not exist in this case.
\section{Monte Carlo simulations}
We have performed two Monte Carlo simulations in order to test the performance of the ML estimators.
We have compared the simulated standard deviations of the estimators with the ones obtained from the Fisher matrix.
In particular we have investigated the behavior of the ML estimators near the Fisher matrix singularity at $\cos\iota=\pm1$.
In each simulation run we have generated the signal using Eq.\ \eqref{eq:sig},
we have added it to a white Gaussian noise,
and we have estimated the amplitude parameters using the ${\mathcal F}$-statistic.
Each simulation run was repeated 1000 times for different realizations of the noise.
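A minimal numerical sketch of one such simulation is given below. The slowly varying functions $a(t)$, $b(t)$ and the phase $\phi(t)$ are simple hypothetical stand-ins for the actual modulation functions of \cite{JKS98}; since the signal \eqref{eq:sig} is linear in the amplitudes $A_1,\ldots,A_4$, their ML estimation in white Gaussian noise reduces to ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
t = np.linspace(0.0, 1.0, n, endpoint=False)

# Hypothetical slowly varying stand-ins for a(t), b(t) and a fast phase phi(t).
a = 1.0 + 0.6 * np.cos(2 * np.pi * t)
b = 1.0 - 0.6 * np.cos(2 * np.pi * t)
phi = 2 * np.pi * 200.0 * t
H = np.column_stack([a * np.cos(phi), b * np.cos(phi),
                     a * np.sin(phi), b * np.sin(phi)])  # templates h_1..h_4

A_true = np.array([0.7, -0.4, 0.2, 0.5])   # amplitudes A_1..A_4
sigma = 0.5                                # white-noise standard deviation

estimates = []
for _ in range(200):                       # repeated noise realizations
    x = H @ A_true + sigma * rng.normal(size=n)
    # ML estimation of the linear amplitudes reduces to least squares.
    A_hat, *_ = np.linalg.lstsq(H, x, rcond=None)
    estimates.append(A_hat)

bias = np.mean(estimates, axis=0) - A_true
print(np.abs(bias).max())                  # small: the estimator is unbiased here
```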
\begin{figure}
\begin{center}
\scalebox{0.75}{\includegraphics{fig3.eps}}
\caption{\label{fig:pulsar2snr}
Mean and normalized standard deviation of the ML estimator of the amplitude $h_0$ as a function of the SNR.
The top two panels are the means of the estimator
for the two values of $\cos\iota$. The continuous line is the true value and the
circles are results of the simulation for 1000 realizations of the noise.
The bottom two panels are the standard deviations. The continuous line is
obtained from the Fisher matrix whereas the circles are results of the simulation.
We have taken $\phi_0=4.03$ and $\psi=-0.22$.}
\end{center}
\end{figure}
In the first simulation we have investigated the bias and the standard deviation
of the ML estimator of the amplitude parameter $h_0$ as functions of the SNR
for the two cases: $\cos\iota=0.1$ and $\cos\iota=-0.93$.
The results are presented in Fig.\ \ref{fig:pulsar2snr}.
For the first case the ML estimator is nearly unbiased
and its standard deviation is close to the one predicted by the Fisher matrix even for low SNRs.
In the second case the simulation shows considerable bias of the estimator
and its standard deviation lower than the one predicted by the Fisher matrix.
However, Theorem 2 is still satisfied in the second case:
for $\cos\iota$ close to $\pm 1$ we have to go to SNRs
$\sim 1000$ in order for the ML estimator to be unbiased
and for its standard deviation to be close to the one given by the Fisher matrix.
\begin{figure}
\begin{center}
\scalebox{0.75}{\includegraphics{fig4.eps}}
\caption{\label{fig:pulsar2cin}
Means and normalized standard deviations of the ML estimators of $h_0$ and $\cos\iota$
as functions of $\cos\iota$.
The top two panels are the means of the estimators.
The continuous lines are the true values
and the circles are results of the simulation for 1000 realizations of the noise.
The bottom two panels are the standard deviations. The continuous lines are
obtained from the Fisher matrix whereas the circles are results of the simulation.
We have assumed $\phi_0=4.03$, $\psi=-0.22$, and $\rho=15.6$.
Plots for $0\le\cos\iota\le+1$ (not shown here)
are mirror images of the plots for $-1\le\cos\iota\le0$.}
\end{center}
\end{figure}
In the second simulation, illustrated in Fig.\ 4,
we have investigated the bias and the standard deviation
of the ML estimators of the amplitude parameters $h_0$ and $\cos\iota$ as functions of $\cos\iota$
for the fixed SNR $\rho=15.6$.
We find that for $|\cos\iota|<0.5$ the biases are less than 10\%
and the Fisher matrix overestimates the standard deviations also by less than 10\%.
We see that over the whole range of $\cos\iota$ the standard deviations of the parameters are roughly constant,
whereas the biases increase as $|\cos\iota|$ increases.
At $\cos\iota=\pm1$ the amplitude $h_0$ is overestimated by almost a factor of 2.
One reason why Theorem 1 does not apply here is that it holds for unbiased
estimators. Also a more precise statement of Theorem 1
(see e.g.\ Theorem 8 in \cite{JKbook}) requires that the Fisher matrix $\Gamma$
is positive definite for all values of parameters. This last assumption is clearly
not satisfied here as $\det\Gamma=0$ for $\cos\iota=\pm1$.
\section{Introduction}
Strong coupling between spin, lattice, and orbital degrees of freedom in
functional transition metal oxide compounds results in rich behavior such as
the tendency for cooperative Jahn-Teller\cite{dunitz_1957} and spin-Peierls
distortions.\cite{beni_1972} Such coupling between the different degrees of
freedom enables multifunctionality as observed in multiferroics
$R$MnO$_3$ ($R$ = late rare earth).\cite{Fabreges_2009,lee_2008}
In these systems, manipulation of one property can influence another,
exemplified by the electric field control of magnetic polarization in
HoMnO$_3$\cite{Lottermoser_2004}. Seeking out such strong links between
distinct degrees of freedom represents a powerful strategy in the search for
new multifunctional systems, and affords unique opportunities for a deeper
understanding of these interactions.\cite{Glazkov_2009, Ueda_2005}
One such frequently studied interaction is magnetostructural coupling in
geometrically frustrated antiferromagnets
\cite{tchernyshyov_2004,rudolf_2007} where a structural distortion lifts the
large ground state degeneracy allowing long range magnetic
order.\cite{kant_2010,lee_2007} However, frustration-driven magnetostructural
coupling is not expected in the ferrimagnetic spinels with the formula
$A$Cr$_2$O$_4 $ where $A$ is a magnetic cation. This is a consequence of the
magnetic $A$-O-Cr$^{3+}$ interaction usually being collectively
stronger than the frustrated interactions between the Cr$^{3+}$.
Furthermore, Jahn-Teller activity of the $A$ site cation can cause tetragonal
distortions that should further alleviate frustration in the Cr$^{3+}$
sublattice. Nonetheless, previous structural, thermodynamic, and magnetic
studies of NiCr$_2$O$_4$\cite{ishibashi_2007,Klemme_2002} report a coupled magnetic
and structural transition, and infrared spectroscopy measurements suggest
concurrent magnetic and structural transitions in
CuCr$_2$O$_4$.\cite{bordacs_2009}
Structural transitions at the magnetic ordering temperatures have been
observed in numerous transition metal oxide antiferromagnets such as
Cr$_2$O$_3$,\cite{Greenwald1951} MnO,\cite{smart_1951,Roth1958}
FeO,\cite{smart_1951,Roth1958} CoO,\cite{smart_1951,Roth1958} and
NiO.\cite{smart_1951,Roth1958} Cubic to rhombohedral transformations
are found in MnO, FeO, and NiO, while CoO
undergoes a cubic to tetragonal transition. The rhombohedral lattice
constants of Cr$_2$O$_3$ change at its antiferromagnetic ordering
temperature. Two mechanisms of magnetostructural coupling have been suggested
in these compounds based on neutron and X-ray diffraction measurements.
Li has suggested that magnetostructural coupling in NiO, MnO, CoO, and FeO
is driven by magnetostriction,\cite{Li1955} where anisotropy arises from the
selection of a magnetic ordering axis and drives the magnetocrystalline
deformation. Smart and Greenwald alternatively proposed that distortions in
the above binary oxides are caused by exchange striction, which is the
displacement of interacting ions to strengthen exchange coupling thus
modifying the underlying lattice.\cite{smart1950} The relations between crystal
distortions and exchange interactions are challenging to identify. For
example, it is difficult to find a unique solution to certain magnetic
scattering patterns.\cite{Li1955,Roth1958}
Here, we determine the low temperature structures of NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ across
the transitions associated with magnetic ordering using high resolution
synchrotron powder X-ray diffraction. These compounds are fully-ordered and
stoichiometric normal cubic spinels with the space group $Fd\bar 3m$ at
temperatures above 320\,K\cite{crottaz_1997,ohgushi_2008} for NiCr$_2$O$_4$\/
and 853\,K\cite{Ye_1994,ohgushi_2008} for CuCr$_2$O$_4$.\cite{chukalkin_1985}
Cr$^{3+}$ 3$d^3$ preferentially populates the octahedral sites
because of the strong crystal field stabilization of the half occupied
nondegenerate $t_{2g}$ states and empty $e_g$ states, while Ni$^{2+}$ 3$d^8$
and Cu$^{2+}$ 3$d^9$ are found on the tetrahedral sites.\cite{dunitz_1957} The
tetrahedral crystal field around Ni$^{2+}$ 3$d^8$ and Cu$^{2+}$ 3$d^9$ in the
cubic phase results in fully occupied low energy $e$ levels and triply
degenerate $t_2$ levels rendering this structure potentially
unstable.\cite{kanamori_1960,gerloch_1981} A cooperative lattice distortion --
from cubic to tetragonal symmetry -- lifts the orbital degeneracy in NiCr$_2$O$_4$\/ at
320\,K\cite{Klemme_2002,crottaz_1997,tovar_2006} and in CuCr$_2$O$_4$\/ at
853\,K.\cite{Ye_1994,tovar_2006} There had been a debate in the literature
concerning the ambient temperature structure of CuCr$_2$O$_4$. Using neutron
diffraction data, Prince postulated that the noncentrosymmetric space group
$I\bar42d$ was a better structural fit than
$I4_1/amd$.\cite{prince_1957} More recently, Dollase and O'Neill showed no
statistically significant advantage to using the $I\bar42d$ structural
model over the centrosymmetric structure $I4_1/amd$.\cite{dollase_1997} In the tetragonal structure of CuCr$_2$O$_4$\/, CuO$_4$ tetrahedra are compressed toward a square-planar configuration, thus lifting the orbital degeneracy.\cite{dunitz_1957} The
tetragonal structure of NiCr$_2$O$_4$\/ is known to crystallize in the space group
$I4_1/amd$ with elongated NiO$_4$ tetrahedra.\cite{Ueno_1999} Previous work has also shown further distortion of
tetragonal NiCr$_2$O$_4$\/ to an orthorhombic phase, which occurs at the magnetic
transition temperature $T_N$ = 60\,K and has been observed in thermodynamic,
X-ray diffraction, and magnetic studies.\cite{Klemme_2002,ishibashi_2007}
Noncollinear ferrimagnetism that is not described by the N\'{e}el model is
observed in both NiCr$_2$O$_4$\, and CuCr$_2$O$_4$. Tomiyasu and Kagomiya describe a magnetic
structure comprising of a ferrimagnetic component and an antiferromagnetic
component in NiCr$_2$O$_4$.\cite{tomiyasu_2004,Klemme_2002} These authors used neutron
scattering to show that the antiferromagnetic component orders
at $T$ = 31\,K while the ferrimagnetic component orders at $T$ = 74\,K. A
saturation magnetization moment of 0.3\,$\mu_B$ \textit{per} formula unit has
been reported for NiCr$_2$O$_4$.\cite{tomiyasu_2004} Neutron scattering studies
on CuCr$_2$O$_4$\/ suggest a magnetic structure comprising two canted Cr$^{3+}$
sublattices with a net moment, and a Cu$^{2+}$ sublattice that couples
antiferromagnetically to the net moment of the Cr$^{3+}$ sublattices below
$T_N$ = 135\,K.\cite{prince_1957,tovar_2006,ohgushi_2008} The magnetic
moment of CuCr$_2$O$_4$\/ in this structure is 0.5\,$\mu_B$ \textit{per} formula unit.
Given this prior evidence of concurrent magnetic and structural transitions in
NiCr$_2$O$_4$\cite{ishibashi_2007,Klemme_2002} and CuCr$_2$O$_4$\cite{bordacs_2009}, there
is a clear need for further exploration of these compounds. In this
study, we employ high-resolution temperature-dependent powder X-ray
diffraction, magnetic susceptibility, and heat capacity measurements to
investigate magnetostructural coupling in NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$. This is the first
observation by X-ray powder diffraction of the tetragonal to orthorhombic
structural distortion of CuCr$_2$O$_4$\/ at the ferrimagnetic ordering temperature. We
also reveal for the first time X-ray diffraction evidence of further symmetry
lowering in orthorhombic NiCr$_2$O$_4$\/ at the second magnetic transition
$T$ = 30\,K. These results affirm that strong magnetostructural coupling
can also occur in spinels that are not expected to be frustrated.
This new understanding of coupling between spin and lattice degrees of freedom
in NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ suggests that these compounds are promising
magnetodielectrics, and provides considerable motivation for further
investigation of magnetostructural coupling in related spinel compounds.
\section{Methods}
NiCr$_2$O$_4$\/ was prepared by dissolving stoichiometric amounts of
Ni(NO$_3$)$_2\cdot$6H$_2$O and Cr(NO$_3$)$_3\cdot$9H$_2$O in deionized water.
The nitrate solution was heated to evaporate the solvent, leaving a
precipitate that was ground and calcined at 1000$^{\circ}$\,C for 24 hours. A
dark green powder of NiCr$_2$O$_4$\/ was obtained. Black shiny single crystals of
CuCr$_2$O$_4$\/ were prepared following the flux method described by
Ye \textit{et al.}\cite{Ye_1994}
K$_2$Cr$_2$O$_7$ was used as a reactive flux that partly decomposes to
Cr$_2$O$_3$ at $\sim$ 700\,K. Ye \textit{et al.} propose that the
reduction of Cr$^{6+}$ into Cr$^{3+}$ plays an important role in stabilizing
the oxidation state of Cu$^{2+}$ during the synthesis of CuCr$_2$O$_4$.
\cite{Ye_1994} K$_2$Cr$_2$O$_7$ acts both as a flux and a source of
Cr$_2$O$_3$. A 20\,g mixture of 17.8\% mass CuO (Sigma Aldrich 98$\%$) and
82.2\% mass K$_2$Cr$_2$O$_7$ (Fisher 99$\%$) with 0.2\,g Bi$_2$O$_3$ as a
second flux was prepared. The mixture was ground using an agate mortar and
pestle, placed in a covered platinum crucible, heated to 800$^{\circ}$\,C with
a ramp of 100$^{\circ}$C/h, held for 24\,h, and slowly cooled to ambient
temperature at 15$^{\circ}$C/h. After the reaction, black crystals of
CuCr$_2$O$_4$ were collected and washed in boiling water. It should be noted
that more conventional solid state preparation yielded samples with
significantly broader linewidths in the synchrotron X-ray diffraction profile,
potentially obscuring the ability to fully characterize the low-temperature
structure.
High-resolution ($\Delta d/d \approx$ $10^{-4}$) synchrotron X-ray powder
diffraction data were recorded on beamline 11-BM at the Advanced Photon
Source (APS), Argonne National Laboratory.\cite{wang_2008} Scans were
collected using a $2\theta$ step size of 0.001$^{\circ}$ with
$\lambda$ = 0.413441\,\AA\/ for NiCr$_2$O$_4$\, and $\lambda$ = 0.41326\,\AA\/ for
CuCr$_2$O$_4$\/ in a closed-flow helium cryostat over the temperature range 7\,K to
300\,K. The sample was in direct contact with the helium
exchange gas during data collection, and was spun at 5\,Hz to improve powder
averaging. Structural models of NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ were fit to the diffraction
data using the Rietveld refinement method as implemented in the EXPGUI/GSAS
software program.\cite{toby_expgui_2001, larson_2000} Crystal structures were
visualized using the program VESTA.\cite{momma_vesta_2008} Both samples
reported here contained a minor secondary impurity phase that was also
quantitatively fit using the Rietveld method. The NiCr$_2$O$_4$\/ sample was
determined to contain 0.5\,wt.-\% Cr$_2$O$_3$, and the CuCr$_2$O$_4$\/ sample
1.1\,wt.-\% CuO.
Magnetic susceptibility measurements on powder samples were performed using a
Quantum Design MPMS 5XL superconducting quantum interference device (SQUID)
magnetometer. Heat capacity measurements were collected on pellets of 50$\%$
mass silver and 50$\%$ mass sample using a Quantum Design Physical Properties
Measurement System. The pellets were prepared by grinding equal amounts of
silver and sample in an agate mortar and pestle followed by pressing at
$\sim$ 330\,MPa. Apiezon N grease was used to enhance thermal coupling
between the sample and the stage. The heat capacity of the Apiezon N grease and
silver were collected separately and subtracted from the measured heat capacity.
\section{Results and Discussion}
\subsection{Magnetism}
\begin{figure}
\centering\includegraphics[width=9cm]{figs/mag_nco.jpg}\\
\caption{(Color online) Magnetic measurements of the spinel NiCr$_2$O$_4$. (a) Zero
field cooled and field cooled temperature
dependent magnetic susceptibility measured under a 1000\,Oe DC field show three
anomalies at 310\,K, 65\,K and 30\,K. NiCr$_2$O$_4$\/ displays little change in the
magnetism at 310\,K, and is seen to order ferrimagnetically at
65\,K, with an additional change in the magnetic structure at 30\,K.
(b) The isothermal field dependent magnetization measured above the magnetic
ordering temperature shows paramagnetic behavior. At 2\,K, the coercive field
and saturation magnetization are significantly larger than what is observed
at 45\,K.}
\label{fig:magnco}
\end{figure}
\begin{figure}
\centering\includegraphics[width=8cm]{figs/cwnco.jpg}\\
\caption{(Color online) Normalized inverse magnetic susceptibility of NiCr$_2$O$_4$\/
showing ideal Curie-Weiss paramagnetism above 310\,K. Weak compensated
interactions arise at 310\,K and persist to about 65\,K below which strong
uncompensated interactions dominate. The subtle magnetic transition at 30\,K
is shown in the inset.}
\label{fig:cwnco}
\end{figure}
Three magnetic transitions are observed in the temperature dependent magnetic
susceptibility of NiCr$_2$O$_4$\/ (Fig.\,\ref{fig:magnco}). A high temperature
transition occurs at 310\,K where cooperative Jahn-Teller distortions lift
the orbital degeneracy in NiCr$_2$O$_4$\/ and lower the structural symmetry from cubic
($Fd\bar{3}m$) to tetragonal ($I4_1/amd$) [Fig.\,\ref{fig:magnco} (a)].
Weak, compensated magnetic interactions occur at 310\,K, as illustrated by
the scaled inverse susceptibility of NiCr$_2$O$_4$\/ (Fig.\,\ref{fig:cwnco}).
The scaling is carried out by recasting the Curie-Weiss equation
using:\cite{melot_CW}
\begin{equation}
\frac{C}{\chi|\Theta_{CW}|} + \mbox{sgn}(\Theta_{CW}) = \frac{T}{|\Theta_{CW}|}
\end{equation}
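This scaled form follows directly from the Curie-Weiss law
$\chi = C/(T-\Theta_{CW})$: inverting and dividing through by $|\Theta_{CW}|$
gives
\[
\frac{C}{\chi|\Theta_{CW}|} = \frac{T-\Theta_{CW}}{|\Theta_{CW}|}
= \frac{T}{|\Theta_{CW}|} - \mbox{sgn}(\Theta_{CW}),
\]
so that ideal Curie-Weiss paramagnets with either sign of $\Theta_{CW}$
collapse onto a single line of unit slope. Deviations below this line signal
uncompensated (ferro- or ferrimagnetic-like) correlations, while deviations
above it signal compensated (antiferromagnetic-like) correlations.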
\noindent
The linear dependence of the magnetization on the applied field at 200\,K
[Fig.\,\ref{fig:magnco}(b)] suggests that NiCr$_2$O$_4$\/ is mainly paramagnetic down to
65\,K where there is a transition to a ferrimagnetic state
[Fig.\,\ref{fig:magnco}(a)].
The normalized inverse magnetic susceptibility trace
shows the development of strong uncompensated magnetic correlations at 65\,K
(Fig.\,\ref{fig:cwnco}). A small coercive field and saturation magnetization
is observed in the field dependent magnetization of NiCr$_2$O$_4$\/ at 45\,K
[Fig.\,\ref{fig:magnco}(b)] in agreement with the onset of ferrimagnetic order.
Tomiyasu and Kagomiya attribute the magnetic transition at 65\,K in NiCr$_2$O$_4$\/ to
the ordering of the longitudinal ferrimagnetic component of
NiCr$_2$O$_4$.\cite{tomiyasu_2004} At 30\,K, another anomaly is observed in both zero field cooled (ZFC)
and field cooled (FC) measurements of the temperature dependent magnetic susceptibility
[Fig.\,\ref{fig:magnco}(a)] as well as in the scaled inverse susceptibility
(Fig.\,\ref{fig:cwnco}) of NiCr$_2$O$_4$. Below 30\,K, an increase in the coercive
field and the saturation magnetization of NiCr$_2$O$_4$\/ is observed
[Fig.\,\ref{fig:magnco}(b)]. Previous neutron diffraction measurements of
NiCr$_2$O$_4$\/ attribute this anomaly to the ordering of the antiferromagnetic
component of NiCr$_2$O$_4$.\cite{tomiyasu_2004}
\begin{figure}
\centering\includegraphics[width=8cm]{figs/mag_cco.jpg}\\
\caption{(Color online) Magnetic measurements of the spinel CuCr$_2$O$_4$. (a)
Magnetic susceptibility as a function of temperature under a 1000\,Oe DC field
shows an increase in susceptibility at the magnetic ordering temperature
$\approx$130\,K in both zero field cooled and field cooled
measurements. This is a paramagnetic to ferrimagnetic
transition. (b) Isothermal field dependent magnetization measured above
(200\,K) and below (2\,K) the magnetic ordering temperature.}
\label{fig:magcco}
\end{figure}
The temperature dependent magnetic susceptibility of CuCr$_2$O$_4$\/ shows a rapid
increase at 130\,K where there is a paramagnetic to ferrimagnetic transition
[Fig.\,\ref{fig:magcco}(a)]. The ZFC susceptibility
exhibits a reduced low temperature saturation value when compared to the
FC susceptibility data illustrating domain behavior.
A linear dependence of the magnetization on applied field
occurs above the onset of magnetic order, while a magnetization trace with a
coercive field of 380\,Oe and a saturation magnetization of 0.725\,$\mu_B$ is
measured at 2\,K. The measured saturation magnetization of CuCr$_2$O$_4$\/ is in good
agreement with that of the triangular magnetic structure observed previously using
neutron powder diffraction.\cite{prince_1957}
The Curie-Weiss (CW) equation $\chi = C/(T-\Theta_{CW})$ is applied to
paramagnetic regimes of NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ yielding an effective moment
($\mu_{eff}$) of 6.53 $\mu_B$ \textit{per} formula unit for NiCr$_2$O$_4$\/ and
4.27\,$\mu_B$ \textit{per} formula unit for CuCr$_2$O$_4$. The expected
$\mu_{eff}$ of NiCr$_2$O$_4$\/ is 6.16\,$\mu_B$ \textit{per} formula unit of
NiCr$_2$O$_4$. This value is slightly smaller than the experimentally determined value of
6.53\,$\mu_B$ \textit{per} formula unit obtained from fitting the paramagnetic
regime to the Curie-Weiss model, implying a small orbital contribution to the
measured moment. The expected $\mu_{eff}$ of 5.74\,$\mu_B$ \textit{per} formula unit
of CuCr$_2$O$_4$\/ is much larger than the experimental value suggesting the likely
presence of magnetic correlations in the paramagnetic regime.\cite{kemei2012}
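The expected moments quoted here are consistent with spin-only estimates,
obtained by adding the paramagnetic moments of the $A$-site cation and the two
Cr$^{3+}$ ions \textit{per} formula unit in quadrature, with
$\mu = 2\sqrt{S(S+1)}\,\mu_B$ for each ion:
\[
\mu_{eff} = \sqrt{\mu^2_{A} + 2\mu^2_{\mathrm{Cr}}}.
\]
For NiCr$_2$O$_4$, Ni$^{2+}$ ($S$ = 1, $\mu$ = 2.83\,$\mu_B$) and Cr$^{3+}$
($S$ = 3/2, $\mu$ = 3.87\,$\mu_B$) give
$\mu_{eff} = \sqrt{8 + 30}\,\mu_B$ = 6.16\,$\mu_B$, while for CuCr$_2$O$_4$,
Cu$^{2+}$ ($S$ = 1/2, $\mu$ = 1.73\,$\mu_B$) gives
$\mu_{eff} = \sqrt{3 + 30}\,\mu_B$ = 5.74\,$\mu_B$.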
The Weiss temperature ($\Theta_{CW}$) of NiCr$_2$O$_4$\/ is $-$487\,K while that of
CuCr$_2$O$_4$\/ is $-$147\,K. The frustration index ($|\Theta_{CW}|/T_N$) of NiCr$_2$O$_4$\/
is about 7.8 and that of CuCr$_2$O$_4$\, is 1.1, indicating that NiCr$_2$O$_4$\/ is the more
frustrated compound. The negative sign of $\Theta_{CW}$ coupled with the low
saturation magnetization observed in isothermal field dependent measurements
is consistent with noncollinear ferrimagnetic ordering in NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$.
\begin{figure}
\centering\includegraphics[width=8cm]{figs/VT_nco.jpg}\\
\caption{(Color online) Magnetostructural coupling in NiCr$_2$O$_4$. (a) NiCr$_2$O$_4$\/ orders
ferrimagnetically at $T_N$\,=\,65\,K, where the normalized inverse magnetic
susceptibility deviates negatively from ideal Curie-Weiss
paramagnetic behavior. (b) A structural transition occurs
at the ferrimagnetic ordering temperature seen from the
splitting of the tetragonal 440 diffraction peak into 080 and 800
orthorhombic peaks. Below 30\,K, a subtle peak narrowing and intensity change
is coincident with anomalies in magnetic and specific heat measurements.
125\,K and 7\,K diffraction patterns are shown to the right and left of the
central panel.}
\label{fig:vtnco}
\end{figure}
\begin{figure}
\centering\includegraphics[width=8cm]{figs/VT_cco.jpg}\\
\caption{(Color online) Magnetostructural coupling in CuCr$_2$O$_4$. (a) Long-range
ferrimagnetic order occurs at $T_N$ = 130\,K in CuCr$_2$O$_4$\/ where the normalized
inverse magnetic susceptibility of CuCr$_2$O$_4$\/ deviates negatively from
ideal Curie-Weiss behavior. (b) Concurrent with the onset of
magnetic order is a structural transition seen in the
splitting of the tetragonal 322 reflection into orthorhombic 206
and 260 reflections. Diffraction patterns at 288\,K and 7\,K are shown
to the right and left of the central plot respectively.}
\label{fig:vtcco}
\end{figure}
The magnetic transitions of CuCr$_2$O$_4$\/ and NiCr$_2$O$_4$\/ are strongly coupled to the
lattice. All magnetic changes in NiCr$_2$O$_4$\/ are accompanied by structural
transitions. The known Jahn-Teller cubic to tetragonal structural distortion
in NiCr$_2$O$_4$\/ at 310\,K causes a small change in the temperature dependent
magnetization [Fig.\,\ref{fig:magnco}(a)].\cite{ishibashi_2007} Ishibashi
and Yasumi reported further distortion from tetragonal to orthorhombic
symmetry at the onset of ferrimagnetic order
($T_N$ = 65\,K).\cite{ishibashi_2007}
We observe this tetragonal to orthorhombic crystal
distortion occurring concurrently with the onset of ferrimagnetic order in
NiCr$_2$O$_4$\/ in Fig.\,\ref{fig:vtnco}. A low temperature anomaly at $T$ = 30\,K
has been observed in magnetic susceptibility and heat capacity measurements
of NiCr$_2$O$_4$;\cite{ishibashi_2007,Klemme_2002} however, there is no prior
report of a concurrent structural distortion. In the current study, using
high-resolution X-ray powder diffraction, we find evidence for a structural
distortion at $T$ = 30\,K, as described in detail in a later section.
Similarly, an orthorhombic distortion of the already Jahn-Teller
distorted tetragonal CuCr$_2$O$_4$\/ occurs concurrently with ferrimagnetic ordering
at 130\,K (Fig.\,\ref{fig:vtcco}). This transition in CuCr$_2$O$_4$, not previously
noted in structural or diffraction studies, is observed here using
variable-temperature high-resolution synchrotron X-ray powder diffraction
performed on a sample of crushed single-crystals.
\subsection{Crystal structure}
\begin{table*}
\caption{\label{tab:rietveld}
Structural parameters of NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ obtained from Rietveld refinement
of high-resolution synchrotron X-ray diffraction data collected at
temperatures above and below the orthorhombic distortion of both compounds.}
\centering
\begin{tabular}{llllllll }
\hline
&& NiCr$_2$O$_4$\/ & & & & CuCr$_2$O$_4$\/ & \ \\
\hline
&&Orthorhombic & Tetragonal & & & Orthorhombic & Tetragonal \ \\
\hline
\hline
$T$ & & 10\,K & 100\,K & & & 10\,K & 298\,K\ \\
Space group & & $Fddd$& $I4_1/amd$& & & $Fddd$ & $I4_1/amd$\ \\
Setting & & origin 2 & origin 2 & & & origin 2 & origin 2\ \\
$Z$ & & 8 & 4 & & & 8 & 4\ \\
$a$ (\AA) & & 8.18139(5) & 5.79029(2) & & & 7.71271(2) & 6.03277(1) \\
$b$ (\AA) & & 8.16699(4) & 5.79029(2) & & & 8.53611(2) & 6.03277(1) \\
$c$ (\AA) & & 8.56786(4) & 8.54639(4) & & & 8.54357(2) & 7.78128(1) \\
Vol/$Z$ (\AA$^3$) & & 71.5601(6) & 71.6346(4) & & & 70.3098(3) & 70.7986(2) \\
Ni/Cu & & $8a$ (1/8,\,1/8,\,1/8) & $4a$ (0,\,1/4,\,3/8)& &
& $8a$ (1/8,\,1/8,\,1/8)& $4a$ (0,\,1/4,\,3/8) \\
$U_{iso}$ ($10^2$ \AA$^2$) & & 0.01(1) & 0.13(1)& & & 0.08(1)& 0.67(1) \\
Cr & & $16d$ (1/2,\,1/2,\,1/2) & $8d$ (0,\,0,\,0) & & &
$16d$ (1/2,\,1/2,\,1/2) & $8d$ (0,\,0,\,0) \\
$U_{iso}$ ($10^2$ \AA$^2$) & & 0.01(1) & 0.019(1)& & & 0.07(1) & 0.29(1) \\
O & & $32h$ ($x,y,z$) & $16h$ ($0,y,z$)& & & $32h$ ($x,y,z$) & $16h$ ($0,y,z$)\ \\
& & $x$ 0.2561(2)& $x$ 0& & & $x$ 0.2446(1) & $x$ 0 \\
& & $y$ 0.2589(2) & $y$ 0.5152(2)& & & $y$ 0.2675(2) & $y$ 0.5364(1)\ \\
& & $z$ 0.2683(1) & $z$ 0.2322(2)& & & $z$ 0.2675(2) & $z$ 0.2526(1)\ \\
$U_{iso}$ (10$^2$ \AA$^2$) & & 0.03(2) & 0.16(2)& & & 0.06(2)& 0.55(1) \\
$\chi^2$ & & 3.85 & 4.15 & & & 2.31 & 3.84 \\
$R_p$ (\%) & & 6.25 & 7.06 & & & 7.50 & 8.96 \\
$R_{wp}$ (\%) & & 8.39 & 9.41 & & & 8.39 & 6.65 \\
\hline
\hline
\end{tabular}
\end{table*}
The ambient temperature structure of both compounds can be indexed in the
tetragonal centrosymmetric space group $I4_1/amd$. At 298\,K, NiCr$_2$O$_4$\/ is still
undergoing the Jahn-Teller driven cubic-tetragonal transition and better
structural parameters of the tetragonal phase are obtained at 100\,K.
Structural parameters obtained from Rietveld refinement of 100\,K diffraction
data for NiCr$_2$O$_4$\/ and 298\,K diffraction data for CuCr$_2$O$_4$\/ to the space group
$I4_1/amd$ are shown in Table\,\ref{tab:rietveld} and are in good agreement with
previous reports.\cite{crottaz_1997,Ueno_1999}
\begin{figure}
\centering\includegraphics[width=9cm]{figs/cnco.jpg}\\
\caption{(Color online) High resolution synchrotron powder X-ray diffraction
of NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$. (a) The low temperature diffraction pattern of NiCr$_2$O$_4$\/ is
indexed to the orthorhombic space group $Fddd$. The lowering of average
crystal symmetry in NiCr$_2$O$_4$\/ from tetragonal to orthorhombic symmetry is
illustrated by the splitting of the (b) tetragonal (440) reflection into
(c) orthorhombic 800 and 080 reflections. (d) Like NiCr$_2$O$_4$, the
low temperature diffraction data of CuCr$_2$O$_4$\/ is indexed to the orthorhombic
space group $Fddd$ which is evident in the splitting of (e) (220) tetragonal
reflections into (f) 004 and 040 orthorhombic reflections. Structural
models are fit to the X-ray powder diffraction patterns using the Rietveld
refinement method.}
\label{fig:structure}
\end{figure}
Magnetic ordering drives further structural distortions in NiCr$_2$O$_4$\/ and
CuCr$_2$O$_4$.\cite{bordacs_2009,ishibashi_2007} The low symmetry structures of
NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ are described by the orthorhombic space group $Fddd$. $Fddd$
is a maximal nonisomorphic subgroup of $I4_1/amd$ and is derived from the
parent $Fd\bar{3}m$ by loss of all threefold rotation axes and part of the
twofold screw axes. Rietveld refinement fits of 10\,K diffraction data to the
orthorhombic space group $Fddd$ for both NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ are shown in
Fig.\,\ref{fig:structure}. Symmetry lowering in NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ is
demonstrated by the splitting of certain high symmetry diffraction peaks as
illustrated in Fig.\,\ref{fig:structure} (c) and (f). The current work is
the first description of the orthorhombic $Fddd$ structure for CuCr$_2$O$_4$. In
NiCr$_2$O$_4$, variable-temperature synchrotron X-ray diffraction measurements
show additional structural changes below 30\,K, in concurrence with anomalies
in specific heat and susceptibility measurements of NiCr$_2$O$_4$\/ reported both here
and previously in the literature. This low temperature structural change of
NiCr$_2$O$_4$\/ is discussed in detail in a later subsection.
\begin{figure}
\centering\includegraphics[width=8cm]{figs/latnco.jpg}\\
\caption{(Color online) Changes in lattice parameters as a function of
temperature in NiCr$_2$O$_4$. (a) A cubic to tetragonal structural transition occurs
at 310\,K where the $a$ lattice constant of the cubic phase diverges into $a$
and $c$ lattice parameters of the tetragonal phase. The $a$ lattice constant
of the tetragonal cell is multiplied by $\sqrt{2}$ to clearly follow trends
in the lattice parameters of NiCr$_2$O$_4$. In the tetragonal phase, the $a$
parameter decreases (b) while $c$ increases (a) with decreasing temperature.
At 65\,K, a tetragonal to orthorhombic structural distortion occurs resulting
in three distinct lattice constants as shown in (a) and (b). (c) Variation of
the cell volume normalized by the number of formula units ($Z$) in each cell.
A further structural distortion of orthorhombic NiCr$_2$O$_4$\/ occurs at 30\,K where
there is a slight discontinuity of the lattice parameters (a) and (b) and
cell volume (c); this is highlighted by the dashed line at $T$ = 30\,K.
In (a), (b) and (c) the error bars are smaller than the data symbols.}
\label{fig:latticenco}
\end{figure}
\begin{figure}
\centering\includegraphics[width=8cm]{figs/lat_cco.jpg}\\
\caption{(Color online) (a) The thermal evolution of lattice parameters of
CuCr$_2$O$_4$\/ reveals a tetragonal $I4_1/amd$ to orthorhombic $Fddd$ structural
transition at $\sim$ 130\,K. The tetragonal $a$ lattice parameter has been
multiplied by $\sqrt{2}$ to match the low temperature $b$ and $c$ lattice
values of the orthorhombic $Fddd$ cell. (b) Temperature dependence of the
cell volume normalized by the number of formula units ($Z$) in each cell
shows a steady decrease with temperature. In both (a) and (b), the error bars
are smaller than the data symbols.}
\label{fig:latticecco}
\end{figure}
Changes in structural symmetry are reflected in the temperature dependence of
lattice parameters. At 310\,K there is a cubic to tetragonal transition in NiCr$_2$O$_4$\/ that
splits the cubic $a$ lattice constant into tetragonal $a$ and $c$ lattice
parameters [Fig.\,\ref{fig:latticenco} (a) and (b)]. Below 310\,K, the
tetragonal NiCr$_2$O$_4$\/ distortion grows, with an increasing $c$ and a decreasing
$a$ lattice constant (plotted as $\sqrt{2}a$). Below 65\,K, magnetic ordering
occurs concurrently with a transition to orthorhombic symmetry. The
tetragonal $a$ lattice parameter of NiCr$_2$O$_4$\/ diverges into distinct orthorhombic
$a$ and $b$ lattice constants [Fig.\,\ref{fig:latticenco}(b)]. At 30\,K, a
slope change clearly visible in the $a$ and $c$ lattice parameters
[Fig.\,\ref{fig:latticenco}] matches anomalies in other property measurements
as will be discussed later. CuCr$_2$O$_4$\/ is already tetragonal at ambient temperature
due to cooperative Jahn-Teller ordering at 853\,K. The tetragonal lattice
constants of CuCr$_2$O$_4$\/ diverge below 300\,K with $c$ decreasing and the $a$
lattice constant [plotted as $\sqrt{2}a$ in Fig.\,\ref{fig:latticecco}(a)]
increasing, resulting in an enhanced tetragonal distortion with decreasing
temperature. Below 130\,K, where an orthorhombic distortion occurs
concurrently with the onset of ferrimagnetic order
[Fig.\,\ref{fig:latticecco}(a)], distinct $a$, $b$, and $c$ orthorhombic
lattice constants emerge. The orthorhombic lattice constants continue to
diverge from 130\,K to the lowest temperatures measured as indicated in
Fig.\,\ref{fig:latticecco}(a). The structural change due to orbital ordering
in NiCr$_2$O$_4$\/ at 310\,K results in a discontinuity of the normalized cell volume
indicating a first order phase transition. In contrast, in the low
temperature tetragonal to orthorhombic phase transitions in NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/
the continuous slope of the normalized cell volume through the
magnetostructural transition indicates a second order phase transition
[Fig.\,\ref{fig:latticenco}(c) and \ref{fig:latticecco} (b)].
\begin{figure}
\centering\includegraphics[width=8cm]{figs/NiO.jpg}\\
\caption{(Color online) The variation in NiO$_4$ polyhedra as a
function of temperature. (a) The Ni-O bond length remains relatively constant
in all the structural phases. (b) The single O-Ni-O angle of the cubic phase
separates into a larger angle and a smaller angle in the tetragonal phase.
Below the orthorhombic transition, there are three distinct O-Ni-O angles.}
\label{fig:NiO}
\end{figure}
\begin{figure}
\centering\includegraphics[width=8cm]{figs/CuO.jpg}\\
\caption{(Color online) Changes in the CuO$_4$ polyhedra as a function of
temperature. (a) There is an overall decrease in the Cu-O bond distance, (b)
an increase in the larger O-Cu-O angle, and (c) a decrease in the smaller
O-Cu-O angle coupled with a splitting of this angle. These trends are
obtained from Rietveld refinement of synchrotron X-ray diffraction data.}
\label{fig:CuO}
\end{figure}
Structural changes in NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ originate from deformations of
NiO$_4$ and CuO$_4$ polyhedra. In a perfect tetrahedron, all bond lengths are
equal and all O-Cation-O angles are 109.5$^{\circ}$. Ideal NiO$_4$ tetrahedra
are observed in the cubic NiCr$_2$O$_4$\/ structure above 310\,K [Fig.\,\ref{fig:NiO}
(a) and (b)]. Orbital ordering results in a distorted tetrahedron with a
single Ni-O bond distance, but two O-Ni-O angles [Fig.\,\ref{fig:NiO} (a)
and (b)] in the tetragonal phase. Below 65\,K, the orthorhombic structure
preserves a single Ni-O bond length, but splits the O-Ni-O angles into three
distinct O-Ni-O angles in the NiO$_4$ tetrahedra [Fig.\,\ref{fig:NiO} (a) and
(b)]. These distortions in Ni-O bond lengths and O-Ni-O bond angles result in
an elongation of NiO$_4$ tetrahedra. At ambient temperature, CuO$_4$
tetrahedra are already significantly distorted with two different O-Cu-O
angles and a single Cu-O bond distance. With decreasing temperature and the
onset of the orthorhombic structural transition, we note a decrease in Cu-O
bond lengths [Fig.\,\ref{fig:CuO}(a)], an increase in the larger O-Cu-O
angle [Fig.\,\ref{fig:CuO}(b)], and a decrease in the smaller O-Cu-O angle
coupled with a splitting of this angle into two [Fig.\,\ref{fig:CuO}(c)]. The
overall effect of these structural changes is a flattening of the CuO$_4$
polyhedra toward a square planar configuration. The differences in the
distortion of the CuO$_4$ and NiO$_4$ tetrahedra are apparent in the average
low temperature structures of NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ shown in
Fig.\,\ref{fig:vestastructure}.
\subsection{Heat capacity}
There are several interesting features in the heat capacity of NiCr$_2$O$_4$\/ and
CuCr$_2$O$_4$\/ that occur concurrently with magnetic and structural transformations
in these compounds. Klemme and Miltenburg report three anomalies in the heat
capacity of NiCr$_2$O$_4$\/ occurring at 310\,K, 75\,K, and 30\,K.\cite{Klemme_2002}
Our heat capacity measurements over the temperature range 3\,K $ \leq T \leq$
200\,K for NiCr$_2$O$_4$\/ show two anomalies at 65\,K and 30\,K
[Fig.\,\ref{fig:hc}(a)].
The Jahn-Teller cubic-tetragonal structural distortion of
NiCr$_2$O$_4$\/ causes the anomaly in heat capacity at 310\,K reported by Klemme and
Miltenburg.\cite{Klemme_2002} The transition into a ferrimagnetic ordered
state [Fig.\,\ref{fig:vtnco}(a)] that occurs concurrently with a structural
change [Fig.\,\ref{fig:vtnco}(b)] results in the change in entropy that we
observe at 65\,K and that Klemme and Miltenburg reported at
$T$ = 75\,K. Klemme and Miltenburg also reported an additional anomaly in
specific heat at 30\,K; Ishibashi and Yasumi noted a change in magnetic
susceptibility at this temperature. \cite{Klemme_2002,ishibashi_2007} We
observe this anomaly in the heat capacity of NiCr$_2$O$_4$\/ at 30\,K and attribute it
to an additional change in the magnetic [Fig.\,\ref{fig:hc}(b)] and crystal
structure [Fig.\,\ref{fig:vtnco}(b)] as will be discussed in section D of
this paper.
There are two anomalies in the specific heat of CuCr$_2$O$_4$\/ at 130\,K and 155\,K
[Fig.\,\ref{fig:hc}(c)]. The anomaly at 130\,K is coincident with
ferrimagnetic [Fig.\,\ref{fig:hc}(d)] and tetragonal-orthorhombic
[Fig.\,\ref{fig:vtcco}(b)] phase transitions in the compound. The
transition into the orthorhombic ferrimagnetic state in CuCr$_2$O$_4$\/ occurs through
an intermediate step with signatures in Fisher heat capacity and specific heat
measurements at 155\,K
[Fig.\,\ref{fig:hc} (c) and (d)].\cite{fisher_1962} Slight
structural effects accompany this second transition as shown in
Fig.\,\ref{fig:latticecco}(b) where there is a subtle inflection point of the
evolution of cell volume with temperature. Further characterization of this
intermediate change in the magnetism of CuCr$_2$O$_4$\/ at about 155\,K requires
careful investigation in future study.
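The Fisher heat capacity referred to here is the quantity
$\partial(\chi T)/\partial T$, which Fisher showed to be proportional to the
magnetic contribution to the specific heat near a magnetic ordering
transition:\cite{fisher_1962}
\[
C_{m}(T) \propto \frac{\partial (\chi T)}{\partial T}.
\]
Anomalies in $\partial(\chi T)/\partial T$ therefore track the release of
magnetic entropy and can be compared directly with the measured specific heat.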
\begin{figure}
\centering\includegraphics[width=9cm]{figs/NCCO.jpg}\\
\caption{(Color online) Low temperature (10\,K) orthorhombic crystal
structures of (a) NiCr$_2$O$_4$\/ and (b) CuCr$_2$O$_4$\/ projected down the [101] direction.
Ni (grey) and Cu (red) are tetrahedrally coordinated by oxygen (orange).
Chromium is shown in blue. The elongation of NiO$_4$ tetrahedra along with
the compression of CuO$_4$ polyhedra is clearly seen in the low temperature
average structures.}
\label{fig:vestastructure}
\end{figure}
\begin{figure}
\centering\includegraphics[width=8cm]{figs/NCCO_HC}\\
\caption{(Color online) Entropy changes in NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$\/ resulting
from structural and magnetic transformations. (a) The heat capacity of
NiCr$_2$O$_4$\/ shows two anomalies at 65\,K and 30\,K. (b) Fisher heat capacity of
NiCr$_2$O$_4$\/ indicating release of magnetic entropy occurring at the same
temperatures where changes in specific heat are observed. (c) CuCr$_2$O$_4$\/ also
shows two transitions in the heat capacity at 155\,K and 130\,K. Concurrent
with these changes in heat capacity of CuCr$_2$O$_4$\/ are variations in magnetic
structure as illustrated by Fisher heat capacity shown in (d).}
\label{fig:hc}
\end{figure}
\subsection{30\,K magnetostructural transition of NiCr$_2$O$_4$}
\begin{figure}
\centering\includegraphics[width=8cm]{figs/NCOLT}\\
\caption{(Color online) Changes in magnetic order and heat capacity of NiCr$_2$O$_4$\/
at $T$ = 30\,K are accompanied by a structural change. (a) Zero field
cooled and field cooled temperature dependent magnetic susceptibility
measurements of NiCr$_2$O$_4$\/ show a change in magnetic order at $T$ = 30\,K.
Concurrent with this transition in magnetism is a change in entropy indicated
by the anomaly in heat capacity. (b) The central panel tracks changes in intensity
of the orthorhombic 080 and 800 reflections at $T$ = 30\,K
illustrating that a structural change takes place at $T$ = 30\,K. (c) This
structural change is also reflected in the temperature dependent lattice
constants of NiCr$_2$O$_4$\/ which vary at this temperature.}
\label{fig:NCOlt}
\end{figure}
During the ferrimagnetic transition of NiCr$_2$O$_4$, a simultaneous cooperative
crystal distortion from tetragonal to orthorhombic symmetry occurs as
reported by Ishibashi and Yasumi.\cite{tomiyasu_2004, ishibashi_2007} We
observe this magnetostructural transition in NiCr$_2$O$_4$\/ at $T$ = 65\,K
(Fig.\,\ref{fig:vtnco},\ref{fig:latticenco}).
Magnetic susceptibility measurements
by Ishibashi and Yasumi show yet another low temperature magnetic transition
in NiCr$_2$O$_4$\/ at $T$ = 31\,K that was reported by Tomiyasu and Kagomiya as
corresponding to the ordering of the antiferromagnetic component of the
magnetic structure of NiCr$_2$O$_4$. Klemme and Miltenburg also observed a change in
entropy at this temperature;\cite{Klemme_2002} however, no changes of the
average structure of NiCr$_2$O$_4$\/ have been observed at $T$ = 31\,K.
\cite{tomiyasu_2004, ishibashi_2007} Our measurements reveal similar
anomalies in the magnetism and specific heat measurements of NiCr$_2$O$_4$\/
[Fig.\,\ref{fig:NCOlt}(a)] at $T$ = 30\,K. Furthermore, we observe a slight
change in average structure at this temperature. The central panel in
Fig.\,\ref{fig:NCOlt}(b) tracks a NiCr$_2$O$_4$\/ Bragg diffraction peak as a function
of temperature and shows a distinct peak narrowing and intensity change below
30\,K. Likewise, the $Fddd$ lattice parameters obtained from Rietveld
analysis of the variable temperature diffraction data
(Fig.\,\ref{fig:latticenco}) show a clear change in slope near 30\,K
[Fig.\,\ref{fig:NCOlt}(c)]. However, no evidence for a further change of NiCr$_2$O$_4$\/
symmetry (\textit{e.g.} to monoclinic) below 30\,K is found in these
high-resolution powder diffraction data. This is the first report of a
structural effect concurrent with reported anomalies in heat capacity and
magnetic measurements, and will be further examined in future studies.
\section{Conclusions}
Structural changes occur concurrently with magnetic phase transitions in
NiCr$_2$O$_4$\/ and CuCr$_2$O$_4$. We have resolved details of the crystal structure of
the low temperature phase of NiCr$_2$O$_4$\/ and
CuCr$_2$O$_4$\/ in the orthorhombic space group $Fddd$ and present the first
structural description of orthorhombic CuCr$_2$O$_4$. We find that the magnetic
transition at 30\,K in NiCr$_2$O$_4$\/ is also accompanied by a further, subtle structural
anomaly. Pronounced elongation of NiO$_4$ tetrahedra, and compression of
CuO$_4$ tetrahedra toward a square planar configuration drive the distortions
into the orthorhombic phase in these compounds. As postulated by
Smart and Greenwald, we suggest that multiple exchange coupling pathways in
the distorted orthorhombic structure are likely to be the reason behind the
strong magnetostructural coupling observed in these compounds.\cite{smart1950}
We anticipate that this study will inspire further investigation of such
coupling in ferrimagnetic spinels.
\section{Acknowledgements}
The 11-BM beamline at the Advanced Photon Source is supported by the
Department of Energy, Office of Science, Office of Basic Energy Sciences,
under contract no. DE-AC02-06CH11357.
MCK thanks P. T. Barton and A. Goldman for helpful discussions.
MCK is supported by the Schlumberger Foundation Faculty for the Future
Fellowship, and the research (MCK and RS) is supported by the National Science
Foundation through a Materials World Network grant (DMR 0909180).
We acknowledge the use of shared experimental facilities of the Materials
Research Laboratory: an NSF MRSEC, supported by NSF DMR 112105. The MRL is a
member of the NSF-supported Materials Research Facilities Network
(www.mrfn.org).
\section{Introduction}
As is well known from numerous experiments, nuclear $\beta^{-}-$decay in few- and many-electron atoms often proceeds with an `additional' atomic ionization. The general
equation of this process can be written as (see, e.g., \cite{Blat}, \cite{Fro05})
\begin{equation}
X \rightarrow Y^{2+} + e^{-} + e^{-}(\beta) + \overline{\nu} + \Delta E \label{eq1}
\end{equation}
where the symbols $X$ and $Y$ designate two different chemical elements (isotopes) with almost equal masses. The symbols X and Y in Eq.(\ref{eq1}) are used to designate
both atoms/ions and the corresponding atomic nuclei. If $Q$ is the electric charge of the parent (or incident) nucleus $X$, then the nuclear charge of the final nucleus
$Y$ is $Q + 1$. Below, the electric charge of the parent nucleus ($Q$) is designated by the notation $Q_1$, while the electric charge of the final nucleus is denoted by
the notation $Q_2 (= Q + 1)$. In Eq.(\ref{eq1}) the notation $e^{-}$ stands for the secondary (or slow) electron ejected into the unbound (continuous) spectrum during the decay, while
the notation $e^{-}(\beta)$ designates the primary (or fast) $\beta^{-}-$electron and $\overline{\nu}$ denotes the electron's anti-neutrino. The total energy $\Delta E$
released during the $\beta^{-}-$decay, Eq.(\ref{eq1}), is a given value (for each $\beta^{-}-$decay) which cannot be changed in actual experiments. Formally, the numerical
value of $\Delta E$ coincides with the maximal (kinetic) energy of the primary $\beta^{-}-$electron emitted in Eq.(\ref{eq1}).
Our goal in this study is the analysis of the properties of secondary electrons emitted during atomic $\beta^{-}-$decay. In general, the properties of secondary electrons,
e.g., their velocity spectra, can be used to describe the electron density distribution and electron-electron correlations in the incident atom. Moreover, by using recently
developed experimental methods one can predict many interesting details of $\beta^{-}$ decay in few-electron atoms and ions. Note that despite a number of experiments
performed to investigate `additional' ionization of atoms during nuclear $\beta^{-}$-decay our current understanding of some important details of this process is still far
from complete. In particular, the spectrum of the secondary electron emitted during nuclear $\beta^{-}$-decay in atoms has not been investigated in earlier studies. In this
communication we derive a closed analytical formula for such a spectrum. Furthermore, it is crucial to explain how the electron-electron correlations in parent atoms can
affect the secondary-electron spectrum. Another interesting problem discussed in this study is the formation of very fast secondary electrons (so-called $\delta-$electrons)
during nuclear $\beta^{-}$-decay in few-electron atoms/ions.
Since the first papers published in the 1950's (see, e.g., \cite{Mig1940}), it became clear that by analyzing numerically generated spectra of the final state probabilities
during atomic $\beta^{-}-$decay, Eq.(\ref{eq1}), we can obtain a significant amount of useful information about the parent (or incident) atom/ion, including its atomic
state, presence of various excitations, etc. (see, e.g., \cite{Fro05}, \cite{Our1} - \cite{PRC2}). Furthermore, if the spectra of the final state probabilities could be
evaluated to high accuracy (from numerical computations), then based on these spectra we would be able to predict the atom and its isotope in which nuclear $\beta^{-}-$decay
has occurred. A number of important details about electron distributions in such atoms/ions can also be accurately predicted. This conclusion is very important in applications
to few- and many-electron atoms/ions with very short life-times. This emphasizes the importance of knowledge of the final state probabilities for different atoms, ions,
molecules and atomic clusters.
In this study we also determine the distributions (or spectra) of the final state probabilities of $\beta^{-}-$decaying atoms/ions, but our main goal is the analysis of the
cases when this decay proceeds with an `additional' atomic ionization, Eq.(\ref{eq1}). Note that currently all calculations of the final state probabilities for
$\beta^{-}$-decaying atoms, ions and molecules are performed with the use of the sudden approximation which is based on the fact that velocities of $\beta^{-}$-electrons
($v_{\beta}$) emitted during the nuclear $\beta^{-}-$decay are significantly larger than the usual velocities of atomic electrons $v_a$. In particular, in light atoms we
have $v_{\beta} \approx (50 - 200) \, v_a$. This is also true for the velocities of the secondary electrons $e^{-}$ which can be emitted as `free' particles during the reaction,
Eq.(\ref{eq1}), i.e. $v_{\beta} \gg v_{\delta}$. The inequality $v_{\beta} \gg v_a$ allows one to apply the sudden approximation and analyze the nuclear $\beta^{-}$-decay in
light atoms by calculating the overlaps of the incident and final (non-relativistic) atomic wave functions. The sudden approximation is based on the assumption that the wave
function of the incident system does not change during the fast process, i.e. its amplitude and phase do not change. In other words, the electron density distribution in the
maternal atom does not change during the nuclear $\beta^{-}$-decay (see discussions in \cite{LLQ} and \cite{MigK}).
Our analysis of the properties of secondary electrons emitted during nuclear $\beta^{-}$-decay in few-electron atoms begins from the general discussion of the final state
probabilities and sudden approximation which has been extensively used in calculations of such probabilities. This problem is discussed in the next Section. In Section III we
determine the actual velocity spectrum of the secondary electrons emitted during nuclear $\beta^{-}$-decay of the one-electron tritium atom. The more general case
of few-electron atoms is considered in Section IV where we show explicitly that the energy/velocity spectra of secondary electrons essentially depend upon electron-electron
correlations (or, inter-particle correlations) in the parent few-electron atoms/ions. In Section V we evaluate the overall probabilities to observe very fast secondary
electrons (or $\delta-$electrons) during nuclear $\beta^{-}$-decay in few-electron atoms. Concluding remarks can be found in the last Section.
\section{Final state probabilities}
In the sudden approximation the final state probability of the process, Eq.(\ref{eq1}), equals the overlap integral of the wave function of the parent atom $X$ and wave
function of the final doubly charged ion $Y^{2+}$ multiplied by the wave function of the outgoing (or `free') electron which has a certain momentum ${\bf p}$. The direction of
the momentum ${\bf p}$ in space coincides with the direction of motion/propagation of the actual free electron that is observed in experiments. Moreover, at large distances
each of these free-electron wave functions must be a linear combination of a plane wave and an incoming spherical wave. Functions with such asymptotics take the form \cite{Maxim}
(see also \S 136 in \cite{LLQ})
\begin{eqnarray}
\phi_{p}(r, {\bf n}_p \cdot {\bf n}_r) = N_f \exp(\frac{\pi}{2} \zeta) \Gamma(1 + \imath \zeta) \; \; {}_1F_1\Bigl(-\imath \zeta, 1, -\imath ({\bf p} \cdot {\bf r} - p r)\Bigr)
\exp[\imath ({\bf p} \cdot {\bf r})] \label{Cwave}
\end{eqnarray}
where $N_f$ is the normalization constant defined below, ${}_1F_1(a, b; z)$ is the confluent hypergeometric function and $\zeta = \frac{Q_2}{a_0 p} = \frac{\alpha Q_2}{\gamma v}$,
where $a_0$ is the Bohr radius, $\alpha$ is the fine structure constant and $\gamma$ is the Lorentz $\gamma-$factor \cite{Jack} (see below) of the moving electron. The notations
$p$ and $v$ stand for the momentum and velocity of the outgoing (or `free') electron. Also in this equation the two unit vectors ${\bf n}_p$ and ${\bf n}_r$ are defined as follows
${\bf n}_p = \frac{{\bf p}}{p}$ and ${\bf n}_r = \frac{{\bf r}}{r}$. There are a number of advantages in using the wave function of the free electron which moves in the Coulomb field
of the central `bare' nucleus, or positively charged ion in the form of Eq.(\ref{Cwave}). Some of these advantages are discussed in \S 136 of \cite{LLQ}. In particular, the choice
of the $\phi_{p}(r, {\bf n}_p \cdot {\bf n}_r)$ function in the form of Eq.(\ref{Cwave}) directly leads to explicit formulas for the probability amplitudes, i.e. there is no need
to perform any additional transformations of these values.
Let us consider nuclear $\beta^{-}$ decay in actual atomic systems. First, consider the $\beta^{-}-$decaying hydrogen (or tritium) atom. The whole process is described by the
following equation: ${}^{3}$H = ${}^{3}$He$^{2+} + e^{-} + e^{-}(\beta) + \overline{\nu}$. For simplicity, we shall assume that the central atomic nucleus is infinitely heavy.
Also, in this study we shall assume that all parent (or incident) $\beta^{-}-$decaying atoms were in their ground $1^2s-$states (before $\beta^{-}-$decay). In atomic units, where
$\hbar = 1, m_e = 1$ and $e = 1$, the ground state wave function of the one-electron, hydrogen-like atom/ion is $\frac{\eta \sqrt{\eta}}{\sqrt{\pi}} \exp(-\eta r)$. In the case of
$\beta^{-}$-decaying hydrogen/tritium atom we chose $Q_1 = Q = 1$ and $\eta = \frac{Q_1}{a_0}$, while for the final helium ion He$^{2+}$ we have $Q_2 = Q + 1(= 2)$ and $\zeta =
\frac{Q_2}{a_0 p} = \frac{\alpha Q_2}{\gamma v}$.
The probability amplitude equals the overlap integral between the $\frac{\eta \sqrt{\eta}}{\sqrt{\pi}} \exp(-\eta r)$ function and the $\phi_{p}(r, {\bf n}_p \cdot {\bf n}_r)$
function, Eq.(\ref{Cwave}). Calculations of similar integrals (or probability amplitudes) with the
function $\phi_{p}(r, {\bf n}_p \cdot {\bf n}_r)$, Eq.(\ref{Cwave}), are relatively simple and straightforward. There are a few steps in this procedure. First, we can write the
following expression derived in \cite{Maxim}
\begin{eqnarray}
I_1(\eta) &=& 4 \pi \int \exp[\imath ({\bf p} \cdot {\bf r} - \eta r)] \; \; {}_1F_1\Bigl(-\imath \zeta, 1, -\imath ({\bf p} \cdot {\bf r} - p r)\Bigr) r dr \nonumber \\
&=& 4 \pi \frac12 \Bigl[ \frac12 p^2 + \frac12 \eta^2 \Bigr]^{\imath \zeta - 1} \Bigl[ - \frac12 p^2 + \frac12 \eta^2 - \imath \eta p \Bigr]^{-\imath \zeta} \label{Max1}
\end{eqnarray}
After a few steps of additional transformations this formula reduces to the form
\begin{eqnarray}
I_1(\eta) = 4 \pi \Bigl( \frac{\eta + \imath p}{\eta - \imath p} \Bigr)^{\imath \zeta} \; \; \frac{1}{\eta^2 + p^2} \label{Max2}
\end{eqnarray}
By using the following identity (see, e.g., Eq.(1.622) in \cite{GR})
\begin{eqnarray}
\ln\Bigl( \frac{\eta + \imath p}{\eta - \imath p} \Bigr) = 2 \imath \arctan\Bigl(\frac{\eta}{p}\Bigr) \label{Max3}
\end{eqnarray}
we reduce the expression for the $I_1(\eta)$ integral to the form
\begin{eqnarray}
I_1(\eta) = 4 \pi \frac{1}{\eta^2 + p^2} \exp\Bigl[-2 \zeta \arctan\Bigl(\frac{\eta}{p}\Bigr)\Bigr] \label{Max4}
\end{eqnarray}
All integrals which are needed to determine amplitudes of the final state probabilities can be derived by calculating partial derivatives of the $I_1(\eta)$ integral, Eq.(\ref{Max4}), with
respect to the variable $- \eta$. For instance, for our present purposes we need the integral $I_2(\eta)$ which is written in the form
\begin{eqnarray}
& & I_2(\eta) = 4 \pi \int \exp[\imath ({\bf p} \cdot {\bf r} - \eta r)] \; \; {}_1F_1\Bigl(-\imath \zeta, 1, -\imath ({\bf p} \cdot {\bf r} - p r)\Bigr) r^{2} dr \nonumber \\
&=& -\frac{\partial I_1(\eta)}{\partial \eta} = 8 \pi \frac{\eta + \zeta p}{(\eta^2 + p^2)^2} \exp\Bigl[-2 \zeta \arctan\Bigl(\frac{\eta}{p}\Bigr)\Bigr] \label{Max5}
\end{eqnarray}
The $I_2(\eta)$ integral, Eq.(\ref{Max5}) (with the additional normalization factors $N_f$ and $N_{{\rm H}}$) determines the probability of the `additional' ionization of the
hydrogen/tritium atom from its ground $1^2s$-state during the nuclear $\beta^{-}$ decay. The momentum of the `free' electron is ${\bf p}$ and $p = \mid {\bf p} \mid$ is its
absolute value. If we want to determine the final state probabilities of atomic ionization during nuclear $\beta^{-}$ decay of the hydrogen/tritium atom from excited $s-$states,
then higher derivatives from the $I_1(\eta)$ integral are needed. In general, all integrals $I_n(\eta)$ can be found with the use of the formula
\begin{eqnarray}
I_n(\eta) = (-1)^{n} \Bigl[\frac{\partial}{\partial \eta}\Bigr]^n I_1(\eta) = 2^{n+2} \; \; \pi \; \; \frac{P_n(\eta, \zeta, p)}{(\eta^2 + p^2)^n} \exp\Bigl[-2 \zeta
\arctan\Bigl(\frac{\eta}{p}\Bigr)\Bigr] \label{Max51}
\end{eqnarray}
where $P_n(\eta, \zeta, p)$ is a polynomial function of all its variables. In the derivation of formulas for the integrals $I_n(\eta)$ it is convenient to assume that these three variables
$\eta, \zeta$ and $p$ are independent of each other. However, to produce actual formulas for the probability amplitudes and final state probabilities we have to take into account
the following relation between these variables: $\frac{\eta}{p} = \frac{Q_1}{Q_2} \zeta$, or $\zeta p = \frac{Q_2}{Q_1} \eta$. This allows us to write the following expression for
the integral $I_2(\eta)$
\begin{eqnarray}
I_2(\eta) = 8 \pi \frac{\eta \Bigl(\frac{Q_2}{Q_1} + 1\Bigr)}{(\eta^2 + p^2)^2} \exp\Bigl[-2 \Bigl(\frac{Q_2 \eta}{Q_1 p}\Bigr) \arctan\Bigl(\frac{Q_2 \eta}{Q_1 p} \Bigr)\Bigr]
\label{Max53}
\end{eqnarray}
where we have used the two variables $\eta$ and $p$. However, in some cases two other variables (e.g., $\zeta $ and $p$) are more convenient. Note that it is possible to produce a few
useful relations between $I_n(\eta)$ and $I_{n-1}(\eta), I_{n-2}(\eta), \ldots, I_{1}(\eta)$ integrals. Such relations allow one to determine all integrals $I_{n}(\eta)$ without any
actual computation.
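As a quick numerical sanity check on this chain of derivatives, Eq.(\ref{Max5}) can be compared against a finite-difference derivative of Eq.(\ref{Max4}). A minimal sketch (in Python, with $\zeta$ held fixed as an independent parameter, as suggested above; the test point is arbitrary):

```python
import math

def I1(eta, zeta, p):
    # Closed form of Eq. (Max4); zeta is treated as an independent parameter.
    return 4.0 * math.pi * math.exp(-2.0 * zeta * math.atan(eta / p)) / (eta**2 + p**2)

def I2(eta, zeta, p):
    # Closed form of Eq. (Max5).
    return (8.0 * math.pi * (eta + zeta * p) / (eta**2 + p**2)**2
            * math.exp(-2.0 * zeta * math.atan(eta / p)))

# Central finite difference for -dI1/d(eta) at an arbitrary test point.
eta, zeta, p, h = 1.0, 2.0, 1.7, 1.0e-6
deriv = -(I1(eta + h, zeta, p) - I1(eta - h, zeta, p)) / (2.0 * h)
rel_err = abs(deriv / I2(eta, zeta, p) - 1.0)
```

The same check applies, step by step, to the higher integrals $I_n(\eta)$.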
\section{Tritium atom}
Consider nuclear $\beta^{-}$-decay in the one-electron hydrogen/tritium atom ${}^{3}$H, or in some hydrogen-like ion with nuclear electric charge $Q$. According to the formulas derived
above the probability amplitude ${\cal A}_{i \rightarrow f}$ is
\begin{eqnarray}
{\cal A}_{i \rightarrow f} = 8 \pi N_H N_f \; \; \frac{\eta \Bigl(\frac{Q_2}{Q_1} + 1\Bigr)}{(\eta^2 + p^2)^2} \; \; \exp\Bigl[-2 \Bigl(\frac{Q_2 \eta}{Q_1 p}\Bigr)
\arctan\Bigl(\frac{Q_2 \eta}{Q_1 p}\Bigr)\Bigr]
\label{Max54}
\end{eqnarray}
where $N_{{\rm H}} = \sqrt{\frac{\eta^3}{\pi a^{3}_{0}}}$ is the normalization constant of the hydrogen-atom wave function, while $N_f$ is the normalization constant of the wave function which
represents the `free' electron. The numerical value of this normalization constant ($N_f$) is determined by the following equality
\begin{eqnarray}
N^{-2}_f = \exp(\frac{\pi}{2} \zeta) \; \Gamma(1 + \imath \zeta) \; \exp(\frac{\pi}{2} \zeta) \Gamma(1 - \imath \zeta) = \exp(\pi \zeta) \frac{\pi \zeta}{\sinh(\pi \zeta)}
= \frac{2 \pi \zeta}{1 - \exp(-2 \pi \zeta)} \label{Max55}
\end{eqnarray}
see, e.g., \cite{AS}. In other words, the probability amplitude ${\cal A}_{i \rightarrow f}$ equals
\begin{eqnarray}
{\cal A} = 4 \sqrt{\frac{2 \eta^{3}}{\zeta} \Bigl[1 - \exp\Bigl(-2 \pi \frac{Q_2 \eta}{Q_1 p}\Bigr)\Bigr]} \; \frac{\eta \Bigl(\frac{Q_2}{Q_1} + 1\Bigr)}{(\eta^2 + p^2)^2} \;
\exp\Bigl[-2 \Bigl(\frac{Q_2 \eta}{Q_1 p}\Bigr) \arctan\Bigl(\frac{Q_2 \eta}{Q_1 p}\Bigr)\Bigr] \label{Max555}
\end{eqnarray}
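The simplification used in Eq.(\ref{Max55}) (taking the standard identity $\mid \Gamma(1 + \imath \zeta) \mid^2 = \frac{\pi \zeta}{\sinh(\pi \zeta)}$ as given) can be spot-checked numerically; a small sketch:

```python
import math

# Check exp(pi*zeta) * pi*zeta / sinh(pi*zeta) == 2*pi*zeta / (1 - exp(-2*pi*zeta))
# at a few arbitrary positive values of zeta.
max_err = 0.0
for zeta in (0.1, 0.73, 2.5, 7.0):
    lhs = math.exp(math.pi * zeta) * math.pi * zeta / math.sinh(math.pi * zeta)
    rhs = 2.0 * math.pi * zeta / (1.0 - math.exp(-2.0 * math.pi * zeta))
    max_err = max(max_err, abs(lhs - rhs) / rhs)
```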
The expression for the infinitesimal final state probability $\Delta P_{i \rightarrow f}$ takes the form
\begin{eqnarray}
& & \Delta P_{i \rightarrow f} = \mid {\cal A} \mid^2 p^2 \Delta p = \frac{32 \eta^3}{\zeta} \; \; \Bigl[1 - \exp\Bigl(-2 \pi \frac{Q_2 \eta}{Q_1 p}\Bigr)\Bigr] \; \;
\frac{p^2 \eta^2 \Bigl(\frac{Q_2}{Q_1} + 1\Bigr)^2}{(\eta^2 + p^2)^4} \nonumber \\
& & \times \exp\Bigl[-4 \Bigl(\frac{Q_2 \eta}{Q_1 p}\Bigr) \arctan\Bigl(\frac{Q_2 \eta}{Q_1 p}\Bigr)\Bigr] \Delta p \label{Max56}
\end{eqnarray}
To produce the final expression which is ready for calculations we have to replace here the variables $\eta$ and $\zeta$ by the following expressions $\eta = \frac{Q_1}{a_0}, \frac{\eta}{p}
= \frac{\alpha Q_1}{\gamma v}$ and $\zeta = \frac{Q_2 \eta}{Q_1 p} = \frac{\alpha Q_2}{\gamma v}$, where $Q_1(= Q)$ is the electric charge of the incident nucleus (or central positively charged
ion) and $a_0 = \frac{\hbar^2}{m_e e^2}$ is the Bohr radius. In atomic units, where $\hbar = 1, e = 1$ and $m_e = 1$, the Bohr radius equals unity and the ratio $\frac{\eta}{p}$ equals the
ratio $\frac{\alpha Q_1}{\gamma v}$ (since $m_e = 1$), where $\alpha = \frac{e^2}{\hbar c}$ is the fine structure constant and $v = \mid {\bf v} \mid$ is the absolute value of the electron's
velocity (expressed in atomic units). The factor $\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}} = \frac{1}{\sqrt{1 - \alpha^2 v^2}}$ is the Lorentz $\gamma-$factor \cite{Jack} of the moving
electron. In atomic units the electron's velocity cannot exceed the value of $c = \alpha^{-1} (\approx 137)$.
\subsection{Velocity spectrum}
From Eq.(\ref{Max56}) one finds the following expression for the final state probability distribution, or the $P_{i \rightarrow f}(v)$ distribution:
\begin{eqnarray}
& & \frac{d P_{i \rightarrow f}}{dv} = \frac{32 Q_1}{\alpha Q_2} \; \; \Bigl[1 - \exp\Bigl(-2 \pi \frac{\alpha Q_2}{\gamma v}\Bigr)\Bigr] \; \;
\frac{(Q^{2}_{1} + Q^{2}_{2})^2 \gamma^{4} v^{3}}{(Q^{2}_1 + \gamma^2 v^2)^4} \nonumber \\
& & \times \exp\Bigl[-4 \Bigl(\frac{\alpha Q_2}{\gamma v}\Bigr) \arctan\Bigl(\frac{\alpha Q_2}{\gamma v}\Bigr)\Bigr] \label{Max565}
\end{eqnarray}
The expression on the right-hand side of this equality essentially coincides with the $v-$spectrum of the `free' electrons emitted during nuclear $\beta^{-}-$decay in one-electron atoms/ions.
Rigorously speaking, any spectral function must be normalized, i.e. its integral over $v$ (from $v_{min} = 0$ to $v_{max} = c = \alpha^{-1}$ in $a.u.$) must equal unity. This allows
one to obtain the following expression for the $v-$spectral function (or $v-$spectrum, for short) \cite{Fro2015}:
\begin{eqnarray}
& & S_e(v; Q) = \frac{32 Q_1}{{\cal S}(Q) \alpha Q_2} \; \Bigl[1 - \exp\Bigl(-2 \pi \frac{\alpha Q_2}{\gamma v}\Bigr)\Bigr] \; \; \frac{(Q^{2}_{1} + Q^{2}_{2})^2 \gamma^{4} v^{3}}{(Q^{2}_1
+ \gamma^2 v^2)^4} \nonumber \\
& & \times \exp\Bigl[-4 \Bigl(\frac{\alpha Q_2}{\gamma v}\Bigr) \arctan\Bigl(\frac{\alpha Q_2}{\gamma v}\Bigr)\Bigr] \label{Max567}
\end{eqnarray}
where the normalization constant ${\cal S}(Q)$ can be obtained (for each pair $Q_1 = Q$ and $Q_2 = Q + 1$) with the use of numerical integration. For the tritium atom ($Q_1 = 1$, $Q_2 = 2$) we have
found that ${\cal S}(Q)$ $\approx$ 196.611833628395. As expected, the formula, Eq.(\ref{Max567}), contains only the absolute values of the free-electron velocity $v$ (or momentum $p$) and electric
charges of the atomic nuclei $Q_1 = Q$ and $Q_2 = Q + 1$. The velocity of the fast $\beta^{-}-$electron is not included in this formula. This is a direct consequence of the sudden approximation
used to derive this formula. In general, by using the known $v$-spectral function we can evaluate the probability $p(v)$ to observe a secondary electron which moves with the
velocity $v$ (expressed in atomic units).
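The normalization procedure just described can be sketched in a few lines; this is only an illustration (plain composite Simpson quadrature and an assumed CODATA value of $\alpha$; the numerical value obtained for ${\cal S}(Q)$ is sensitive to the unit conventions adopted for $v$, $p$ and $\zeta$):

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant (assumed CODATA value)

def spectrum_raw(v, Q1=1.0, Q2=2.0):
    """Right-hand side of Eq. (Max567) without the 1/S(Q) factor; v in a.u."""
    if v <= 0.0 or ALPHA * v >= 1.0:
        return 0.0  # the integrand vanishes at both endpoints
    gamma = 1.0 / math.sqrt(1.0 - (ALPHA * v) ** 2)
    zeta = ALPHA * Q2 / (gamma * v)
    cutoff = 1.0 - math.exp(-2.0 * math.pi * zeta)
    shape = (Q1**2 + Q2**2)**2 * gamma**4 * v**3 / (Q1**2 + (gamma * v)**2)**4
    return (32.0 * Q1 / (ALPHA * Q2)) * cutoff * shape * math.exp(
        -4.0 * zeta * math.atan(zeta))

def simpson(f, a, b, n=20000):
    """Composite Simpson rule with n (even) panels on [a, b]."""
    h = (b - a) / n
    acc = f(a) + f(b)
    for k in range(1, n):
        acc += (4.0 if k % 2 else 2.0) * f(a + k * h)
    return acc * h / 3.0

v_max = 1.0 / ALPHA
S_Q = simpson(spectrum_raw, 0.0, v_max)     # normalization constant S(Q)
S_e = lambda v: spectrum_raw(v) / S_Q       # normalized v-spectrum
p_06_30 = simpson(S_e, 0.6, 3.0, n=4000)    # conditional probability p(0.6, 3.0)
```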
Note that equation (\ref{Max567}) is written in a manifestly relativistic form, i.e. formally the energies of secondary electrons can be arbitrary. However, both wave functions used in our
calculations of the overlap integrals are non-relativistic. Furthermore, in applications to actual atoms and ions, the total energies of the emitted secondary electrons are non-relativistic, e.g.,
$E \le 50$ keV for arbitrary atoms and $E \le 25$ keV for light atoms. This means we do not need to apply any relativistic, or even semi-relativistic, approximation. In other words, we can
always assume that $\gamma = 1$ in the formula, Eq.(\ref{Max567}). The non-relativistic spectral function of secondary electrons then takes the form
\begin{eqnarray}
& & S_e(v; Q) = \frac{32 Q_1}{{\cal S}(Q) \alpha Q_2} \; \; \Bigl[1 - \exp\Bigl(-2 \pi \frac{\alpha Q_2}{v}\Bigr)\Bigr] \; \; \frac{(Q^{2}_{1} + Q^{2}_{2})^2 v^{3}}{(Q^{2}_1
+ v^2)^4} \nonumber \\
& & \times \exp\Bigl[-4 \Bigl(\frac{\alpha Q_2}{v}\Bigr) \arctan\Bigl(\frac{\alpha Q_2}{v}\Bigr)\Bigr] \label{Max5675}
\end{eqnarray}
In applications to real (light) atoms the differences between the two spectral functions defined by Eq.(\ref{Max567}) and Eq.(\ref{Max5675}) are very small. This follows
from the explicit form of the right-hand sides of these two equations, which contain an exponential cut-off factor at large velocities/energies. In this study all computational results have been
determined with the use of the spectral function, Eq.(\ref{Max567}).
\subsection{Calculations}
In actual experiments the integral of the spectral function $S_e(v; Q)$ between the $v_1$ and $v_2$ values ($v_2 > v_1$) gives one the probability $P(v_1;v_2)$ to detect the `free' electron emitted
during the process, Eq.(\ref{eq1}), with the velocity bounded between $v_1$ and $v_2$. This probability is normalized over all possible free electron velocities. However, in actual experiments, in
addition to such bound-free transitions we always observe a large number of bound-bound transitions. In this case the problem of determining the absolute values of probabilities of the partial
bound-free transitions is reduced to calculations of the conditional probabilities. To solve this problem one needs to know the total probability of the bound-bound transitions $P_{bb}$ during the
nuclear $\beta^{-}$-decay. If this value is known, then it is easy to find the total probability of the bound-free transitions $P_{bf}$ = 1 - $P_{bb}$ and absolute value of the partial bound-free
probability ${\cal P}(v_1;v_2) = P_{bf} P(v_1;v_2) = (1 - P_{bb}) P(v_1;v_2)$.
Let us consider the $\beta^{-}$-decay in the one-electron tritium atom ${}^{3}$H (or T). For simplicity, here we restrict our analysis to the $\beta^{-}$-decay of the tritium atom from its ground
$1^{2}s-$state. Moreover, we shall assume that the atomic nucleus in the hydrogen/tritium atom is infinitely heavy. In general, during the nuclear $\beta^{-}$-decay in such a one-electron tritium
atom one can observe a large number of bound-bound transitions such as H$(1^{2}s) \rightarrow$ He$^{+}(n^{2}s)$, where $n$ is the principal quantum number of the one-electron (or hydrogen-like)
He$^{+}$ ion. The sudden approximation leads to the conservation of the electron angular momentum (or $L(L + 1)$ value) during nuclear $\beta^{-}$-decays in few-electron atoms. The total electron
spin (or $S(S + 1)$ value) is also conserved (as well as the spatial parities $\hat{\pi}$ of the incident and final wave functions) \cite{Fro05}. This means that bound-bound transitions from the
$1^{2}s$-state of the tritium atom to all bound $n^{2}s-$states of the one-electron helium ion (He$^{+}$) are possible. In this study the probabilities of such transitions have been determined to
high accuracy and can be found in Table I. Their numerical calculations are relatively simple, since we only need to determine the overlap of the two hydrogen-like, i.e., one-electron, wave
functions. The sum of such probabilities converges to the total probability of bound-bound transitions. The convergence of the $P_{bb}$ probability obtained with the use of the 100 - 1500
lowest $n^{2}s-$states in the He$^{+}$ ion can be understood from Table II. The difference between unity and this probability $P_{bb} \approx$ 0.97372735(10) equals the total probability $P_{bf}
\approx$ 0.02627265(10) of the bound-free transitions for the process, Eq.(\ref{eq1}). In other words, the $P_{bf}$ value is the total ionization probability of the He$^{+}$ ion during nuclear
$\beta^{-}$-decay in the tritium atom. For the one-electron ${}^3$H atom such a probability ($\approx$ 2.627 \%) is quite small, but in many atoms the probabilities of similar processes are larger.
For instance, for the $\beta^{-}$-decay of the Li atom from its ground $2^{2}S-$state, the corresponding probability is $\approx$ 15 \% \cite{Our1}. In many weakly-bound atomic ions, e.g., in the
two-electron H$^{-}$ ion \cite{Fro05}, the overall probability of bound-free transitions is comparable to, or even larger than, the total probability of bound-bound transitions. Numerical calculations
of the bound-bound state probabilities for other atomic and molecular systems can be found, e.g., in \cite{PRC1}, \cite{PRC2}. Here we do not want to discuss such calculations, since our current
goal is to investigate the bound-free transitions during nuclear $\beta^{-}$ decay in few-electron atoms.
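The bound-bound probabilities just discussed are easy to reproduce independently: each is the squared radial overlap $\mid \langle n^{2}s; Q = 2 \mid 1^{2}s; Q = 1 \rangle \mid^{2}$ of two hydrogen-like wave functions. A pure-Python sketch (radial functions built from the standard Laguerre recurrence; the grid parameters are illustrative, not those used for Tables I and II):

```python
import math

def laguerre1(m, x):
    """Generalized Laguerre polynomial L_m^{(1)}(x) by upward recurrence."""
    if m == 0:
        return 1.0
    prev, cur = 1.0, 2.0 - x                 # L_0^{(1)}, L_1^{(1)}
    for k in range(1, m):
        prev, cur = cur, ((2.0 * k + 2.0 - x) * cur - (k + 1.0) * prev) / (k + 1.0)
    return cur

def R_ns(n, Z, r):
    """Normalized radial function R_{n0}(r) of a hydrogen-like ion (a.u.)."""
    norm = math.sqrt((2.0 * Z / n) ** 3 / (2.0 * n * n))
    return norm * math.exp(-Z * r / n) * laguerre1(n - 1, 2.0 * Z * r / n)

def prob_ns(n, r_max=80.0, steps=8000):
    """|<ns; Z=2 | 1s; Z=1>|^2 by composite Simpson quadrature."""
    h = r_max / steps
    acc = 0.0
    for k in range(steps + 1):
        r = k * h
        w = 1.0 if k in (0, steps) else (4.0 if k % 2 else 2.0)
        acc += w * R_ns(n, 2.0, r) * R_ns(1, 1.0, r) * r * r
    return (acc * h / 3.0) ** 2

probs = {n: prob_ns(n) for n in range(1, 16)}
partial_bb = sum(probs.values())   # approaches P_bb ~ 0.9737 with an n^{-3} tail
```

The partial sum converges slowly because of the $n^{-3}$ tail of the probabilities, which is why the 100 - 1500 lowest $n^{2}s-$states are used in Table II.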
Convergence of the spectral integral ${\cal S}(Q)$ for the $\beta^{-}$-decay of the hydrogen/tritium atom with an infinitely heavy nucleus has been investigated in the following way. First, let us note
that our method is based on the division of the main velocity interval between $v_{min} = 0$ and $v_{max} = \alpha^{-1}$ into $N$ equal intervals $\delta = \frac{v_{max} - v_{min}}{N}$. To perform
numerical integration each of these intervals is separated into $2^{N_s-2} + 1$ interior sub-intervals which are used in the `extended' trapezoidal method
\cite{Recp} and \cite{Num}. In our calculations both $N$ and $N_s$ values have been varied, e.g., $N$ = 5000, 1000, $\ldots$ and $N_s$ = 6, 8, 10, 12. Finally, we have determined the resulting numerical
value of ${\cal S}(Q)$ in Eq.(\ref{Max567}) to high accuracy: ${\cal S}(Q) \approx$ 196.611833628395. This value has been used in all numerical calculations of probabilities.
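The `extended' trapezoidal refinement itself can be illustrated on an integral with a known value; a minimal sketch (the $N_s$ values mirror those quoted above):

```python
import math

def trapezoid(f, a, b, panels):
    """Composite ('extended') trapezoidal rule with equal panels on [a, b]."""
    h = (b - a) / panels
    acc = 0.5 * (f(a) + f(b))
    for k in range(1, panels):
        acc += f(a + k * h)
    return acc * h

# Known test integral: int_0^pi sin(v) dv = 2, refined as N_s grows.
errors = []
for Ns in (6, 8, 10, 12):
    approx = trapezoid(math.sin, 0.0, math.pi, 2 ** (Ns - 2))
    errors.append(abs(approx - 2.0))
```

Each increase of $N_s$ by two quarters the step, so the error falls by roughly a factor of sixteen per refinement.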
Table III contains numerical results for probabilities of the bound-free transitions $p_{bf}(0,v)$ during nuclear $\beta^{-}$ decay of the hydrogen/tritium atom with infinitely heavy nucleus. In these
probabilities the velocities of the final electrons (in $a.u.$) are bounded between $v_1 = 0$ and $v_2 = v$. Note again that these probabilities ($p_{bf}(0,v)$) are the conditional probabilities of the
bound-free transitions, i.e. all bound-bound transitions are ignored. To obtain the total probabilities of the bound-free transitions the $p_{bf}(0,v)$ values from Table III must be multiplied by the
factor $P_{bf} \approx$ 0.02627265(10). Then one finds for the overall probability to observe secondary (or `free') electrons following nuclear $\beta^{-}$-decay in atoms with the velocity $v$ bounded
between $v_1$ and $v_2$ values: $\overline{P}_{bf}(v_1,v_2) = P_{bf} (p_{bf}(0,v_2) - p_{bf}(0,v_1))$. For instance, in the case of nuclear $\beta^{-}$ decay of the hydrogen/tritium atom with infinitely
heavy nucleus the overall probability to observe the secondary (or `free') electron with the velocity located in the interval $0.6 \le v \le 3.0$ is $\overline{P}_{bf}(v_1,v_2) = P_{bf} \cdot
(p_{bf}(0,v_2) - p_{bf}(0,v_1)) \approx 0.02627265 \cdot (0.901846525528880670 - 0.0659857766537821459) \approx 0.0219602769$, or 2.196028 \% of all $\beta^{-}$ decays. The first conditional probability
$p_{bf}(0,v_2)$ corresponds to $v_2 = 3.0$, while the second value $p_{bf}(0,v_1)$ has been determined for $v_1 = 0.6$. Note that for the $\beta^{-}$-decaying tritium atom the velocities of more than
90 \% of all secondary electrons are located between $v = 0.4$ and $v = 3.2$ (in $a.u.$). This range of velocities of secondary electrons corresponds to the maximum of the $v-$distribution for the ${}^3$H
$\rightarrow$ ${}^3$He$^{2+}$ + $e^{-} + e^{-}(\beta) + \overline{\nu}$ decay. Probabilities to observe secondary electrons with different velocity distributions can be evaluated analogously by using our
results from Table III. In many cases it is more convenient to use the (partial) probabilities $p(v_1, v_2)$ defined for proximate numerical values of the two velocities $v_1$ and $v_2$, rather than the
probabilities $p(0, v_1)$ and $p(0, v_2)$ defined above. The corresponding numerical values of these probabilities $p(v_1, v_2)$ (for $v_1 \ne 0$) can be found in Table IV.
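The bookkeeping in this worked example is compact enough to script directly; a sketch using only the numbers quoted above from Tables II and III:

```python
P_bb = 0.97372735                    # total bound-bound probability (Table II)
P_bf = 1.0 - P_bb                    # total bound-free (ionization) probability
# Conditional probabilities p_bf(0, v) quoted above from Table III:
p_to_30 = 0.901846525528880670       # v_2 = 3.0 a.u.
p_to_06 = 0.0659857766537821459      # v_1 = 0.6 a.u.
# Overall probability to observe a secondary electron with 0.6 <= v <= 3.0:
P_abs = P_bf * (p_to_30 - p_to_06)
print(f"{P_abs:.9f}")                # about 2.196 % of all decays
```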
\section{$\beta^{-}$-decays in few-electron atoms}
Our original interest in problems discussed in this study was based on the fact that in actual applications it is often important to know not only the value $P_{bf}$, but also the so-called partial
probabilities $p_{i \rightarrow {\bf p}}$, where $i$ is the incident state in the parent atom (tritium), while the notation ${\bf p}$ stands for the final state of the `free' electron (in momentum
space) which moves in the field of the final ion (He$^{2+}$ ion). We have developed an effective method for numerical calculations of such probabilities. This method is described in detail below.
By using the formulas Eq.(\ref{Max565}) and Eq.(\ref{Max567}) one can determine all final state probabilities and $p-$ and $v-$spectra of the secondary (or `free') electrons emitted during nuclear
$\beta^{-}$-decay in few-electron atoms. In general, our additional investigations of atomic ionization during nuclear $\beta^{-}$-decay in few-electron atoms unambiguously lead to the conclusion that
the spectra of secondary electrons, partial probabilities of bound-free transitions $p_{i \rightarrow {\bf p}}$, and the total probability of such transitions $P_{bf}$ depend upon the electron-electron
correlations in the incident bound state of the maternal atom. This means that we can study electron-electron correlations in the maternal (or parent) atom by analyzing the spectra of the secondary
electrons emitted during its nuclear $\beta^{-}$ decay. This conclusion is important for future experimental studies.
To illustrate the general situation with few-electron atoms and ions let us consider $\beta^{-}$-decaying two-electron atoms and ions, i.e., He-like atomic systems undergoing $\beta^{-}$-decay. Simple and very
compact analytical expressions for the bound state wave functions of two-electron atoms/ions can be derived in relative and/or perimetric coordinates \cite{Fro98}. The exact wave functions of such
atomic systems are truly correlated and depend upon all three relative coordinates $r_{32}, r_{31}$ and $r_{21}$. It is very difficult to explain in a few lines all aspects of integration in relative
and/or perimetric coordinates and we do not attempt to do so here. For our purposes in this study we can operate with the following approximate analytical expression for the two-electron wave function
(see, e.g., \cite{LLQ} and \cite{March}):
\begin{eqnarray}
\Psi = N_1 N_2 \; \; \exp[ -(Q - q) (r_{1N} + r_{2N})] = \frac{(Q - q)^{3}}{\pi a^{3}_{0}} \; \; \exp[ -(Q - q) (r_{1N} + r_{2N})] \label{two-el}
\end{eqnarray}
where $Q$ is the electric charge of atomic nucleus ($Q \geq 2$), while $Q - q$ is the `effective' electric charge of atomic nucleus. A small correction $q$ ($q \le 1$) is introduced in this equation
to represent an `effective' contribution of electron-electron correlations. In Eq.(\ref{two-el}) the indexes 1 and 2 stand for the two atomic electrons, while index $N$ designates the atomic nucleus
which is assumed to be infinitely heavy. It can be shown that such a simple wave function provides a quite accurate approximation to the actual two-electron wave function. For the ground state of the
He atom, the approximate wave function, Eq.(\ref{two-el}), reproduces $\approx$ 98.15 \% of its `exact' total energy. The optimal value of the parameter $q$ in Eq.(\ref{two-el}) equals $\frac{5}{16}$
\cite{LLQ}, \cite{March}. On the other hand, the approximate wave function is represented in a factorized form (see, Eq.(\ref{two-el})), which contains no mix of inter-electron coordinates. Now, we
can repeat all calculations made in this study by using the approximate wave function, Eq.(\ref{two-el}). Finally, we arrive at the following expression for the $v-$spectrum of secondary electrons
emitted during the nuclear $\beta^{-}$ decay of the two-electron atom/ion with the nuclear electric charge $Q$:
\begin{eqnarray}
& & S_e(v; Q; q) = F(Q; q) \frac{32 Q_1}{{\cal S}(Q;q) \alpha Q_2} \; \; \Bigl[1 + \exp\Bigl(-2 \pi \frac{Q_2 \alpha}{\gamma v}\Bigr)\Bigr] \; \; \frac{(Q^{2}_{1} + Q^{2}_{2})^2 \gamma^{4}
v^{3}}{(Q^{2}_1 + \gamma^2 v^2)^4} \nonumber \\
& & \times \exp\Bigl[-4 \Bigl(\frac{\alpha Q_2}{\gamma v}\Bigr) \arctan\Bigl(\frac{\alpha Q_2}{\gamma v}\Bigr)\Bigr] \label{Max568}
\end{eqnarray}
where $Q_1 = Q - q, Q_2 = Q + 1$ and the additional factor $F(Q; q)$ is written in the form
\begin{equation}
F(Q; q) = \frac{\sqrt{Q^3 (Q - q)^3}}{\Bigl(Q - \frac{q}{2}\Bigr)^{3}}
\end{equation}
This factor is, in fact, the probability that the second electron will stay bound (in the ground $1s-$state of the newly formed hydrogen-like ion) during the nuclear $\beta^{-}$-decay in the two-electron
He-like atom/ion. As one can see from Eq.(\ref{Max568}) the correction for the electron-electron correlations (factor $q$ from Eq.(\ref{two-el})) is included in the final expression for the spectral
function $S_e(v; Q; q)$, Eq.(\ref{Max568}), of secondary electrons. In addition to the appearance of the extra factor $F(Q;q)$ in Eq.(\ref{Max568}), the correction $q$ also changes the `effective' electric charge
of the nucleus in the incident atom/ion ($Q_1 = Q - q$) and produces changes in the normalization constant ${\cal S}(Q;q)$ in the expression for the spectral function (or spectrum) of secondary electrons.
These observations illustrate the idea that electron-electron correlations in the maternal atom directly affect the explicit form of the spectra of secondary electrons emitted during the nuclear $\beta^{-}$
decay. For few-electron atoms this statement can be rigorously proved with the use of the natural orbital expansions for highly accurate (or truly correlated) variational wave functions for such systems
(see, e.g., \cite{MCQ}, \cite{David}). Note again that in the non-relativistic approximation we have to assume that $\gamma = 1$ in Eq.(\ref{Max568}) and $v$ is expressed in atomic units, where the unit
velocity equals the $\frac{e^2}{\hbar} = \alpha c$ value.
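As an illustration, the shape of the spectral function in Eq.(\ref{Max568}) can be evaluated numerically. The Python sketch below reproduces the $v$-dependent factors of Eq.(\ref{Max568}) term by term and fixes the overall normalization numerically instead of through ${\cal S}(Q;q)$; the velocity grid, the non-relativistic choice $\gamma = 1$, and the absorption of all constant prefactors into the normalization are illustrative assumptions.

```python
import numpy as np

ALPHA = 1.0 / 137.036   # fine-structure constant

def F_bound(Q, q=5.0 / 16.0):
    # Probability F(Q;q) that the spectator electron remains bound
    # in the 1s state of the daughter ion.
    return np.sqrt(Q**3 * (Q - q)**3) / (Q - q / 2.0)**3

def spectral_shape(v, Q, q=5.0 / 16.0, gamma=1.0):
    # v-dependent factors of Eq. (Max568); all constant prefactors
    # are absorbed into the numerical normalization below.
    Q1, Q2 = Q - q, Q + 1.0
    eta = ALPHA * Q2 / (gamma * v)      # Sommerfeld-like parameter
    return ((1.0 + np.exp(-2.0 * np.pi * eta))
            * (Q1**2 + Q2**2)**2 * gamma**4 * v**3
            / (Q1**2 + gamma**2 * v**2)**4
            * np.exp(-4.0 * eta * np.arctan(eta)))

# Non-relativistic case (gamma = 1), v in atomic units; normalize on a grid.
v = np.linspace(1e-4, 30.0, 200001)
s = spectral_shape(v, Q=2.0)
norm = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(v))
S_e = F_bound(2.0) * s / norm   # integrates to the bound-state probability
```

For He ($Q = 2$, $q = \frac{5}{16}$) this gives $F(Q;q) \approx 0.989$, so the curve $S_e(v)$ integrates to the probability that the spectator electron stays bound.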
In general, it is hard to determine the final state probabilities in few-electron atoms/ions to the same accuracy as we did above for the one-electron tritium atom. The main problem is related to accurate
evaluations of the electron-electron correlations in such atomic systems. Another problem in actual calculations of the overlap integrals between the incident and final wave functions follows from the fact
that the total numbers of essential (or internal) variables are different in these wave functions. For simplicity, let us consider the nuclear $\beta^{-}$-decay of the three-electron Li atom which originally
was in its ground $2^2S-$state. In this case Eq.(\ref{eq1}) takes the form
\begin{equation}
{\rm Li} \rightarrow {\rm Be}^{2+} + e^{-} + e^{-}(\beta) + \overline{\nu} \label{eq11}
\end{equation}
Suppose we want to use the bound state wave functions for the incident Li atom and Be$^{2+}$ ion. The incident wave function of the Li-atom contains six inter-particle coordinates, e.g., three
electron-nucleus coordinates $r_{4i}$ ($i$ = 1, 2, 3) and three electron-electron coordinates $r_{12}, r_{13}, r_{23}$. In the final wave function which describes the Be$^{2+}$ ion and a `free' electron
one finds three electron-nucleus coordinates $r_{4i}$ ($i$ = 1, 2, 3) and only one electron-electron coordinate $r_{12}$. Here we assume that the `free' electron wave function, Eq.(\ref{Cwave}), depends
upon the $r_{43} = r_{34}$ electron-nucleus coordinate only. Briefly, this means that the two electron-electron coordinates $r_{13}, r_{23}$ are lost during the sudden transition from the incident to the
final state in Eq.(\ref{eq11}). In atomic systems with five-, six- and more electrons there are additional problems related to the appearance of the so-called `unnecessary' relative coordinates in the
bound state wave functions (for more details, see, e.g., \cite{Fro06}). For instance, there are ten relative coordinates in an arbitrary four-electron atom/ion (the number of ways to choose two particles
out of five is $C^{2}_{5}$ = 10), but only nine of them are truly independent in three-dimensional space. Here we cannot discuss all aspects of these interesting problems and note only that each of these two
problems presents significant difficulties for accurate computations of actual atoms and ions.
Finally, we have developed an approximate method which can be used to determine the final state probabilities for all states which arise after the nuclear $\beta^{-}$-decay and which belong to the
continuous spectrum of the final ion, Eq.(\ref{eq11}). This method is based on the natural orbital expansions of all few-electron wave functions which are included in the overlap integral between wave
functions of the incident and final states. For the process, Eq.(\ref{eq11}), the wave function of the incident state describes the ground $2^2S-$state of the three-electron Li atom. The final state wave
function is the product of the bound state wave function of the two-electron Be$^{2+}$ ion and the one-electron wave function of the `free' electron, Eq.(\ref{Cwave}) which moves in the central field of
this ion. In the method of natural orbital expansions the bound state wave functions of few- and many-electron atoms are represented by the sums of the products of their natural orbitals $\chi_{k}(r_{i})
= \chi_{k}(r_{iN})$ (the symbol $N$ stands here for the nucleus) which are some simple single-electron functions of one radial variable $r_{iN} = r_{i}$ only. In other words, we are looking for the best
approximation of the actual wave function of an $N_e-$electron atomic system by linear combinations of $N_e$-products of functions each of which depends upon one radial electron-nucleus coordinate $
r_{iN}$ ($i = 1, \ldots, N_e$) only. The natural orbital expansion is the `best' of all such linear combinations in Dirac's sense \cite{Dirac}, since the first-order density matrix is diagonal in the
natural orbitals.
In our case for the three-electron Li-atom and final two-electron Be$^{2+}$ ion we can write the following natural orbital expansions
\begin{eqnarray}
\Psi_{L=0}(\bigl\{ r_{ij} \bigr\})({\rm Li}) &=& \sum^{N_1}_{n=1} C_n \chi^{(1)}_{n}(r_{1}) \chi^{(2)}_{n}(r_{2}) \chi^{(3)}_{n}(r_{3}) \label{no1} \\
\Psi_{L=0}(\bigl\{ r_{ij} \bigr\})({\rm Be}^{2+}) &=& \sum^{N_2}_{k=1} B_k \xi^{(1)}_{k}(r_{1}) \xi^{(2)}_{k}(r_{2}) \label{no2}
\end{eqnarray}
respectively. Here $\chi^{(i)}_{n}(r_{i})$ and $\xi^{(i)}_{k}(r_{i})$ are the (atomic) natural orbitals constructed for the three-electron Li atom and two-electron Be$^{2+}$ ion (see, e.g., \cite{MCQ}, \cite{David}).
The coefficients $C_n$ and $B_k$ are the coefficients of the natural orbital expansions constructed for the $2^2S$-state of the Li atom and for the ground $1^1S-$state of the Be$^{2+}$ ion, respectively. In
general, these coefficients are determined as the solutions (eigenvectors) of associated eigenvalue problems. Note that each of these natural orbitals depends upon the corresponding electron-nucleus coordinate
$r_{i}$ only (or $r_{4i}$ coordinate in our notation). In general, the natural orbital expansions do not include any of the electron-electron (or correlation) coordinates. The use of the natural orbital
expansions for the few-electron wave functions allows one to simplify drastically all calculations of the final state probabilities. Indeed, by using the natural orbital expansions one can show that all overlap
integrals are represented as the product of three one-dimensional integrals, or as finite sums of such products. Briefly, we can say that application of the natural orbital expansions for few-electron atomic
wave functions allows one to reduce calculations of the overlap integrals to a very simple procedure, e.g., for the process, Eq.(\ref{eq11}), one finds for the probability amplitude $M_{if}$:
\begin{eqnarray}
M_{if} &=& \sum^{N_1}_{n=1} \sum^{N_2}_{k=1} C_n B_k \int_{0}^{+\infty} \chi^{(1)}_{n}(r_{1}) \xi^{(1)}_{k}(r_{1}) r^2_1 dr_1 \int_{0}^{+\infty} \chi^{(2)}_{n}(r_{2}) \xi^{(2)}_{k}(r_{2})
r^2_2 dr_2 \nonumber \\
& & \times \int_{0}^{+\infty} \chi^{(3)}_{n}(r_{3}) \phi_{kl}(r_3) r^2_3 dr_3 \label{amplt}
\end{eqnarray}
where $\phi_{kl}(r_3)$ are the functions from Eq.(\ref{Cwave}). In other words, computations of the overlap integrals are now reduced to the calculation of one-dimensional integrals and products of such
integrals. The total number of integrals used in Eq.(\ref{amplt}) equals the number of bound electrons in the parent (or incident) atom/ion. In other words, in this method we do not face any problem
related either to different numbers of independent variables in the incident and final wave functions, or to the existence of `unnecessary' relative coordinates in many-electron atomic systems. The formula,
Eq.(\ref{amplt}), can be used to determine the overall probabilities of the $\beta^{-}$-decay with the emission of a `free' electron during nuclear $\beta^{-}$ decay in three-electron atoms/ions. Analogous
expressions for the probability amplitudes $M_{if}$ and final state probabilities $P_{if} = \mid M_{if} \mid^2$ can be derived for arbitrary few- and many-electron atoms and ions.
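To make the factorized structure of Eq.(\ref{amplt}) concrete, the sketch below evaluates a single term of the double sum. Simple hydrogen-like $1s$ functions stand in for the natural orbitals and an $l = 0$ spherical wave stands in for $\phi_{kl}$; all screened charges and the wavenumber $k$ below are illustrative assumptions, not the output of an actual variational calculation.

```python
import numpy as np

r = np.linspace(0.0, 60.0, 600001)       # radial grid in atomic units
dr = r[1] - r[0]

def slater_1s(z, r):
    # Normalized hydrogen-like 1s radial function, used here as a
    # stand-in for a natural orbital chi_n or xi_k.
    return 2.0 * z**1.5 * np.exp(-z * r)

def radial_overlap(f, g):
    # One-dimensional overlap integral  int_0^inf f(r) g(r) r^2 dr.
    h = f * g * r**2
    return np.sum(0.5 * (h[1:] + h[:-1])) * dr

# One illustrative term of Eq. (amplt): two core overlaps and one
# bound-free overlap (all effective charges below are assumptions).
chi_core = slater_1s(2.69, r)            # Li core electron, screened charge
chi_val  = slater_1s(0.64, r)            # Li valence electron
xi_core  = slater_1s(3.69, r)            # Be^{2+} electron, screened charge
k = 1.0                                  # free-electron wavenumber (a.u.)
phi_free = np.sin(k * r) / np.where(r > 0.0, k * r, 1.0)  # l = 0 spherical wave

M_term = (radial_overlap(chi_core, xi_core)**2
          * radial_overlap(chi_val, phi_free))
```

The full amplitude $M_{if}$ is the double sum of such products weighted by $C_n B_k$; since every factor is a one-dimensional quadrature, the cost grows only linearly with the number of retained natural orbitals.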
\section{Formation of fast secondary electrons}
In this Section we briefly discuss the emission of very fast secondary electrons from $\beta^{-}$-decaying few-electron atoms and ions. The velocities of such `fast' secondary electrons significantly exceed
`averaged' velocities of any `secondary' electron emitted in the process, Eq.(\ref{eq1}). In a number of books and textbooks such fast electrons are often called the $\delta-$electrons. Sudden acceleration
of these electrons to large velocities is related to the transferring of a large amount of momentum from a very fast, `relativistic' $\beta^{-}$-electron to one of the atomic electrons. Formally, this
process can be written in the form
\begin{equation}
X \rightarrow Y^{2+} + e^{-}(\delta) + e^{-}(\beta) + \overline{\nu} \label{fse}
\end{equation}
where $e^{-}(\delta)$ is the fast secondary electron emitted and accelerated to relatively large velocities during nuclear $\beta^{-}$-decay. It is clear that the probability of such a process is small. In the
lowest-order approximation such a probability is evaluated as $P \approx \alpha^4 P_e$, where $P_e$ is the probability of free-electron emission in the process, Eq.(\ref{fse}), and $\alpha = \frac{e^2}{\hbar
c} \approx \frac{1}{137}$ is the dimensionless fine-structure constant which is a small numerical value in QED. More accurate evaluation leads to a formula which contains additional factors which increase the
numerical value of $P$. Let us derive the formula which can be used to evaluate the probability of emission of the fast $\delta-$electrons during $\beta^{-}$-decay in few-electron atoms and ions.
In reality, the fast secondary electron arises when a substantial amount of momentum-energy is transferred from the very fast $\beta^{-}$-electron to a slow atomic electron. Therefore, we can write the
following integral relation between the spectral functions of the primary and secondary electrons \cite{Fro2016}
\begin{equation}
S_{\delta}(\gamma_2) = \int_{1}^{\gamma_{max}} F(\gamma_2, \gamma_1) S_{\beta}(\gamma_1) d\gamma_1 \label{fse1}
\end{equation}
where $S_{\beta}(\gamma_1)$ and $S_{\delta}(\gamma_2)$ are the spectral functions of the primary electrons (or $\beta^{-}$-electrons) and secondary electrons (or $\delta$-electrons), respectively. In this
equation the notation $F(\gamma_2, \gamma_1)$ stands for the kernel of an integral transformation, which is a real function, if both arguments are bounded between unity and $\alpha^{-1}$. The explicit form
of this kernel has been found in \cite{Fro2015}. To express this kernel let us introduce the value $\Delta = \frac{\gamma_2 - 1}{\gamma_1 - 1}$, where $\gamma_1$ and $\gamma_2$ are the $\gamma-$factors of
the $\beta^{-}-$ and $\delta-$electrons, respectively. By using this new variable ($\Delta$) we can write the following formula \cite{Fro2015} for the probability to emit one $\delta-$electron whose
$\gamma-$factor equals the $\gamma_{2}$ value
\begin{eqnarray}
P(\gamma_2) = \int_{1}^{\gamma_{max}} \Bigl(\frac{d\sigma}{d \Delta}\Bigr) \Bigl(\frac{d\Delta}{d\gamma_1}\Bigr) S_{\beta}(\gamma_1) d\gamma_1 = (\gamma_2 - 1) \int_{1}^{\gamma_{max}}
\Bigl(\frac{d\sigma}{d \Delta}\Bigr) S_{\beta}(\gamma_1) \frac{ d\gamma_1}{(\gamma_1 - 1)^2} \label{fse15}
\end{eqnarray}
where $\frac{d\Delta}{d\gamma_1} = \frac{\gamma_2 - 1}{(\gamma_1 - 1)^2}$ and the formula for the differential cross-section $\frac{d\sigma}{d\Delta}$ is \cite{Fro2015}:
\begin{eqnarray}
\frac{d\sigma}{d\Delta} &=& \zeta \frac{16 N_e \pi \alpha^4 a^{2}_{0} \gamma^{2}_1}{(\gamma^{2}_1 - 1) (\gamma_1 - 1)} \; \; \langle \frac{a^{2}_0}{r^{2}_{eN}} \rangle \; \; \frac{1}{\Delta^2 (1
- \Delta)^2} \Bigl\{ 1 - \Bigl[ 3 - \Bigl(\frac{\gamma_1 - 1}{\gamma_1}\Bigr)^2 \Bigr] \Delta ( 1 - \Delta) \nonumber \\
&+& \Bigl(\frac{\gamma_1 - 1}{\gamma_1}\Bigr)^2 \Delta^2 (1 - \Delta)^2 \Bigr\} \label{fse155}
\end{eqnarray}
where $N_e$ is the total number of bound electrons in the parent $\beta^{-}$-decaying atom/ion, $\langle \frac{a^{2}_0}{r^{2}_{eN}} \rangle = \langle \frac{1}{r^{2}_{eN}} \rangle$ (in $a.u.$) is the
atomic expectation value of $\frac{a^{2}_0}{r^{2}_{eN}} = \frac{1}{r^{2}_{eN}}$ computed for all bound (atomic) electrons, $\zeta$ is some numerical constant, while $\alpha$ and $a_0$ are the
fine-structure constant and Bohr radius, respectively. Note that the formula, Eq.(\ref{fse155}), can be considered as an integral transformation of the $\beta-$electron spectrum (or spectrum of the
primary fast electrons). The explicit formula for the spectrum of secondary $\delta-$electrons directly follows from Eqs.(\ref{fse15}) - (\ref{fse155}) which must be integrated over $\gamma_1$ from 1 to
$\gamma_{max} = \frac{\Delta E}{m_e c^2}$, where $\Delta E$ is the total energy released in the nuclear $\beta^{-}$-decay. This problem can be solved by integrating term-by-term in Eq.(\ref{fse15}), where
$\frac{d\sigma}{d \Delta}$ must be taken from Eq.(\ref{fse155}).
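The integrations in Eqs.(\ref{fse15}) - (\ref{fse155}) are straightforward to carry out numerically. In the Python sketch below all constant prefactors ($\zeta$, $N_e$, $\alpha^4 a^{2}_{0} \langle 1/r^{2}_{eN} \rangle$, the normalization of $S_{\beta}$) are set to unity, and, as an additional assumption not stated in the formulas above, the energy-transfer fraction is restricted to $\Delta \le 1/2$, the usual convention by which the secondary electron is the slower of the two outgoing electrons.

```python
import numpy as np

def beta_spectrum(g, gmax):
    # Allowed beta-spectrum shape, Eq. (eq555a), with the Fermi
    # function set to unity (light atoms).
    s = (gmax - g)**2 * np.sqrt(np.clip(g**2 - 1.0, 0.0, None)) * g
    return np.where((g > 1.0) & (g < gmax), s, 0.0)

def dsigma_dDelta(g1, Delta):
    # Shape of the cross-section, Eq. (fse155); the constant prefactor
    # zeta * 16 N_e pi alpha^4 a_0^2 <1/r^2> is set to one.
    b = ((g1 - 1.0) / g1)**2
    pref = g1**2 / ((g1**2 - 1.0) * (g1 - 1.0))
    brace = (1.0 - (3.0 - b) * Delta * (1.0 - Delta)
             + b * Delta**2 * (1.0 - Delta)**2)
    return pref * brace / (Delta**2 * (1.0 - Delta)**2)

def P_delta(g2, gmax, n=20001):
    # Relative probability of a delta-electron with gamma-factor g2,
    # Eq. (fse15), integrated over the primary spectrum.
    g1 = np.linspace(2.0 * g2 - 1.0, gmax, n)   # Delta <= 1/2 (assumed)
    Delta = (g2 - 1.0) / (g1 - 1.0)
    f = dsigma_dDelta(g1, Delta) * beta_spectrum(g1, gmax) / (g1 - 1.0)**2
    return (g2 - 1.0) * np.sum(0.5 * (f[1:] + f[:-1])) * (g1[1] - g1[0])
```

As expected for a knock-on process, the resulting relative probability falls steeply with the energy of the $\delta$-electron.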
The final step of our procedure is to find an accurate expression for the spectrum of the primary $\beta^{-}$ electrons which must be used in Eq.(\ref{fse1}). This problem was considered in a large number
of papers \cite{Fermi} - \cite{Bethe}. Experimental energy spectra of the emitted primary $\beta^{-}$ electrons can be found, e.g., in \cite{Cook} and \cite{Neary}, where the $\beta^{-}$ decays of the
${}^{64}$Cu and ${}^{210}$Bi atoms were studied in detail. As follows from these studies the spectral function of the primary $\beta^{-}$-electrons can be written in the form:
\begin{eqnarray}
S_{\beta}(\gamma) d\gamma &=& {\cal C}_{\gamma} \cdot F(Q + 1, (\gamma - 1) m_e c^2) \; \; \Bigl[ \frac{\Delta E^{\prime} + m_e c^{2}}{m_e c^{2}} - \gamma - 1 \Bigr]^2 (\gamma^2 - 1)^{\frac12}
\; \; \gamma d\gamma \label{eq55a} \\
&=& {\cal C}^{\prime}_{\gamma} \cdot F(Q + 1, \gamma - 1) \; \; \Bigl[ \frac{\Delta E^{\prime}}{m_e c^{2}} - \gamma \Bigr]^2 (\gamma^2 - 1)^{\frac12} \; \; \gamma
d\gamma \nonumber
\end{eqnarray}
where $\Delta E^{\prime} = \Delta E - m_e c^2$. This expression almost exactly coincides with the formula, Eq.(210), derived in \cite{Bethe}, i.e.
\begin{eqnarray}
S_{\beta}(\gamma) d\gamma = {\cal C}^{\prime}_{\gamma} \; \; \Bigl[ \frac{\Delta E^{\prime}}{m_e c^{2}} - \gamma \Bigr]^2 (\gamma^2 - 1)^{\frac12} \; \; \gamma d\gamma \label{eq555a}
\end{eqnarray}
The spectrum, Eq.(\ref{eq555a}), contains no Fermi function as was introduced by Fermi in \cite{Fermi}. In general, the assumption that $F(Q + 1, \gamma - 1) = 1$ works well for light atoms, but
for intermediate ($Q \ge 40$) and heavy ($Q \ge 75$) atoms the Fermi function in Eq.(\ref{eq55a}) is really needed. As follows from Eq.(\ref{eq555a}) the normalization constant ${\cal C}^{\prime}_{\gamma}$
is a function of the energy released during the nuclear $\beta^{-}$ decay, i.e. of the $\frac{\Delta E^{\prime}}{m_e c^{2}}$ ratio, where $m_e$ = 0.5110998910 $MeV/c^2$. Inverse values of the
normalization factors $\Bigl({\cal C}^{\prime}_{\gamma}\Bigr)^{-1}$ determined numerically for different $\Delta E^{\prime}$ values can be found in Table V. By using the formulas, Eqs.(\ref{fse15}) -
(\ref{fse155}) and Eq.(\ref{eq55a}), one can obtain a closed analytical formula for the probabilities of emission and energy/velocity spectrum of the fast secondary electrons (or $\delta-$electrons)
emitted during the nuclear $\beta^{-}$ decay in arbitrary few- and many-electron atoms/ions.
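The normalization constants just mentioned reduce to a one-dimensional quadrature. The sketch below computes $\Bigl({\cal C}^{\prime}_{\gamma}\Bigr)^{-1} = \int_1^{\gamma_{0}} S_{\beta}(\gamma) d\gamma$ for the Fermi-function-free spectrum, Eq.(\ref{eq555a}), where $\gamma_0$ denotes the endpoint $\gamma$-factor at which the shape vanishes; extending it to Eq.(\ref{eq55a}) only requires multiplying the integrand by $F(Q + 1, \gamma - 1)$.

```python
import numpy as np

def beta_shape(g, g0):
    # Allowed spectrum shape of Eq. (eq555a); g0 is the endpoint
    # gamma-factor at which the spectrum vanishes.
    return (g0 - g)**2 * np.sqrt(np.clip(g**2 - 1.0, 0.0, None)) * g

def inverse_norm(g0, n=400001):
    # (C'_gamma)^{-1} = int_1^{g0} S_beta(gamma) dgamma
    g = np.linspace(1.0, g0, n)
    s = beta_shape(g, g0)
    return np.sum(0.5 * (s[1:] + s[:-1])) * (g[1] - g[0])
```

For $\gamma_0 = 2$ (an endpoint kinetic energy of one electron rest mass) the quadrature reproduces the closed-form value of the classical allowed-spectrum integral, $\approx 0.3121$.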
\section{Conclusions}
We have considered nuclear $\beta^{-}$-decays in few-electron atoms and ions which lead to an additional ionization of the final ion in which one of the atomic electrons becomes unbound. The procedure is
developed for determining the corresponding transition probabilities and the velocity/energy spectrum of secondary electrons. Formation of fast secondary electrons ($\delta-$electrons) during
nuclear $\beta^{-}$-decay in few-electron atoms/ions is also briefly discussed.
It should be mentioned that the important role of bound-free transitions during the nuclear $\beta^{-}$ decay in few-electron atoms has been emphasized since earlier works by Migdal (see, e.g., \cite{LLQ},
\cite{MigK} and references therein). In this study we have chosen the proper wave functions to describe the unbound (or `free') electron which is emitted during the nuclear $\beta^{-}$ decay. This allows
us to solve a number of long-standing problems, e.g., to derive the explicit formulas for the velocity/energy spectra of secondary electrons emitted during nuclear $\beta^{-}$-decay. Furthermore, now it is
absolutely clear that the spectra of the emitted secondary electrons have different forms for different few-electron atoms/ions, since these spectra strongly depend upon the electron-electron correlations in
the bound state of the parent atom/ion. From here one finds the `similarity law' between the velocity spectra of secondary electrons emitted during nuclear $\beta^{-}$-decay of two different atoms/ions
which have the same (or similar) electron configurations. We also describe an approach which can be useful for derivation of the velocity/energy spectrum of very fast secondary electrons ($\delta-$electrons)
which are observed during nuclear $\beta^{-}$ decays in few- and many-electron atoms/ions.
\section{Introduction}
Newton's third law is a central pillar of physics. Much of what we
know about the dynamical evolution of galaxies comes from $N$-body
simulation, but most $N$-body codes use approximations that break the
third law.
A well-known example of the consequences of breaking it is provided by
the sinking satellite problem \citep{HW89,W89,VW99}; the dynamical
friction felt by the satellite is grossly overestimated if one
``pins'' the centre of the host galaxy, ignoring the galaxy's $l=1$
dipole response.
This example is perhaps extreme, but there are many other situations
where one is interested in the detailed response of a galaxy to
asymmetric perturbations and would like to be able to model it without
having to worry about artifacts arising from violations of Newton's third law.
Examples include modelling bar-halo
interactions (see \citet{McMehnen} and references therein) and the
wandering of central supermassive black holes.
This paper describes the $N$-body code {\sc grommet} (GRavity On Multiple
Meshes Economically and Transparently), which has been designed
specifically to model the detailed dynamical evolution of individual
galaxies without using any approximations that violate Newton's third
law. I assume that the galaxy is collisionless. It is
completely described by a distribution function (DF) $f(\b x,\b v;t)$,
which gives the (mass) density of particles in phase space, along with
the potential~$\Phi(\b x;t)$ generated by this DF and any external
sources. The evolution of $f$ is governed by the collisionless
Boltzmann equation (CBE),
\begin{equation}
\label{eq:CBE}
\frac{\partial f}{\partial t} + \b v\cdot\nabla f + \b a\cdot\frac{\partial f}{\partial\b v}=0,
\end{equation}
where the accelerations $\b a\equiv -\partial\Phi/\partial\b x$. As \citet{HO92}
and \citet{LCB} emphasise, in a collisionless $N$-body code particles
are not to be thought of as representing stars or groups of stars.
Instead one is using the method of characteristics to
integrate~(\ref{eq:CBE}), estimating the accelerations $\b a(\b x)$ by
Monte Carlo sampling. Of course, the shot noise in these estimates
means that in practice any simulation will never be perfectly
collisionless. Therefore it is important to make $N$ as large as
possible in order to minimize the effects of this noise. So, {\sc grommet}
has been designed to be both fast and economical on memory.
In section~\ref{sec:potsolve} below I describe the multiple-mesh
procedure used by {\sc grommet} to estimate accelerations.
Section~\ref{sec:move} shows how this leads naturally to a
momentum-conserving block-timestep integrator based on Duncan, Levison
\& Lee's (1998) potential-splitting scheme. In
section~\ref{sec:tests} I present the results of some tests and also
compare {\sc grommet}'s performance against other codes'.
Section~\ref{sec:summary} sums up. For completeness, I include in an
Appendix an explanation of James' (1977) method, which is used in
Section~\ref{sec:potsolve}.
\section[]{Potential solver}
\label{sec:potsolve}
The task of the potential solver in a collisionless $N$-body code is to
estimate the accelerations
\begin{equation}
\label{eq:accels}
\b a(\b x) = -\nabla\int {G\rho(\b x')\over |\b x-\b x'|}\,{\rm d}^3\b x',
\end{equation}
where one does not know the density distribution $\rho(\b x)$
explicitly, but instead only has a discrete sample of $N$ particles
with positions $\b x_i$ and masses $m_i$ drawn from it.
\subsection{Particle-mesh method}
At the heart of {\sc grommet}'s potential solver is the particle mesh (PM)
method \citep{Hock}. It uses a cubical mesh, with vertices at
positions $\b x_{ijk}$, spaced a distance $h$ apart. The procedure
for obtaining an initial estimate of the accelerations
(eq.~\ref{eq:accels}) felt by each particle follows.
\begin{enumerate}
\item Loop over all $N$ particles using cloud-in-cell interpolation
to build up the discretized density distribution $\rho_{ijk}=\rho(\b
x_{ijk})$;
\item Calculate the potential~$\Phi_{ijk}$
corresponding to this~$\rho_{ijk}$ using James' (1977) method (see
Appendix);
\item Looping again over all $N$ particles, use a finite-difference
approximation to estimate the accelerations $-\partial\Phi/\partial\b x$ at the mesh
points surrounding each particle, then interpolate the value of the
acceleration at the particle's location using the same cloud-in-cell
scheme employed in step~(i).
\end{enumerate}
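As a concrete sketch of step (i), a minimal cloud-in-cell deposit might look as follows (plain Python/NumPy; this toy version ignores {\sc grommet}'s exclusion of the outer cell layers, and the function name and layout are illustrative):

```python
import numpy as np

def cic_deposit(pos, mass, n, h):
    # Deposit particle masses onto an n^3 mesh with spacing h using
    # cloud-in-cell weights; each particle contributes to the eight
    # vertices of its host cell.
    rho = np.zeros((n, n, n))
    for p, m in zip(pos, mass):
        f = p / h                       # position in units of the cell size
        i = np.floor(f).astype(int)     # lowest vertex of the host cell
        d = f - i                       # fractional offsets in [0, 1)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((d[0] if dx else 1.0 - d[0]) *
                         (d[1] if dy else 1.0 - d[1]) *
                         (d[2] if dz else 1.0 - d[2]))
                    rho[i[0] + dx, i[1] + dy, i[2] + dz] += m * w / h**3
    return rho

rho = cic_deposit(np.array([[1.3, 2.7, 0.4]]), np.array([2.0]), n=8, h=1.0)
```

The eight weights sum to unity, so the total deposited mass equals the total particle mass: nothing is lost or created by the assignment.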
Since steps (i) and~(iii) use the same interpolation scheme, this
procedure produces accelerations that obey Newton's third law subject
to one extra condition: the finite-difference scheme in step (iii)
cannot provide meaningful accelerations for particles that lie in the
outermost layer of mesh cells, which means that those particles should
be omitted in step~(i). This seems an almost trivial point, but it is
important for the refinement scheme introduced below. It turns out
that for the scheme below to work properly we have to peel off the
outer {\it two} layers of cells. I typically use meshes with $64^3$
or $128^3$ cells, of which then only $60^3$ or $124^3$ are assignable
in step~(i).
Apart from respecting Newton's third law, the other attractive
features of the PM method are its efficiency and its linear scaling
with~$N$: the time needed to carry out step (ii) is
independent of $N$, but for a typical mesh with $64^3$ cells the
overall time is dominated by the ${\cal O}(N)$ cost of carrying out the
assignment steps (i) and (iii) once $N\gtrsim5\times10^5$; similarly, the
memory needed to store mesh quantities and to carry out James' method
is negligible compared to that used for storing the particles'
masses, positions and accelerations.
The major disadvantage of the PM method is that it does not work well
for centrally concentrated mass distributions, since each particle has
an effective size of order the mesh spacing~$h$. In other words, the
mesh spacing sets the effective softening length used in the
calculation of the forces.
\begin{figure}
\begin{center}\includegraphics[width=0.5\hsize]{mesh}\end{center}
\caption{An example of the multiple mesh scheme used to calculate
accelerations. Particles A, B and~C all lie within the region
covered by the outer, coarse mesh, but B and C also lie inside the
fine, inner mesh. An initial estimate of the forces on all three
particles comes from using the PM method on the coarse mesh. This
is refined by isolating those particles within the inner mesh,
recalculating their interparticle forces first using the fine mesh,
then using the coarse, and adding the difference
to the initial coarse-mesh estimate. Therefore, the force
between A and each of B and C is obtained using the coarse mesh, but
that between B and~C comes from the fine mesh. In all cases
Newton's third law is respected. \label{fig:mesh}}
\end{figure}
\subsection{Refinement scheme}
The natural remedy for this shortcoming is to introduce finer submeshes
in interesting, higher-density regions and to recalculate the
accelerations for particles inside each submesh. But how best to
include the effect of the parent mesh's gravity field on the
accelerations calculated in each submesh and vice versa? One
possibility is to solve Poisson's equation on the submesh subject to
boundary conditions interpolated from the parent mesh
\citep[e.g.,][]{Anninos,Jessop}. This is a key element of the
widely-used family of multigrid methods \citep[e.g.,][]{ART,MLAPM},
and would be straightforward to apply in {\sc grommet} using the method of
equivalent charges (see Appendix). However, all of these schemes
violate Newton's third law, as one can easily see by considering the
force between a particle inside a submesh and another one outside.
{\sc grommet} instead uses a simplified version of the scheme proposed by
\citet{Gelato} (see also figure~\ref{fig:mesh}). The acceleration
felt by each particle is calculated using a series of nested
``boxes''. We start with the outermost toplevel box, which
discretizes the simulation volume into, say, $n_x\times n_y\times
n_z=60^3$ assignable cells.
This box, like any other box, can contain zero, one or more subboxes.
Each subbox contains two meshes: a coarse one composed of an
$(n_x/2)\times (n_y/2)\times (n_z/2)$ subblock of the parent's cells,
and a fine one that covers the same subblock twice as finely in each
direction, with $n_x\times n_y\times n_z$ cells.
For the most common situation in which each box contains no
more than one subbox, the acceleration at any position $\b x$ is given
by the sum over all boxes,
\begin{equation}
\label{eq:grommetaccel}
\b a(\b x) = \sum_j \b a_j(\b x),
\end{equation}
where the contribution from the $j^{\rm th}$ box,
\begin{equation}
\label{eq:al}
\b a_j(\b x) = \b a_j^{+}(\b x) - \b a_j^{-}(\b x),
\end{equation}
is the difference between accelerations calculated using the PM method
on the box's fine (+) and coarse (-) meshes, simply ignoring any
particles that lie outside. The outermost toplevel box
($j=0$) has no coarse mesh, so $\b a_0^{-}=0$. In this scheme the
acceleration between any two particles is calculated using the box
with the finest mesh spacing that encloses them both and Newton's
third law is obeyed to machine precision. This last feature comes at
a cost though: the acceleration~(\ref{eq:grommetaccel}) is
discontinuous at box boundaries, a point to which I return below.
Sometimes one might want to refine a region that cannot be
enclosed within a single subbox. If one simply tiles the region
using, say, two abutting subboxes, the force between particles located
at either side of the boundary between them will be calculated using
the coarse parent mesh, which
is usually not desirable. The solution is to let the subboxes overlap by
a few mesh cells and then correct eq.~(\ref{eq:grommetaccel}) for the
double counting of particles in the overlap region by introducing a
third subbox whose boundaries are given by the intersection of the two
overlapping subboxes and subtracting the accelerations~(\ref{eq:al})
obtained in this new subbox. In contrast, \citet{Gelato} introduce a
buffer zone around each box and treat particles in the buffer zone
differently from the rest. Their scheme violates Newton's third law.
I have deliberately omitted any automated scheme for deciding where
and when to introduce subboxes; these schemes inevitably break
time-reversibility, and, for the type of problem the code was designed
for, I expect that the user will already have a much better idea
of how best to place boxes.
\section{Moving particles}
\label{sec:move}
The characteristic equation of the CBE is
\begin{equation}
\label{eq:charac}
\frac{{\rm d} t}{1} = \frac{{\rm d}\b x}{\b v} = \frac{{\rm d}\b v}{\b a},
\end{equation}
where the accelerations $\b a(\b x,t)$ depend on the DF~$f$ through
eq.~(\ref{eq:accels}). The most straightforward and widely used way
of following the characteristics is by using a leapfrog integrator.
The (fixed-timestep) leapfrog produces an approximate solution
to~(\ref{eq:charac}) that respects many of its important symmetries;
it is symplectic\footnote{This assumes that the accelerations are
smooth, which is not the case for many collisionless
$N$-body codes, including {\sc grommet}.},
reversible in time and, when the accelerations are
obtained using a potential solver that respects Newton's third law,
it conserves linear momentum.
An unattractive feature of the leapfrog is that it uses the same fixed
timestep for all particles. Consider a deeply plunging radial orbit
in a model galaxy with a central density cusp or black hole.
Integrating this orbit accurately near pericentre requires a very
small timestep, which, in the standard leapfrog scheme, means that all
other particles have to be integrated using the same small timestep,
even those on loosely bound circular orbits. This can be
prohibitively expensive, since it involves calculating the full set of
accelerations $\b a(\b x,t)$ for all particles at every timestep.
{\sc grommet} uses a block-timestep scheme to improve efficiency. Each of
the boxes of section~\ref{sec:potsolve} above has an associated
timestep, which can be chosen to be either equal to that of its parent
box or a factor of two shorter.
Broadly speaking, a particle's
position and velocity are updated using the shortest timestep of any
of the boxes enclosing it, but the force between any pair of particles
is updated only on the timestep of the longest particle, thus
conserving linear momentum. The rest of this section makes this
somewhat vague description more precise.
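The recursion underlying this scheme (cf. Fig.~\ref{fig:multikddk}) can be illustrated on a toy problem: two unit-mass particles coupled by a `slow' and a `fast' internal spring, with each force component kicked only on its own timestep level. The force splitting and all parameter values below are illustrative assumptions, not {\sc grommet}'s actual potential split.

```python
import numpy as np

def spring_accel(x, k):
    # Internal pair force: equal and opposite by construction, so
    # every kick conserves the total momentum exactly.
    f = -k * (x[0] - x[1])
    return np.array([f, -f])

def kdk(x, v, tau, level, k_split):
    # Recursive block-timestep kick-drift-kick: the level-l force
    # component k_split[l] is applied only on level-l kicks, and only
    # the innermost level drifts the positions.
    v += 0.5 * tau * spring_accel(x, k_split[level])
    if level == len(k_split) - 1:
        x += tau * v
    else:
        kdk(x, v, tau / 2.0, level + 1, k_split)
        kdk(x, v, tau / 2.0, level + 1, k_split)
    v += 0.5 * tau * spring_accel(x, k_split[level])

x = np.array([0.5, -0.5])
v = np.array([0.3, -0.3])
k_split = [0.1, 10.0]          # slow (coarse-box) and fast (subbox) parts
for _ in range(1000):
    kdk(x, v, 0.1, 0, k_split)
```

The stiff spring is integrated with the short inner timestep while the soft one is kicked only on the long step, yet the centre-of-mass momentum stays conserved to machine precision throughout.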
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\hsize]{multikddk}
\end{center}
\caption{The sequence of steps for motion in the
Hamiltonian~(\ref{eq:hamfrogs}) with two levels of timestep
refinement.
For any given timestep level~$l$, the $K$ operation ``kicks''
particles inside any boxes having that timestep level, applying to each an
impulse $\frac12\tau_l\cdot(-\partial V_{(l)}/\partial\b x)$, where the timestep
$\tau_l=2^{-l}\tau_0$.
These impulses change the particles' velocities, but not their
positions. They conserve the particles' total linear momentum.
The $D$ operation ``drifts'' all particles
for a time $\frac12\tau_l$, changing their positions but not their velocities.
\label{fig:multikddk}}
\end{figure*}
\subsection{The standard leapfrog integrator}
Recall that a leapfrog integrator with a single, fixed timestep $\tau$
corresponds to motion in a time-dependent Hamiltonian
\citep[e.g.,][]{Hotbert}
\begin{equation}
\label{eq:hamfrog}
H = T + \sum_{k=-\infty}^\infty \delta_\epsilon\left(k-\frac{t}{\tau}\right) V(\b
x_1,\ldots,\b x_N),
\end{equation}
where $T\equiv\frac12\sum_i m_iv_i^2$ is the kinetic energy of all the
particles and
$\delta_\epsilon(x)\equiv\frac12(\delta(x-\epsilon)+\delta(x+\epsilon))$
with $0<\epsilon\ll1$. The periodic comb of delta functions turns on
the potential energy $V(\b x_1,\ldots,\b x_N)$ only at times
$t=(k\pm\epsilon)\tau$ for integer~$k$. Integrating the
resulting equations of motion from time $t=k\tau$ to $t=(k+1)\tau$
yields
\begin{align}
\b v_i(k+\textstyle\frac12) &= \b v_i(k)+
{\textstyle\frac12}\tau\b a_i(k),\\
\b x_i(k+1) & = \b x_i(k) + \tau \b v_i(k+\textstyle\frac12),\\
\b v_i(k+1) & = \b v_i(k+\textstyle{\frac12})+\textstyle{\frac12}\tau\b a_i(k+1),
\end{align}
where the accelerations $\b a_i(k)\equiv-(1/m_i)\,{\partial V}/{\partial\b x_i}$
are evaluated at time~$t=k\tau$. This is just the sequence of steps for
the kick-drift-kick form of the leapfrog: the potential is turned on
briefly just after $t=k\tau$ resulting in a ``kick'' (denoted $K$) to the
particles' velocities; the particles then ``drift'' ($D$) along at
their new velocities until the potential turns on again just before
$t=(k+1)\tau$, at which point they receive another kick. The
drift-kick-drift form of the leapfrog can be obtained by adding
$\frac12$ to the argument of the delta functions or, alternatively, by
integrating the equations of motion from $(k-\frac12)\tau$ to
$(k+\frac12)\tau$ instead.
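To make the $KDK$ sequence concrete, here is a minimal Python sketch of the fixed-timestep kick-drift-kick update (the function and the harmonic-oscillator example are illustrative only, not part of {\sc grommet}):

```python
import numpy as np

def leapfrog_kdk(x, v, accel, tau, nsteps):
    """Fixed-timestep kick-drift-kick leapfrog."""
    a = accel(x)
    for _ in range(nsteps):
        v = v + 0.5 * tau * a    # kick: half-step velocity update
        x = x + tau * v          # drift: full-step position update
        a = accel(x)             # accelerations at the new positions
        v = v + 0.5 * tau * a    # kick: second half-step
    return x, v

# Illustration: harmonic oscillator a(x) = -x, exact solution x(t) = cos(t)
x1, v1 = leapfrog_kdk(np.array([1.0]), np.array([0.0]),
                      lambda x: -x, 0.01, 100)
```

Because the last half-kick of one step and the first half-kick of the next use the same accelerations, adjacent kicks can be fused in production code; being symplectic, the scheme keeps the energy error bounded rather than letting it drift secularly.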
Another way of looking at each of these versions of the leapfrog is to
consider them as compositions of the two time-asymmetric first-order
symplectic integrators (each applied left to right),
$K(\tau/2)D(\tau/2)$ and $D(\tau/2)K(\tau/2)$, whose first-order error
terms cancel \citep[e.g.,][]{SahaTremaine}. In the following I write
the leapfrogs as the sequence of operations $KDDK$ and $DKKD$,
dropping the $(\tau/2)$ arguments.
\subsection{A block-timestep leapfrog}
\label{sec:multimove}
In {\sc grommet} the accelerations $\b a(\b x)$ are given by a
sum~(\ref{eq:grommetaccel}) of contributions~(\ref{eq:al}) from boxes
with different spatial refinement levels. The outermost box is
associated with a timestep $\tau_0$ and timestep level $l=0$.
Each subbox has a timestep $\tau_l=2^{-l}\tau_0$ with timestep
level $l$ either equal to that of its parent or larger by one.
Let us add together all the
contributions~(\ref{eq:al}) to $\b a(\b x)$ from boxes having timestep
level~$l$ and write the result as $\b a_{(l)}(\b x)$. Let
$V_{(l)}(\b x)$ be the corresponding contribution to the potential
energy.
Instead of turning on the full potential $V=\sum_l V_{(l)}$ at every
timestep, consider the alternative Hamiltonian
\begin{equation}
\label{eq:hamfrogs}
H = T + \sum_{l=0}^{l_{\rm max}}
\sum_{k=-\infty}^\infty \delta_\epsilon\left(k-\frac{t}{2^{-l}\tau_0}\right) V_{(l)}(\b
x_1,\ldots,\b x_N),
\end{equation}
where $l_{\rm max}$ is the maximum timestep refinement level and each
$V_{(l)}$ is turned on only at times $t=2^{-l}k\tau_0$. This is a
variant of the potential splitting used by \citet{DLL98} to model
close encounters in planetary systems.
Integrating the equations of motion for this new Hamiltonian results
in a nested sequence of $KDDK$ leapfrog steps, as shown in
figure~\ref{fig:multikddk}. The sequence can be produced using the
following simple recursive algorithm:
\vbox{
\obeylines
\tt
Step($l$, $\tau$):
\quad if $l>l_{\rm max}$:
\qquad Drift($\tau$)
\quad else:
\qquad Kick($l$,$\tau/2$)
\qquad Step($l+1$,$\tau/2$)
\qquad Step($l+1$,$\tau/2$)
\qquad Kick($l$,$\tau/2$)
}
This algorithm is called initially with $l=0$ and $\tau=\tau_0$. Each
{\tt Kick($l$,$\tau/2$)} operation applies an impulse
$-\frac12\tau\nabla V_{(l)}$ to all particles, which changes the
particles' velocities but not their positions. The {\tt Drift}
operation moves the particles once the complete set of impulses has
been applied.
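As a check on this bookkeeping, the recursion can be transcribed into a few lines of Python that simply record the operation sequence (an illustrative sketch, not {\sc grommet}'s implementation; the leaf drift is taken over the full interval passed to it, so that the drift intervals per top-level step sum to $\tau_0$):

```python
def step(l, tau, lmax, ops):
    """Record the kick/drift sequence generated by the recursive scheme."""
    if l > lmax:
        ops.append(("D", tau))             # drift all particles
    else:
        ops.append(("K", l, 0.5 * tau))    # kick from V_(l), impulse factor tau/2
        step(l + 1, 0.5 * tau, lmax, ops)
        step(l + 1, 0.5 * tau, lmax, ops)
        ops.append(("K", l, 0.5 * tau))

ops, lmax, tau0 = [], 2, 1.0
step(0, tau0, lmax, ops)
```

With two refinement levels this reproduces the nesting of figure~\ref{fig:multikddk}: level-$l$ kicks occur $2^{l+1}$ times per top-level step and their impulse factors sum to $\tau_0$ at every level, matching the $2^{l_{\rm max}+1}$ leaf drifts, which also sum to $\tau_0$.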
This algorithm requires a factor $\sim l_{\rm max}$ fewer kick
operations (and therefore fewer expensive force evaluations) than a
simple leapfrog with a single timestep $2^{-l_{\rm max}}\tau_0$.
It is obvious that it conserves linear momentum and is reversible in
time. Unlike the integrator in \citet{DLL98}, however, it is {\it
not} symplectic; the discontinuities in the accelerations at box
boundaries mean that the Poincar\'e integral invariants are not
conserved.
\section{Tests and comparisons}
\label{sec:tests}
I have carried out a number of simple tests with small numbers of
particles ($1<N\lesssim20$) to confirm that my implementation of the
ideas above really does respect Newton's third law and conserve linear
momentum. These small-$N$ tests serve only as minimal sanity checks;
as stressed by \cite{MLAPM}, truly interesting tests of a
collisionless code come not from testing how faithfully it reproduces
the solution to the two-body problem, but rather from its
ability to model collisionless systems accurately using large numbers
of particles.
In this section I use some simple collisionless galaxy models to test
{\sc grommet}'s potential solver and integrator, comparing results obtained
from {\sc grommet} against those obtained from two other codes. Both of the
other codes are available as part of the NEMO package. The first is
the fast tree code described in \citet{Dehnen02}. It obtains
accelerations from a Cartesian multipole expansion. This respects
Newton's third law, and a standard leapfrog integrator built around
this potential solver therefore conserves linear momentum. (A
multiple-timestep version is also available, but it does not conserve
momentum.) The second code \citep{HO92} uses the so-called
``self-consistent field'' (SCF) method, which represents the density
and potential using a truncated basis function expansion. It shows no
respect for Newton's third law, but, like {\sc grommet}, is optimized for
modelling single galaxies.
\begin{figure}
\begin{center}\includegraphics[width=0.8\hsize]{accels}\end{center}
\caption{Fractional errors in the accelerations
at randomly selected positions within and around an
$N=10^7$-particle realization of a
truncated power-law sphere. The lower set of points plot results
calculated using the potential solver of section~\ref{sec:potsolve}
using 8 levels of refinement
of a $60^3$ mesh with $x_{\rm max}=2$.
The upper set (offset by 0.1 vertically) are for
results obtained using a tree code with
fixed softening length
$\epsilon=10^{-2}$.
\label{fig:fracaccels}}
\end{figure}
\begin{figure}
\begin{center}\includegraphics[width=0.8\hsize]{accelszoom}\end{center}
\caption{Fractional errors in the accelerations inside a
$10^8$-particle realization of a power-law sphere, a factor of 10
more particles than in figure~\ref{fig:fracaccels}. The lower set
of points plot results obtained using the potential solver of
section~\ref{sec:potsolve} with the same set of nested boxes and
$60^3$ mesh employed for figure~\ref{fig:fracaccels}.
The middle and upper set show the effects of using finer meshes with
$124^3$ (middle) and $252^3$ (upper) cells, offset by 0.04 and 0.08 respectively.
\label{fig:fracaccelszoom}}
\end{figure}
\subsection{Static tests}
Real galaxies have steep central density cusps
\citep[e.g.,][]{LauerNuk}, so an obvious test of the potential solver
is to check the accelerations it returns for an $N$-body realization
of a truncated power-law sphere with density profile
\begin{equation}
\rho(r)\propto
\begin{cases}
r^{-\alpha}, & \hbox{if $r<r_{\rm max}$},\cr
0, & \hbox{otherwise.}
\end{cases}
\end{equation}
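For $\alpha=2$ the enclosed mass grows linearly, $M(<r)\propto r$, so the exact acceleration used as reference below is easy to write down; a sketch in units $G=M_{\rm tot}=1$ (illustrative only):

```python
def exact_accel(r, rmax=1.0):
    """Acceleration magnitude for a truncated alpha = 2 power-law sphere
    with G = M_total = 1: M(<r) = r/rmax inside the truncation radius."""
    if r < rmax:
        return 1.0 / (rmax * r)   # G M(<r) / r^2 with M(<r) = r/rmax
    return 1.0 / r**2             # Keplerian outside the truncation radius
```

The acceleration is continuous at $r_{\rm max}$ and diverges as $1/r$ towards the centre, which is what makes this profile a demanding test of the effective softening.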
I have generated a realization with $r_{\rm max}=1$, $\alpha=2$ having
$10^7$ equal-mass particles and used eq.~(\ref{eq:grommetaccel}) to
calculate accelerations at randomly selected positions inside and
around the sphere. For this I use a toplevel box enclosing the
region $|\b x|<2$ together with eight levels of refinement, the
boundary of the $i^{\rm th}$ subbox being given by $|\b x|=2^{1-i}$.
Figure~\ref{fig:fracaccels} plots the fractional difference between
the results of this procedure against the exact, analytical expression
for the acceleration. For radii $2^{-5}<r<1$ the RMS fractional
error is only 0.0023, rising to 0.007 for $2^{-8}<r<2^{-5}$, within
which there are relatively few particles. The source of this good and
desirable behaviour is the decrease in the effective softening length
as one moves to smaller length scales.
For comparison, the upper set of points in figure~\ref{fig:fracaccels}
plot the errors in the accelerations of the same $10^7$-particle
sphere calculated at the same positions using the tree code {{\sc falcon}}
with softening kernel $P_2$ and fixed softening length
$\epsilon=10^{-2}$. The RMS fractional error in the resulting
accelerations for radii $2^{-5}<r<1$ is 0.011, over four times larger than
{\sc grommet}'s, while for $r<2^{-5}$, the calculated accelerations become
systematically too low. {{\sc falcon}} takes about 2.5 times longer than
{\sc grommet} to produce these results and needs more than three times the
memory.
Perhaps the most worrying feature of the nested box scheme of
section~\ref{sec:potsolve} is that the
accelerations~(\ref{eq:grommetaccel}) are discontinuous at box
boundaries. One can see some hints of this discontinuity in
figure~\ref{fig:fracaccels} at $\log_2 r = -1$, $-2$, $-3$, but it is
even clearer in figure~\ref{fig:fracaccelszoom} which plots the
fractional errors in a $10^8$-particle realization.
Even if one were to run a simulation with such large $N$, the
discontinuity itself is unlikely to be important because the
integration scheme in section~\ref{sec:move} does not depend
explicitly on the derivatives of the accelerations (but the
discontinuity does mean that the integrator is not symplectic, as
noted earlier). More important is the fact that if the discontinuity
is noticeable it means that the bias in the estimates of the
accelerations has become significant. The natural solution is then to
move to a finer mesh (e.g., $124^3$ cells instead of $60^3$,
figure~\ref{fig:fracaccelszoom}).
\subsection{Dynamical tests}
\label{sec:hernqtest}
For the dynamical tests I use a spherical isotropic \citet{Hernquist}
model with density profile
\begin{equation}
\label{eq:hernq}
\rho(r) = \frac{Ma}{2\pi r(a+r)^3}.
\end{equation}
This idealized model is in equilibrium. Then by Jeans' theorem
\citep{BT} its DF~$f_0(\b x,\b v)$ can depend on $(\b x,\b v)$ only
through the integrals of motion, which are the energy ${\cal E}$ and angular
momentum $\b J$ per unit mass. Since the model is isotropic the DF
cannot depend on the latter and so $f=f_0({\cal E})$.
A straightforward procedure for generating initial conditions
(hereafter ICs) corresponding to this model would be to draw $N$
particles directly from $f_0({\cal E})$, assigning each a mass $M/N$.
Integrating~(\ref{eq:hernq}), the fraction of particles inside
radius~$r$ would then be $r^2/(a+r)^2$, showing that there would be
relatively few particles with radii $r\ll a$, deep inside the
interesting $r^{-1}$ central density cusp. To improve resolution near
the centre, I instead generate initial conditions using a multi-mass
scheme, drawing particles from an anisotropic
sampling DF \citep{LCB} with {\it number} density
\begin{equation}
\label{eq:fs}
f_s({\cal E},J^2) = h({\cal E},J^2)f_0({\cal E}),
\end{equation}
where \citep{Sigurdsson}
\begin{equation}
h({\cal E},J^2) \equiv A\times
\begin{cases}
\left(\frac{r_{\rm peri}}a\right)^{-\lambda} & \hbox{if $r_{\rm peri}<a$},\cr
1 & \hbox{otherwise},
\end{cases}
\end{equation}
$r_{\rm peri}({\cal E},J^2)$ is the particle's pericentre radius
and the constant $A$ is chosen to normalize~$f_s$.
When the parameter $\lambda=0$, the sampling DF $f_s$ is identical to
$f_0({\cal E})$. Increasing $\lambda$ improves the sampling of the cusp by
increasing the number density of particles having pericentres $r_{\rm
peri}<a$. To balance this increase in number density each particle
is assigned a mass $Mf_0/Nf_s=M/Nh({\cal E},J^2)$ so that the phase-space
mass density is still given by the desired~$f_0({\cal E})$.
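As an aside, the enclosed-mass fraction $r^2/(a+r)^2$ quoted above can be verified symbolically; a quick check with {\it SymPy} (illustrative only):

```python
import sympy as sp

r, R, a, M = sp.symbols('r R a M', positive=True)
rho = M * a / (2 * sp.pi * r * (a + r)**3)   # Hernquist density profile

# Mass enclosed within radius R; should reduce to M R^2/(a+R)^2
mass_R = sp.integrate(4 * sp.pi * r**2 * rho, (r, 0, R))
fraction = sp.simplify(mass_R / M)
```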
For the tests below I adopt units $G=M=a=1$ and draw $2\times10^6$
particles with radii in the range $10^{-3}<r<10^2$ from the sampling
DF~(\ref{eq:fs}) with $\lambda=1$. Poisson noise in the resulting
distribution of particles makes it slightly asymmetric, which has two
unwanted consequences \citep[see also][]{McMehnen}. First, the centre
of mass of the system moves with a constant velocity of order
$\sim10^{-3}(GM/a)^{1/2}$ because the total linear momentum of the
particles is small, but non-zero. Second, the asymmetry quickly
destroys the inner part of the $r^{-1}$ density cusp, even when viewed
in a frame co-moving with the centre of mass. To remove both of these
effects, I extend my ICs to include the mirror distribution obtained
by reflecting each of the $2\times10^6$ particles with $(\b x,\b
v)\to(-\b x,-\b v)$. The full ICs then have $N=4\times10^6$
particles.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\hsize]{hernqrho}
\end{center}
\caption{Inner density profile of the same realization
of a Hernquist model after it has been evolved for 10 time units
using a simple leapfrog integrator with
accelerations obtained using different potential solvers: {\sc grommet} (light solid curve),
{\sc falcon} (dotted curve) and the SCF method (dashed). The heavy solid curve
plots the density profile of the initial conditions.
\label{fig:hernqrho}}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=0.3\hsize]{diffXgrommet}
\includegraphics[width=0.3\hsize]{diffXfalcon}
\includegraphics[width=0.3\hsize]{diffXscf}
\end{center}
\caption{RMS fractional change in the angular momenta
of particles in the models
of figure~\ref{fig:hernqrho}, measured from $t=0$ to $t=10$
and plotted as a function of the particles' pericentre radii. The same random
selection of particles is used to generate each panel.
\label{fig:hernqdiff}}
\end{figure*}
\subsubsection{Evolution of an (almost) equilibrium model}
Of course, one does not expect an $N$-body model evolved from these
ICs to be in perfect equilibrium; the ICs omit particles outside the
range $10^{-3}<r<10^2$ and are constructed assuming the exact
potential corresponding to the density distribution~(\ref{eq:hernq})
instead of the softened potential used in the $N$-body code.
Nevertheless, it is interesting to compare the evolution of the
$N$-body model obtained from {\sc grommet} with those obtained from the
other two codes.
Figure~\ref{fig:hernqrho} shows the density profile of the models
after 10 time units (or $\sim66$ circular orbit periods at $r=0.01$).
All three models use the same simple leapfrog integrator with timestep
$2\times10^{-3}$; only the source of the accelerations is different.
For {\sc grommet} I use boxes with boundaries at $|\b x|=100\times2^{-i}$
for $i=0,\ldots,12$. Each box has $60^3$ assignable cells, the cell
length varying from 3.33 in the toplevel box down to
$0.8\times10^{-3}$ in the innermost box. {\sc falcon}'s results are
obtained using kernel $P_2$ with softening length $\epsilon=10^{-3}$,
while the SCF expansion uses the \citet{HO92} basis function expansion
truncated at $n_{\rm max}=6$ radial and $l_{\rm max}=4$ angular terms.
The results in figure~\ref{fig:hernqrho} are unsurprising. The
density at the very centre of the {\sc grommet} and {\sc falcon} models falls
slowly because the ICs omit particles with radii $r<10^{-3}$
and do not take into account the softening in these codes. In
contrast, the density profile of the SCF model does not change
significantly because its basis function expansion is incapable of
producing anything that deviates strongly from a Hernquist model on
small spatial scales.
Much more is happening at the level of individual orbits, however.
All of these models begin with spherical symmetry and remain
spherical, apart from the effects of Poisson noise. Therefore the
amount of diffusion in the angular momentum~$J$ of their particles'
orbits serves as a convenient measure of how far each code is from
being perfectly collisionless.
Figure~\ref{fig:hernqdiff} shows that the particles in all three
models suffer from significant amounts of diffusion. The SCF model
shows the least diffusion, but it is only marginally better
than {\sc grommet}; although the SCF potential remains close to the exact
Hernquist potential, the flickering of the expansion coefficients with
time makes the orbits diffuse just like in any other code. The
diffusion is worst in the {\sc falcon} model, particularly for orbits having
pericentres much larger than its fixed softening length
$\epsilon=10^{-3}$. All of these results are based on the variation
in orbits' angular momentum in models integrated from $t=0$ to
$t=10$, but I find similar results for models integrated from, say, $t=10$ to
$t=50$ when scaled to account for the longer timescale over which the
diffusion occurs.
\begin{table}
\centering
\begin{tabular}{rcl}
\bf Code & \bf Time & \bf Comment\\
{\sc falcon} & 2.1 & single timestep\\
SCF & 1.3 & single timestep, $(n_{\rm max},l_{\rm max})=(6,4)$ \\
{\sc grommet} & 1.0 & single timestep\\
{\sc grommet} & 0.3 & four levels of timestep refinement\\
{\sc grommet} & 0.16 & seven levels of timestep refinement\\
\end{tabular}
\caption{Comparison of time required for different codes to integrate the multi-mass
Hernquist model of section~\ref{sec:hernqtest}, relative to the
single-timestep implementation of {\sc grommet}.
Neither the {\sc grommet} nor the SCF models take advantage of the reflection symmetry of
this simple problem.}
\label{tab:timings}
\end{table}
The results presented so far have been obtained using an integrator
with a single small timestep, but the dynamical time inside the cusp
of a Hernquist model varies with radius $r$ approximately as
$r^{1/2}$. As, e.g., \citet{Zemp} have argued, it is natural to
advance particles using a timestep proportional to the local dynamical
time. We can come close to the optimal $\tau\propto r^{1/2}$ scaling by
using the block-timestep integrator of section~\ref{sec:multimove}
above and halving the timestep on every {\it second} subbox.
To test the practicality of this scheme, I have run a model with
timesteps $\tau=32\times10^{-3}$ for particles with $|\b
x|>100\times2^{-6}\simeq1.5$, shrinking by a factor of two at the
boundaries $|\b x|=100\times2^{-i}$ of boxes $i=6$, 8, 10 and~12. In
the innermost ($i=12$) box the timestep is $2\times10^{-3}$, the same
used for the single-timestep run above. This multiple-timestep model
yields results almost indistinguishable from the single-timestep {\sc grommet}
model plotted in figures \ref{fig:hernqrho} and~\ref{fig:hernqdiff},
but is three times faster (see table~\ref{tab:timings}). If it were
appropriate to halve the timestep at {\it all} box boundaries
$i=6,\ldots,12$ (see below for an example) then the block-timestep
scheme would yield a sixfold increase in speed over the
single-timestep integrator.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\hsize]{hernqadia}
\end{center}
\caption{The results of adiabatically adding a Plummer sphere
potential to an initially isotropic Hernquist model.
The Plummer sphere has radius $2\times10^{-3}a$ and a final mass
$2\times10^{-3}\,M_{\rm gal}$.
The results obtained using {\sc grommet}'s multiple-timestep scheme are almost
identical to those calculated from Young's (1980) method.
\label{fig:hernqadia}}
\end{figure}
\subsubsection{Response to an adiabatically grown blob}
For a slightly more interesting test, I model the growth of a black
hole at the centre of a galaxy by slowly adding a Plummer sphere
potential
\begin{equation}
\label{eq:plummer}
\Phi_{\rm b}(\b x; t) = -\frac{GM_{\rm b}(t)}{\sqrt{r^2+b^2}}
\end{equation}
to a multi-mass Hernquist model. The scale radius of the Plummer
sphere $b=2\times10^{-3}$ and its mass grows with time as \citep{Sigurdsson}
\begin{equation}
\label{eq:massplum}
M_{\rm b}(t) = M_{\rm f}\times
\begin{cases}
\left[ 3\left(\frac{t}{t_{\rm g}}\right)^2 -
2\left(\frac{t}{t_{\rm g}}\right)^3 \right] & \hbox{if
$t<t_{\rm g}$}\\
1 & \hbox{otherwise},
\end{cases}
\end{equation}
its final mass $M_{\rm f} = 2\times10^{-3}$ being reached in a time $t_{\rm g}=5$.
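The growth law~(\ref{eq:massplum}) is a cubic smooth step: $M_{\rm b}$ vanishes with zero slope at $t=0$ and reaches $M_{\rm f}$ with zero slope at $t=t_{\rm g}$, so the perturbation is switched on and off gently. A sketch (illustrative only):

```python
def m_blob(t, m_f=2e-3, t_g=5.0):
    """Plummer-sphere mass as a function of time, a cubic smooth step."""
    if t >= t_g:
        return m_f
    x = t / t_g
    return m_f * (3 * x**2 - 2 * x**3)   # dM/dt = 0 at both t = 0 and t = t_g
```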
A safe, formal way of including the effects of this external potential
in {\sc grommet} is to add an extra term
\begin{equation}
\sum_{k=-\infty}^\infty \delta_\epsilon\left(k-\frac{t}{2^{-l_{\rm
max}}\tau_0}\right)\sum_{i=1}^N m_i\Phi_{\rm b}(\b x_i;t)
\end{equation}
to the Hamiltonian~(\ref{eq:hamfrogs}). Integrating the resulting
equations of motion then leads to the modifications needed in the
block-timestep algorithm (section~\ref{sec:multimove}). In this case
the necessary modifications are obvious, but for more realistic
situations (e.g., if the mass of the external source did not change in
time and if the location of its centre were not pinned to $\b x=0$)
then it is helpful to start from~(\ref{eq:hamfrogs}) to ensure that
the perturbation is turned on at the appropriate times and momentum
conserved.
As above, I use a nested series of boxes with boundaries at $|\b
x|=100\times2^{-i}$ with $i=0,\ldots,12$, each box covered by a $60^3$
mesh. Boxes 0 to~5 share a common timestep~$\tau_0=5\times10^{-3}$.
This is refined in every subsequent box, so that the timestep
associated with box~$i\ge6$ is $2^{5-i}\tau_0$ and the innermost box
($|\b x|<0.024$) has timestep $\sim4\times10^{-5}$.
My initial conditions consist of $10^6$ particles drawn from the
sampling DF~(\ref{eq:fs}) above. The artificially imposed potential at
$\b x=0$ means that this simulation only makes sense if the particles'
centre of mass is also at $\b x=0$. As an alternative to symmetrizing
the ICs as before, I instead modify step~(i) of the PM method
(section~\ref{sec:potsolve}) to reflect the particle distribution
through each of the planes $x=0$, $y=0$ and $z=0$ when assigning
mass to meshes. This increases the effective $N$ used for the
potential by a factor of 8 at little cost. The density profile of the
final model is plotted in figure~\ref{fig:hernqadia}. It agrees well
with the predictions obtained using Young's (1980) method.
\section{Summary}
\label{sec:summary}
I have described {\sc grommet}, a fast, economical particle-multiple-mesh
$N$-body code designed specifically for modelling the dynamical
evolution of individual galaxies. In other words, it is designed to
tackle almost exactly the same type of problem to which the SCF method
\citep{HO92} is applied. Indeed, {\sc grommet} can -- loosely -- be thought
of as a variant of the SCF method using a Cartesian basis function
expansion with millions of expansion coefficients (the density at each
mesh vertex in each of the nested boxes). Any application of the SCF
method requires that one make a careful choice of the basis functions
used to represent the density and potential. Similarly, in {\sc grommet}
one has to choose, by hand, the set of nested boxes to use.
For a realistic model galaxy with $N\gtrsim10^6$, the single-timestep
incarnation of {\sc grommet} is comparable in speed to an SCF code using a
low-order basis expansion and shows comparable amounts of relaxation.
For most applications, however, {\sc grommet} will be much faster: its
nested-box potential solver admits an efficient natural block-timestep
integrator (section~\ref{sec:multimove}), leading to an approximate
three- to six-fold increase in speed for realistic galaxy models; the
SCF method typically requires a fairly high-order expansion to produce
(reasonably) unbiased results \citep[e.g.,][]{KHB}, which makes it
much slower in practice. But perhaps the main advantage of {\sc grommet}
over SCF methods based on spherical harmonic expansions is that it
respects Newton's third law and is therefore suitable for use in
studying $l=1$ perturbations without fear of artefacts due to
centring.
To my knowledge, the tree code {\sc falcon} \citep{Dehnen02} is the only
other code that can model realistically inhomogeneous galaxies without
breaking the third law. For $N\gtrsim10^6$ {\sc grommet}'s potential solver
is more than twice as fast as {\sc falcon}'s and much less memory hungry.
This efficiency comes at a cost though, since {\sc grommet}'s nested-box
scheme is optimized for modelling perturbations of single galaxies.
It would be interesting to see whether the potential-splitting scheme
used here (section~\ref{sec:multimove}; \citet{DLL98}) works as well
for {\sc falcon}, or indeed any other code that respects the third law, as
it does for {\sc grommet}.
\section*{Acknowledgments}
I thank James Binney, Walter Dehnen and Ben Moore for
helpful discussions, and the Royal Society for financial support.
\section{Introduction}
It is important to understand the dynamics of higher-dimensional black objects, since it tells us much about the nature of higher-dimensional gravitational theories and their holographically dual quantum field theories. The strong non-linearity of gravity, however, usually prevents us from understanding the dynamical properties of black objects beyond the linear-perturbation regime without highly sophisticated skills of numerical computation.
The Gregory-Laflamme (GL) instability~\cite{Gregory:1993vy}, which is a universal instability of higher-dimensional black objects, is a good example to see the above situation. Though the GL instability in the non-linear regime is quite interesting, its analysis needs sophisticated skills of numerical relativity~\cite{Lehner:2010pn}. While there exists a semi-analytic higher-order perturbation method~\cite{Gubser:2001ac}, it seems applicable only to static problems.
Recently, Emparan, Suzuki, and Tanabe showed that the Einstein equations describing the horizon dynamics of black branes in both Minkowski and Anti-de Sitter (AdS) backgrounds can be recast in the form of coupled non-linear diffusion-type equations when the number of spatial dimensions is large~\cite{Emparan:2015gva}. This result provides us with a unique approach to the non-linear dynamics of black objects in higher dimensions. The authors indeed showed that the unstable black strings converge to non-uniform black strings (NUBSs), which had been predicted to happen above a critical dimension~\cite{Sorkin:2004qq}, by solving the diffusion equations numerically with a few lines of {\sl Mathematica} code. We note that the blackfold approach~\cite{Camps:2010br} is also expected to serve as a powerful tool to analyze the evolution of the GL instability.
Once the simple diffusion equations have been obtained~\cite{Emparan:2015gva}, it is natural to ask whether the non-linear properties of black-brane dynamics can be understood analytically. In this paper, we develop a systematic non-linear perturbation theory of asymptotically flat and AdS black branes, allowing the perturbations to be dynamical. Using the Fourier and Laplace transformations to solve the partial differential equations (PDEs), we solve the perturbation equations order by order for arbitrary initial conditions, up to the integration associated with the inverse transformation.
While the formulation is general enough to be applicable to various problems, we take up several examples of initial conditions: a Gaussian wave packet, a step-function-like shock configuration, and quite general discretely superposed sinusoidal waves. For these examples, the integration associated with the inverse transformation is completed up to the first or second order, and the properties of the solutions are examined. Through these examples, one will see the validity of the formalism itself and some unknown, or yet-to-be-confirmed, non-linear properties of black-brane dynamics. In particular, in the case of asymptotically flat black branes, an interesting non-linear property of the GL instability resulting from mode-mode coupling is unveiled at second order. In the case of shock propagation on asymptotically AdS black branes, an analytic description of the non-equilibrium steady state (NESS), which was recently discussed in the Riemann problem of relativistic fluid mechanics and field theories~\cite{Herzog:2016hob}, is presented.
This paper is organized as follows. In Sec.~\ref{sec:flat}, the asymptotically flat black branes are investigated. In Sec.~\ref{sec:formalism}, we present the perturbation equations for asymptotically flat black branes and their general form of solutions. In Sec.~\ref{sec:gauss}, we apply the general result to the Gaussian wave packet. In Sec.~\ref{sec:sin}, we consider the discretely superposed sinusoidal waves. In Sec.~\ref{sec:ads}, we consider the non-linear perturbation of asymptotically AdS black branes. Here, the formulation and applications are presented in parallel with Sec.~\ref{sec:flat}, but a new example of initial condition, the step-function like shock, is investigated in Sec.~\ref{sec:shock}. Section~\ref{sec:conc} is devoted to conclusion. Throughout this paper, we follow the notations in Ref.~\cite{Emparan:2015gva}.
\section{Asymptotically flat black branes}
\label{sec:flat}
\subsection{Perturbation equations and general form of solutions}
\label{sec:formalism}
In the large-$D$(imension) approach, the horizon dynamics of vacuum black branes without a cosmological constant are described by two functions, $m(t,z)$ and $p(t,z)$, where $t$ is time and $z$ is the spatial coordinate along which the horizon extends~\cite{Emparan:2015gva}. $m$ and $p$ represent the mass and momentum distributions along the horizon, respectively. $m \to +0$ corresponds to the pinching off of the horizon. The equations of motion for these quantities take the form of coupled non-linear diffusion equations,
\begin{gather}
(\pd_t - \pd_z^2) m + \pd_z p
=
0,
\label{eom1}
\\
(\pd_t - \pd_z^2) p - \pd_z m
=
- \pd_z \left( \frac{p^2}{m} \right),
\label{eom2}
\\
t>0,
\;\;\;
-\infty < z < \infty.
\label{domain}
\end{gather}
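Before developing the perturbation theory, we note that Eqs.~\eqref{eom1} and \eqref{eom2} can also be integrated directly. A minimal explicit finite-difference sketch in Python (periodic in $z$; an illustration, not the {\sl Mathematica} code of Ref.~\cite{Emparan:2015gva}):

```python
import numpy as np

def evolve(m, p, dz, dt, nsteps):
    """Forward-Euler integration of m_t = m_zz - p_z and
    p_t = p_zz + m_z - (p^2/m)_z on a periodic grid in z."""
    def d1(f):  # centred first derivative
        return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dz)
    def d2(f):  # centred second derivative
        return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dz**2
    for _ in range(nsteps):
        dm = d2(m) - d1(p)
        dp = d2(p) + d1(m) - d1(p**2 / m)
        m, p = m + dt * dm, p + dt * dp
    return m, p
```

The uniform brane $m\equiv1$, $p\equiv0$ is an exact fixed point of this discretization, and the centred differences conserve the total mass $\sum_i m_i$ on a periodic grid.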
A uniform black-brane solution corresponds to $m(t,z) \equiv 1$ and $p(t,z) \equiv 0$. Since we are interested in the dynamical deformation of such a uniform solution, we introduce one-parameter families of $m(t,z)$ and $p(t,z)$, and expand them around the uniform black-brane solution,
\begin{gather}
m(t,z;\epsilon) = 1+ \sum_{\ell=1}^\infty m_\ell (t,z) \epsilon^\ell,
\label{expansion1}
\\
p(t,z;\epsilon) = \sum_{\ell=1}^\infty p_\ell (t,z) \epsilon^\ell,
\label{expansion2}
\end{gather}
where $\epsilon$ is a constant parameterizing the families. Substituting these expansions into Eqs.~\eqref{eom1} and \eqref{eom2}, we obtain the equations of motion at $ O(\epsilon^\ell) \; (\ell \in {\mathbb N})$,
\begin{gather}
\dot{m}_\ell - m_\ell'' + p_\ell'
=
0,
\label{peom1}
\\
\dot{p}_\ell - p_\ell'' -m_\ell'
=
\psi_\ell,
\label{peom2}
\end{gather}
where the dot and prime denote the derivatives with respect to $t$ and $z$, respectively. The right-hand side of Eq.~\eqref{peom2}, $\psi_\ell (t,z)$, which we call a source term, is a polynomial of the lower-order perturbations and their first spatial derivatives,
\begin{gather}
\psi_1 \equiv 0,
\label{source1}
\\
\psi_\ell
=
\psi_\ell ( m_1, p_1, m_1', p_1', \cdots , m_{\ell-1}, p_{\ell-1}, m_{\ell-1}', p_{\ell-1}' ),
\;\;\;
\ell \geq 2.
\label{source>1}
\end{gather}
For example, the source terms for $\ell=2$ and $\ell =3$ are given by
\begin{gather}
\psi_2
=
-2p_1 p_1',
\label{source2}
\\
\psi_3
=
2 m_1 p_1 p_1' + m_1' p_1^2 -2 p_1 p_2' - 2p_1' p_2.
\label{source3}
\end{gather}
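These source terms follow mechanically from expanding the non-linear term $-\pd_z(p^2/m)$. A short SymPy sketch (an added illustration, treating the perturbations as functions of $z$ at fixed $t$) reproduces Eqs.~\eqref{source2} and \eqref{source3}:

```python
import sympy as sp

z, eps = sp.symbols('z epsilon')
m1, m2, p1, p2 = [sp.Function(name)(z) for name in ('m1', 'm2', 'p1', 'p2')]

# Expand the non-linear term -d/dz(p^2/m) around the uniform brane m=1, p=0.
m = 1 + eps * m1 + eps**2 * m2
p = eps * p1 + eps**2 * p2
source = sp.series(-sp.diff(p**2 / m, z), eps, 0, 4).removeO().expand()

psi2 = -2 * p1 * sp.diff(p1, z)
psi3 = (2 * m1 * p1 * sp.diff(p1, z) + sp.diff(m1, z) * p1**2
        - 2 * p1 * sp.diff(p2, z) - 2 * sp.diff(p1, z) * p2)

assert sp.simplify(source.coeff(eps, 2) - psi2) == 0   # reproduces Eq. (source2)
assert sp.simplify(source.coeff(eps, 3) - psi3) == 0   # reproduces Eq. (source3)
```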
In the rest of this section, we look for the general form of solutions to the perturbation equations~\eqref{peom1} and \eqref{peom2} by combining the Fourier and Laplace transformations (see, {\it e.g.}, \cite{Duffy}). A similar technique was used in Refs.~\cite{CFM,Miyamoto:2008uf} to analyze the higher-order perturbations of the surface-diffusion equation, which is a single non-linear PDE.
Before starting to solve Eqs.~\eqref{peom1} and \eqref{peom2}, let us introduce the notation associated with the Fourier and Laplace transformations. For a given function, say $f(t,z)$, we denote its Fourier transformation with respect to $z$ by $\bar{f}(t,k)$, and its Laplace transformation with respect to $t$ by the corresponding capital letter $F(s,z)$. Namely,
\begin{align}
\bar{f}(t,k) &:= {\cal F}[ f(t,z) ] = \int_{-\infty}^\infty f(t,z) e^{-ik z} dz,
\;\;\;
i:=\sqrt{-1},
\\
F(s,z) &:= {\cal L}[f(t,z)] = \int_0^\infty f(t,z) e^{-st} dt.
\end{align}
Then, a capital letter with a bar denotes a Fourier-Laplace transformation as
\be
\bar{F}(s,k) := ( {\cal L} \circ {\cal F} ) [ f(t,z) ].
\ee
In addition, we define two kinds of convolutions,
\begin{gather}
f(t,z) \ast g(t,z) := \int_0^t f(t-\tau,z) g(\tau,z) d\tau,
\\
f(t,z) \star g (t,z) := \int_{-\infty}^{\infty} f(t,z-\xi) g(t,\xi) d\xi.
\end{gather}
With the notation introduced above, the Fourier-Laplace transformed versions of Eqs.~\eqref{peom1} and \eqref{peom2} are written as coupled algebraic equations in a matrix form
\begin{gather}
{\bm A}
\left(
\begin{array}{c}
\bar{M}_\ell(s,k) \\
\bar{P}_\ell(s,k) \\
\end{array}
\right)
=
\left(
\begin{array}{c}
\bar{m}_\ell (0,k) \\
\bar{p}_\ell (0,k) + \bar{\Psi}_\ell(s,k) \\
\end{array}
\right),
\label{peom3}
\\
{\bm A}
:=
\left(
\begin{array}{cc}
s+k^2 & ik \\
- ik & s+k^2 \\
\end{array}
\right),
\label{A}
\end{gather}
where we have used ${\cal F}[ \pd_z^n f(t,z) ] = (ik)^n \bar{f}(t,k) \; (n \in {\mathbb N})$ and $ {\cal L}[ \pd_t f(t,z) ] = sF(s,z) -f(0,z)$.
The solution to Eqs.~\eqref{peom1} and \eqref{peom2} is obtained by multiplying Eq.~\eqref{peom3} by ${\bm A}^{-1}$ from the left and inversely transforming it,
\be
\left(
\begin{array}{c}
m_\ell(t,z) \\
p_\ell(t,z) \\
\end{array}
\right)
=
({\cal F}^{-1} \circ {\cal L}^{-1})
\left[
{\bm A}^{-1}
\left(
\begin{array}{c}
\bar{m}_\ell (0,k) \\
\bar{p}_\ell (0,k) + \bar{\Psi}_\ell(s,k) \\
\end{array}
\right)
\right] .
\label{sol}
\ee
By simple algebra, the inverse matrix ${\bm A}^{-1}$ is found to decompose into two parts,
\begin{gather}
{\bm A}^{-1}
=
\sum_{\sigma = +,-} \frac{ 1 }{s-s_\sigma (k)} {\bm B}_\sigma,
\label{Ainverse}
\\
{\bm B}_\sigma
:=
\frac12
\left(
\begin{array}{cc}
1 & - \sigma i \\
\sigma i & 1\\
\end{array}
\right),
\label{B}
\\
s_\sigma (k) := k( \sigma - k ).
\label{dispersion}
\end{gather}
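As a quick numerical sanity check (an illustration added here, not part of the derivation), one can confirm the partial-fraction decomposition \eqref{Ainverse}--\eqref{dispersion} at an arbitrary sample point:

```python
import numpy as np

def A(s, k):
    """The matrix of Eq. (A)."""
    return np.array([[s + k**2, 1j * k], [-1j * k, s + k**2]])

B = {+1: 0.5 * np.array([[1, -1j], [1j, 1]]),    # B_+ of Eq. (B)
     -1: 0.5 * np.array([[1, +1j], [-1j, 1]])}   # B_-

def s_sigma(sigma, k):
    return k * (sigma - k)                        # Eq. (dispersion)

s, k = 0.7 + 0.3j, 1.4                            # arbitrary sample point
lhs = np.linalg.inv(A(s, k))
rhs = sum(B[sig] / (s - s_sigma(sig, k)) for sig in (+1, -1))
print(np.max(np.abs(lhs - rhs)))                  # ~ 0
```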
See Fig.~\ref{fig:disp} for a plot of $s=s_\pm(k)$, which corresponds to the dispersion relations of the waves.
\begin{figure}[tb]
\begin{center}
\begin{minipage}[c]{0.9\textwidth}
\linespread{0.85}
\begin{center}
\includegraphics[height=4cm]{fig1_dispersion.eps}
\caption{{\small {\sf Dispersion relations $s=s_+(k)$ (black solid) and $s=s_-(k)$ (red dashed), defined by Eq.~\eqref{dispersion}.
}}}
\label{fig:disp}
\end{center}
\end{minipage}
\end{center}
\end{figure}
After this decomposition, one can perform the inverse Laplace transformation ${\cal L}^{-1}$ in Eq.~\eqref{sol} to obtain
\begin{align}
\left(
\begin{array}{c}
m_\ell(t,z) \\
p_\ell(t,z) \\
\end{array}
\right)
&=
\sum_{\sigma=+,-}
{\bm B}_\sigma
\left(
\begin{array}{c}
{\cal F}^{-1} [ e^{ s_\sigma (k) t } \bar{m}_\ell (0,k) ]\\
{\cal F}^{-1} [ e^{ s_\sigma (k) t } \bar{p}_\ell (0,k) + e^{ s_\sigma (k) t } \ast \bar{ \psi }_\ell (t,k) ]\\
\end{array}
\right)
\label{sol2}
\\
&=
\sum_{\sigma=+,-}
{\bm B}_\sigma
\left(
\begin{array}{c}
{\cal F}^{-1}[ e^{s_\sigma (k) t} ] \star m_\ell (0,z) \\
{\cal F}^{-1}[ e^{s_\sigma (k) t} ] \star p_\ell (0,z) + {\cal F}^{-1}[ e^{s_\sigma (k) t} ] \star \ast \psi_\ell (t,z) \\
\end{array}
\right).
\label{sol3}
\end{align}
Here, we have used ${\cal L}^{-1}[ \frac{1}{s-a} ] = e^{a t}$, ${\cal L}^{-1}[ F(s,z) G(s,z) ] = f(t,z) \ast g(t,z)$, and ${\cal F}^{-1}[ \bar{f}(t,k) \bar{g}(t,k) ] = f(t,z) \star g(t,z)$.
Equations \eqref{sol2} and \eqref{sol3} are exactly what we wanted, namely, the general form of solutions to the perturbation equations~\eqref{peom1} and \eqref{peom2} for arbitrary given initial conditions, $ m_\ell (0,z) $ and $ p_\ell (0,z) \; (\ell \in {\mathbb N})$. It depends on the problem which expression, \eqref{sol2} or \eqref{sol3}, is easier to compute. For all examples considered in this paper, Eq.~\eqref{sol2} seems easier to compute. Using Eq.~\eqref{dispersion}, one can easily obtain
\be
{\cal F}^{-1}[ e^{s_\sigma (k) t} ] = \frac{ 1 }{ \sqrt{4\pi t} } \exp[ \frac{ (\sigma t+iz)^2}{4t} ] ,
\label{fourier_exp}
\ee
which is useful when one uses expression \eqref{sol3}.
In principle, one can obtain the arbitrary-order solutions, $m_\ell (t,z)$ and $p_\ell(t,z)$, by computing the right-hand side of Eq.~\eqref{sol2} or \eqref{sol3} order by order. However, since the source term $\psi_\ell (t,z) $ becomes more and more complicated as $\ell$ increases, it is fair to say that obtaining the solutions analytically to arbitrary order is impossible in general. In addition, when the initial condition is a complicated function, even the first-order solution can be impossible to obtain analytically; namely, the inverse Fourier transformation in Eq.~\eqref{sol2} cannot be performed analytically in such a case. In the rest of this section, we shall consider two examples of initial conditions for which the first few orders of the solution are analytically obtainable.
\subsection{Gaussian wave packet}
\label{sec:gauss}
As the first example, we adopt a Gaussian wave packet as the initial perturbation given to the asymptotically flat black brane. For this perturbation, the inverse Fourier transformation at the first order in Eqs.~\eqref{sol2} and \eqref{sol3} can be computed analytically. Since the unperturbed black brane is unstable, the perturbation of course grows unboundedly, and nothing unexpected happens in this sense. However, one can see how to use the general results obtained in the previous section and check the validity of the method. In particular, comparing the perturbative solution with a full numerical solution, the perturbative solution turns out to be effective even for finite-amplitude dynamics. In other words, the convergence of the $\epsilon$-expansions \eqref{expansion1} and \eqref{expansion2} is rapid enough, at least for this example.
Let us consider the situation where the black brane is given an $O(\epsilon)$ perturbation taking the form of a Gaussian wave packet,
\begin{gather}
m_1(0,z) = \frac{ \beta }{ \sqrt{2\pi} b } \exp [ - \frac{ (z-z_0)^2 }{ 2b^2 } ],
\label{ic_gauss1}
\\
p_1 (0,z) = m_1'(0,z),
\label{ic_gauss2}
\end{gather}
where $\beta,\; b \; (>0)$, and $z_0$ are real constants. Obviously, $b$ and $z_0$ parameterize the spatial extension of the wave packet and its central position, respectively. $\beta$ is just a normalization constant giving $ \int_{-\infty}^\infty m_1(0,z) dz = \beta $. The initial perturbation of $p_1$ is given by the spatial derivative of $m_1$ for simplicity, though their initial conditions can be independent in nature.
The Fourier transformations of the above initial conditions are
\begin{gather}
\bar{m}_1(0,k)
=
\beta \exp [ -\frac{ b^2 k^2 }{2} - ik z_0 ],
\\
\bar{p}_1(0,k) = ik \bar{m}_1(0,k).
\end{gather}
\subsubsection{First-order solutions}
Since we have no source term at $O(\epsilon)$, $\psi_1(t,z) \equiv 0$, we see from Eq.~\eqref{sol2} that what to compute is the inverse Fourier transformation of initial spectra, $\bar{m}_1(0,k)$ and $\bar{p}_1(0,k)$, multiplied by $e^{s_\sigma(k) t}$. One can compute such quantities as
\begin{align}
{\cal F}^{-1}[ e^{s_\sigma (k) t} \bar{m}_1(0,k)]
&=
\frac{ \beta }{ \sqrt{ 2\pi ( b^2+2t ) } }
\exp
[
\frac{ t^2-( z-z_0 )^2 }{ 2( b^2+2t ) }
+
i\sigma \frac{ t ( z-z_0 ) }{ b^2+2t }
],
\\
{\cal F}^{-1}[ e^{s_\sigma (k) t} \bar{p}_1(0,k)]
&=
- \frac{ \beta [ (z-z_0) - i \sigma t ] }{ \sqrt{ 2\pi ( b^2+2t )^3 } }
\exp
[
\frac{ t^2-( z-z_0 )^2 }{ 2( b^2+2t ) }
+
i\sigma \frac{ t ( z-z_0 ) }{ b^2+2t }
].
\end{align}
Substituting these results into Eq.~\eqref{sol2}, we obtain the first-order solutions
\begin{align}
\left(
\begin{array}{c}
m_1 (t,z) \\
p_1 (t,z) \\
\end{array}
\right)
=
\beta \sqrt{ \frac{ (b^2+3t)^2+(z-z_0)^2 }{ 2\pi (b^2+2t)^3 } }
\exp [ \frac{ t^2-( z-z_0 )^2 }{ 2( b^2+2t ) } ]
\left(
\begin{array}{c}
\displaystyle \cos [\frac{ t(z-z_0) }{ b^2+2t } + \Theta ] \\
\displaystyle - \sin [ \frac{ t(z-z_0) }{ b^2+2t } + \Theta ] \\
\end{array}
\right),
\label{mp1_gauss}
\end{align}
where
\begin{align}
\cos \Theta
:=
\frac{ b^2+3t }{ \sqrt{ (b^2+3t)^2+(z-z_0)^2 } },
\;\;\;
\sin \Theta
:=
\frac{ z-z_0 }{ \sqrt{ (b^2+3t)^2+(z-z_0)^2 } }.
\end{align}
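One can cross-check Eq.~\eqref{mp1_gauss} against a direct numerical evaluation of Eq.~\eqref{sol2}. The sketch below (an added illustration with the sample values $\beta=b=1$, $z_0=0$) performs the inverse Fourier transformation by simple quadrature:

```python
import numpy as np

beta, b, z0 = 1.0, 1.0, 0.0     # illustrative parameters
t, z = 1.0, 0.7                 # sample point

# Numerical side: Eq. (sol2) with the inverse Fourier transform by quadrature.
k = np.linspace(-12.0, 12.0, 48001)
dk = k[1] - k[0]
mbar0 = beta * np.exp(-b**2 * k**2 / 2 - 1j * k * z0)
pbar0 = 1j * k * mbar0
m_num = 0j
for sig in (+1, -1):
    growth = np.exp(k * (sig - k) * t)          # e^{s_sigma(k) t}
    Fm = np.sum(growth * mbar0 * np.exp(1j * k * z)) * dk / (2 * np.pi)
    Fp = np.sum(growth * pbar0 * np.exp(1j * k * z)) * dk / (2 * np.pi)
    m_num += 0.5 * (Fm - sig * 1j * Fp)         # first row of B_sigma

# Closed form, Eq. (mp1_gauss).
R = np.sqrt((b**2 + 3 * t)**2 + (z - z0)**2)
Theta = np.arctan2(z - z0, b**2 + 3 * t)
m_cl = (beta * R / np.sqrt(2 * np.pi * (b**2 + 2 * t)**3)
        * np.exp((t**2 - (z - z0)**2) / (2 * (b**2 + 2 * t)))
        * np.cos(t * (z - z0) / (b**2 + 2 * t) + Theta))
print(abs(m_num.real - m_cl))   # ~ 0
```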
We believe that this is the first example describing a non-trivial evolution of the GL instability analytically in the time domain, which is realized by virtue of the large-$D$ method and the perturbation theory developed in this paper.
Like the diffusion of a Gaussian wave packet according to an ordinary diffusion equation, the solutions \eqref{mp1_gauss} have a temporal decay factor, which behaves as $ \frac{1}{\sqrt{ b^2+2t }} $, and a spatial decay factor $\exp[ - \frac{ (z-z_0)^2 }{ 2(b^2+2t) } ]$. The sinusoidal parts represent a spatial oscillation with time-dependent wavelength $ \frac{2\pi (b^2+2t)}{t} $, which interestingly asymptotes to a universal value $4\pi$ as $t \to +\infty$.
What is crucially different from ordinary diffusion is that the solutions grow exponentially in time due to the factor $ \exp[ \frac{ t^2 }{ 2(b^2+2t) } ] $.
We stress that this exponential growth eventually happens irrespective of $b$, which characterizes the extension of the initial wave packet. Substituting $z=z_0$ into Eq.~\eqref{mp1_gauss}, we can see the time dependence of the peak height,
\be
m_1(t,z_0)
=
\beta \sqrt{ \frac{ (b^2+3t)^2 }{ 2\pi (b^2+2t)^3 } } \exp[ \frac{t^2}{2(b^2+2t)} ].
\ee
For $ b >\sqrt{3}$, this is monotonically increasing in time. For $ 0<b<\sqrt{3} $, although it is initially decreasing, it turns to increase at $t = \frac13 ( 3-2b^2 + \sqrt{ b^4-3b^2+9 } )$ and eventually diverges.
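The turnaround time quoted here can be cross-checked numerically. The following sketch (an added illustration with the representative value $b=0.5$) locates the minimum of the peak height on a fine grid and compares it with the analytic expression:

```python
import numpy as np

beta, b = 1.0, 0.5              # any 0 < b < sqrt(3) exhibits the turnaround
t = np.linspace(1e-6, 5.0, 500001)
peak = (beta * np.sqrt((b**2 + 3 * t)**2 / (2 * np.pi * (b**2 + 2 * t)**3))
        * np.exp(t**2 / (2 * (b**2 + 2 * t))))

t_num = t[np.argmin(peak)]                                     # grid minimum
t_formula = (3 - 2 * b**2 + np.sqrt(b**4 - 3 * b**2 + 9)) / 3  # analytic value
print(t_num, t_formula)         # the two agree to grid resolution
```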
Thus, while the GL instability is generally said to be a long-wavelength instability, an initial perturbation taking the form of a Gaussian wave packet necessarily grows exponentially, however small the `scale' $b$ of the perturbation is. The reason is that the Fourier spectrum of any Gaussian wave packet necessarily contains the GL modes $k \in (-1,1) \setminus \{ 0 \}$ (see Sec.~\ref{sin_1st}). This is quite reasonable but might be a somewhat interesting point.
Three-dimensional plots of $1+m_1(t,z)$ and $p_1(t,z)$ are presented in Figs.~\ref{fig:gauss}(a) and \ref{fig:gauss}(b), respectively. One can observe the growth and oscillation described above. In addition, snapshots of $1+m_1(t,z)$ and $p_1(t,z)$ at selected moments, compared with numerical solutions, are presented in Figs.~\ref{fig:gauss}(c) and \ref{fig:gauss}(d), respectively. Compared with the full numerical solutions, which are obtained by directly solving the original equations \eqref{eom1} and \eqref{eom2} with the same initial conditions, {\it i.e.}, $m(0,z)=1+m_1(0,z)$ and $p(0,z)=p_1(0,z)$, the first-order solutions almost completely capture the qualitative features of the full solution during the time domain considered. Note, however, that the deviation from the full solution becomes large as time proceeds, which results in the divergence of the perturbative $m$ and $p$.
\subsubsection{Notes on second-order perturbation}
\label{sec:note_gauss}
The comparison between the first-order solution and the full solution above tells us that the $O(\epsilon^2)$ perturbations are negligible while the amplitudes of $m$ and $p$ are $O(1)$ in the current example, despite the ordinary expectation that the perturbation becomes invalid for such a large amplitude. We will see in Sec.~\ref{sec:ads} that the $O(\epsilon)$ approximation is even more accurate for the Gaussian perturbation to the asymptotically AdS brane than in the present case.
In order to see the non-linear effects at the second order, it is natural to assume that the initial perturbation at the second order vanishes, $m_2(0,z)=p_2(0,z)=0$. The reason is that $m_2(t,z)$ and $p_2(t,z)$ are composed of two independent parts, as seen in Eq.~\eqref{sol2}: one is the contribution from the initial perturbations $m_2(0,z)$ and $p_2(0,z)$, and the other is that from the source term $\psi_2(t,z)$. The former clearly has the same time dependence as the first-order solution. Namely, if we prepare a Gaussian wave packet as the initial condition of the second-order perturbation, that part of the second-order perturbation evolves in exactly the same way as the $O(\epsilon)$ perturbation described above. Only the latter, the contribution from the source term, can have a time dependence different from the first-order perturbation. This will be seen explicitly in Sec.~\ref{sec:sin}.
For the reason described above, we assume that the initial perturbations vanish at $O(\epsilon^2)$, $m_2(0,z)=p_2(0,z)=0$. Then, we see from Eq.~\eqref{sol2} that what remains to compute at $O(\epsilon^2)$ is only the inverse Fourier transformation of the convolution between $e^{s_\sigma(k)t}$ and the spectrum of the source term, $\bar{\psi}_2 (t,k)$. Unfortunately, however, such a convolution in the present example involves the Gauss error function and cannot be written in terms of elementary functions. Thus, it seems difficult to obtain the second-order solutions analytically, and therefore we stop the analysis of this example here.
\begin{figure}[tb]
\begin{center}
\begin{minipage}[c]{0.9\textwidth}
\linespread{0.85}
\begin{center}
\setlength{\tabcolsep}{ 10 pt }
\begin{tabular}{ cc }
(a) & (b) \\
\includegraphics[height=4.5cm]{fig2a_gauss3dm1.eps} &
\includegraphics[height=4.5cm]{fig2b_gauss3dp1.eps} \\
(c) & (d) \\
\includegraphics[height=4cm]{fig2c_gauss2dmComp.eps} &
\includegraphics[height=4cm]{fig2d_gauss2dpComp.eps} \\
\end{tabular}
\caption{{\small {\sf Three-dimensional plots of (a) $1+m_1(t,z)$ and (b) $p_1(t,z)$, given by Eqs.~\eqref{mp1_gauss} with $\beta=b=1, \; z_0=0$. The comparison between the first-order solution (c) $1+m_1(t,z)$ (resp.\ (d) $p_1(t,z)$) and the non-perturbative solution $m(t,z)$ (resp.\ $p(t,z)$) obtained by numerically solving Eqs.~\eqref{eom1} and \eqref{eom2}. The blue-dashed curve represents the initial configuration $m(0,z)$ (resp.\ $p(0,z)$). The green, red, and black solid curves represent the first-order solutions at $t=3.3$, $6.7$, and $10$, respectively. The (green, red, and black) dashed curves represent the full numerical solutions at the corresponding times.}}}
\label{fig:gauss}
\end{center}
\end{minipage}
\end{center}
\end{figure}
\subsection{Superposed sinusoidal waves}
\label{sec:sin}
As the second example, we consider the situation where the black brane is initially given an $O(\epsilon)$ perturbation that is a superposition of an arbitrary number of sinusoidal waves. This example is simple but interesting enough to see what happens in the non-linear regime.
We set the following initial conditions
\begin{gather}
m_1 (0,z) =\sum_{n=1}^N a_n \cos (k_n z + \varphi_n),
\label{ic1}
\\
p_1 (0,z) = m_1'(0,z),
\label{ic2}
\\
m_\ell (0,z) = p_\ell (0,z) = 0,
\;\;\;
\forall \ell \geq 2,
\label{ic3}
\end{gather}
where $a_n,$ $k_n$, and $\varphi_n \; (n=1,2,\cdots, N)$ are real constants. It is noted that the right-hand side of Eq.~\eqref{ic1} is not written in the general form of the Fourier series expansion of a periodic function. However, by choosing appropriate wave numbers $k_n$ and phases $\varphi_n$, and taking the summation over $n$ from $0$ to infinity, rather than from $1$ to $N$, Eq.~\eqref{ic1} can cover the Fourier series expansion of an arbitrary piecewise continuous periodic function. The assumption that the second- and higher-order perturbations vanish initially is adopted for the same reason as in the previous example in Sec.~\ref{sec:gauss}.
The Fourier transformations of the above initial configurations are
\begin{gather}
\bar{m}_1 (0,k)
=
\pi \sum_{n=1}^N a_n
[ e^{ i\varphi_n } \delta (k-k_n) + e^{- i \varphi_n } \delta (k+k_n) ],
\\
\bar{p}_1(0,k) = ik \bar{m}_1(0,k),
\\
\bar{m}_\ell (0,k) = \bar{p}_\ell (0,k) = 0,
\;\;\;
\forall \ell \geq 2.
\end{gather}
In the rest of this section, we shall compute the right-hand side of Eq.~\eqref{sol2} order by order for these initial conditions.
\subsubsection{First-order solutions}
\label{sin_1st}
Since we have no source term at $O(\epsilon)$, $ \psi_1 \equiv 0 $, we see from Eq.~\eqref{sol2} that what to compute is only the inverse Fourier transformation of the initial spectra, $\bar{m}_1(0,k)$ and $\bar{p}_1 (0,k)$, multiplied by $ e^{s_\sigma (k) t} $. These are easily computed to give
\begin{align}
{\cal F}^{-1}[ e^{ s_\sigma (k) t } \bar{m}_1 (0,k) ]
=&
\frac12 \sum_{n=1}^N a_n
[
e^{ s_\sigma (k_n) t } e^{ i( k_n z + \varphi_n ) }
+
e^{ s_{- \sigma} (k_n) t } e^{ - i( k_n z + \varphi_n ) }
],
\label{Fem}
\\
{\cal F}^{-1}[ e^{ s_\sigma (k) t } \bar{p}_1 (0,k) ]
=&
\frac{i}{2} \sum_{n=1}^N k_n a_n
[
e^{ s_\sigma (k_n) t } e^{ i( k_n z + \varphi_n ) }
-
e^{ s_{- \sigma} (k_n) t } e^{ - i( k_n z + \varphi_n ) }
],
\label{Fep}
\end{align}
where we have used $s_\sigma (-k) = s_{-\sigma} (k)$. Substituting these results into Eq.~\eqref{sol2}, we obtain the first-order solutions,
\begin{align}
m_1(t,z)
=&
\frac12 \sum_{n=1}^N a_n
[
(1 + k_n) e^{ s_+ (k_n) t } + (1 - k_n) e^{ s_- (k_n) t }
] \cos ( k_n z + \varphi_n ),
\label{m1}
\\
p_1(t,z)
=&
- \frac12 \sum_{n=1}^N a_n
[
(1 + k_n) e^{ s_+ (k_n) t } - (1 - k_n) e^{ s_- (k_n) t }
] \sin ( k_n z + \varphi_n ).
\label{p1}
\end{align}
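As a sanity check (an added numerical sketch with a single illustrative mode), one can verify by central finite differences that these first-order solutions satisfy the perturbation equations~\eqref{peom1} and \eqref{peom2} with $\psi_1 \equiv 0$:

```python
import numpy as np

a, k, phi = 0.8, 0.5, 0.3                    # one illustrative mode (N = 1)
s_plus, s_minus = k * (1 - k), k * (-1 - k)  # s_+(k), s_-(k)

def m1(t, z):
    return 0.5 * a * ((1 + k) * np.exp(s_plus * t)
                      + (1 - k) * np.exp(s_minus * t)) * np.cos(k * z + phi)

def p1(t, z):
    return -0.5 * a * ((1 + k) * np.exp(s_plus * t)
                       - (1 - k) * np.exp(s_minus * t)) * np.sin(k * z + phi)

# Residuals of Eqs. (peom1)-(peom2) with psi_1 = 0, by central differences.
t, z, h = 2.0, 1.3, 1e-4
dt_m = (m1(t + h, z) - m1(t - h, z)) / (2 * h)
dzz_m = (m1(t, z + h) - 2 * m1(t, z) + m1(t, z - h)) / h**2
dz_p = (p1(t, z + h) - p1(t, z - h)) / (2 * h)
dt_p = (p1(t + h, z) - p1(t - h, z)) / (2 * h)
dzz_p = (p1(t, z + h) - 2 * p1(t, z) + p1(t, z - h)) / h**2
dz_m = (m1(t, z + h) - m1(t, z - h)) / (2 * h)
print(abs(dt_m - dzz_m + dz_p), abs(dt_p - dzz_p - dz_m))  # both ~ 0
```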
Equations~\eqref{m1} and \eqref{p1} represent the $O(\epsilon)$ approximate time evolution of the initial perturbation, which takes the form of the superposed sinusoidal waves~\eqref{ic1} and \eqref{ic2}. Since the initial conditions \eqref{ic1} and \eqref{ic2} are quite general, so are the solutions \eqref{m1} and \eqref{p1}.
Since we are considering linear equations of motion, there is no mode-mode coupling, which appears only in the non-linear regime, and therefore Eqs.~\eqref{m1} and \eqref{p1} have a simple interpretation. The coefficient of $\cos(k_n z+\varphi_n)$ in Eq.~\eqref{m1} represents the time-dependent amplitude of the initially given mode $\cos(k_n z+\varphi_n)$. Each mode evolves independently according to its growth or damping rates determined by $e^{s_+(k_n)t}$ and $e^{s_-(k_n)t}$. From the concrete form of $s_\pm(k)$ in Eq.~\eqref{dispersion}, one can see that if $k_n \in (-1,0)$ (resp.\ $k_n \in (0,1)$), such a mode grows exponentially due to $e^{s_-(k_n)t}$ (resp.\ $e^{s_+ (k_n)t}$), which represents the GL instability.
\subsubsection{Second-order solutions}
Since we assume that the initial perturbations vanish at $O(\epsilon^2)$, $ m_2(0,z)=p_2(0,z)=0 $, we see from Eq.~\eqref{sol2} that what to compute is the inverse Fourier transformation of the convolution between $ e^{ s_\sigma (k) t } $ and the Fourier spectrum of the source term, $\bar{\psi}_2(t,k)$. Using Eqs.~\eqref{source2} and \eqref{p1}, such a quantity is computed and written down in a simple form as
\begin{align}
{\cal F}^{-1}[ e^{ s_\sigma (k) t } \ast & \bar{ \psi }_2 (t,k) ]
=
\frac{i}{8}
\sum_{n=1}^N \sum_{n'=1}^N a_n a_{n'} k_{n'}
\nn
\\
&
\times
\Big(
C_{nn'}^{(\sigma)(+)} e^{i[ (k_n+k_{n'})z+(\varphi_n + \varphi_{n'}) ] }
+
C_{nn'}^{(\sigma)(-)} e^{i[ (k_n-k_{n'})z+(\varphi_n - \varphi_{n'}) ]}
\nn
\\
&
-
C_{nn'}^{(-\sigma)(-)} e^{-i[ (k_n-k_{n'})z+(\varphi_n - \varphi_{n'}) ]}
-
C_{nn'}^{(-\sigma)(+)} e^{-i[ (k_n+k_{n'})z+(\varphi_n + \varphi_{n'}) ]}
\Big)
\label{convo}
\end{align}
by defining a function of time,
\begin{align}
C_{nn' }^{(\sigma)(\sigma') }
:=&
\frac{ ( 1+k_n )( 1+k_{n'} ) }{ s_+(k_n) + s_+(k_{n'}) - s_\sigma ( k_n + \sigma' k_{n'} )}
( e^{ [ s_+(k_n) + s_+(k_{n'}) ] t } - e^{ s_\sigma (k_n +\sigma' k_{n'}) t } )
\nn
\\
-&
\frac{ ( 1+k_n )( 1-k_{n'} ) }{ s_+(k_n) + s_-(k_{n'}) - s_\sigma ( k_n +\sigma' k_{n'} )}
( e^{ [ s_+(k_n) + s_-(k_{n'}) ] t } - e^{ s_\sigma (k_n +\sigma' k_{n'}) t } )
\nn
\\
-&
\frac{ ( 1-k_n )( 1+k_{n'} ) }{ s_-(k_n) + s_+(k_{n'}) - s_\sigma( k_n +\sigma' k_{n'} )}
( e^{ [ s_-(k_n) + s_+(k_{n'}) ] t } - e^{ s_\sigma (k_n +\sigma' k_{n'}) t } )
\nn
\\
+&
\frac{ ( 1-k_n )( 1-k_{n'} ) }{ s_-(k_n) + s_-(k_{n'}) - s_\sigma ( k_n +\sigma' k_{n'} )}
( e^{ [ s_-(k_n) + s_-(k_{n'}) ] t } - e^{ s_\sigma (k_n +\sigma' k_{n'}) t } ).
\label{c}
\end{align}
Substituting the above result \eqref{convo} into Eq.~\eqref{sol2}, we obtain the second-order solutions,
\begin{align}
m_2(t,z)
=
\frac18 \sum_{ n =1 }^N \sum_{n'=1}^N a_n a_{n'} k_{n'}
\Big(
[ & C_{nn'}^{(+)(+)} - C_{nn'}^{(-)(+)} ]
\cos [ (k_n+k_{n'})z + (\varphi_n + \varphi_{n'}) ]
\nn
\\
+&
[ C_{nn'}^{(+)(-)} - C_{nn'}^{(-)(-)} ] \cos [ (k_n-k_{n'})z + (\varphi_n - \varphi_{n'}) ]
\Big),
\label{m2_pre}
\\
p_2(t,z)
=
- \frac18 \sum_{ n =1 }^N \sum_{n'=1}^N a_n a_{n'} k_{n'}
\Big(
[ & C_{nn'}^{(+)(+)} + C_{nn'}^{(-)(+)} ]
\sin [ (k_n+k_{n'})z + (\varphi_n + \varphi_{n'}) ]
\nn
\\
+&
[ C_{nn'}^{(+)(-)} + C_{nn'}^{(-)(-)} ] \sin [ (k_n-k_{n'})z + (\varphi_n - \varphi_{n'}) ]
\Big) .
\label{p2_pre}
\end{align}
Note that $m(t,z) = 1+m_1(t,z)+m_2(t,z)$ and $p(t,z) = p_1(t,z)+p_2(t,z)$, with Eqs.~\eqref{m1}, \eqref{p1}, \eqref{m2_pre}, and \eqref{p2_pre}, represent the $O(\epsilon^2)$ approximate time evolution of the initial perturbation, which takes the form of the superposed sinusoidal waves~\eqref{ic1}, \eqref{ic2}, and \eqref{ic3}. Since the initial conditions \eqref{ic1} and \eqref{ic2} are quite general, so are these approximate solutions.
Since the initial perturbations are assumed to vanish at $O(\epsilon^2)$, $m_2(0,z)=p_2(0,z)=0$, the above $O(\epsilon^2)$ solutions contain only the contribution from the source term $\psi_2=-2p_1p_1'$. If one prepares non-vanishing initial conditions at the second order, their contribution is simply added to the above solutions, but such a contribution will exhibit no interesting behavior since it has a time dependence similar to that of the $O(\epsilon)$ solution.
In general, the multiple summation of any quantity with two indices $\sum_{n,n'} T_{nn'}$ can be decomposed as $\sum_{n,n'} T_{nn'} = \sum_{n} T_{nn} + \sum_{n<n'} ( T_{nn'} +T_{n'n} )$. Here, $ \sum_{n<n'} $ represents the summation over all $n$ and $n'$ satisfying $1 \leq n<n' \leq N$. Using this decomposition, one can decompose the multiple summation in Eqs.~\eqref{m2_pre} and \eqref{p2_pre} as
\begin{align}
& m_2(t,z)
\nn
=
\\
& \frac18 \sum_{n=1}^N a_n^2 k_n
\Big(
[ C_{nn}^{(+)(+)} - C_{nn}^{ (-)(+) } ] \cos [ 2(k_n z+\varphi_n) ]
+
[ C_{nn}^{(+)(-)} - C_{nn}^{ (-)(-) } ]
\Big)
\nn
\\
+&
\frac18 \sum_{ n < n' } a_n a_{n'}
\Big(
k_{n'}[ C_{nn'}^{(+)(+)} - C_{nn'}^{(-)(+)} ] + k_{n}[ C_{n'n}^{(+)(+)} - C_{n'n}^{(-)(+)} ]
\Big) \cos [ (k_n+k_{n'})z + (\varphi_n + \varphi_{n'}) ]
\nn
\\
+&
\frac18 \sum_{ n < n' } a_n a_{n'}
\Big(
k_{n'}[ C_{nn'}^{(+)(-)} - C_{nn'}^{(-)(-)} ] + k_{n}[ C_{n'n}^{(+)(-)} - C_{n'n}^{(-)(-)} ]
\Big) \cos [ (k_n-k_{n'})z + (\varphi_n - \varphi_{n'}) ],
\label{m2}
\\
&
p_2(t,z)
=
\nn
\\
&
-\frac18 \sum_{n=1}^N a_n^2 k_n [ C_{nn}^{(+)(+)} + C_{nn}^{(-)(+)} ] \sin [ 2(k_n z+\varphi_n) ]
\nn
\\
&
-\frac18 \sum_{n<n'} a_n a_{n'}
\Big(
k_{n'} [ C_{nn'}^{(+)(+)} + C_{nn'}^{(-)(+)} ] + k_{n} [ C_{n'n}^{(+)(+)} + C_{n'n}^{(-)(+)} ]
\Big)
\sin [ (k_n+k_{n'}) z +( \varphi_n+\varphi_{n'} ) ]
\nn
\\
&
-\frac18 \sum_{n<n'} a_n a_{n'}
\Big(
k_{n'} [ C_{nn'}^{(+)(-)} + C_{nn'}^{(-)(-)} ] - k_{n} [ C_{n'n}^{(+)(-)} + C_{n'n}^{(-)(-)} ]
\Big)
\sin [ (k_n - k_{n'}) z +( \varphi_n - \varphi_{n'} ) ].
\label{p2}
\end{align}
The first terms of Eqs.~\eqref{m2} and \eqref{p2} represent the self-interference of each mode $k_n$ ($ n = 1,2,\cdots, N $). On the other hand, the second and third terms represent the interference between the modes $k_n$ and $k_{n'}$ ($n < n'$).
The non-linear source term involves the mode-mode coupling, which is absent at the linear order. This coupling excites the terms of $\cos[(k_n \pm k_{n'})z]$ and $\sin[(k_n \pm k_{n'})z]$ in Eqs.~\eqref{m2_pre} and \eqref{p2_pre}. For example, let us look at the structure of $m_2(t,z)$. From Eqs.~\eqref{c} and \eqref{m2_pre}, one can see that both the $\cos[(k_n + k_{n'})z]$ and $\cos[(k_n - k_{n'})z]$ terms involve the following three kinds of time dependence,
\begin{gather}
e^{ [ s_+(k_n) + s_+(k_{n'}) ] t },
\;\;\;
e^{ [ s_+(k_n) + s_-(k_{n'}) ] t} ,
\;\;\;
e^{ [ s_-(k_n) + s_-(k_{n'}) ] t } .
\label{factor1}
\end{gather}
In addition, one can see that $\cos[(k_n + k_{n'})z]$ and $\cos[(k_n - k_{n'})z]$ terms involve
\begin{gather}
e^{ s_+ (k_n + k_{n'}) t },
\;\;\;
e^{ s_- (k_n + k_{n'}) t }
\;\;\;
\mbox{and}
\;\;\;
e^{ s_+ (k_n - k_{n'}) t },
\;\;\;
e^{ s_- (k_n - k_{n'}) t },
\label{factor2}
\end{gather}
respectively. Thus, the second-order solutions exhibit a variety of time dependences, given by the exponents of the quantities \eqref{factor1} and \eqref{factor2}.
\begin{figure}[bt]
\begin{center}
\begin{minipage}[c]{0.9\textwidth}
\linespread{0.85}
\begin{center}
\setlength{\tabcolsep}{ 10 pt }
\begin{tabular}{ cc }
(a) & (b) \\
\includegraphics[height=4.5cm]{fig3a_sin3dm.eps} &
\includegraphics[height=4.5cm]{fig3b_sin3dp.eps} \\
(c) & (d) \\
\includegraphics[height=4cm]{fig3c_sin2dm.eps} &
\includegraphics[height=4cm]{fig3d_sin2dp.eps} \\
\end{tabular}
\caption{{\small {\sf Three-dimensional plots of (a) $1+m_1(t,z)+m_2(t,z)$ and (b) $p_1(t,z)+p_2(t,z)$, given by Eqs.~\eqref{m1}, \eqref{p1}, \eqref{m2}, and \eqref{p2} with $N=2, \; a_1=a_2=1, \; k_1 = 1.3, \; k_2=1.2, \; \varphi_1=\varphi_2=0$. Snapshots of (c) $1+m_1(t,z)+m_2(t,z)$ and (d) $p_1(t,z)+p_2(t,z)$ at $t=0$ (blue dotted), $t=8$ (green dashed), $t=28$ (red dot-dashed), and $t=40$ (black solid).}}}
\label{fig:sin}
\end{center}
\end{minipage}
\end{center}
\end{figure}
\subsubsection{Notes on Gregory-Laflamme instability}
Let us consider the meaning of investigating the higher-order perturbations from the stability point of view. The asymptotically flat black brane we consider here is essentially unstable. Namely, as seen in Sec.~\ref{sin_1st}, if the initial perturbation contains any mode whose wave number satisfies $k_n \in (-1,1) \setminus \{ 0 \}$, such a mode grows unboundedly. However, we consider the black brane in the large-$D$ limit, namely, above the critical dimension~\cite{Sorkin:2004qq}. Thus, the GL instability initially grows but gradually damps in the non-linear regime, and eventually the horizon converges to a non-uniform configuration~\cite{Emparan:2015gva}.
It should be pointed out that the second-order perturbation cannot stabilize the first-order instability, because the first-order perturbation is treated as a fixed background, appearing as the source term, when we solve the second order. Nevertheless, one might expect a sign of the convergence to the non-uniform horizon to appear at the second order. Although such a sign could in principle appear at the second order, we unfortunately cannot find it in the results \eqref{m2} and \eqref{p2}.
On the other hand, a black brane that is linearly stable can become unstable at the second order. If the initial perturbation does not contain any unstable mode, it will damp exponentially at the linear level, as seen in the results of Sec.~\ref{sin_1st}. However, the second-order perturbation involves various kinds of time dependence, as seen in Eqs.~\eqref{factor1} and \eqref{factor2}. In order to see this situation directly, let us focus on a simple case as follows.
Suppose that the initial perturbation is the superposition of two modes $k_1$ and $k_2$, both of which are stable, $k_1> k_2 > 1$. In addition, assume that their difference is smaller than unity, $k_1 - k_2 \in (0,1)$. In this case, the term proportional to $ \cos[ (k_1-k_2)z +(\varphi_1-\varphi_2)] $ in Eq.~\eqref{m2} includes terms having the growing factor $ e^{s_+(k_1-k_2)t} $, as seen from Eq.~\eqref{c}. Thus, the perturbation does not grow at $O(\epsilon)$ but does at $O(\epsilon^2)$.
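This counting can be made concrete with a few lines of Python (the wave numbers $k_1=1.3$, $k_2=1.2$ are the illustrative values used in Fig.~\ref{fig:sin}):

```python
def s(sigma, k):
    """Dispersion relation s_sigma(k) = k(sigma - k) of Eq. (dispersion)."""
    return k * (sigma - k)

k1, k2 = 1.3, 1.2               # both outside the GL band: |k| > 1

# Every first-order rate is negative: the linear perturbation decays.
linear_rates = [s(+1, k1), s(-1, k1), s(+1, k2), s(-1, k2)]
assert all(rate < 0 for rate in linear_rates)

# But the difference mode excited at second order lies inside the GL band,
# so one of the factors in Eq. (factor2) grows exponentially.
assert 0 < k1 - k2 < 1
print(s(+1, k1 - k2))           # positive growth rate at O(epsilon^2)
```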
The above phenomenon is an interesting aspect of the GL instability, which is revealed for the first time by the present non-linear perturbation theory in the time domain. It is intuitively understandable: the superposition of the two modes forms a beat. For simplicity, assume $ a_1=a_2 \; (\neq 0)$ in Eq.~\eqref{ic1}; then the superposed wave is written as
\be
2 a_1 \cos [ \frac{ (k_1+k_2)z + (\varphi_1+\varphi_2) }{2} ]
\cos [ \frac{ (k_1-k_2)z + (\varphi_1-\varphi_2) }{2} ].
\ee
This exhibits a fast spatial oscillation with the large wave number $\frac{k_1+k_2}{2}$, enveloped by a slow oscillation with the small wave number $\frac{k_1-k_2}{2}$; this is called the beat phenomenon, especially when the difference of the wave numbers is rather small, $k_1-k_2 \ll k_1+k_2 $. This slow oscillation is nothing but the origin of the GL instability at the second order. In Fig.~\ref{fig:sin}, we present the three-dimensional plots of $m(t,z) = 1 + m_1(t,z) + m_2(t,z)$ and $ p_1(t,z) + p_2(t,z) $ and their snapshots at selected moments. One can observe the beat formed by the superposition of the two modes at $t=0$. As soon as the dynamics starts, this initial wave rapidly damps, as predicted by the first-order perturbation. As time proceeds, however, waves whose scale is of the same order as that of the beat begin to grow and eventually diverge.
\section{Asymptotically AdS black branes}
\label{sec:ads}
In this section, we consider the non-linear perturbations of the asymptotically AdS black branes in the large-$D$ limit. The governing equations of motion are almost the same as those in the asymptotically flat case except for the sign of one term. Thus, the formulation of the perturbation theory proceeds completely in parallel with the asymptotically flat case.
What is different from the asymptotically flat black brane in the previous section is that the AdS black branes are stable: they do not suffer from the GL instability, at least linearly. In addition, gravitational phenomena in an AdS background are interpreted as corresponding phenomena in the dual field theories via the AdS/CFT dictionary, and therefore have many more applications than the asymptotically flat case. In fact, in Secs.~\ref{sec:shock} and \ref{sec:sin_ads} we will apply the results of the general perturbation theory to the problem of shock-wave propagation, which has been discussed in the context of AdS/CFT~\cite{Herzog:2016hob}, in addition to the Gaussian wave packet and general superposed sinusoidal waves.
\subsection{Perturbation equations and general form of solutions}
\label{sec:formalism_ads}
For the asymptotically AdS neutral black branes in the large-$D$ limit of general relativity, the mass and momentum distributions, $m(t,z)$ and $p(t,z)$, obey Eq.~\eqref{eom1} and the following equation~\cite{Emparan:2015gva,Herzog:2016hob} on the domain~\eqref{domain},
\be
(\pd_t - \pd_z^2 ) p + \pd_z m = -\pd_z ( \frac{p^2}{m} ).
\label{eom2_ads}
\ee
Substituting the expansions~\eqref{expansion1} and \eqref{expansion2} into Eqs.~\eqref{eom1} and \eqref{eom2_ads}, we obtain Eq.~\eqref{peom1} and
\be
\dot{p}_\ell - p_\ell'' + m_\ell' = \psi_\ell,
\label{expansion2_ads}
\ee
where the source term $\psi_\ell$ is the same as that in the asymptotically Minkowski case, Eqs.~\eqref{source1}--\eqref{source3}.
Performing the Laplace and Fourier transformations on Eqs.~\eqref{peom1} and \eqref{expansion2_ads}, we obtain a pair of coupled algebraic equations, written as Eq.~\eqref{peom3} with ${\bm A}$ replaced by the following matrix,
\be
{\bm D}
:=
\left(
\begin{array}{cc}
s+k^2 & ik \\
ik & s+k^2 \\
\end{array}
\right).
\label{D}
\ee
The inverse matrix ${\bm D}^{-1}$ is decomposed into two parts,
\begin{gather}
{\bm D}^{-1}
=
\sum_{\sigma=+,-} \frac{1}{s-{\mathsf s}_\sigma (k)} {\bm E}_\sigma,
\label{Dinverse}
\\
{\bm E}_\sigma
:=
\frac12
\left(
\begin{array}{cc}
1 & -\sigma 1 \\
-\sigma 1 & 1 \\
\end{array}
\right),
\label{E}
\\
{\mathsf s}_\sigma (k) := k (\sigma i - k).
\label{dispersion_ads}
\end{gather}
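As a quick numerical sanity check (ours, not part of the original derivation), the spectral decomposition \eqref{Dinverse}--\eqref{dispersion_ads} can be verified by comparing the explicit inverse of ${\bm D}$ with the sum $\sum_\sigma {\bm E}_\sigma/(s-{\mathsf s}_\sigma(k))$. The Python sketch below does this at arbitrary test values of $s$ and $k$ (the values are our choice, not from the text):

```python
# Numerical check of D^{-1} = sum_sigma E_sigma / (s - s_sigma(k)), Eqs. (Dinverse)-(dispersion_ads).
import cmath

def D(s, k):
    return [[s + k**2, 1j * k], [1j * k, s + k**2]]

def inv2(M):  # inverse of a 2x2 complex matrix
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

def s_sigma(sig, k):  # dispersion relation s_sigma(k) = k(sigma*i - k)
    return k * (sig * 1j - k)

def E(sig):
    return [[0.5, -0.5 * sig], [-0.5 * sig, 0.5]]

s, k = 0.3 + 0.7j, 1.4  # arbitrary test values
lhs = inv2(D(s, k))
rhs = [[sum(E(sig)[i][j] / (s - s_sigma(sig, k)) for sig in (+1, -1))
        for j in range(2)] for i in range(2)]
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)
```

The two expressions agree to machine precision, confirming that ${\bm E}_\pm$ are the eigenprojectors of ${\bm D}$ with eigenvalues $s-{\mathsf s}_\pm(k)$.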
After this decomposition, one can perform the inverse Laplace transformation ${\cal L}^{-1}$ in Eq.~\eqref{sol} with ${\bm A}$ replaced by ${\bm D}$. Then, the general form of the solution is given by
\begin{align}
\left(
\begin{array}{c}
m_\ell(t,z) \\
p_\ell(t,z) \\
\end{array}
\right)
&=
\sum_{\sigma=+,-}
{\bm E}_\sigma
\left(
\begin{array}{c}
{\cal F}^{-1} [ e^{ {\mathsf s}_\sigma (k) t } \bar{m}_\ell (0,k) ]\\
{\cal F}^{-1} [ e^{ {\mathsf s}_\sigma (k) t } \bar{p}_\ell (0,k) + e^{ {\mathsf s}_\sigma (k) t } \ast \bar{ \psi }_\ell (t,k) ]\\
\end{array}
\right)
\label{sol2_ads}
\\
&=
\sum_{\sigma=+,-}
{\bm E}_\sigma
\left(
\begin{array}{c}
{\cal F}^{-1}[ e^{{\mathsf s}_\sigma (k) t} ] \star m_\ell (0,z) \\
{\cal F}^{-1}[ e^{{\mathsf s}_\sigma (k) t} ] \star p_\ell (0,z) + {\cal F}^{-1}[ e^{{\mathsf s}_\sigma (k) t} ] \star \ast \psi_\ell (t,z) \\
\end{array}
\right).
\label{sol3_ads}
\end{align}
Which of the two expressions, \eqref{sol2_ads} or \eqref{sol3_ads}, is easier to evaluate depends on the chosen initial condition; in the rest of this paper only Eq.~\eqref{sol2_ads} is used. Using Eq.~\eqref{dispersion_ads}, the inverse Fourier transformation of $ e^{{\mathsf s}_\sigma (k) t } $ is easily computed as
\be
{\cal F}^{-1} [ e^{{\mathsf s}_\sigma (k) t} ]
=
\frac{1}{ \sqrt{ 4\pi t } } \exp[ -\frac{( \sigma t+z )^2 }{ 4t } ].
\label{fourier_exp_ads}
\ee
This will be useful when one uses expression~\eqref{sol3_ads}.
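Formula \eqref{fourier_exp_ads} can also be confirmed by brute-force quadrature of the defining integral $\frac{1}{2\pi}\int e^{{\mathsf s}_\sigma(k)t+ikz}\,dk$, since the factor $e^{-tk^2}$ makes a finite cutoff accurate. The following sketch (ours; the test point is arbitrary) performs the check with a simple trapezoidal rule:

```python
# Check F^{-1}[e^{s_sigma(k) t}] = exp(-(sigma*t + z)^2/(4t)) / sqrt(4*pi*t)
# by direct trapezoidal quadrature over k.
import cmath, math

def heat_kernel(sigma, t, z):
    return math.exp(-(sigma * t + z) ** 2 / (4 * t)) / math.sqrt(4 * math.pi * t)

def inverse_fourier(sigma, t, z, K=30.0, N=60000):
    dk = 2 * K / N
    total = 0j
    for n in range(N + 1):
        k = -K + n * dk
        w = 0.5 if n in (0, N) else 1.0          # trapezoid endpoint weights
        total += w * cmath.exp(k * (sigma * 1j - k) * t + 1j * k * z)
    return total * dk / (2 * math.pi)

sigma, t, z = +1, 0.8, 0.5                       # arbitrary test point
err = abs(inverse_fourier(sigma, t, z) - heat_kernel(sigma, t, z))
print(err)
```

The result is a drifting heat kernel: a Gaussian of width $\sqrt{2t}$ centered at $z=-\sigma t$, consistent with the real and imaginary parts of ${\mathsf s}_\sigma(k)$ encoding diffusion and propagation, respectively.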
\subsection{Gaussian wave packet}
\label{sec:gauss_ads}
As in Sec.~\ref{sec:gauss}, we investigate the Gaussian wave packet as the initial perturbation given to the asymptotically AdS black brane. Compared with the full-order numerical solution, the linear-order approximation turns out to be rather accurate in this case.
\begin{figure}[tb]
\begin{center}
\begin{minipage}[c]{0.9\textwidth}
\linespread{0.85}
\begin{center}
\setlength{\tabcolsep}{ 10 pt }
\begin{tabular}{ cc }
(a) & (b) \\
\includegraphics[height=4.5cm]{fig4a_gauss3dm1AdS.eps} &
\includegraphics[height=4.5cm]{fig4b_gauss3dp1AdS.eps} \\
(c) & (d) \\
\includegraphics[height=4cm]{fig4c_gauss2dmCompAdS.eps} &
\includegraphics[height=4cm]{fig4d_gauss2dpCompAdS.eps} \\
\end{tabular}
\caption{{\small {\sf Three-dimensional plots of (a) $1+m_1(t,z)$ and (b) $p_1(t,z)$, given by Eq.~\eqref{mp1_gauss_ads} with $\beta=b=1, \; z_0=0$. Panels (c) and (d) compare the first-order solutions $1+m_1(t,z)$ and $p_1(t,z)$ with the full solutions $m(t,z)$ and $p(t,z)$ obtained by solving Eqs.~\eqref{eom1} and \eqref{eom2_ads} numerically. Snapshots of (c) $1+m_1(t,z)$ and (d) $p_1(t,z)$ are shown at $t=0$ (blue-dashed), $t=0.67$ (green-solid), $t=2.0$ (red-solid), and $t=10$ (black-solid). The non-perturbative solutions at the corresponding times are also drawn as (green, red, and black) dashed curves, but can hardly be distinguished from the first-order solutions.}}}
\label{fig:gaussAdS}
\end{center}
\end{minipage}
\end{center}
\end{figure}
\subsubsection{First-order solutions}
For the initial perturbation given by the Gaussian wave packet, Eqs.~\eqref{ic_gauss1} and \eqref{ic_gauss2}, one can compute the following quantities
\begin{align}
{\cal F}^{-1}[ e^{{\mathsf s}_\sigma (k) t} \bar{m}_1 (0,k) ]
&=
\frac{\beta}{\sqrt{ 2\pi ( b^2+2t ) }}
\exp
[
- \frac{ t^2 + (z-z_0)^2 }{ 2(b^2+2t) }
-
\sigma \frac{ t(z-z_0) }{ b^2+2t }
],
\\
{\cal F}^{-1}[ e^{{\mathsf s}_\sigma (k) t} \bar{p}_1 (0,k) ]
&=
-\frac{\beta ( z-z_0+\sigma t ) }{\sqrt{ 2\pi ( b^2+2t )^3 }}
\exp
[
- \frac{ t^2 + (z-z_0)^2 }{ 2(b^2+2t) }
-
\sigma \frac{ t(z-z_0) }{ b^2+2t }
].
\end{align}
Substituting these quantities into Eq.~\eqref{sol2_ads}, we obtain the first-order solution,
\be
\left(
\begin{array}{c}
m_1 (t,z) \\
p_1 (t,z) \\
\end{array}
\right)
=
\beta
\sqrt{
\frac{ (b^2+3t)^2 - (z-z_0)^2 }{ 2\pi (b^2+2t)^3 }
}
\exp[ - \frac{ t^2 + (z-z_0)^2 }{ 2(b^2+2t) } ]
\left(
\begin{array}{c}
\displaystyle \cosh[ \frac{ t(z-z_0) }{ b^2+2t } - \Xi ] \\
\displaystyle \sinh[ \frac{ t(z-z_0) }{ b^2+2t } - \Xi ] \\
\end{array}
\right),
\label{mp1_gauss_ads}
\ee
where
\be
\cosh \Xi
:=
\frac{ b^2+3t }{ \sqrt{ (b^2+3t)^2-(z-z_0)^2 } },
\;\;\;
\sinh \Xi
:=
\frac{ z-z_0 }{ \sqrt{ (b^2+3t)^2-(z-z_0)^2 } }.
\ee
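As a consistency check (ours, not in the original text), one can verify numerically that the compact $\cosh/\sinh$ form \eqref{mp1_gauss_ads} agrees with the direct sum $\sum_\sigma {\bm E}_\sigma(\cdot)$ of the two inverse transforms written above. The test point below is arbitrary and must satisfy $|z-z_0|<b^2+3t$ so that $\Xi$ is real:

```python
import math

beta, b, z0 = 1.0, 1.0, 0.0      # parameters as in Fig. 4 of the text
t, z = 0.7, 0.4                  # arbitrary test point with |z - z0| < b^2 + 3t

c2, dz = b**2 + 2 * t, z - z0
common = math.exp(-(t**2 + dz**2) / (2 * c2))

def A(sig):                      # F^{-1}[e^{s_sigma t} \bar m_1(0,k)]
    return beta / math.sqrt(2 * math.pi * c2) * common * math.exp(-sig * t * dz / c2)

def B(sig):                      # F^{-1}[e^{s_sigma t} \bar p_1(0,k)]
    return -beta * (dz + sig * t) / math.sqrt(2 * math.pi * c2**3) \
        * common * math.exp(-sig * t * dz / c2)

# direct sum over sigma with E_sigma = [[1/2, -sigma/2], [-sigma/2, 1/2]]
m1 = sum(0.5 * (A(s) - s * B(s)) for s in (+1, -1))
p1 = sum(0.5 * (-s * A(s) + B(s)) for s in (+1, -1))

# compact cosh/sinh form, Eq. (mp1_gauss_ads)
R = math.sqrt((b**2 + 3 * t) ** 2 - dz**2)
pref = beta * R / math.sqrt(2 * math.pi * c2**3) * common
Xi = math.atanh(dz / (b**2 + 3 * t))   # tanh(Xi) = (z - z0)/(b^2 + 3t)
theta = t * dz / c2
err = max(abs(m1 - pref * math.cosh(theta - Xi)),
          abs(p1 - pref * math.sinh(theta - Xi)))
print(err)
```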
As in the diffusion of a Gaussian wave packet governed by an ordinary diffusion equation, the solutions \eqref{mp1_gauss_ads} have a temporal decay factor, which behaves as $ \frac{1}{\sqrt{ b^2+2t }} $, and a spatial decay factor $\exp[ - \frac{ (z-z_0)^2 }{ 2(b^2+2t) } ]$. The crucial difference from ordinary diffusion is that the solution damps rapidly in time due to the factor $ \exp[ -\frac{ t^2 }{ 2(b^2+2t) } ] $.
Three-dimensional plots of $1+m_1(t,z)$ and $p_1(t,z)$ are presented in Figs.~\ref{fig:gaussAdS}(a) and \ref{fig:gaussAdS}(b), respectively. One can observe the fast damping described above. In addition, snapshots of $1+m_1(t,z)$ and $p_1(t,z)$ at selected moments, compared with numerical solutions, are presented in Figs.~\ref{fig:gaussAdS}(c) and \ref{fig:gaussAdS}(d), respectively. Compared with the full numerical solutions, obtained by directly solving the original equations \eqref{eom1} and \eqref{eom2_ads} with the same initial conditions, {\it i.e.}, $m(0,z)=1+m_1(0,z)$ and $p(0,z)=p_1(0,z)$, the first-order solution captures the full solution completely throughout the time domain considered. In other words, the higher-order perturbations are negligible, meaning that the $\epsilon$-expansion, Eqs.~\eqref{expansion1} and \eqref{expansion2}, converges rapidly for this example.
\subsection{Shock wave}
\label{sec:shock}
Here, let us consider a step-function-like shock as the initial perturbation to the asymptotically AdS black brane. The propagation of this kind of shock is known as the Riemann problem in fluid mechanics. This classic problem has recently attracted attention in relativistic hydrodynamics since it sheds light on the non-equilibrium physics of quantum field theories. See the introduction of Ref.~\cite{Herzog:2016hob} for a brief but nice review of the recent developments.
Assume that the black brane is given an $O(\epsilon)$ perturbation as follows,
\begin{gather}
m_1(0,z) = \alpha \; {\rm sgn}(z),
\;\;\;
p_1(0,z) = 0,
\label{shockic}
\end{gather}
where $\alpha$ is a real constant and sgn denotes the sign function,
\be
{\rm sgn}(z) :=
\begin{cases}
-1 & (z<0) \\
0 & (z=0) \\
+1 & (z>0) \\
\end{cases}.
\label{sgn}
\ee
The assumption that $p_1$ initially vanishes is adopted to reproduce a situation considered in Ref.~\cite{Herzog:2016hob} (see the left panel of Fig.~5 in \cite{Herzog:2016hob}). The Fourier transforms of the above initial conditions are
\be
\bar{m}_1 (0,k) = - \frac{ 2i \alpha }{ k },
\;\;\;
\bar{p}_1 (0,k) = 0.
\label{mpbar_shock_ads}
\ee
\begin{figure}[tb]
\begin{center}
\begin{minipage}[c]{0.9\textwidth}
\linespread{0.85}
\begin{center}
\setlength{\tabcolsep}{ 10 pt }
\begin{tabular}{ cc }
(a) & (b) \\
\includegraphics[height=3.5cm]{fig5a_shock3dm.eps} &
\includegraphics[height=3.5cm]{fig5b_shock3dp.eps} \\
(c) & (d) \\
\includegraphics[height=4cm]{fig5c_shock2dmComp.eps} &
\includegraphics[height=4cm]{fig5d_shock2dpComp.eps} \\
\end{tabular}
\caption{{\small {\sf Three-dimensional plots of (a) $1+m_1(t,z)$ and (b) $p_1(t,z)$, given by Eqs.~\eqref{shockm1} and \eqref{shockp1} with $\alpha=-1/2$, respectively. The comparison between first-order solution (c) $1+m_1(t,z)$ (resp.\ (d) $p_1(t,z)$) and full-order numerical solution $m(t,z)$ (resp.\ $p(t,z)$), obtained by solving Eqs.~\eqref{eom1} and \eqref{eom2_ads}. The blue-dashed curve represents the initial configuration $m(0,z)$ (resp.\ $p(0,z)$). The green, red, and black solid curves represent the first-order solutions at $t=7.3$, $14$, and $22$, respectively. The (green, red, and black) dashed curves represent the full numerical solutions at the corresponding times.}}}
\label{fig:shock}
\end{center}
\end{minipage}
\end{center}
\end{figure}
\subsubsection{First-order solutions}
Using Eqs.~\eqref{dispersion_ads} and \eqref{mpbar_shock_ads}, one obtains
\be
{\cal F}^{-1}[ e^{ {\mathsf s}_{\sigma} (k)t } \bar{m}_1 (0,k)]
=
\alpha \; {\rm erf}( \frac{ \sigma t + z }{ 2\sqrt{t} } ),
\;\;\;
{\cal F}^{-1}[ e^{ {\mathsf s}_{\sigma} (k)t } \bar{p}_1 (0,k)] = 0,
\label{Fmp_shock}
\ee
where ${\rm erf}(x):=\frac{2}{\sqrt{\pi}} \int_0^x e^{-u^2}du$ is the Gauss error function. Here, we have used the following formula, in which the integral is understood as a Cauchy principal value,
\be
\int_{-\infty}^\infty \frac{ e^{-a(k-ib)^2 } }{ k } dk
=
i \pi e^{ab^2} {\rm erf} (\sqrt{a}\, b),
\;\;\;
a>0, \; b \in {\mathbb R}.
\ee
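Symmetrizing the integrand around the $k=0$ pole gives ${\rm PV}\!\int_{-\infty}^\infty e^{-a(k-ib)^2}/k\, dk = 2i\, e^{ab^2}\!\int_0^\infty e^{-ak^2}\sin(2abk)\,dk/k$, which reduces the formula to the classical identity $\int_0^\infty e^{-ak^2}\sin(ck)\,dk/k = \frac{\pi}{2}{\rm erf}(\frac{c}{2\sqrt a})$. This can be checked numerically (our sketch; the test values are arbitrary):

```python
import math

a, b = 0.9, 0.7                       # arbitrary test values with a > 0

def integrand(k):                     # e^{-a k^2} sin(2abk)/k, with limit 2ab at k = 0
    return 2 * a * b if k == 0.0 else math.exp(-a * k * k) * math.sin(2 * a * b * k) / k

K, N = 12.0, 40000                    # cutoff is safe: e^{-0.9 * 144} is negligible
dk = K / N
integral = sum((0.5 if n in (0, N) else 1.0) * integrand(n * dk)
               for n in range(N + 1)) * dk
err = abs(integral - 0.5 * math.pi * math.erf(math.sqrt(a) * b))
print(err)
```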
Substituting Eq.~\eqref{Fmp_shock} into Eq.~\eqref{sol2_ads}, one obtains the first-order solution,
\begin{align}
m_1(t,z)
&=\frac{\alpha}{2}
\left[
{\rm erf}(\frac{ t+z }{ 2\sqrt{t} }) - {\rm erf}(\frac{ t-z }{ 2\sqrt{t} } )
\right],
\label{shockm1}
\\
p_1(t,z)
&= - \frac{\alpha}{2}
\left[
{\rm erf}(\frac{ t-z }{ 2\sqrt{t} }) + {\rm erf}(\frac{ t+z }{ 2\sqrt{t} })
\right].
\label{shockp1}
\end{align}
Using the fact that the Gauss error function is an odd function, one can immediately show that $m_1(t,-z)=-m_1(t,z)$ and $p_1(t,-z)=p_1(t,z)$ hold. Namely, $m_1(t, z)$ and $p_1(t,z)$ are spatially odd and even functions, respectively.
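One can also check directly (a verification of ours, not in the original) that \eqref{shockm1} and \eqref{shockp1} satisfy the linearized system $\dot m_1 - m_1'' + p_1' = 0$ and $\dot p_1 - p_1'' + m_1' = 0$, which follows from the matrix ${\bm D}$ in \eqref{D}. A finite-difference residual test at an arbitrary interior point suffices:

```python
import math

alpha = -0.5                          # as in Fig. 5 of the text

def m1(t, z):                         # Eq. (shockm1)
    return alpha / 2 * (math.erf((t + z) / (2 * math.sqrt(t)))
                        - math.erf((t - z) / (2 * math.sqrt(t))))

def p1(t, z):                         # Eq. (shockp1)
    return -alpha / 2 * (math.erf((t - z) / (2 * math.sqrt(t)))
                         + math.erf((t + z) / (2 * math.sqrt(t))))

h, t, z = 1e-3, 2.0, 0.7              # arbitrary interior test point

def dt(f):  return (f(t + h, z) - f(t - h, z)) / (2 * h)
def dz(f):  return (f(t, z + h) - f(t, z - h)) / (2 * h)
def dzz(f): return (f(t, z + h) - 2 * f(t, z) + f(t, z - h)) / h**2

res1 = dt(m1) - dzz(m1) + dz(p1)      # residual of the m-equation
res2 = dt(p1) - dzz(p1) + dz(m1)      # residual of the p-equation
print(abs(res1), abs(res2))
```

Both residuals vanish to the accuracy of the finite differences.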
In Fig.~\ref{fig:shock}, three-dimensional plots of $1+m_1(t,z)$ and $p_1(t,z)$ with the parameter chosen as $\alpha=-1/2$, and their snapshots at selected moments, are presented and compared with the non-perturbative numerical solutions obtained by directly solving Eqs.~\eqref{eom1} and \eqref{eom2_ads} with an initial condition similar to Eq.~\eqref{shockic}. Since the sign function is difficult to treat in numerical computations, we replace ${\rm sgn}(z)$ by $ \tanh ( c z )$ with a large positive number $c$ (say, $c=50$), based on the fact that $\lim_{c \to +\infty} \tanh(cz)={\rm sgn}(z)$.
In Fig.~\ref{fig:shock}, one can see that as soon as the dynamics begins, the discontinuity separates into two fronts moving to the left and right, where the former and latter are called the rarefaction wave and the shock wave, respectively~\cite{Herzog:2016hob}. The point is that the rarefaction wave and the shock are interpolated by an expanding plateau region with a non-zero constant flux, called the non-equilibrium steady state (NESS).
While it is interesting that we obtained the semi-analytic results \eqref{shockm1} and \eqref{shockp1} describing the shock propagation and the NESS, they are not satisfactory in some respects. In Fig.~\ref{fig:shock}, one can observe that the full solution becomes asymmetric under $z \to -z$, whereas the linear solutions \eqref{shockm1} and \eqref{shockp1} remain symmetric. Thus, for example, the value of $m$ at the NESS, which is always unity at $O(\epsilon)$, differs between the first-order and full solutions. These observations suggest that higher-order perturbations are necessary to fill the above gaps. It seems impossible, however, to obtain the second-order solutions analytically since the first-order solution involves the error function. Thus, we will return to the Riemann problem in the next section as an example of superposed sinusoidal waves.
\subsection{Superposed sinusoidal waves}
\label{sec:sin_ads}
We consider an initial condition which is a superposition of sinusoidal waves like Eqs.~\eqref{ic1}--\eqref{ic3}. For later purposes, however, let us assume that the $O(\epsilon)$ initial momentum vanishes,
\be
p_1 (0,z) = 0,
\label{ic4}
\ee
which is assumed instead of Eq.~\eqref{ic2}. The first- and second-order solutions satisfying initial conditions~\eqref{ic1}--\eqref{ic3} are presented in Appendix~\ref{superpose_ads2}.
\subsubsection{First-order solutions}
For the initial perturbation which is the superposition of sinusoidal waves \eqref{ic1} and \eqref{ic4}, we obtain
\begin{gather}
{\cal F}^{-1}[ e^{ {\mathsf s}_\sigma (k) t } \bar{m}_1 (0,k) ]
=
\frac12 \sum_{n=1}^N a_n
[
e^{ {\mathsf s}_\sigma (k_n) t } e^{ i( k_n z + \varphi_n ) }
+
e^{ {\mathsf s}_{- \sigma} (k_n) t } e^{ - i( k_n z + \varphi_n ) }
],
\\
{\cal F}^{-1}[ e^{ {\mathsf s}_\sigma (k) t } \bar{p}_1 (0,k) ]
= 0,
\end{gather}
where we have used $ {\mathsf s}_\sigma (-k) = {\mathsf s}_{-\sigma} (k)$.
Substituting these results into Eq.~\eqref{sol2_ads} and using the concrete form of the dispersion relation ${\mathsf s}_{\sigma} (k)$, we obtain the first-order solutions,
\begin{align}
m_1(t,z)
=&
\sum_{n=1}^N
a_n e^{-k_n^2 t}
\cos ( k_n t )
\cos ( k_n z + \varphi_n ),
\label{m1_ads}
\\
p_1(t,z)
=&
\sum_{n=1}^N
a_n e^{-k_n^2 t}
\sin ( k_n t )
\sin ( k_n z + \varphi_n ).
\label{p1_ads}
\end{align}
These results, \eqref{m1_ads} and \eqref{p1_ads}, tell us that the initial perturbation necessarily exhibits damped oscillation for any non-zero wave number $k_n \in {\mathbb R} \setminus \{ 0 \}$ ($n =1,2,\cdots , N$), showing that the black brane is linearly stable.
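A direct check (ours) that a single mode of \eqref{m1_ads} and \eqref{p1_ads} solves the linearized system $\dot m_1 - m_1'' + p_1' = 0$ and $\dot p_1 - p_1'' + m_1' = 0$, again by finite differences at an arbitrary test point:

```python
import math

a1, k, phi = 0.8, 1.5, 0.3            # arbitrary single-mode parameters

def m1(t, z):                         # one term of Eq. (m1_ads)
    return a1 * math.exp(-k * k * t) * math.cos(k * t) * math.cos(k * z + phi)

def p1(t, z):                         # one term of Eq. (p1_ads)
    return a1 * math.exp(-k * k * t) * math.sin(k * t) * math.sin(k * z + phi)

h, t, z = 1e-3, 0.6, 0.9

def dt(f):  return (f(t + h, z) - f(t - h, z)) / (2 * h)
def dz(f):  return (f(t, z + h) - f(t, z - h)) / (2 * h)
def dzz(f): return (f(t, z + h) - 2 * f(t, z) + f(t, z - h)) / h**2

res1 = dt(m1) - dzz(m1) + dz(p1)
res2 = dt(p1) - dzz(p1) + dz(m1)
print(abs(res1), abs(res2))
```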
\subsubsection{Second-order solutions}
Since we assume that the initial values vanish at $O(\epsilon^2)$, $ m_2(0,z)=p_2(0,z)=0 $, we see from Eq.~\eqref{sol2_ads} that all that remains to compute is the inverse Fourier transformation of the convolution between $ e^{ {\mathsf s}_\sigma (k) t } $ and the Fourier spectrum of the source term, $\bar{\psi}_2(t,k)$. Using Eqs.~\eqref{source2} and \eqref{p1_ads}, this quantity can be written in the simple form
\begin{align}
{\cal F}^{-1}[ e^{ {\mathsf s}_\sigma (k) t } \ast & \bar{ \psi }_2 (t,k) ]
=
- \frac{i}{8}
\sum_{n=1}^N \sum_{n'=1}^N a_n a_{n'} k_{n'}
\nn
\\
&
\times
\Big(
F_{nn'}^{(\sigma)(+)} e^{i[ (k_n+k_{n'})z+(\varphi_n + \varphi_{n'}) ] }
+
F_{nn'}^{(\sigma)(-)} e^{i[ (k_n-k_{n'})z+(\varphi_n - \varphi_{n'}) ]}
\nn
\\
&
-
F_{nn'}^{(-\sigma)(-)} e^{-i[ (k_n-k_{n'})z+(\varphi_n - \varphi_{n'}) ]}
-
F_{nn'}^{(-\sigma)(+)} e^{-i[ (k_n+k_{n'})z+(\varphi_n + \varphi_{n'}) ]}
\Big)
\label{convo_ads}
\end{align}
by defining a function of time,
\begin{align}
F_{nn' }^{(\sigma)(\sigma') }
:=&
\frac{ 1 }{ {\mathsf s}_+(k_n) + {\mathsf s}_+(k_{n'}) - {\mathsf s}_\sigma ( k_n + \sigma' k_{n'} )}
( e^{ [ {\mathsf s}_+(k_n) + {\mathsf s}_+(k_{n'}) ] t } - e^{ {\mathsf s}_\sigma (k_n +\sigma' k_{n'}) t } )
\nn
\\
-&
\frac{ 1 }{ {\mathsf s}_+(k_n) + {\mathsf s}_-(k_{n'}) - {\mathsf s}_\sigma ( k_n +\sigma' k_{n'} )}
( e^{ [ {\mathsf s}_+(k_n) + {\mathsf s}_-(k_{n'}) ] t } - e^{ {\mathsf s}_\sigma (k_n +\sigma' k_{n'}) t } )
\nn
\\
-&
\frac{ 1 }{ {\mathsf s}_-(k_n) + {\mathsf s}_+(k_{n'}) - {\mathsf s}_\sigma( k_n +\sigma' k_{n'} )}
( e^{ [ {\mathsf s}_-(k_n) + {\mathsf s}_+(k_{n'}) ] t } - e^{ {\mathsf s}_\sigma (k_n +\sigma' k_{n'}) t } )
\nn
\\
+&
\frac{ 1 }{ {\mathsf s}_-(k_n) + {\mathsf s}_-(k_{n'}) - {\mathsf s}_\sigma ( k_n +\sigma' k_{n'} )}
( e^{ [ {\mathsf s}_-(k_n) + {\mathsf s}_-(k_{n'}) ] t } - e^{ {\mathsf s}_\sigma (k_n +\sigma' k_{n'}) t } ).
\label{f}
\end{align}
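The function $F_{nn'}^{(\sigma)(\sigma')}$ arises as the Laplace-type convolution $\int_0^t e^{{\mathsf s}_\sigma(k_n+\sigma'k_{n'})(t-\tau)}\prod_{j=n,n'}\big(e^{{\mathsf s}_+(k_j)\tau}-e^{{\mathsf s}_-(k_j)\tau}\big)\,d\tau$; each of the four terms in \eqref{f} is an elementary integral of the form $\int_0^t e^{s(t-\tau)}e^{S\tau}d\tau=(e^{St}-e^{st})/(S-s)$. This identification can be checked numerically (our sketch; the test values are arbitrary):

```python
import cmath

def s(sig, k):                                  # dispersion relation s_sigma(k)
    return k * (sig * 1j - k)

def F(sig, sigp, kn, knp, t):                   # Eq. (f)
    sK = s(sig, kn + sigp * knp)
    tot = 0j
    for a, b, sign in [(+1, +1, +1), (+1, -1, -1), (-1, +1, -1), (-1, -1, +1)]:
        S = s(a, kn) + s(b, knp)
        tot += sign * (cmath.exp(S * t) - cmath.exp(sK * t)) / (S - sK)
    return tot

def F_by_quadrature(sig, sigp, kn, knp, t, N=20000):
    sK, dtau = s(sig, kn + sigp * knp), t / N
    tot = 0j
    for n in range(N + 1):
        tau = n * dtau
        w = 0.5 if n in (0, N) else 1.0
        g = (cmath.exp(s(+1, kn) * tau) - cmath.exp(s(-1, kn) * tau)) \
          * (cmath.exp(s(+1, knp) * tau) - cmath.exp(s(-1, knp) * tau))
        tot += w * cmath.exp(sK * (t - tau)) * g
    return tot * dtau

err = abs(F(+1, -1, 1.0, 0.6, 1.2) - F_by_quadrature(+1, -1, 1.0, 0.6, 1.2))
print(err)
```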
Substituting the above result \eqref{convo_ads} into Eq.~\eqref{sol2_ads}, we have the second-order solutions as
\begin{align}
m_2(t,z)
=
\frac{i}{8} \sum_{ n =1 }^N \sum_{n'=1}^N a_n a_{n'} k_{n'}
\Big(
[ & F_{nn'}^{(+)(+)} - F_{nn'}^{(-)(+)} ]
\cos [ (k_n+k_{n'})z + (\varphi_n + \varphi_{n'}) ]
\nn
\\
+&
[ F_{nn'}^{(+)(-)} - F_{nn'}^{(-)(-)} ] \cos [ (k_n-k_{n'})z + (\varphi_n - \varphi_{n'}) ]
\Big),
\label{m2_pre_ads}
\\
p_2(t,z)
=
\frac18 \sum_{ n =1 }^N \sum_{n'=1}^N a_n a_{n'} k_{n'}
\Big(
[ & F_{nn'}^{(+)(+)} + F_{nn'}^{(-)(+)} ]
\sin [ (k_n+k_{n'})z + (\varphi_n + \varphi_{n'}) ]
\nn
\\
+&
[ F_{nn'}^{(+)(-)} + F_{nn'}^{(-)(-)} ] \sin [ (k_n-k_{n'})z + (\varphi_n - \varphi_{n'}) ]
\Big) .
\label{p2_pre_ads}
\end{align}
Note that $m(t,z) = 1+m_1(t,z)+m_2(t,z)$ and $p(t,z) = p_1(t,z)+p_2(t,z)$, with Eqs.~\eqref{m1_ads}, \eqref{p1_ads}, \eqref{m2_pre_ads}, and \eqref{p2_pre_ads}, represent the $O(\epsilon^2)$ approximate time evolution of the initial perturbation, which takes the form of the superposed sinusoidal waves~\eqref{ic1}, \eqref{ic4}, and \eqref{ic3}. These approximate solutions are rather general in the sense that initial condition \eqref{ic1} is general.
Since the initial perturbations are assumed to vanish at $O(\epsilon^2)$, $m_2(0,z)=p_2(0,z)=0$, the above solutions contain only the contribution from the source term $\psi_2=-2p_1p_1'$. If one prepares non-vanishing initial conditions at the second order, their contribution is simply added to the above solutions, but such a contribution exhibits no interesting behavior since it has a time dependence similar to that of the $O(\epsilon)$ solution.
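As a final consistency check of this subsection (ours, not in the original), one can verify by finite differences that, for a single mode ($N=1$), the pair \eqref{m2_pre_ads}--\eqref{p2_pre_ads} satisfies $\dot m_2 - m_2'' + p_2' = 0$ and $\dot p_2 - p_2'' + m_2' = -2p_1p_1'$, with $p_1$ given by \eqref{p1_ads}:

```python
import cmath, math

def s(sig, k):
    return k * (sig * 1j - k)

def F(sig, sigp, kn, knp, t):                    # Eq. (f)
    sK = s(sig, kn + sigp * knp)
    tot = 0j
    for a, b, sign in [(+1, +1, +1), (+1, -1, -1), (-1, +1, -1), (-1, -1, +1)]:
        S = s(a, kn) + s(b, knp)
        tot += sign * (cmath.exp(S * t) - cmath.exp(sK * t)) / (S - sK)
    return tot

a1, k1, phi = 0.7, 1.3, 0.4                      # arbitrary single-mode parameters

def p1(t, z):
    return a1 * math.exp(-k1 * k1 * t) * math.sin(k1 * t) * math.sin(k1 * z + phi)

def m2(t, z):                                    # Eq. (m2_pre_ads) with n = n' = 1
    c = 1j / 8 * a1 * a1 * k1
    val = c * ((F(+1, +1, k1, k1, t) - F(-1, +1, k1, k1, t)) * math.cos(2 * (k1 * z + phi))
               + (F(+1, -1, k1, k1, t) - F(-1, -1, k1, k1, t)))
    return val.real                              # the imaginary part vanishes identically

def p2(t, z):                                    # Eq. (p2_pre_ads); the (k1 - k1) term is sin(0) = 0
    c = 1 / 8 * a1 * a1 * k1
    return (c * (F(+1, +1, k1, k1, t) + F(-1, +1, k1, k1, t))).real \
        * math.sin(2 * (k1 * z + phi))

h, t, z = 1e-3, 0.9, 0.3                         # arbitrary test point

def dt(f):  return (f(t + h, z) - f(t - h, z)) / (2 * h)
def dz(f):  return (f(t, z + h) - f(t, z - h)) / (2 * h)
def dzz(f): return (f(t, z + h) - 2 * f(t, z) + f(t, z - h)) / h**2

dp1 = a1 * k1 * math.exp(-k1 * k1 * t) * math.sin(k1 * t) * math.cos(k1 * z + phi)
res1 = dt(m2) - dzz(m2) + dz(p2)
res2 = dt(p2) - dzz(p2) + dz(m2) + 2 * p1(t, z) * dp1
print(abs(res1), abs(res2))
```

Both residuals vanish to finite-difference accuracy, confirming that the second-order formulas correctly incorporate the source $\psi_2=-2p_1p_1'$.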
\subsubsection{Shock wave}
\begin{figure}[tb]
\begin{center}
\begin{minipage}[c]{0.9\textwidth}
\linespread{0.85}
\begin{center}
\setlength{\tabcolsep}{ 10 pt }
\begin{tabular}{ cc }
(a) & (b) \\
\includegraphics[height=4cm]{fig6a_riemann3dm.eps} &
\includegraphics[height=4cm]{fig6b_riemann3dp.eps} \\
(c) & (d) \\
\includegraphics[height=4cm]{fig6c_riemann2dmComp.eps} &
\includegraphics[height=4cm]{fig6d_riemann2dpComp.eps} \\
\end{tabular}
\caption{{\small {\sf Three-dimensional plots of the full-order numerical solutions (a) $m(t,z)$ and (b) $p(t,z)$, obtained by solving Eqs.~\eqref{eom1} and \eqref{eom2_ads} with the initial condition Eq.~\eqref{sgn} where ${\rm sgn}(z)$ is replaced by the finite Fourier series of ${\rm sgn}_L(z)$ given by Eqs.~\eqref{sgnL1}--\eqref{sgnL3}. The comparison between second-order solution (c) $1+m_1(t,z)+m_2(t,z)$ (resp.\ (d) $p_1(t,z)+p_2(t,z)$) and full-order numerical solution $m(t,z)$ (resp.\ $p(t,z)$), obtained by solving Eqs.~\eqref{eom1} and \eqref{eom2_ads}. The blue-dashed curve represents the initial configuration $m(0,z)$ (resp.\ $p(0,z)$). The green, red, and black solid curves represent the second-order solutions at $t=83.3$, $167$, and $250$, respectively. The (green, red, and black) dashed curves represent the full numerical solutions at the corresponding times.}}}
\label{fig:riemann}
\end{center}
\end{minipage}
\end{center}
\end{figure}
We re-investigate the Riemann problem of Sec.~\ref{sec:shock} by choosing appropriate parameters $(a_n, k_n, \varphi_n)$ in initial condition~\eqref{ic1}. In order to do so, the sign function ${\rm sgn}(z)$ in initial condition~\eqref{shockic} is replaced by the following function,
\be
{\rm sgn}_L(z) =
\begin{cases}
-1 & (-L < z<0) \\
0 & (z = 0) \\
+1 & (0 < z < L) \\
\end{cases},
\ee
where $L$ is a positive constant and the periodic extension to the entire real line ${\mathbb R}$ with period $2L$ is assumed. Since this function has the following Fourier series expansion,
\be
{\rm sgn}_L(z)
=
\sum_{n=1}^\infty \frac{ 4 }{ (2n-1)\pi } \sin[ \frac{(2n-1)\pi }{L} z ],
\label{sgnL1}
\ee
the parameters in initial condition~\eqref{ic1} should be
\be
a_n = \frac{4}{(2n-1)\pi},
\;\;\;
k_n = \frac{(2n-1)\pi}{L},
\;\;\;
\varphi_n = -\frac{\pi}{2}
\label{sgnL2}
\ee
for $n=1,2,\cdots, N$ in the limit $N \to \infty$. Taking $L$ large and focusing on the spatial region around the center $z=0$, there should be no difference in the dynamics during a finite interval of time between the evolutions using ${\rm sgn}(z)$ and ${\rm sgn}_L(z)$, due to the (non-relativistic) causality encoded in the equations of motion.
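As a small numerical illustration (ours), one can confirm that the choice \eqref{sgnL2} indeed reproduces the Fourier series \eqref{sgnL1}: with $\varphi_n=-\pi/2$ one has $a_n\cos(k_nz+\varphi_n)=a_n\sin(k_nz)$, and the truncated sum approaches ${\rm sgn}_L(z)$ away from the jumps:

```python
import math

L, N = 10.0, 45                        # L is arbitrary here; N matches the cutoff used below

def partial_sum_cos(z):                # a_n cos(k_n z + phi_n) with phi_n = -pi/2
    return sum(4 / ((2 * n - 1) * math.pi)
               * math.cos((2 * n - 1) * math.pi / L * z - math.pi / 2)
               for n in range(1, N + 1))

def partial_sum_sin(z):                # the sine series (sgnL1) directly
    return sum(4 / ((2 * n - 1) * math.pi)
               * math.sin((2 * n - 1) * math.pi / L * z)
               for n in range(1, N + 1))

z = L / 2                              # far from the jumps at z = 0 and z = L
same = abs(partial_sum_cos(z) - partial_sum_sin(z))
close = abs(partial_sum_sin(z) - 1.0)  # sgn_L = +1 for 0 < z < L
print(same, close)
```

At the jumps themselves the truncated series overshoots (the Gibbs phenomenon discussed at the end of this section).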
In Figs.~\ref{fig:riemann}(a) and \ref{fig:riemann}(b), we present three-dimensional plots of the full-order numerical solutions $m(t,z)$ and $p(t,z)$, respectively, starting from the initial condition $m(0,z)=1+m_1(0,z)$ and $p(0,z)=0$, where $m_1(0,z)$ is given by Eqs.~\eqref{ic1} and \eqref{sgnL2}. The remaining parameters and the cutoff $N$ are chosen as follows,
\be
\alpha = - \frac12,
\;\;\;
L=1000,
\;\;\;
N=45.
\label{sgnL3}
\ee
In Figs.~\ref{fig:riemann}(c) and \ref{fig:riemann}(d), we compare these full solutions with the second-order solutions, {\it i.e.}, $1+m_1(t,z)+m_2(t,z)$ and $p_1(t,z)+p_2(t,z)$ provided by Eqs.~\eqref{m1_ads}, \eqref{p1_ads}, \eqref{m2_pre_ads}, and \eqref{p2_pre_ads} with initial conditions given by \eqref{ic1}, \eqref{sgnL2}, and \eqref{sgnL3}.
Unlike the $O(\epsilon)$ approximation in Sec.~\ref{sec:shock}, the $O(\epsilon^2)$ approximation presented here captures the spatially asymmetric features of the full solution under the reflection $z \to -z$. Furthermore, the errors in the values of $m$ and $p$ at the NESS plateau between the $O(\epsilon^2)$ and full solutions are within a few percent ($1.2 \%$ and $5.0 \%$ for $m$ and $p$, respectively, for the above choice of parameters), whereas they were unacceptably large for the $O(\epsilon)$ approximation in Sec.~\ref{sec:shock}. This is by virtue of the second-order solutions, which appropriately take into account the `back-reaction' of the first-order solution through the source term.
Due to the Gibbs phenomenon, oscillations appear near the jump in the initial shock. Such oscillations stem from the artificial cutoff of the Fourier series expansion. As time proceeds, the `horns' that remain from the Gibbs oscillations can be observed to propagate near the fronts of the rarefaction and shock waves, and to be amplified. This artifact due to the cutoff will not be a problem when one treats the solution exactly.
\section{Conclusion}
\label{sec:conc}
In the large-$D$ (dimension) limit of general relativity, the Einstein equations describing the horizon dynamics of asymptotically flat (resp.\ AdS) black branes are written in the form of coupled diffusion equations \eqref{eom1} and \eqref{eom2} (resp.\ \eqref{eom1} and \eqref{eom2_ads})~\cite{Emparan:2015gva,Herzog:2016hob}. While these equations are much simpler than the original Einstein equations, they contain a non-linear term, which makes them difficult to solve exactly. Therefore, in this paper we have formulated a perturbation theory that makes it possible to obtain analytic results on the black-brane dynamics.
The metric functions $m(t,z)$ and $p(t,z)$, which represent the mass and momentum distributions in the horizon direction $z$, respectively, were expanded around a uniform black-brane solution with formal small parameter $\epsilon$ as Eqs.~\eqref{expansion1} and \eqref{expansion2}. Then, the perturbative equations of motion were obtained and solved order by order using the Laplace and Fourier transformations with respect to $t$ and $z$, respectively. As a result, the general form of the solutions $m_\ell (t,z)$ and $p_\ell(t,z)$ in the flat (resp.~AdS) background was obtained, Eq.~\eqref{sol2} (resp.~Eq.~\eqref{sol2_ads}), in which the inverse Fourier transformations of the initial spectra $\bar{m}_\ell (0,k)$ and $\bar{p}_\ell (0,k)$, and of the spectrum of the source term $\bar{\psi}_\ell(t,k)$, a polynomial in the lower-order perturbations, are left to be computed.
As an example of initial conditions for the perturbation, the Gaussian wave packet was considered for both the asymptotically flat and AdS black branes, in Secs.~\ref{sec:gauss} and \ref{sec:gauss_ads} respectively, and the first-order solutions were written down explicitly. The resulting dynamics from this initial condition is not surprising in itself: the wave packet grows for the asymptotically flat black brane and damps rapidly for the AdS one, as expected from their known stability properties. A remarkable point revealed by this example is that the first-order solution captures the features of the full-order solution rather accurately even for a finite amplitude. Thus, the expansion in $\epsilon$ ({\it i.e.},\ in amplitude) converges rather rapidly and can be used for finite-amplitude perturbations for a certain class of problems.
Only for the asymptotically AdS black brane, the step-function-like initial condition was considered in order to investigate shock propagation in Sec.~\ref{sec:shock}, and the first-order solution was explicitly written down. While the solution qualitatively captures the emergence and propagation of the NESS (non-equilibrium steady state), it is not enough to reproduce properties of the NESS such as the values of the metric functions, or the asymmetry of the full solution.
The discretely superposed sinusoidal waves, which can represent the Fourier series expansion of an arbitrary piecewise continuous periodic function, were considered for both the asymptotically flat and AdS black branes, in Secs.~\ref{sec:sin} and \ref{sec:sin_ads} respectively, and the first- and second-order solutions were written down explicitly. For the black brane in the flat background, a non-trivial feature of the GL (Gregory-Laflamme) instability was revealed: the mode-mode coupling at second order can make the perturbation grow even if the initial perturbation damps at first order. For the black brane in the AdS background, the shock propagation considered in Sec.~\ref{sec:shock} was re-investigated. Thanks to the second-order contribution, the values and asymmetric features of the full solution were reproduced, illustrating the usefulness of the formalism in this paper.
There are many things to do by applying and generalizing the formulation presented in this paper. For instance, (i) it would be interesting to investigate further why the sign of convergence to the non-uniform black string (NUBS) of the asymptotically flat black string does not appear even in the second-order perturbation. (ii) One can investigate various types of shock-wave propagation, such as those discussed in Ref.~\cite{Herzog:2016hob}, by slightly changing the ansatz of expansions \eqref{expansion1} and \eqref{expansion2}, which is straightforward. (iii) Including $1/D$ corrections and charges of the background black branes~\cite{Emparan:2015hwa,Emparan:2016sjk,Rozali:2016yhw} would broaden the class of problems that can be treated by our formalism.
\subsection*{Acknowledgments}
The author would like to thank R.\ Emparan, R.\ Suzuki, and K.\ Tanabe for useful discussion and comments during The Spanish-Portuguese Relativity Meetings held at Lisbon (12--15th, Sep.~2016), and T.~Torii during his stay at Akita (12--17th, Feb.\ 2017). This work was supported by JSPS KAKENHI Grant Number 15K05086.
\section{Introduction}
Let $n, k$ be positive integers such that $k\leq n$ and let $[n]=\{1,2,\ldots, n\}$. A family of $k$-subsets of $[n]$ is said to be \emph{intersecting} if the intersection of any two $k$-subsets in the family is non-empty. The Erd{\H o}s-Ko-Rado (EKR) theorem is a classical result in extremal set theory. It states that when $k<n/2$ any intersecting family of $k$-subsets has size at most ${n-1 \choose k-1}$; equality holds if and only if the family consists of all $k$-subsets of $[n]$ containing a fixed element of $[n]$ (cf. \cite{ekr}). In this paper, we focus on EKR type problems for permutation groups. In particular, for any odd prime power $q$, we consider the natural right action of $PSL(2,q)$, the 2-dimensional projective special linear group over the finite field $\mathbb{F}_q$, on the set of points of $PG(1,q)$, the projective line over $\mathbb{F}_q$.
Let $X$ be a finite set and $G$ a finite group acting on $X$. A subset $S$ of $G$ is said to be an {\it intersecting family} if for any $g_1,g_2 \in S$ there exists an element $x \in X$ such that $x^{g_1}= x^{g_2}$, i.e., $g_1g_2^{-1}$ stabilizes some $x\in X$. In the context of EKR-type theorems, the following problems about intersecting families in $G$ are of interest:
\begin{enumerate}[I]
\item (Upper Bound) What is the maximum size of an intersecting family?
\item (Characterization) What is the structure of intersecting families of maximum size?
\end{enumerate}
Extensive research has been done to solve the above problems for different groups. In 1977, Deza and Frankl \cite{Frankl1} solved Problem I for the symmetric group $S_n$ acting on $[n]$. They proved that any intersecting family of $S_n$ has size at most $(n-1)!$. In fact, this upper bound is tight because any coset of a point stabilizer in $S_n$ is an intersecting family of size precisely $(n-1)!$. They conjectured these sets are the only intersecting families of size $(n-1)!$. This conjecture was proved to be true, independently, by Cameron and Ku \cite{Cameron1} and Larose and Malvenuto \cite{Larose1}.
In \cite{Karen1}, Meagher and Spiga studied Problem I and II for the group $PGL(2,q)$ acting on the set of points of the projective line $PG(1,q)$. These authors proved that the maximum size of an intersecting family in $PGL(2,q)$ is $q(q-1)$. Furthermore, they also solved the characterization problem: Every intersecting family of maximum size in $PGL(2,q)$ is a coset of a point stabilizer. In \cite{Karen2}, they went one step further to solve Problem I and II for the group $PGL(3,q)$ acting on the set of points of the projective plane $PG(2,q)$.
In this paper we study Problem II for the group $PSL(2,q)$ acting on $PG(1,q)$, where $q$ is an odd prime power. Here we only consider the case of odd $q$ since if $q$ is a power of two, we have $PSL(2,q)=PGL(2,q)$, and both Problems I and II were solved in \cite{Karen1}. It is known, from the combined results of \cite{Karen3, Karen1}, that the maximum size of an intersecting family in $PSL(2,q)$ is $q(q-1)/2$. (In fact, in a recent paper \cite{mst}, it is proved that if $G\leq S_n$ is a 2-transitive group, then the maximum size of an intersecting family in $G$ is $|G|/n$. That is, the maximum size of an intersecting family is the cardinality of a point stabilizer.) However, it is only a conjecture that all intersecting families of maximum size are cosets of point stabilizers when $q>3$. (See the second part of Conjecture 1 in \cite{Karen1}.) In this paper, we prove that the second part of Conjecture 1 in \cite{Karen1} is true for all odd prime powers $q>3$.
\begin{theorem}\label{psl_teo1}
Let $S$ be an intersecting family in $PSL(2,q)$ of maximum size, where $q>3$ is an odd prime power. Then $S$ is a coset of a point stabilizer.
\end{theorem}
Note that when $q=3$, we have $PSL(2,q)\cong A_4$, and the action of $PSL(2,q)$ on the projective line $PG(1,q)$ is equivalent to the (natural) action of $A_4$ on $\{1,2,3,4\}$; in this case, it was pointed out in \cite{kuw} that the set $S=\{(1), (123), (234)\}$ (we are using cycle notation for permutations), is an intersecting family of maximum size in $A_4$, but $S$ is not a coset of any point stablizer. To prove Theorem \ref{psl_teo1} we apply a general method for solving Problem II for some $2$-transitive groups. This technique was described by Ahmadi and Meagher in \cite{Karen3} and they called it ``The Module Method''. This method reduces the characterization of intersecting families of maximum size to the computation of the $\mathbb{C}$-rank of a matrix which we define below.
\begin{definition}
Let $X$ be a finite set and $G$ a finite group acting on $X$. An element $g\in G$ is said to be a \emph{derangement} if its action on $X$ is fixed-point-free. The {\it derangement matrix} of $G$ acting on $X$ is the $(0,1)$-matrix $M$, whose rows are indexed by the derangements of $G$, whose columns are indexed by the ordered pairs of distinct elements in $X$, and for any derangement $g \in G$ and $(a,b) \in X\times X$ with $a \neq b$, the $(g, (a,b))$-entry of $M$ is defined by
\[
M(g, (a,b)) = \left\lbrace \begin{array}{ll}
1, & \mbox{ if }a^g=b,\\
0, & \mbox{otherwise.}
\end{array} \right.
\]
\end{definition}
The Module Method states that, under certain conditions, if the rank of the derangement matrix $M$ of $G$ acting on $X$ is equal to $(|X|-1)(|X|-2)$, then the cosets of point stabilizers are the only intersecting families of maximum size in $G$. This technique has been applied to show that the cosets of point stabilizers are the only intersecting families of maximum size for the symmetric group \cite{Karen4}, the alternating group \cite{Karen5}, $PGL(2,q)$ \cite{Karen1}, and many other groups \cite{Karen3}.
Thus, in order to prove Theorem \ref{psl_teo1} by applying the Module Method, it is
enough to show that the rank of the derangement matrix $M$ of $PSL(2,q)$ acting on $PG(1,q)$ is equal to $q(q-1)$. Therefore, Theorem \ref{psl_teo1} follows directly from the next theorem.
\begin{theorem}\label{psl_teo2}
Let $M$ be the derangement matrix of $PSL(2,q)$ acting on $PG(1, q)$, where $q>3$ is an odd prime power. Then the $\mathbb{C}$-rank of $M$ is $q(q-1)$.
\end{theorem}
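Theorem \ref{psl_teo2} can be verified by brute force in the smallest case $q=5$. The following Python sketch is illustrative only; it assumes $q$ prime (so that arithmetic in $\mathbb{F}_q$ is arithmetic modulo $q$), enumerates $PSL(2,5)$ as $\pm$-pairs of matrices in $SL(2,5)$, builds the derangement matrix, and computes its rank exactly over $\mathbb{Q}$.

```python
import itertools
from fractions import Fraction

q = 5                          # an odd prime > 3 (prime, not a general prime power)
INF = 'inf'
pts = list(range(q)) + [INF]   # PG(1,q) identified with F_q ∪ {∞}

def act(a, mat):
    # right action of the matrix mat on the point a, where a ∈ F_q is the
    # span of the row vector (1, a) and ∞ is the span of (0, 1)
    v = (0, 1) if a == INF else (1, a)
    w0 = (v[0] * mat[0][0] + v[1] * mat[1][0]) % q
    w1 = (v[0] * mat[0][1] + v[1] * mat[1][1]) % q
    return INF if w0 == 0 else (w1 * pow(w0, q - 2, q)) % q

# enumerate SL(2,q) and identify A with -A to obtain PSL(2,q)
seen, psl = set(), []
for a, b, c, d in itertools.product(range(q), repeat=4):
    if (a * d - b * c) % q != 1 or (a, b, c, d) in seen:
        continue
    seen.add((a, b, c, d))
    seen.add(((-a) % q, (-b) % q, (-c) % q, (-d) % q))
    psl.append(((a, b), (c, d)))

perms = [tuple(act(p, mat) for p in pts) for mat in psl]
derangements = [g for g in perms if all(g[i] != pts[i] for i in range(q + 1))]

pairs = [(x, y) for x in pts for y in pts if x != y]
M = [[1 if g[pts.index(x)] == y else 0 for (x, y) in pairs] for g in derangements]

def rank(rows):   # Gaussian elimination over Q (exact arithmetic)
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

print(len(psl), len(derangements), rank(M))   # 60 20 20, and 20 = q(q-1)
```

Note that $PSL(2,5)$ has exactly $q(q-1)^2/4=20$ derangements, so for $q=5$ the theorem asserts that the rows of $M$ are linearly independent.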
Exactly the same statement for $PGL(2,q)$ is proved in \cite[Prop. 9]{Karen1},
so we must first examine why the proof does not immediately carry over to
$PSL(2,q)$. In \cite{Karen1} the matrix $M^{\top}M$ represents a certain $PGL(2,q)$-module
endomorphism of a permutation module. The main calculation is to show,
for each irreducible constituent character of this module, that
the image of $M^{\top}M$ is not annihilated by the corresponding central idempotent.
Consequently, the image contains that character as a constituent, and the rank result follows because the module in question is almost multiplicity-free: with one exception, each irreducible constituent character occurs with multiplicity one. If one attempts to follow the same
procedure for $PSL(2,q)$ one runs immediately into the problem that the
$PSL(2,q)$-constituents of the permutation module have high multiplicity.
Fortunately, this obstacle can be sidestepped by observing that although
we are working in $PSL(2,q)$, our sets and permutation modules admit the action
of $PGL(2,q)$, and for the larger group the permutation module has the
property of being almost multiplicity-free. A more serious
difficulty arises when one attempts to show that the central idempotents
have nonzero images in the permutation module. As for $PGL(2,q)$, the problem
boils down to showing that certain sums of character values are not zero.
For $PGL(2,q)$, these sums could be estimated by elementary arguments.
However, the sums for $PSL(2,q)$ appear to be much harder to deal with, and
our proof proceeds by reformulating the sums
as character sums over finite fields and applying some deep results
on hypergeometric functions over finite fields.
The finite field character sums which appear are Legendre and Soto-Andrade sums (see Section \ref{psl_ls_sums}). This is not a surprise; it is well known that these sums appear in connection with the complex representation theory of $PGL(2,q)$ \cite{Kable}. To prove that these character sums are not equal to zero, the following facts will be crucial:
\begin{enumerate}
\item The Legendre and Soto-Andrade sums (see Definitions \ref{def1} and \ref{def1_1}) on $\mathbb{F}_q$ form an orthogonal basis in the inner product space $\ell_2(\mathbb{F}_q,m)$ \cite{Kable}, where $m$ is the measure assigning mass $q+1$ to the points $\pm1$ and mass $1$ to all other points.
\item The Legendre sums may be expressed in terms of hypergeometric functions over finite fields (see Section \ref{psl_hyp_sum}). These functions were introduced by Greene in \cite{Greene1} and Katz in \cite{Katz1} and since that time they have been extensively studied \cite{Ono1, LLong, Kable}.
\end{enumerate}
The rest of this paper is organized as follows. In Section 2, we provide some basic results about the character table of $PGL(2,q)$, Legendre and Soto-Andrade sums, and hypergeometric functions over finite fields. In Section 3, we show that the rank of the derangement matrix $M$ is equal to the dimension of the image of a $PGL(2,q)$-module homomorphism. We use this fact to reduce the problem of computing the rank of $M$ to that of showing that some explicit character sums over $PGL(2,q)$ are not equal to zero. In Section 4, we derive formulas expressing those character sums over $PGL(2,q)$ in terms of Legendre and Soto-Andrade sums. In Section 5, we prove Theorem \ref{psl_teo2}. In Section 6, we conclude with some remarks and open problems.
\section{Background}
We start by recalling standard facts about the
groups $PGL(2,q)$ and $PSL(2,q)$ and their complex characters, introducing our notation in the process. We shall assume that the reader is
familiar with the general terminology and basic results
from the representation theory of finite groups over the complex field, as can be found
in many textbooks, and we shall use \cite{Serre} for specific references when
necessary.
\subsection{The groups $PGL(2,q)$ and $PSL(2,q)$}
Let $\mathbb{F}_q$ be the finite field of size $q$ and $\mathbb{F}_{q^2}$ its unique quadratic extension. We denote by $\mathbb{F}_q^*$ and $\mathbb{F}_{q^2}^*$ the multiplicative groups of $\mathbb{F}_q$ and $\mathbb{F}_{q^2}$, respectively.
Let $GL(2,q)$ be the group of all invertible $2 \times 2$ matrices over $\mathbb{F}_q$
and $SL(2,q)$ the subgroup of all invertible $2 \times 2$ matrices with determinant $1$.
The center $Z(GL(2,q))$ of $GL(2,q)$ consists of all non-zero scalar matrices
and we define $PGL(2,q)= GL(2,q)/Z(GL(2,q))$ and $PSL(2,q)=SL(2,q)/\left (SL(2,q) \cap Z(GL(2,q))\right )$. If $q$ is odd then $PSL(2,q)$ is a subgroup of $PGL(2,q)$ of index $2$, while if $q$ is even then $PGL(2,q)= PSL(2,q)$.
We denote by $PG(1,q)$ the set of $1$-dimensional subspaces of the space
$\mathbb{F}_q^2$ of row vectors of length 2. Thus, $PG(1,q)$ is the projective line over $\mathbb{F}_q$ and its elements are called projective points. An easy computation shows that $PG(1,q)$ has cardinality $q+1$. From the above definitions, it is clear that the
$GL(2,q)$-action on $\mathbb{F}_q^2$ by right multiplication induces a natural right
action of the groups $PGL(2,q)$ and $PSL(2,q)$ on $PG(1,q)$.
The action of the subgroup $PSL(2,q)$ is $2$-transitive, that is,
given any two ordered pairs of distinct points there is a group element sending
the first pair to the second. The action of $PGL(2,q)$ is {\it sharply
$3$-transitive}, that is, given any two ordered triples of distinct points
there is a unique group element sending the first triple to the second.
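The sharp $3$-transitivity of $PGL(2,q)$ can be checked directly for small prime $q$. The following Python sketch (illustrative only; it assumes $q$ prime) realizes $PGL(2,5)$ as the set of permutations of $PG(1,5)$ induced by invertible matrices and verifies that the images of the ordered triple $(0,1,\infty)$ are exactly the $(q+1)q(q-1)$ ordered triples of distinct points, each attained exactly once.

```python
import itertools

q = 5
INF = 'inf'
pts = list(range(q)) + [INF]   # PG(1,q) = F_q ∪ {∞}

def act(a, mat):
    # right action on the point a, with a ∈ F_q the span of (1, a), ∞ that of (0, 1)
    v = (0, 1) if a == INF else (1, a)
    w0 = (v[0] * mat[0][0] + v[1] * mat[1][0]) % q
    w1 = (v[0] * mat[0][1] + v[1] * mat[1][1]) % q
    return INF if w0 == 0 else (w1 * pow(w0, q - 2, q)) % q

# PGL(2,q) realized as the set of distinct permutations of PG(1,q)
# induced by invertible matrices (scalar matrices act trivially)
perms = set()
for m in itertools.product(range(q), repeat=4):
    if (m[0] * m[3] - m[1] * m[2]) % q == 0:
        continue
    mat = ((m[0], m[1]), (m[2], m[3]))
    perms.add(tuple(act(p, mat) for p in pts))
perms = list(perms)

assert len(perms) == (q + 1) * q * (q - 1)   # |PGL(2,q)| = 120 for q = 5

# sharp 3-transitivity: each group element yields a distinct ordered triple
# of distinct points as the image of (0, 1, ∞), and all 120 triples occur
i0, i1, iI = pts.index(0), pts.index(1), pts.index(INF)
triples = [(g[i0], g[i1], g[iI]) for g in perms]
assert len(set(triples)) == len(triples) == (q + 1) * q * (q - 1)
print('ok')
```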
\subsection{The character table of $PGL(2,q)$}\label{psl_ct}
We assume in this section and throughout this paper that $q$ is an odd prime power. We briefly describe the character table of $PGL(2,q)$. We refer the reader to \cite{Pia1} for a complete study of the complex irreducible characters of $PGL(2,q)$. We start by describing its conjugacy classes. By abuse of notation we will denote the elements of $PGL(2,q)$ by $2\times 2$ matrices with entries from $\mathbb{F}_q$.
First note that the elements of $PGL(2,q)$ can be collected into four sets: the set consisting of the identity element only; the set consisting of the non-scalar matrices with only one eigenvalue in $\mathbb{F}_q$; the set consisting of matrices with two distinct eigenvalues in $\mathbb{F}_q$; and the set of matrices with no eigenvalues in $\mathbb{F}_q$. Recall that the elements of $PGL(2,q)$ are projective linear transformations, so if $\{x_1,x_2\}$ are eigenvalues of some $g \in PGL(2,q)$ then $\{ax_1,ax_2\}$ are also eigenvalues of $g$ for any $a \in \mathbb{F}_q^*$. Hence, the eigenvalues of elements in $PGL(2,q)$ are defined up to multiplication by elements of $\mathbb{F}_q^*$.
The identity of $PGL(2,q)$, denoted by $I$, defines a conjugacy class of size $1$. Every non-identity element of $PGL(2,q)$ having only one eigenvalue in $\mathbb{F}_q^*$ is conjugate to
\[
u= \left(\begin{array}{cc}
1 & 1\\
0 & 1
\end{array}\right).
\]
The conjugacy class of $u$ contains $q^2-1$ elements. The elements having two distinct eigenvalues in $\mathbb{F}_q$ are conjugate to
\[
d_x=\left(\begin{array}{cc}
x & 0\\
0 & 1
\end{array}\right)
\]
for some $x \in \mathbb{F}^*_q\setminus\{1\}$. Moreover, $d_x$ and $d_y$ are conjugate if and only if $x=y$ or $x=y^{-1}$. The size of the conjugacy class containing $d_x$ is $q(q+1)$ for $x \in \mathbb{F}^*_q\setminus\{\pm 1\}$ and $q(q+1)/2$ for $x=-1$. Finally, the elements of $PGL(2,q)$ with no eigenvalues in $\mathbb{F}^*_q$ are conjugate to
\[
v_r=\left(\begin{array}{cc}
0 & 1\\
-r ^{1+q} & r + r^q
\end{array}\right)
\]
for some $r \in \mathbb{F}^*_{q^2} \setminus \mathbb{F}^*_q$. The matrices $v_r$ have eigenvalues $\{r, r^q\}$. Hence, $v_{r_1}$ and $v_{r_2}$ lie in the same conjugacy class if and only if $r_1 \mathbb{F}^*_q = r_2 \mathbb{F}^*_q$ or $r_1 \mathbb{F}^*_q = r_2^{-1} \mathbb{F}^*_q$. The size of the conjugacy class containing $v_r$ is $q(q-1)$ if $r \in \mathbb{F}^*_{q^2} \setminus(\mathbb{F}^*_q \cup i\mathbb{F}^*_q)$ and $q(q-1)/2$ if $r \in i\mathbb{F}^*_q$, where $i$ is an element of $\mathbb{F}^*_{q^2}\setminus\mathbb{F}^*_q$ such that $i^2 \in \mathbb{F}^*_q$.
The complex irreducible characters of $PGL(2,q)$ are described in Table \ref{table1}. They also come in four families. First, the characters $\lambda_1$ and $\lambda_{-1}$ correspond to representations of degree $1$. Here $\lambda_1$ is the principal character, and the values of $\lambda_{-1}$ depend on a function $\delta$ defined as follows: $\delta(x)=1$ if $d_x \in PSL(2,q)$ and $\delta(x)=-1$ otherwise; similarly, $\delta(r)=1$ if $v_r \in PSL(2,q)$ and $\delta(r)=-1$ otherwise.
Secondly, the characters $\psi_1$ and $\psi_{-1}$ correspond to representations of degree $q$. The character $\psi_1$ is the standard character of the action on $PG(1,q)$, which is irreducible for $PGL(2,q)$: for every $g \in PGL(2,q)$, the value of $\psi_{1}(g)$ equals the number of projective points fixed by $g$ in $PG(1,q)$ minus $1$. The values of $\psi_{-1}$ depend on the function $\delta$ defined above.
The third family is known as the cuspidal characters of $PGL(2,q)$. They correspond to representations of degree $q-1$ and their values depend on multiplicative characters of $\mathbb{F}_{q^2}$. In fact, the label $\beta$ in Table \ref{table1} runs through all homomorphisms $\beta: \mathbb{F}_{q^2}^*/ \mathbb{F}_{q}^* \rightarrow \mathbb{C}^*$ of order greater than $2$, up to inversion. Note that every $\beta$ corresponds to a unique multiplicative character of $\mathbb{F}_{q^2}$ which is trivial on $\mathbb{F}_{q}^*$.
Finally, the fourth family of irreducible characters is known as the principal series of $PGL(2,q)$. These characters correspond to representations of degree $q+1$ and their values depend on multiplicative characters of $\mathbb{F}_{q}$. In fact, the label $\gamma$ in Table \ref{table1} runs through all homomorphisms $\gamma : \mathbb{F}_{q}^* \rightarrow \mathbb{C}^*$ of order greater than $2$, up to inversion.
Throughout this paper we denote by $\Gamma$ and $B$ fixed sets of representatives, up to inversion, of the characters $\gamma$ and $\beta$ defined above; these sets have sizes $(q-3)/2$ and $(q-1)/2$, respectively. Therefore, the principal series and cuspidal irreducible characters of $PGL(2,q)$ are given by $\{\nu_{\gamma}\}_{\gamma \in \Gamma}$ and $\{\eta_{\beta}\}_{\beta \in B}$, respectively.
\begin{table}
\caption{Character table of $PGL(2,q)$}\label{table1}
\begin{center}
\begin{tabular}{ | c | c | c | c | c | c | c | }
\hline
& $I$ & $u$ & $d_{x}$ & $d_{-1}$ & $v_{r}$ & $v_{i}$ \\ \hline
$\lambda_1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ \\ \hline
$\lambda_{-1}$ & $1$ & $1$ & $\delta(x)$ & $\delta(-1)$ & $\delta(r)$ & $\delta(i)$ \\ \hline
$\psi_1$ & $q$ & $0$ & $1$ & $1$ & $-1$ & $-1$\\ \hline
$\psi_{-1}$ & $q$ & $0$ & $\delta(x)$ & $\delta(-1)$ & $-\delta(r)$ & $-\delta(i)$\\ \hline
$\eta_{\beta}$ & $q-1$ & $-1$ & $0$ & $0$ & $-\beta(r)- \beta(r^q)$ & $-2\beta(i)$ \\ \hline
$\nu_{\gamma}$ & $q+1$ & $1$ & $\gamma(x) + \gamma(x^{-1})$ & $2\gamma(-1)$ & $0$ & $0$ \\ \hline
\end{tabular}
\end{center}
\end{table}
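A quick consistency check on Table \ref{table1} is that the squared degrees, counted with the multiplicities $|B|=(q-1)/2$ and $|\Gamma|=(q-3)/2$, must sum to $|PGL(2,q)|=q(q^2-1)$. The following short snippet (illustrative only) verifies this identity numerically for several odd prime powers $q$.

```python
# degrees from Table 1: two of degree 1, two of degree q,
# (q-1)/2 cuspidal of degree q-1, and (q-3)/2 principal series of degree q+1
for q in (5, 7, 9, 11, 13, 27):
    total = (2 * 1 ** 2
             + 2 * q ** 2
             + (q - 1) // 2 * (q - 1) ** 2
             + (q - 3) // 2 * (q + 1) ** 2)
    assert total == q * (q ** 2 - 1)     # = |PGL(2,q)|
print('ok')
```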
\subsection{Hypergeometric functions over finite fields}\label{psl_hyp_sum}
A (generalized) hypergeometric function with parameters $a_i,b_j$ is defined by
$$\pFq{n+1}{n}{a_1&a_2&\cdots&a_{n+1}}{&b_1&\cdots&b_n}{x}=\sum_{k\ge 0} \frac{(a_1)_k\cdots (a_{n+1})_k}{(b_1)_k\cdots(b_n)_k}\frac{x^k}{k!},$$ where $(a)_0=1$ and, for $k\ge 1$, $(a)_k=a(a+1)\cdots(a+k-1)$ is called the Pochhammer symbol.
Hypergeometric functions over finite fields were introduced independently by John Greene \cite{Greene1} and Nicholas Katz \cite{Katz1}. Note that, in the cases relevant to our discussion, the two definitions differ only by a normalizing factor.
In this section and throughout this paper we denote by $\epsilon$ and $\phi$ the trivial and quadratic multiplicative characters of $\mathbb{F}_q$, respectively. Also throughout this paper we adopt the convention of extending multiplicative characters by declaring them to be zero at $0\in \mathbb{F}_q$. For any multiplicative character $\gamma$, we use $\overline \gamma$ to denote its complex conjugate. The Gauss sum of $\gamma$ is defined by $g(\gamma):=\sum_{x\in\mathbb{F}_q}\gamma(x)\theta(x)$, where $\theta$ is any nontrivial additive character of $\mathbb{F}_q$. Let $\gamma_0, \gamma_1, \gamma_2 $ be multiplicative characters of $\mathbb{F}_q$ and $x \in \mathbb{F}_q$. Greene defines the following finite field analogue of a hypergeometric sum
\begin{equation}\label{ecu13}
\hgq{\gamma_0}{\gamma_1}{\gamma_2}{x;q}:= \epsilon(x)\frac{\gamma_1\gamma_2(-1)}{q} \sum_{y \in \mathbb{F}_q} \gamma_1(y)(\gamma_2\gamma_1^{-1}) (1-y) \gamma_{0}^{-1}(1-xy).
\end{equation}
Since the seminal work of Greene and Katz, much work has been done on special functions over finite fields, in particular on generalized hypergeometric functions. In this section, we recall some definitions and results that we will use later in this paper.
Following Greene \cite{Greene1}, we introduce other $_{n+1}\mathbb{F}_n$ functions inductively as follows. For multiplicative characters $A_0,A_1, \ldots, A_n$ and $B_1,\ldots, B_n$ of $\mathbb{F}_q$ and $x \in \mathbb{F}_q$, define
\begin{multline*}
\pFFq{n+1}{n}{A_0 & A_1 & \cdots & A_n}{ & B_1 & \cdots & B_n}{x ; q}
:= \\ \frac{A_nB_n(-1)}{q} \sum_{y \in \mathbb{F}_q}\mbox{}
\pFFq{n}{n-1}{A_0 & A_1 & \cdots & A_{n-1}}{ & B_1 & \cdots & B_{n-1}}{x y ; q} A_n(y) \overline{A_n}B_n(1-y).
\end{multline*}
See \S 4.4 of \cite{LLong} for a comparison among different versions of finite field hypergeometric functions.
The following lemma is a generalization of Lemma 2.2 in \cite{Ono1}.
\begin{lemma}\label{lemma15}
For any non-trivial multiplicative character $\gamma$ of $\mathbb{F}_q$,
\[
q\pFFq{4}{3}{\gamma & \gamma^{-1} & \phi & \phi}{ & \epsilon & \epsilon & \epsilon}{1 ; q}
= \sum_{z \in \mathbb{F}_q} \phi(z) \hgq{\phi}{\phi}{\epsilon}{z;q} \hgq{\gamma}{\gamma^{-1}}{\epsilon}{z;q},
\] where $\phi(\cdot)$ denotes the quadratic character of $\mathbb{F}_q$.
\end{lemma}
\begin{proof}
The lemma follows from the recursive definition of $_{n+1}\mathbb{F}_n$. First,
\begin{eqnarray*}
q\pFFq{4}{3}{\gamma & \gamma^{-1} & \phi & \phi}{ & \epsilon & \epsilon & \epsilon}{1 ; q} & = & \phi(-1) \sum_{x \in \mathbb{F}_q^*} \phi(x)\phi(1-x) \hgthree{\gamma}{\gamma^{-1}}{\phi}{\epsilon}{\epsilon}{ x ;q} \\
& = & \frac{1}{q} \sum_{x \in \mathbb{F}_q^*} \sum_{y \in \mathbb{F}_q^*} \phi(x) \phi(1-x) \phi(y) \phi(1-y) \hgq{\gamma}{\gamma^{-1}}{\epsilon}{xy;q}.
\end{eqnarray*}
Now replacing $xy$ by $z$,
\begin{equation*}
q\pFFq{4}{3}{\gamma & \gamma^{-1} & \phi & \phi}{ & \epsilon & \epsilon & \epsilon}{1 ; q} = \frac{1}{q} \sum_{x \in \mathbb{F}_q^*} \sum_{z \in \mathbb{F}_q^*} \phi(1-x) \phi(1-z/x) \phi(z)\hgq{\gamma}{\gamma^{-1}}{\epsilon}{z;q}.
\end{equation*}Letting $w=1/x$ and using \eqref{ecu13}
we get,
\begin{eqnarray*}
q\pFFq{4}{3}{\gamma & \gamma^{-1} & \phi & \phi}{ & \epsilon & \epsilon & \epsilon}{1 ; q} & = & \sum_{z \in \mathbb{F}_q^*} \frac{1}{q} \sum_{w \in \mathbb{F}_q^*} \phi(1 - \frac{1}{w}) \phi(1-zw) \phi(z) \hgq{\gamma}{\gamma^{-1}}{\epsilon}{z;q} \\
& = & \sum_{z \in \mathbb{F}_q^*} \frac{1}{q} \sum_{w \in \mathbb{F}_q^*} \phi(-1)\phi(w)\phi(1-w) \phi(1-zw) \phi(z) \hgq{\gamma}{\gamma^{-1}}{\epsilon}{z;q} \\
& = & \sum_{z \in \mathbb{F}_q}\phi(z) \hgq{\phi}{\phi}{\epsilon}{z;q} \hgq{\gamma}{\gamma^{-1}}{\epsilon}{z;q}.
\end{eqnarray*}
\end{proof}
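Since several inequivalent normalizations of finite field hypergeometric functions circulate, it is worth checking Lemma \ref{lemma15} numerically. The following Python sketch (illustrative only; it assumes $q$ prime, and takes $\gamma=\phi$, which is allowed since $\phi$ is nontrivial) implements the definition (\ref{ecu13}) and Greene's recursion with exact rational arithmetic and confirms the identity for small $q$.

```python
from fractions import Fraction

def check(q):
    # quadratic character of F_q (q an odd prime), with phi(0) = 0
    phi = lambda x: 0 if x % q == 0 else (1 if pow(x, (q - 1) // 2, q) == 1 else -1)

    def F2(x):   # 2F1(phi, phi; eps; x) from (ecu13): eps(x) phi(-1)/q * sum
        if x % q == 0:
            return Fraction(0)
        return Fraction(phi(-1) * sum(phi(y) * phi(1 - y) * phi(1 - x * y)
                                      for y in range(q)), q)

    def F3(x):   # 3F2(phi, phi, phi; eps, eps; x) via Greene's recursion
        return Fraction(phi(-1), q) * sum(F2(x * y % q) * phi(y) * phi(1 - y)
                                          for y in range(q))

    def F4(x):   # 4F3(phi, phi, phi, phi; eps, eps, eps; x)
        return Fraction(phi(-1), q) * sum(F3(x * y % q) * phi(y) * phi(1 - y)
                                          for y in range(q))

    lhs = q * F4(1)
    rhs = sum(phi(z) * F2(z) * F2(z) for z in range(q))
    assert lhs == rhs
    return lhs

for q in (5, 7):
    check(q)
print('ok')
```

For $q=5$ both sides equal $-3/25$, as one can also verify by hand from the values of ${}_2F_1(\phi,\phi;\epsilon;z)$.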
Like their classical counterparts, hypergeometric functions over finite fields satisfy many transformation formulas \cite{LLong, Greene1}. In particular, the next one will be useful for our purpose.
\begin{lemma}\label{lemma12}
(Greene, \cite{Greene1}) For $x \in \mathbb{F}_q$ with $x \neq 0$ we have,
\[
\hgq{\phi}{\phi}{\epsilon}{x;q} = \phi(x) \hgq{\phi}{\phi}{\epsilon}{\frac{1}{x};q}.
\]
\end{lemma}
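Lemma \ref{lemma12} is also easy to confirm numerically. The following sketch (illustrative only; $q$ prime, with $\phi$ computed as a Legendre symbol) evaluates Greene's ${}_2F_1(\phi,\phi;\epsilon;\cdot)$ directly from (\ref{ecu13}) and checks the transformation at every nonzero argument.

```python
from fractions import Fraction

def check(q):
    # quadratic character of F_q (q an odd prime), with phi(0) = 0
    phi = lambda x: 0 if x % q == 0 else (1 if pow(x, (q - 1) // 2, q) == 1 else -1)

    def F(x):   # Greene's 2F1(phi, phi; eps; x), specializing (ecu13)
        if x % q == 0:
            return Fraction(0)
        s = sum(phi(y) * phi(1 - y) * phi(1 - x * y) for y in range(q))
        return Fraction(phi(-1) * s, q)

    for x in range(1, q):
        # Lemma: F(x) = phi(x) * F(1/x); 1/x computed as x^(q-2) mod q
        assert F(x) == phi(x) * F(pow(x, q - 2, q))

for q in (5, 7, 11, 13):
    check(q)
print('ok')
```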
\begin{proposition}\label{prop:15} Let $n=2,3,4$ or $6$, $\mathbb{F}_q$ be any finite field of size $q$ congruent to $1 \bmod n$, and $\gamma$ be any order $n$ multiplicative character of $\mathbb{F}_q$. Then $$\left |q^3 \cdot \pFFq{4}{3}{\gamma&\gamma^{-1}&\phi&\phi}{&\epsilon &\epsilon &\epsilon}{1;q}+\phi(-1)\gamma(-1)q\right |\le 2q^{3/2}.$$ \end{proposition}
\begin{proof}This proposition is a corollary of Theorem 2 of \cite{LTYW}. The background concerns character sums from the perspective of hypergeometric motives \cite{BCM, Katz1, RV2}, and we only indicate how to obtain our claim. Under the assumption on $n$, the choice of $\gamma$ is unique up to complex conjugation and $\gamma(-1)$ is independent of the choice of $\gamma$. For each $n$, let $\balpha=\{\frac 1n,\frac{n-1}n,\frac 12,\frac 12\}$ and $\bbeta=\{1,1,1,1\}$, and let $\omega$ be any order $(q-1)$ multiplicative character of $\mathbb{F}_q$. Thus either $\gamma$ or $\gamma^{-1}$ is $\omega^{(q-1)/n}$. The normalized Katz version of the hypergeometric sum is defined as (see Definition 1.1 of \cite{BCM})
\begin{equation}\label{eq:H}
H_q(\balpha,\bbeta;\lambda):=\frac1{1-q}\sum_{k=0}^{q-2}
\prod_{\alpha\in \balpha}\frac{g(\omega^{k+(q-1)\alpha})}{g(\omega^{(q-1)\alpha})} \prod_{\beta\in \bbeta}\frac{g(\omega^{-k-(q-1)\beta})}{g(\omega^{-(q-1)\beta})}\,\omega^k\bigl((-1)^m\lambda\bigr).
\end{equation} Here $m$ denotes the length of $\balpha$ (so $m=4$ and $(-1)^m\lambda=\lambda$ in our case), and we take $\lambda=1$. Then the conversion between the Greene and normalized Katz versions of finite hypergeometric sums says
\begin{equation} -q^3 \cdot \pFFq{4}{3}{\gamma&\gamma^{-1}&\phi&\phi}{&\epsilon &\epsilon &\epsilon}{1;q}=H_q(\balpha,\bbeta;1),
\end{equation}independent of the choice of $\gamma$. Then Theorem 2 in \cite{LTYW} implies that there are two imaginary quadratic algebraic integers $A_{1,q}$ and $A_{2,q}$ (depending on both $n$ and $q$) both of complex absolute values $q^{3/2}$ such that $H_q(\balpha,\bbeta;1)=\phi(-1)\gamma(-1)q+A_{1,q}+A_{2,q}$. Thus $$\left |q^3 \cdot \pFFq{4}{3}{\gamma&\gamma^{-1}&\phi&\phi}{&\epsilon &\epsilon &\epsilon}{1;q}+\phi(-1)\gamma(-1)q\right |=|A_{1,q}+A_{2,q}|\le 2q^{3/2}.$$
The proof is now complete.
\end{proof}
\subsection{The vector space $\ell^2(\mathbb{F}_q,m) $}\label{psl_ls_sums}
Let $m: \mathbb{F}_q \rightarrow \mathbb{C}$ be $m(x)= 1 +q D_1(x) + q D_{-1}(x)$ where $D_{a}(x)$ is $1$ if $x=a$ and $0$ otherwise. We denote by $\ell^2(\mathbb{F}_q,m) $ the vector space of complex-valued functions on $\mathbb{F}_q$ equipped with the Hermitian form
\[
\langle f_1, f_2 \rangle :=\sum_{x \in \mathbb{F}_q} f_1(x) \overline{f_2(x)} m(x).
\]
Note that the following character sums are elements of $\ell^2(\mathbb{F}_q,m)$.
\begin{definition}\label{def1}
For any multiplicative character $\gamma$ of $\mathbb{F}_q$, the \emph{Legendre sum} with respect to $\gamma$ is defined as
\[
P_{\gamma} (a) := \frac{1}{q} \sum_{x \in \mathbb{F}_q^*} \gamma(x)\phi(x^2-2ax+1), \quad \mbox{for all } a\in \mathbb{F}_q.
\]
\end{definition}
\begin{definition}\label{def1_1}
For any multiplicative character $\beta$ of $\mathbb{F}_{q^2}$, the \emph{Soto-Andrade sum} with respect to $\beta$ is defined as
\[
R_{\beta} (a) := \frac{1}{q(q-1)} \sum_{r \in \mathbb{F}_{q^2}^*} \beta(r) \phi((r+r^q)^2 - 2(a+1)r^{1+q}), \quad \mbox{for all } a\in \mathbb{F}_q.
\]
\end{definition}
The Legendre and Soto-Andrade sums have appeared several times in the literature in connection with the irreducible representations of $PGL(2,q)$ \cite{Kable}. In fact, we will encounter them in Section 4 in our study of some character sums over $PGL(2,q)$. In this section, we recall some properties of these sums that will be useful for us in the coming sections.
The next lemma shows that the Legendre and Soto-Andrade sums form an orthogonal basis of $\ell^2(\mathbb{F}_q,m)$.
\begin{lemma}\label{lemma20}
(Kable, \cite{Kable}) The set
\[
\mathfrak{L} :=\left\lbrace P_{\epsilon} - \frac{q-1}{q}, P_{\phi}, P_{\gamma}, R_{\beta} : \mbox{ } \gamma \in \Gamma, \beta \in B \right\rbrace
\]
is an orthogonal basis for the space $\ell^2(\mathbb{F}_q,m)$, where $\Gamma$ and $B$ were defined at the end of Section \ref{psl_ct} with $|\Gamma|=\frac{q-3}{2}$ and $|B|=\frac{q-1}{2}$. The square norms of the elements of this basis are as follows:
\begin{eqnarray*}
\left\Vert P_{\epsilon} - \frac{q-1}{q} \right\Vert^2 & = &\frac{q^2-1}{q}, \\
\Vert P_{\phi} \Vert^2 & = &\frac{q^2-1}{q^2}, \\
\Vert P_{\gamma} \Vert^2 & = &\frac{q-1}{q}, \\
\Vert R_{\beta} \Vert^2 & = &\frac{q+1}{q}.
\end{eqnarray*}
\end{lemma}
If we normalize the basis given by Lemma \ref{lemma20} then we can easily obtain an orthonormal basis of $\ell^2(\mathbb{F}_q,m)$. We denote the elements of this orthonormal basis by $\{ P_{\epsilon}', P_{\phi}', P_{\gamma}', R_{\beta}' : \mbox{ } \gamma \in \Gamma, \beta \in B\}$.
The next lemmas list some elementary properties of the Legendre and Soto-Andrade sums that we will need later. Lemma \ref{lemma17} implies that the Legendre sum with respect to the trivial character is easy to evaluate. This is not true for Legendre sums with respect to characters of higher order. On the other hand, Lemma \ref{lemma18} shows that the Legendre and Soto-Andrade sums are easy to evaluate at $\pm 1$.
\begin{lemma}\label{lemma17}
The values of the Legendre sum with respect to $\epsilon$ are,
\[
P_{\epsilon}(a)= \left\lbrace \begin{array}{ll}
\frac{q-2}{q}, & \mbox{ if }a = \pm 1,\\
-\frac{2}{q}, & \mbox{ if }a \neq \pm1.
\end{array}\right.
\]
\end{lemma}
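The values in Lemma \ref{lemma17}, as well as the first two norms and the corresponding orthogonality in Lemma \ref{lemma20}, can be confirmed with exact arithmetic for small prime $q$. The following sketch is illustrative only (it assumes $q$ prime and checks only the Legendre-sum part of the basis; the Soto-Andrade sums would additionally require constructing $\mathbb{F}_{q^2}$).

```python
from fractions import Fraction

def check(q):
    # quadratic character of F_q (q an odd prime), with phi(0) = 0
    phi = lambda x: 0 if x % q == 0 else (1 if pow(x, (q - 1) // 2, q) == 1 else -1)
    P_eps = lambda a: Fraction(sum(phi(x * x - 2 * a * x + 1)
                                   for x in range(1, q)), q)
    P_phi = lambda a: Fraction(sum(phi(x) * phi(x * x - 2 * a * x + 1)
                                   for x in range(1, q)), q)
    m = lambda x: q + 1 if x % q in (1, q - 1) else 1   # mass q+1 at ±1, else 1

    # Lemma 17: values of the Legendre sum for the trivial character
    for a in range(q):
        expected = Fraction(q - 2, q) if a in (1, q - 1) else Fraction(-2, q)
        assert P_eps(a) == expected

    # two of the square norms in Lemma 20, plus orthogonality of the pair
    f = lambda x: P_eps(x) - Fraction(q - 1, q)
    assert sum(f(x) ** 2 * m(x) for x in range(q)) == Fraction(q * q - 1, q)
    assert sum(P_phi(x) ** 2 * m(x) for x in range(q)) == Fraction(q * q - 1, q * q)
    assert sum(f(x) * P_phi(x) * m(x) for x in range(q)) == 0

for q in (5, 7, 11):
    check(q)
print('ok')
```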
\begin{lemma}\label{lemma18}
Let $\gamma$ and $\beta$ be characters from the sets $\Gamma$ and $B$, respectively. Then $P_{\gamma}(1) = -1/q$ and $R_{\beta}(1)= 1/q$. Moreover,
\[
P_{\gamma}(-1) = - \frac{\gamma(-1)}{q}, \quad R_{\beta} (-1) = -\frac{\beta(i)}{q}
\]
where $i \in \mathbb{F}_{q^2}^*\setminus \mathbb{F}_q$ such that $i^2 \in \mathbb{F}_q^*$.
\end{lemma}
\begin{lemma}\label{lemma21}
The values of the Legendre and Soto-Andrade sums are real numbers. Moreover, for every $\gamma \in \Gamma$, $\beta \in B$ and $a \in \mathbb{F}_q$ we have
\[
P_{\gamma^{-1}} (a) = P_{\gamma}(a) \quad \mbox{ and } \quad R_{\beta^{-1}} (a) = R_{\beta}(a).
\]
\end{lemma}
The following result establishes a relation between Legendre sums and hypergeometric sums over finite fields. This fact will be crucial later in this paper.
\begin{lemma}\label{lemma13}
(Kable, \cite{Kable}) If $\gamma$ is a nontrivial character of $\mathbb{F}_q$ and $a \in \mathbb{F}_q\setminus \{\pm 1\}$ then
\[
P_{\gamma} (a) = \hgq{\gamma}{\gamma^{-1}}{\epsilon}{\frac{1-a}{2};q}.
\]
\end{lemma}
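Lemma \ref{lemma13} can likewise be tested numerically; the sketch below (illustrative only; $q$ prime, with $\gamma=\phi$ as the nontrivial character) compares the Legendre sum computed from Definition \ref{def1} with Greene's ${}_2F_1$ evaluated at $(1-a)/2$, for every $a\neq\pm1$.

```python
from fractions import Fraction

def check(q):
    # quadratic character of F_q (q an odd prime), with phi(0) = 0
    phi = lambda x: 0 if x % q == 0 else (1 if pow(x, (q - 1) // 2, q) == 1 else -1)
    P_phi = lambda a: Fraction(sum(phi(x) * phi(x * x - 2 * a * x + 1)
                                   for x in range(1, q)), q)

    def F(x):   # 2F1(phi, phi; eps; x) from (ecu13)
        if x % q == 0:
            return Fraction(0)
        return Fraction(phi(-1) * sum(phi(y) * phi(1 - y) * phi(1 - x * y)
                                      for y in range(q)), q)

    half = pow(2, q - 2, q)            # 1/2 in F_q
    for a in range(q):
        if a in (1, q - 1):
            continue                   # Lemma 13 excludes a = ±1
        assert P_phi(a) == F((1 - a) * half % q)

for q in (5, 7, 11, 13):
    check(q)
print('ok')
```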
\section{A $PGL(2,q)$-module homomorphism}
In this section we show that the rank of the derangement matrix $M$ of $PSL(2,q)$ is equal to the dimension of the image of a certain $PGL(2,q)$-module homomorphism. Actually, we will show that $N = M^{\top}M$ is a matrix representation of a $PGL(2,q)$-module homomorphism. We will use this fact to compute the rank of $M$.
\subsection{The matrix $N$}
We identify the points of the projective line $PG(1,q)$ with elements of the set $\mathbb{F}_q \cup \lbrace \infty \rbrace$, by letting $a \in \mathbb{F}_q$ denote the point
spanned by $(1,a)\in \mathbb{F}_q^2$ and denoting by $\infty$ the point spanned by $(0,1)$.
We consider the natural right action of $PGL(2,q)$ on $PG(1,q)$. Let $a \in \mathbb{F}_q \cup \lbrace \infty \rbrace$ and $g \in PGL(2,q)$. We use $a^g$ to denote the element in $PG(1,q)$ obtained by applying $g$ to $a$. The action of $PGL(2,q)$ on $PG(1,q)$ is faithful. Hence, we can associate with each element of $PGL(2,q)$ a permutation of the $q+1$ elements of $PG(1,q)$. Moreover, recall that an element $g \in PGL(2,q)$ is said to be a {\it derangement} if its associated permutation is fixed-point-free.
\begin{definition}\label{def_N}
Let $\Omega$ be the set of ordered pairs of distinct projective points in $PG(1,q)$. The matrix $N$ is a $q(q+1)$ by $q(q+1)$ matrix whose rows and columns are both indexed by the elements of $\Omega$; for any $(a,b), (c, d) \in \Omega$ we define
\[
N_{(a,b),(c,d)} {:=} \mbox{ the number of derangements of } PSL(2,q) \mbox{ sending } a \mbox{ to } b \mbox{ and } c \mbox{ to } d.
\]
\end{definition}
Note that the above definition of $N$ agrees with the product $N=M^{\top}M$: the $((a,b),(c,d))$-entry of $M^{\top}M$ counts exactly the derangements $g$ with $a^g=b$ and $c^g=d$. Hence, basic linear algebra implies that $\mbox{rank}_{\mathbb{C}}(M)=\mbox{rank}_{\mathbb{C}}(N)$. The next lemma gives information about the entries of $N$.
\begin{lemma}\label{lemma1}
Let $a,b,c,d \in \mathbb{F}_q \cup \lbrace \infty \rbrace$. Then,
\begin{enumerate}
\item $\displaystyle N_{(a,b),(a,b)} = \frac{(q-1)^2}{4}, \quad \forall (a,b) \in \Omega$.
\item $N_{(a,b),(c,d)}= 0$, if $a=c, b \neq d$ or $a \neq c, b = d$.
\item $N_{(a, b),(b, a)} = \left\lbrace \begin{array}{ll}
0, & \mbox{ if } q \equiv 1 \mbox{ mod } 4,\\
(q-1)/2, & \mbox{ if } q \equiv 3 \mbox{ mod } 4,
\end{array} \right. \quad \forall (a,b) \in \Omega $.
\item
\begin{enumerate}
\item $N_{(0, \infty),(1,0)} = \left\lbrace \begin{array}{ll}
(q-1)/4, & \mbox{ if } q \equiv 1 \mbox{ mod } 4,\\
(q-3)/4, & \mbox{ if } q \equiv 3 \mbox{ mod } 4.
\end{array} \right. $
\item $\displaystyle N_{(0,\infty),(1,d)} = \frac{q-3}{4} - \frac{\phi(1-d)}{2} - \frac{1}{4} \sum_{x \in \mathbb{F}_q^*} \phi((x + x^{-1})^2 - 4d), \quad \forall d \neq 0,1, \infty.$
\end{enumerate}
\end{enumerate}
Moreover, the value of $N_{(a,b),(c,d)}$ for any $(a,b),(c,d) \in \Omega$ is given by one of the above expressions.
\end{lemma}
\begin{proof}
Let $g$ be an arbitrary element in $PGL(2,q)$. Note that for every $h \in PSL(2,q)$ sending $a$ to $b$ and $c$ to $d$, the element $g^{-1}h g \in PSL(2,q)$ sends $a^g$ to $b^g$ and $c^g$ to $d^g$. Hence the entries of $N$ satisfy the following property
\begin{equation}\label{ecu1}
N_{(a,b),(c,d)} = N_{(a^g,b^g),(c^g,d^g)},
\end{equation}
because $PSL(2,q)$ is a normal subgroup of $PGL(2,q)$ and the set of derangements in $PSL(2,q)$ is closed under conjugation.
To prove Lemma \ref{lemma1} we proceed case by case.
\begin{itemize}
\item \textbf{Case 1}.
\hspace{0.5cm}Recall that $N_{(a,b),(a,b)}$ is the number of derangements in $PSL(2,q)$ sending $a$ to $b$. From Equation (\ref{ecu1}) and the $2$-transitivity of $PGL(2,q)$ we conclude that $N_{(a,b),(a,b)}= N_{(c,d),(c,d)}$ for any $(a,b), (c,d) \in \Omega$. The total number of derangements in $PSL(2,q)$ is $q(q-1)^2/4$ and this number can also be written as
\[
\frac{q(q-1)^2}{4} = \sum_{\substack{b \in PG(1,q) \\ b \neq a} } N_{(a,b),(a,b)}, \quad \mbox{for any fixed }a \in PG(1,q),
\]
which implies that $N_{(a,b),(a,b)}= (q-1)^2/4$ for every $(a,b) \in \Omega$.
\item \textbf{Case 2}.
\hspace{0.5cm}Every element of $PSL(2,q)$ acts as a permutation of the projective points in $PG(1,q)$; a permutation cannot send $a$ to two distinct points, nor two distinct points to $b$. This implies $N_{(a,b),(a,d)}=0$ whenever $b \neq d$, and $N_{(a,b),(c,b)}=0$ whenever $a \neq c$.
\item \textbf{Case 3}.
\hspace{0.5cm}Using the $2$-transitivity of $PGL(2,q)$ and Equation (\ref{ecu1}) we can assume without loss of generality that $a=0$ and $b=\infty$. The elements $g_{\lambda} \in PSL(2,q)$ sending $0$ to $\infty$ and $\infty$ to $0$ are of the form
\[
g_{\lambda} := \left( \begin{array}{cc}
0 & \lambda \\
-\lambda^{-1} & 0
\end{array}\right), \quad \lambda \in \mathbb{F}_q^*.
\]
\hspace{0.5cm}This representation of elements in $PSL(2,q)$ is redundant because $g_{\lambda}$ and $g_{-\lambda}$ represent the same element of $PSL(2,q)$. Let $\xi$ be an element in $\mathbb{F}_q^* $ such that $\langle \xi\rangle = \mathbb{F}_q^* $. Hence, the set $\{ g_{\lambda} : \lambda=\xi^i, \quad i=1, \ldots, (q-1)/2\}$ corresponds precisely to the $(q-1)/2$ elements in $PSL(2,q)$ sending $0$ to $\infty$ and $\infty$ to $0$.
\hspace{0.5cm}Recall that $g_{\lambda}$ is a derangement if and only if its eigenvalues are not in $\mathbb{F}_q$. Thus, $g_{\lambda}$ is a derangement if and only if its characteristic polynomial,
\[
p_{\lambda}(t) :=\det \left\vert \begin{array}{cc}
-t & \lambda\\
-\lambda^{-1} & -t
\end{array} \right\vert = t^2 + 1,
\]
is irreducible over $\mathbb{F}_q$.
\hspace{0.5cm} If $q \equiv 1 \pmod 4$ then $-1$ is a square in $\mathbb{F}_q$. Thus, $p_{\lambda}(t)$ is reducible for every $\lambda \in \mathbb{F}_q^*$. Hence $N_{(a,b),(b,a)} =N_{(0,\infty),(\infty,0)} = 0$ in this case. On the other hand, if $q \equiv 3 \pmod 4$ then $-1$ is not a square in $\mathbb{F}_q$. This implies that $p_{\lambda}(t)$ is irreducible for every $\lambda \in \mathbb{F}_q^*$. Therefore, $N_{(a,b),(b,a)} =N_{(0,\infty),(\infty,0)} = (q-1)/2$.
\item \textbf{Case 4}.
\hspace{0.5cm}Every element of $PSL(2,q)$ sending $0$ to $\infty$ and $1$ to $d$ is of the form
\[
g_{\lambda} :=\left( \begin{array}{cc}
0 & -\lambda \\
\lambda^{-1} & \lambda^{-1}d + \lambda
\end{array}\right), \quad \lambda \in \mathbb{F}_q^* .
\]
Again note that $g_{\lambda}$ and $g_{-\lambda}$ represent the same element of $PSL(2,q)$. The matrix $g_{\lambda}$ is a derangement if and only if its characteristic polynomial,
\[
p_{\lambda}(t) :=\det \left\vert \begin{array}{cc}
-t & -\lambda\\
\lambda^{-1} & \lambda^{-1}d + \lambda -t
\end{array} \right\vert = t^2 - (\lambda^{-1}d + \lambda)t+1,
\]
is irreducible over $\mathbb{F}_q$. To compute $N_{(0,\infty),(1,d)}$ it is enough to count the number of values of $\lambda$ such that $p_{\lambda}(t)$ is reducible.
\hspace{0.5cm}If $p_{\lambda}(t)$ is reducible then there exist $x$ and $y$ in $\mathbb{F}_q^*$ such that
\[
p_{\lambda}(t) = t^2 - (\lambda^{-1}d + \lambda)t+ 1 = (t - x)(t - y) = t^2 - (x+y)t + xy.
\]
Hence, $xy=1$ and $x+y= \lambda^{-1}d + \lambda$. Assume without loss of generality that $y=x^{-1}$. If there exist values of $\lambda$ such that $g_{\lambda}$ has eigenvalues $\{x,x^{-1}\}$, then they have to satisfy the following quadratic equation
\begin{equation}\label{ecu2}
\lambda^2 - (x + x^{-1})\lambda + d =0.
\end{equation}
\begin{itemize}
\item \textbf{Case 4 (a)}:
\hspace{0.5cm}If $d=0$ then $\lambda=0$ is a solution of Equation (\ref{ecu2}); however, this solution is not admissible by the definition of $g_{\lambda}$. Hence, we only consider the solution $\lambda = x + x^{-1}$ for every $x \in \mathbb{F}_q^*$. Moreover, note that $x$ and $x^{-1}$ generate the same value of $\lambda$, so we can associate with each set $\{x, x^{-1}\}$ a unique value of $\lambda$.
\hspace{0.5cm}Let $q \equiv 1 \pmod 4$ and let $k \in \mathbb{F}_q^*$ be an element of order $4$, so that $k^{-1}=-k$ and the set $\{k, k^{-1}\}$ generates only the inadmissible value $\lambda = k + k^{-1} = 0$. Thus, the number of values of $\lambda$ such that $p_{\lambda}(t)$ is reducible is $(q-1)/2$. Therefore,
\[
N_{(0, \infty),(1,0)} = \frac{1}{2} \left( q-1 - \frac{q-1}{2} \right) = \frac{q-1}{4}.
\]
\hspace{0.5cm}On the other hand, if $q \equiv 3 \pmod 4$ then $\mathbb{F}_q^*$ does not have an element of order 4. This implies that every set $\{x, x^{-1}\} \subset \mathbb{F}_q^*$ generates an admissible value of $\lambda$. Thus, the number of values for $\lambda$ such that $p_{\lambda}(t)$ is reducible is $(q+1)/2$ and $N_{(0,\infty),(1,0)}=(q-3)/4$.
\item \textbf{Case 4 (b)}:
\hspace{0.5cm}The number of solutions of Equation (\ref{ecu2}) in $\mathbb{F}_q$ is given by $1 + \phi((x + x^{-1})^2 -4d)$, and since $d \neq 0$ every solution is nonzero, hence admissible. In this case, $x$ and $x^{-1}$ lead to the same values of $\lambda$. Thus, the number of values of $\lambda \in \mathbb{F}_q^*$ such that $p_{\lambda}(t)$ is reducible is
\[
2(1+\phi(1-d)) + \frac{1}{2} \sum_{\substack{x \in \mathbb{F}_q^* \\ x \neq 1,-1 }} (1 + \phi((x + x^{-1} )^2 - 4d)).
\]
Therefore, for $d\neq 0,1,\infty$,
\begin{eqnarray*}
N_{(0, \infty),(1,d)} & = & \frac{1}{2} \left\{ (q-1) - \left[ 2(1+\phi(1-d)) + \frac{1}{2} \sum_{\substack{x \in \mathbb{F}_q^* \\ x \neq 1,-1 }} (1 + \phi((x + x^{-1} )^2 - 4d)) \right] \right\} \\
& = & \frac{q-3}{4} - \phi(1-d) - \frac{1}{4} \sum_{\substack{x \in \mathbb{F}_q^* \\ x \neq 1,-1 }} \phi((x + x^{-1} )^2 - 4d) \\
& = & \frac{q-3}{4} - \frac{\phi(1-d)}{2} - \frac{1}{4} \sum_{x \in \mathbb{F}_q^*} \phi((x + x^{-1} )^2 - 4d),
\end{eqnarray*}
where the last equality holds because the terms $x = \pm 1$ each contribute $\phi(4-4d)=\phi(1-d)$ to the full sum. This gives the desired formula for $N_{(0, \infty),(1,d)}$.
\end{itemize}
\end{itemize}
\end{proof}
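The counts in Case 4 are easy to confirm by brute force for small prime $q$. The following script is a verification sketch and not part of the paper's argument: it realizes $PGL(2,q)$ as M\"obius permutations of $PG(1,q)$ (the counts involved are intrinsic to the permutation group, so the particular action convention does not matter) and detects membership in $PSL(2,q)$ by testing whether the determinant is a square. It checks $N_{(0,\infty),(1,0)} = (q-1)/4$ for $q \equiv 1 \pmod 4$ and $N_{(0,\infty),(1,0)} = (q-3)/4$ for $q \equiv 3 \pmod 4$.

```python
from itertools import product

def moebius_perms(q, psl_only):
    # PGL(2,q) (or PSL(2,q)) realized as permutations of PG(1,q) = F_q u {infinity};
    # the point "infinity" is encoded as the integer q.
    INF, pts = q, list(range(q + 1))
    def act(a, b, c, d, z):
        if z == INF:
            return INF if c == 0 else (a * pow(c, q - 2, q)) % q
        num, den = (a * z + b) % q, (c * z + d) % q
        return INF if den == 0 else (num * pow(den, q - 2, q)) % q
    perms = set()
    for a, b, c, d in product(range(q), repeat=4):
        det = (a * d - b * c) % q
        if det == 0:
            continue
        if psl_only and pow(det, (q - 1) // 2, q) != 1:  # PSL: det is a square
            continue
        perms.add(tuple(act(a, b, c, d, z) for z in pts))
    return perms

def n_0inf_10(q):
    # N_{(0,inf),(1,0)}: derangements of PSL(2,q) with 0^g = infinity and 1^g = 0
    INF = q
    return sum(1 for p in moebius_perms(q, True)
               if p[0] == INF and p[1] == 0
               and all(p[z] != z for z in range(q + 1)))

for q in (5, 13):   # q = 1 (mod 4): expect (q-1)/4
    assert n_0inf_10(q) == (q - 1) // 4, q
for q in (7, 11):   # q = 3 (mod 4): expect (q-3)/4
    assert n_0inf_10(q) == (q - 3) // 4, q
print("ok")
```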
\begin{corollary}\label{usefulcor}
Let $d \in \mathbb{F}_q$, $d\neq 0,1$. The number of derangements of $PSL(2,q)$ sending $0$ to $\infty$ and $1$ to $d$ can be expressed in terms of the Legendre sum with respect to $\phi$. Specifically,
\begin{equation}\label{ecu10}
N_{(0,\infty),(1,d)} = \frac{q-1}{4} - \frac{\phi(1-d)}{2}-\frac{q}{4} P_{\phi}(2d-1).
\end{equation}
\end{corollary}
\begin{proof}
To prove this corollary, we compute
\begin{eqnarray*}
\sum_{x \in \mathbb{F}_q^*} \phi((x + x^{-1})^2- 4d )& = & \sum_{x \in \mathbb{F}_q^*} \phi(x^2)\phi((x + x^{-1})^2- 4d ) \\
& = & \sum_{x \in \mathbb{F}_q^*} \phi( x^4 - 2(2d-1)x^2 + 1).\end{eqnarray*}
Next we replace $x^2$ by $y$. If $y\in\mathbb{F}_q^*$ is not a square, then $1+\phi(y)=0$; on the other hand, if $y\in \mathbb{F}_q^*$ is a square, then $x^2=y$ has $1+\phi(y)=2$ solutions. It follows that
\begin{eqnarray*}
\sum_{x \in \mathbb{F}_q^*} \phi((x + x^{-1})^2- 4d ) & = & \sum_{y \in \mathbb{F}_q^*} (1 + \phi(y)) \phi(y^2 - 2(2d-1)y + 1)\\
& = & \sum_{ y \in \mathbb{F}_q^*} \phi(y^2 - 2(2d-1)y + 1) + \sum_{y \in \mathbb{F}_q^*} \phi(y) \phi(y^2 - 2(2d-1)y + 1)\\
& = & -1 + \sum_{y \in \mathbb{F}_q} \phi(y^2 - 2(2d-1)y + 1) + q P_{\phi} (2d-1).
\end{eqnarray*}
Applying Theorem 5.48 from \cite{FiniteFields}, it follows that
\[
\sum_{y \in \mathbb{F}_{ q }} \phi(y^2 - 2(2d-1)y + 1) = -1.
\]
Thus, the above computations imply that
\begin{equation}\label{ecu_extra1}
\sum_{x \in \mathbb{F}_q^*} \phi((x + x^{-1})^2- 4d ) = -2 + q P_{\phi} (2d-1).
\end{equation}
Now, Corollary \ref{usefulcor} follows from part 4(b) of Lemma \ref{lemma1} and Equation (\ref{ecu_extra1}).
\end{proof}
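Both identities used in the proof above are easy to test numerically for small prime $q$. The sketch below is an independent check; it takes the identity $qP_{\phi}(c) = \sum_{y \in \mathbb{F}_q^*} \phi(y)\phi(y^2-2cy+1)$, the form in which the Legendre sum enters the computation, as the working definition of $qP_{\phi}$, and verifies Equation (\ref{ecu_extra1}) together with the evaluation from Theorem 5.48 of \cite{FiniteFields}.

```python
def phi(a, q):
    """Quadratic character of F_q, q an odd prime, with phi(0) = 0."""
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

def q_P_phi(c, q):
    # q * P_phi(c), via the identity used in the proof above
    return sum(phi(y, q) * phi(y * y - 2 * c * y + 1, q) for y in range(1, q))

for q in (5, 7, 11, 13):
    inv = {x: pow(x, q - 2, q) for x in range(1, q)}
    for d in range(2, q):  # d != 0, 1, so the discriminant 16d(d-1) is nonzero
        # Theorem 5.48: the complete quadratic-character sum evaluates to -phi(1) = -1
        assert sum(phi(y * y - 2 * (2 * d - 1) * y + 1, q) for y in range(q)) == -1
        # Equation (ecu_extra1)
        lhs = sum(phi((x + inv[x]) ** 2 - 4 * d, q) for x in range(1, q))
        assert lhs == -2 + q_P_phi(2 * d - 1, q), (q, d)
print("ok")
```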
\subsection{A permutation $PGL(2,q)$-module}
In this section we define a $PGL(2, q)$-module $V$ and a $PGL(2, q)$-module homomorphism $T_N$ from $V$ to $V$. We use the subscript $N$ to emphasize that $N$ is the matrix associated with $T_N$ with respect to a certain basis of $V$.
Recall that we denote by $\Omega$ the set of ordered pairs of distinct projective points in $PG(1,q)$. Let $V$ be the $\mathbb{C}$-vector space spanned by the vectors $\{e_{\omega}\}_{\omega \in \Omega}$. The dimension of $V$ is $q(q+1)$.
We define a right action of $PGL(2,q)$ on the basis $\{e_{\omega}\}$ of $V$. Specifically, if $\omega=(a,b)$ then
$$e_{\omega} \cdot g = e_{\omega^g} = e_{(a^g,b^g)}$$
for any $g \in PGL(2,q)$. Thus, $V$ is a right permutation $PGL(2, q)$-module. The next lemma shows that $V$ has a very simple decomposition into irreducible modules; apart from $V_{\lambda_{-1}}$, which does not appear, and $V_{\psi_{1}}$, which appears twice, each irreducible module of $PGL(2,q)$ appears exactly once.
Let $(\chi,\psi)$ denote the inner product of the characters $\chi$ and $\psi$ of $PGL(2,q)$ (see \cite[Section 2.3]{Serre}).
\begin{lemma}\label{lemma2}
Let $V_{\chi}$ denote an irreducible module of $PGL(2,q)$ with character $\chi$. Then the decomposition of $V$ into irreducible constituents is given by
\[
V \cong V_{\lambda_1} \oplus 2V_{\psi_1} \oplus V_{\psi_{-1}} \oplus \bigoplus_{\beta \in B} V_{\eta_{\beta}} \oplus \bigoplus_{\gamma \in \Gamma } V_{\nu_{\gamma}}.
\]
\end{lemma}
\begin{proof}
Let $\pi$ be the character afforded by the $PGL(2,q)$-module $V$. By definition we have
\[
\pi(g) := |\{ \omega \in \Omega : \omega^g = \omega\}|
\]
hence the character $\pi$ has the following simple description:
\begin{center}
\begin{tabular}{ c | c c c c }
 & 1 & $u$ & $d_x$ & $v_r$ \\ \hline
$\pi$ & $q(q+1)$ & 0 & 2 & 0 \\
\end{tabular}
\end{center}
Now let $V_{\chi}$ be an irreducible representation of $PGL(2,q)$ and $\chi$ its irreducible character. It is known (\cite[ Chapter 2, Theorem 4]{Serre}) that the multiplicity of $V_{\chi}$ in $V$ is equal to the character inner product $(\pi, \chi)$.
Thus, the lemma follows by direct calculation using the character table of $PGL(2,q)$.
\end{proof}
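As a quick sanity check of the table above (a brute-force verification for $q=5$, with $PGL(2,5)$ realized as M\"obius permutations of $PG(1,5)$; the action convention is my own, the fixed-point counts are intrinsic): since a nonidentity M\"obius transformation fixes at most two points, $\pi(g) = f(f-1)$, where $f$ is the number of fixed points of $g$, which yields exactly the values $q(q+1)$, $0$, $2$, $0$ listed.

```python
from itertools import product

q = 5
INF = q
pts = list(range(q + 1))  # PG(1,5), with infinity encoded as 5

def act(a, b, c, d, z):
    if z == INF:
        return INF if c == 0 else (a * pow(c, q - 2, q)) % q
    num, den = (a * z + b) % q, (c * z + d) % q
    return INF if den == 0 else (num * pow(den, q - 2, q)) % q

perms = set()
for a, b, c, d in product(range(q), repeat=4):
    if (a * d - b * c) % q != 0:
        perms.add(tuple(act(a, b, c, d, z) for z in pts))

assert len(perms) == q * (q + 1) * (q - 1)  # |PGL(2,q)| = 120
for p in perms:
    f = sum(1 for z in pts if p[z] == z)
    # pi(g) = number of fixed ordered pairs of distinct points
    pi = sum(1 for a in pts for b in pts
             if a != b and p[a] == a and p[b] == b)
    assert pi == f * (f - 1)
    assert pi in (q * (q + 1), 2, 0)
print("ok")
```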
For $a,b\in PG(1,q)$ with $a \neq b$, consider the following vectors in $V$:
\begin{eqnarray}
l_{a,b} & {:=} & \sum_{\substack{p \in PG(1,q) \\ p \neq a,b}} (e_{(a,p)} - e_{(b,p)} ) + e_{(a,b)} - e_{(b,a)} \label{v_ecu4},\\
r_{a,b} & {:=} & \sum_{\substack{p \in PG(1,q) \\ p \neq a,b}} (e_{(p,a)} - e_{(p,b)} ) + e_{(b,a)} - e_{(a,b)} \label{v_ecu5}.
\end{eqnarray}
We use these vectors to define the following vector subspaces of $V$,
\[
V_1 := \mbox{span}_{\mathbb{C}}\{ l_{a,b} : a,b \in PG(1,q), a \neq b \} \quad \mbox{and} \quad V_2 := \mbox{span}_{\mathbb{C}}\{ r_{a,b} : a,b \in PG(1,q), a \neq b \}.
\]
In fact, the next lemma shows that $V_1$ and $V_2$ are $PGL(2,q)$-submodules of $V$.
\begin{lemma}\label{lemma4}
The vector subspaces $V_1$ and $V_2$ satisfy the following properties:
\begin{enumerate}
\item $\dim_{\mathbb{C}}(V_1) = \dim_{\mathbb{C}}(V_2) = q$,
\item $V_1 \cap V_2 = \{ 0 \}$,
\item $V_1$ and $V_2$ are $PGL(2,q)$-submodules of $V$,
\item $V_1 \cong V_2 $ as $PGL(2,q)$-modules.
\end{enumerate}
\end{lemma}
\begin{proof}
Note that the vectors defined in Equations (\ref{v_ecu4}) and (\ref{v_ecu5}) satisfy the following relations,
\[
l_{a,b} - l_{a,c} = l_{c,b} \quad \mbox{and} \quad r_{a,b} - r_{a,c} = r_{c,b}
\]
for all pairwise distinct $a,b,c \in PG(1,q)$. Hence, fixing $a \in PG(1,q)$ we see that $\{ l_{a,b} : b \in PG(1,q), b \neq a \}$ and $\{ r_{a,b}: b \in PG(1,q), b \neq a\}$ are bases for $V_1$ and $V_2$, respectively.
To prove the conclusion in part (2) we proceed by contradiction. Assume there exists $v \in V_1 \cap V_2$ with $v \neq 0$. Hence we can write
\begin{equation}\label{ecu6}
v = \sum_{\substack{p \in PG(1,q) \\ p \neq a}} \alpha_p l_{a,p} = \sum_{\substack{p \in PG(1,q)\\ p \neq a}} \beta_p r_{a,p}
\end{equation}
where not all $\alpha_p$ and $\beta_p$ are equal to zero.
For a fixed $b \in PG(1,q)$ with $b \neq a$, the vector $l_{a,b}$ is the only one in the set $\{l_{a,p}\}_{p \in PG(1,q), p \neq a}$ that contains $e_{(b,a)}$, and it does so with coefficient $-1$. On the other hand, every vector of the form $r_{a,p}$ with $p \neq a$ contains $e_{(b,a)}$ with coefficient $1$. Therefore, comparing the coefficients of $e_{(b,a)}$ in Equation (\ref{ecu6}) we get
\[
-\alpha_b = \sum_{\substack{p \in PG(1,q)\\ p \neq a}} \beta_p,
\]
which implies that the coefficients $\alpha_p$ in Equation (\ref{ecu6}) all take a common value $\alpha$. Analogously, comparing the coefficients of $e_{(a,b)}$ shows that the coefficients $\beta_p$ all take a common value $\beta$, with $-\beta_b = \sum_{p \neq a} \alpha_p$. Thus, we can rewrite Equation (\ref{ecu6}) as follows,
\[
\alpha \sum_{\substack{p \in PG(1,q) \\ p \neq a}} l_{a,p} = \beta \sum_{\substack{p \in PG(1,q)\\ p \neq a}} r_{a,p},
\]
and the two coefficient comparisons give $\alpha = -q\beta$ and $\beta = -q\alpha$. This implies that $\alpha = q^2 \alpha$; since $q^2 \neq 1$, we get $\alpha = \beta = 0$ and hence $v = 0$, a contradiction.
To prove part (3) it is enough to note that $l_{a,b} \cdot g= l_{a^g,b^g}$ and $r_{a,b} \cdot g = r_{a^g, b^g}$ for all $a,b \in PG(1,q)$ with $a \neq b$. For part (4) consider the function $\theta$ from $V_1$ to $V_2$ defined by $\theta(l_{a,b})=r_{a,b}$ for all $a,b \in PG(1,q)$ with $a \neq b$; we extend the definition of $\theta$ to all elements of $V_1$ linearly. Now, from the definition of $\theta$ we see that clearly
\[
\theta( l_{a,b} \cdot g) = \theta(l_{a,b}) \cdot g
\]
for all $g \in PGL(2,q)$ and $(a,b) \in \Omega$. Therefore, $\theta$ is a $PGL(2,q)$-module isomorphism. This completes the proof of part (4).
\end{proof}
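Parts (1) and (2) of the lemma just proved can be confirmed numerically for small $q$. The sketch below is an independent check for $q=5$: it represents the vectors $l_{a,b}$ and $r_{a,b}$ in the basis $\{e_{\omega}\}_{\omega \in \Omega}$ and computes ranks over $\mathbb{Q}$, verifying that each span has dimension $q$ and that the two spans intersect trivially.

```python
from fractions import Fraction

q = 5
INF = q
pts = list(range(q + 1))  # PG(1,5), with infinity encoded as 5
Omega = [(a, b) for a in pts for b in pts if a != b]
idx = {w: i for i, w in enumerate(Omega)}

def l_vec(a, b):
    v = [0] * len(Omega)
    for p in pts:
        if p != a and p != b:
            v[idx[(a, p)]] += 1
            v[idx[(b, p)]] -= 1
    v[idx[(a, b)]] += 1
    v[idx[(b, a)]] -= 1
    return v

def r_vec(a, b):
    v = [0] * len(Omega)
    for p in pts:
        if p != a and p != b:
            v[idx[(p, a)]] += 1
            v[idx[(p, b)]] -= 1
    v[idx[(b, a)]] += 1
    v[idx[(a, b)]] -= 1
    return v

def rank(rows):
    # Gaussian elimination over the rationals
    rows = [[Fraction(x) for x in r] for r in rows]
    rnk = 0
    for col in range(len(Omega)):
        piv = next((r for r in rows if r[col] != 0), None)
        if piv is None:
            continue
        rows.remove(piv)
        rows = [[x - r[col] / piv[col] * y for x, y in zip(r, piv)] for r in rows]
        rnk += 1
    return rnk

L = [l_vec(a, b) for (a, b) in Omega]
R = [r_vec(a, b) for (a, b) in Omega]
assert rank(L) == q and rank(R) == q      # part (1): dim V_1 = dim V_2 = q
assert rank(L + R) == rank(L) + rank(R)   # part (2): V_1 and V_2 meet trivially
print("ok")
```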
\begin{lemma}\label{3_lemma1}
The submodules $V_1$ and $V_2$ are isomorphic to $V_{\psi_1}$.
\end{lemma}
\begin{proof}
This result follows directly from Lemmas \ref{lemma2} and \ref{lemma4}. In the decomposition of $V$ into irreducible constituents, every irreducible module appears with multiplicity at most one, except for $V_{\psi_1}$, which appears with multiplicity two. Therefore, since $V_1 \cong V_2$ and $V_1 \cap V_2 = \{0\}$, we must have $V_{\psi_1} \cong V_1 \cong V_2$.
\end{proof}
We now define a linear transformation $T_N$ from $V$ to $V$. We first define $T_N$ on the basis $\{e_{\omega}\}_{\omega \in \Omega}$ of $V$ by
\[
T_N(e_{(a,b)}) := \sum_{\omega \in \Omega} N_{\omega, (a,b)} e_{\omega}
\]
for any $(a,b) \in \Omega$, and then extend the definition of $T_N$ to all elements of $V$ linearly. It follows from the definition of $T_N$ that $N$ is the matrix associated with $T_N$ with respect to the basis $\{e_{\omega}\}_{\omega \in \Omega}$ of $V$. Therefore, since $N = M^{\top}M$ and hence $\mbox{rank}(N) = \mbox{rank}(M)$ over the reals, the dimension of the image of $T_N$ is equal to the rank of the derangement matrix $M$ of $PSL(2,q)$ acting on $PG(1,q)$.
\begin{lemma}\label{lemma3}
The linear transformation $T_N$ defined above is a $PGL(2,q)$-module homomorphism from $V$ to $V$.
\end{lemma}
\begin{proof}
To prove the lemma we have to show that the linear transformation $T_N$ respects the action of $PGL(2,q)$ on $V$; that is, for each $g \in PGL(2,q)$ and each $(a,b) \in \Omega$,
\begin{equation}\label{ecu3}
T_N( e_{(a,b)} \cdot g) = T_N(e_{(a,b)}) \cdot g.
\end{equation}
First, consider the left hand side of Equation (\ref{ecu3}). From the definition of $T_N$ it follows that
\[
T_N( e_{(a,b)} \cdot g) = T_N(e_{(a^g,b^g)}) = \sum_{\omega \in \Omega} N_{\omega, (a^g,b^g)} e_{\omega}.
\]
Now, note that the right hand side of Equation (\ref{ecu3}) can be written as
\[
T_N(e_{(a,b)}) \cdot g = \sum_{\omega \in \Omega} N_{\omega, (a,b)} e_{\omega^g} = \sum_{\omega^{g^{-1}} \in \Omega} N_{\omega^{g^{-1}}, (a,b)} e_{\omega}.
\]
Furthermore, recall that $N_{(a,b),(c,d)} = N_{(a^g,b^g),(c^g,d^g)}$ for all $g \in PGL(2,q)$. Therefore,
\[
\sum_{\omega^{g^{-1}} \in \Omega} N_{\omega^{g^{-1}}, (a,b)} e_{\omega} = \sum_{\omega^{g^{-1}} \in \Omega} N_{\omega, (a^g,b^g)} e_{\omega} =\sum_{\omega \in \Omega} N_{\omega, (a^g,b^g)} e_{\omega}
\]
which implies that Equation (\ref{ecu3}) holds. This completes the proof of the lemma.
\end{proof}
\subsection{The image of $T_N$}\label{psl_im_TN}
Recall that the rank of the derangement matrix $M$ of $PSL(2,q)$ acting on $PG(1,q)$ is equal to the dimension of the image of $T_N$. Since $T_N$ is a $PGL(2, q)$-module homomorphism (Lemma \ref{lemma3}) we can use some tools from representation theory to compute the dimension of the image of $T_N$. We start by observing that the submodules $V_1$ and $V_2$ are in the kernel of $T_N$.
\begin{lemma}\label{3_lemma2}
The subspaces $V_1$ and $V_2$ lie in the kernel of $T_N$.
\end{lemma}
\begin{proof}
First, recall that the derangement matrix $M$ is a $q(q-1)^2/4$ by $(q+1)q$ matrix whose rows are indexed by the derangements of $PSL(2,q)$ and whose columns are indexed by elements of $\Omega$. For any derangement $g \in PSL(2,q)$ and $(a,b) \in \Omega$ we have
\[
M(g, (a,b)) {:=} \left\lbrace \begin{array}{cl}
1, & \mbox{ if }a^g = b,\\
0, & \mbox{ otherwise.}
\end{array} \right.
\]
Furthermore, also by definition, we have $N=M^{\top}M$. Thus, the lemma follows from the following observation
\[
Ml_{a,b}=0 \quad \mbox{and} \quad Mr_{a,b}=0 \quad \mbox{for all }a,b \in PG(1,q),\mbox{ with }a\neq b,
\]
and the fact that for a fixed $a \in PG(1,q)$ the sets $\{l_{a,b}: b \in PG(1,q), b\neq a \}$ and $\{r_{a,b}: b \in PG(1,q), b\neq a \}$ are bases of $V_1$ and $V_2$, respectively.
\end{proof}
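The key observation $Ml_{a,b} = Mr_{a,b} = 0$ can be tested directly for small $q$. The following sketch is a brute-force check for $q=5$ (not part of the argument; $PSL(2,5)$ is realized as M\"obius permutations of $PG(1,5)$, with membership detected by the determinant being a square): it builds the derangement matrix $M$ and verifies that every $l_{a,b}$ and $r_{a,b}$ lies in its kernel.

```python
from itertools import product

q = 5
INF = q
pts = list(range(q + 1))  # PG(1,5), with infinity encoded as 5
Omega = [(a, b) for a in pts for b in pts if a != b]
idx = {w: i for i, w in enumerate(Omega)}

def act(a, b, c, d, z):
    if z == INF:
        return INF if c == 0 else (a * pow(c, q - 2, q)) % q
    num, den = (a * z + b) % q, (c * z + d) % q
    return INF if den == 0 else (num * pow(den, q - 2, q)) % q

perms = set()
for a, b, c, d in product(range(q), repeat=4):
    det = (a * d - b * c) % q
    if det != 0 and pow(det, (q - 1) // 2, q) == 1:  # PSL(2,q): det a square
        perms.add(tuple(act(a, b, c, d, z) for z in pts))
der = [p for p in perms if all(p[z] != z for z in pts)]
assert len(der) == q * (q - 1) ** 2 // 4  # 20 derangements for q = 5

# derangement matrix: M[g, (a,b)] = 1 iff a^g = b
M = [[1 if p[a] == b else 0 for (a, b) in Omega] for p in der]

def l_vec(a, b):
    v = [0] * len(Omega)
    for p in pts:
        if p != a and p != b:
            v[idx[(a, p)]] += 1
            v[idx[(b, p)]] -= 1
    v[idx[(a, b)]] += 1
    v[idx[(b, a)]] -= 1
    return v

def r_vec(a, b):
    v = [0] * len(Omega)
    for p in pts:
        if p != a and p != b:
            v[idx[(p, a)]] += 1
            v[idx[(p, b)]] -= 1
    v[idx[(b, a)]] += 1
    v[idx[(a, b)]] -= 1
    return v

for (a, b) in Omega:
    for v in (l_vec(a, b), r_vec(a, b)):
        assert all(sum(m * x for m, x in zip(row, v)) == 0 for row in M)
print("ok")
```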
From Lemmas \ref{3_lemma1} and \ref{3_lemma2}, we conclude that the restriction of $T_N$ to $V_1 \oplus V_2 \cong 2V_{\psi_1}$ is the zero map. It follows that the dimension of the image of $T_N$ is at most $q(q+1) - 2q = q(q-1)$. Now, we consider the restriction of $T_N$ to the other irreducible constituents of $V$. To do that we apply Schur's lemma.
Let $\chi$ be the irreducible character corresponding to an irreducible representation of $PGL(2,q)$ appearing as a constituent of $V$. Schur's lemma implies that,
\[
T_N(V_{\chi}) \cong V_{\chi} \quad \mbox{ or } \quad T_N(V_{\chi})= \{0\}.
\]
Thus, the image of $V_{\chi}$ under $T_N$ is either zero or has the same dimension as $V_{\chi}$. Hence,
to study the image of $V_{\chi}$ under $T_N$ for any $\chi \in \{ \lambda_1, \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$ we proceed in the following way:
\begin{enumerate}
\item Consider the vector $e_{(0,\infty)} \in V$.
\item Project $e_{(0,\infty)}$ onto $V_{\chi}$ using the following scalar multiple of a central primitive idempotent
\[ E_{\chi} := \sum_{g \in PGL(2,q)} \chi(g^{-1}) g. \]
Therefore, the projection of $e_{(0,\infty)}$ onto $V_{\chi}$ is equal to
\[ E_{\chi}(e_{(0,\infty)}) = \sum_{g \in PGL(2,q) } \chi(g^{-1}) e_{(0^g,\infty^g)} = \sum_{(a,b) \in \Omega} \left[ \sum_{0^g=a,\infty^g=b} \chi(g^{-1}) \right] e_{(a,b)},
\]
where $g$ in the inner sum runs over all elements in $PGL(2,q)$ sending $0$ to $a$ and $\infty$ to $b$.
\item To prove that $T_N(V_{\chi}) \cong V_{\chi}$ it is enough to show that the $(0,\infty)$ coordinate of $T_N(E_{\chi}(e_{(0,\infty)}))$ is not equal to zero. This is equivalent to showing that the following character sum is not equal to zero:
\begin{equation}\label{ecu7}
T_{N,\chi} := T_N(E_{\chi}(e_{(0,\infty)}))_{(0,\infty)} = \sum_{(a,b) \in \Omega} \left[ \sum_{0^g=a,\infty^g=b} \chi(g^{-1}) \right] N_{(0,\infty),(a,b)},
\end{equation}
where $g$ in the inner sum runs over all elements in $PGL(2,q)$ sending $0$ to $a$ and $\infty$ to $b$.
\end{enumerate}
Therefore, we get the following lower bound on the rank of the derangement matrix $M$,
\begin{equation}\label{INEQU}
\sum_{\chi} \dim(V_{\chi}) \leq \mbox{rank}(M),
\end{equation}
where $\chi$ in the sum on the left hand side of (\ref{INEQU}) runs through $\{ \lambda_1, \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$ such that $T_{N,\chi} \neq 0$. In particular, if $T_{N,\chi}$ is not zero for all $\chi \in \{ \lambda_1, \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$ then the rank of the derangement matrix $M$ is equal to $q(q-1)$. We conclude that to prove Theorem \ref{psl_teo2}, it is enough to show that the values of the character sums $T_{N,\chi}$ with $\chi \in \{ \lambda_1, \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$ are not equal to zero. This will be our objective in the next two sections.
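For a concrete instance of this conclusion, the rank statement of Theorem \ref{psl_teo2} can be confirmed computationally for $q=5$. The script below is a brute-force verification, independent of the character-sum argument developed here: it constructs the $20 \times 30$ derangement matrix of $PSL(2,5)$ (via the M\"obius action, which gives the same permutation group) and checks that its rank over $\mathbb{Q}$ is $q(q-1) = 20$.

```python
from fractions import Fraction
from itertools import product

q = 5
INF = q
pts = list(range(q + 1))  # PG(1,5), with infinity encoded as 5
Omega = [(a, b) for a in pts for b in pts if a != b]

def act(a, b, c, d, z):
    if z == INF:
        return INF if c == 0 else (a * pow(c, q - 2, q)) % q
    num, den = (a * z + b) % q, (c * z + d) % q
    return INF if den == 0 else (num * pow(den, q - 2, q)) % q

perms = set()
for a, b, c, d in product(range(q), repeat=4):
    det = (a * d - b * c) % q
    if det != 0 and pow(det, (q - 1) // 2, q) == 1:  # PSL(2,q): det a square
        perms.add(tuple(act(a, b, c, d, z) for z in pts))
der = [p for p in perms if all(p[z] != z for z in pts)]

# derangement matrix: rows = derangements, columns = Omega
M = [[1 if p[a] == b else 0 for (a, b) in Omega] for p in der]

def rank(rows):
    # Gaussian elimination over the rationals
    rows = [[Fraction(x) for x in r] for r in rows]
    rnk = 0
    for col in range(len(Omega)):
        piv = next((r for r in rows if r[col] != 0), None)
        if piv is None:
            continue
        rows.remove(piv)
        rows = [[x - r[col] / piv[col] * y for x, y in zip(r, piv)] for r in rows]
        rnk += 1
    return rnk

assert len(M) == q * (q - 1) ** 2 // 4   # 20 rows
assert rank(M) == q * (q - 1)            # full row rank: 20
print("ok")
```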
\section{The character sums $\displaystyle \sum_{0^g = \infty, \infty^g=0} \chi(g^{-1})$ and $\displaystyle \sum_{0^g = \infty, 1^g=d} \chi(g^{-1})$}
The sums $T_{N,\chi}$ are character sums over $PGL(2,q)$. In general, it is not easy to get tight bounds on the values of character sums over non-abelian groups. Fortunately, the close relationship between the irreducible characters of $PGL(2,q)$ and the multiplicative characters of $\mathbb{F}_q$ and $\mathbb{F}_{q^2}$ allows us to conclude in Section 5 that the expressions $T_{N,\chi}$ are not equal to zero. In this section, we show that we can express the sums $T_{N,\chi}$ in terms of character sums over finite fields for every $\chi \in \{ \lambda_1, \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$.
First, we consider $T_{N,\chi}$ when $\chi= \lambda_1$. In this case, we know that $\lambda_1(g)=1$ for any $g \in PGL(2,q)$. Moreover, there are precisely $q-1$ elements of $PGL(2,q)$ sending $0$ to $a$ and $\infty$ to $b$ for any $a,b \in PG(1,q)$. Therefore, we can compute (\ref{ecu7}) explicitly for $\chi = \lambda_1$:
\[
T_{N,\lambda_1} = (q-1) \sum_{(a,b) \in \Omega} N_{(0,\infty),(a,b)}= (q-1)(q+1)\frac{(q-1)^2}{4},
\]
where we have used Lemma \ref{lemma1} to obtain the last equality. Thus, from the analysis given in Section \ref{psl_im_TN} we conclude that $T_N(V_{\lambda_1}) \cong V_{\lambda_1}$.
The other irreducible characters of $PGL(2,q)$ are not so easy to handle. The next lemma gives an expression for $T_{N,\chi}$ with $\chi \in \{ \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$ which will be helpful to write Equation (\ref{ecu7}) in terms of character sums over finite fields.
\begin{lemma}\label{lemma7}
Let $\chi$ be any irreducible character of $PGL(2,q)$ from the set $ \{ \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$. Let $h$ be the unique element of $PGL(2,q)$ sending $0$ to $0$, $1$ to $\infty$, and $\infty$ to $1$. If $q \equiv 1 \pmod 4$ then
\[
T_{N,\chi} = \frac{(q-1)^3}{4} - \frac{q-1}{2} \sum_{0^g = \infty, \infty^g=0} \chi(g^{-1}) + (q-1) \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \left[ \sum_{0^g=\infty, 1^g =b^h} \chi(g^{-1}) \right] N_{(0,\infty),(1,b)},
\]
and if $q \equiv 3 \pmod 4$ then
\[
T_{N,\chi} = \frac{(q-1)^3}{4} + \sum_{0^g = \infty, \infty^g=0} \chi(g^{-1}) + (q-1) \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \left[ \sum_{0^g=\infty, 1^g =b^h} \chi(g^{-1}) \right] N_{(0,\infty),(1,b)}.
\]
\end{lemma}
\begin{proof}
We start by presenting some results on character sums over $PGL(2,q)$ that we will need.
We denote by $PGL(2,q)_{0,\infty}$ the subgroup of $PGL(2,q)$ fixing $0$ and $\infty$. Analogously, $PGL(2,q)_0$ denotes the subgroup of $PGL(2,q)$ fixing $0$. Applying the Frobenius Reciprocity Theorem \cite[Chapter 7, Theorem 13]{Serre}, we have:
\[
( \mbox{Res}(\chi) , 1 )_{PGL(2,q)_{0,\infty}} = ( \chi , \pi )_{PGL(2,q)} \quad \mbox{and} \quad ( \mbox{Res}(\chi) , 1 )_{PGL(2,q)_{0}} = ( \chi , \lambda_1 + \psi_1 )_{PGL(2,q)}
\]
where $\pi$ is the permutation character defined in the proof of Lemma \ref{lemma2} and $1$ is the trivial character of the groups $PGL(2,q)_{0,\infty}$ and $PGL(2,q)_{0}$, respectively. Using these equalities and the decomposition of $\pi$ in terms of irreducible characters (which was given in Lemma \ref{lemma2}), we evaluate the following character sums:
\begin{equation*}
\sum_{0^g = 0, \infty^g=\infty} \chi(g^{-1}) = (q-1) ( \mbox{Res}(\chi) , 1 )_{PGL(2,q)_{0,\infty}} = (q-1) ( \chi , \pi )_{PGL(2,q)} = q-1,
\end{equation*}
and
\begin{equation*}
\sum_{0^g = 0} \chi(g^{-1}) = q(q-1) ( \mbox{Res}(\chi) , 1 )_{PGL(2,q)_{0}} = q(q-1) ( \chi , \lambda_1 + \psi_1 )_{PGL(2,q)} = 0.
\end{equation*}
Note that $\chi(kgk^{-1})= \chi(g)$ for any $k \in PGL(2,q)$ since $\chi$ is a character, hence a class function. This fact implies many relations between character sums over $PGL(2,q)$. In particular,
\begin{equation}\label{ecu_extra2}
\sum_{a^g =b} \chi(g^{-1}) = \sum_{(a^k)^g = b^k} \chi(g^{-1}),
\end{equation}
and
\begin{equation}\label{ecu_extra3}
\sum_{a^g = b, c^g=d} \chi(g^{-1}) = \sum_{(a^k)^g = b^k, (c^k)^g=d^k} \chi(g^{-1}).
\end{equation}
We claim that $\sum_{0^g =\infty} \chi(g^{-1}) =0$. To prove this claim, recall that $\chi$ is a non-trivial character of $PGL(2,q)$. Therefore,
\[
0= \sum_{g \in PGL(2,q)} \chi(g^{-1}) = \sum_{0^g = 0} \chi(g^{-1}) + \sum_{\substack{a \in PG(1,q)\\a \neq 0}} \sum_{0^g = a} \chi(g^{-1}).
\]
Since $\sum_{0^g = 0} \chi(g^{-1})=0$, we conclude that
\[
0 = \sum_{\substack{a \in PG(1,q)\\a \neq 0}} \sum_{0^g = a} \chi(g^{-1})= q \sum_{0^g = \infty} \chi(g^{-1}),
\]
where Equation (\ref{ecu_extra2}) is used to obtain the last equality.
Moreover, it follows from the above equations and the $2$-transitivity of the action of $PGL(2,q)$ on $PG(1,q)$ that
\[
\sum_{\infty^g =\infty} \chi(g^{-1}) =0 \quad \mbox{and} \quad \sum_{\infty^g = 0} \chi(g^{-1}) = 0.
\]
Now, we are ready to prove Lemma \ref{lemma7}. From Equation (\ref{ecu7}) and Lemma \ref{lemma1} we get,
\begin{eqnarray*}
T_{N,\chi} & = & \frac{(q-1)^2}{4} \sum_{0^g = 0, \infty^{g}=\infty} \chi(g^{-1}) + \left[ \sum_{0^g = \infty, \infty^g=0} \chi(g^{-1}) \right] N_{(0,\infty),(\infty,0)} \\
& & + \sum_{b \in \mathbb{F}_q^*} \left[ \sum_{0^g = \infty, \infty^g=b} \chi(g^{-1}) \right] N_{(0, \infty),(\infty,b)} + \sum_{a \in \mathbb{F}_q^*} \left[ \sum_{0^g = a, \infty^g=0} \chi(g^{-1}) \right] N_{(0, \infty),(a,0)} \\
& & + \sum_{\substack{a,b \in \mathbb{F}_q^* \\ a \neq b}} \left[ \sum_{0^g = a, \infty^g=b} \chi(g^{-1}) \right] N_{(0, \infty),(a,b)}.
\end{eqnarray*}
First, assume that $q \equiv 1 \pmod 4$. From Lemma \ref{lemma1} it follows that $$N_{(0,\infty),(\infty,b)} =N_{(0,\infty),(a,0)} = (q-1)/4$$ for all $a,b \in \mathbb{F}_q^*$, and $$N_{(0, \infty),(\infty, 0)} = 0.$$ Hence, using the above analysis we can write,
\begin{eqnarray*}
\sum_{b \in \mathbb{F}_q^*} \left[ \sum_{0^g = \infty, \infty^g=b} \chi(g^{-1}) \right] N_{(0, \infty),(\infty,b)} & = & \frac{q-1}{4} \sum_{b \in \mathbb{F}_q^*} \left[ \sum_{0^g = \infty, \infty^g=b} \chi(g^{-1}) \right]\\
& = & \frac{q-1}{4} \left[\sum_{0^g =\infty} \chi(g^{-1}) - \sum_{0^g = \infty, \infty^g=0} \chi(g^{-1}) \right]\\
& = & - \frac{(q-1)}{4}\sum_{0^g = \infty, \infty^g=0} \chi(g^{-1}),
\end{eqnarray*}
and using the same ideas we get
\[
\sum_{a \in \mathbb{F}_q^*} \left[ \sum_{0^g = a, \infty^g=0} \chi(g^{-1}) \right] N_{(0, \infty),(a,0)} = - \frac{(q-1)}{4}\sum_{0^g = \infty, \infty^g=0} \chi(g^{-1}).
\]
Let $a,b \in \mathbb{F}_q^*$ with $a \neq b$. Using the $3$-transitivity of the action of $PGL(2,q)$ on $PG(1,q)$ and (\ref{ecu1}) we conclude that $N_{(0,\infty),(a,b)}= N_{(0,\infty)(1,b^k)}$ where $k \in PGL(2,q)$ is the unique element sending $0$ to $0$, $\infty$ to $\infty$ and $a$ to $1$. Moreover, applying Equation (\ref{ecu_extra3}) we obtain
\[
\sum_{0^g = a, \infty^g=b} \chi(g^{-1}) = \sum_{0^g = 1, \infty^g=b^k} \chi(g^{-1}).
\]
Putting all these facts together we conclude that
\begin{eqnarray*}
\sum_{\substack{a,b \in \mathbb{F}_q^* \\ a \neq b}} \left[ \sum_{0^g = a, \infty^g=b} \chi(g^{-1}) \right] N_{(0, \infty),(a,b)} & = & (q-1) \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \left[ \sum_{0^g=1, \infty^g =b} \chi(g^{-1}) \right] N_{(0,\infty),(1,b)} \\
& = & (q-1) \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \left[ \sum_{0^g=\infty, 1^g =b^h} \chi(g^{-1}) \right] N_{(0,\infty),(1,b)}.
\end{eqnarray*}
Thus, Lemma \ref{lemma7} is proved for the case where $q \equiv 1 \pmod 4$. Similar computations work for the case when $q \equiv 3 \pmod 4$.
\end{proof}
It follows from Lemma \ref{lemma7} that we can write $T_{N, \chi}$ in terms of the character sums
\[
\sum_{0^g = \infty, \infty^g=0} \chi(g^{-1}) \quad \mbox{ and } \quad \sum_{0^g = \infty, 1^g=d} \chi(g^{-1}).
\]
The next four lemmas show that these character sums can be written in terms of character sums over finite fields for all $\chi \in \{ \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$.
\begin{lemma}\label{lemma8}
Let $i$ be an element of $\mathbb{F}_{q^2}^* \setminus \mathbb{F}_q^*$ such that $i^2\in \mathbb{F}_q^*$. Then,
\begin{eqnarray*}
\sum_{0^g = \infty, \infty^g=0} \psi_{-1}(g^{-1}) & = & \phi(-1) (q-1), \\
\sum_{0^g = \infty, \infty^g=0} \nu_{\gamma}(g^{-1}) & = & \gamma(-1)(q-1) \quad \mbox{ for all } \gamma \in \Gamma,\\
\sum_{0^g = \infty, \infty^g=0} \eta_{\beta}(g^{-1}) & = & -\beta(i)(q-1) \quad \mbox{ for all } \beta \in B.
\end{eqnarray*}
\end{lemma}
\begin{proof}
The elements in $PGL(2,q)$ sending $0$ to $\infty$ and $\infty$ to $0$ are of the form,
\[
g_{\lambda} :=\left( \begin{array}{cc}
0 & \lambda\\
1 & 0
\end{array}\right) \quad \mbox{ with }\lambda \in \mathbb{F}_q^*.
\]
Note that the characteristic polynomial of $g_{\lambda}$ is $p_{\lambda}(t) := t^2 - \lambda$.
To evaluate the character sums in this lemma we need to know to which conjugacy classes the elements $g_{\lambda}$ belong.
First, recall that the eigenvalues of $g_{\lambda}$ are defined up to multiplication by an element of $\mathbb{F}_q^*$. Now, if $\lambda$ is a square in $\mathbb{F}_q^*$ then $p_{\lambda}(t)$ is reducible and $g_{\lambda}$ has eigenvalues $\pm \sqrt{\lambda} \in \mathbb{F}_q^*$. This implies that $g_{\lambda}$ lies in the conjugacy class $d_{-1}$ whenever $\lambda$ is a square. On the other hand, if $\lambda$ is not a square the roots of $p_{\lambda}(t)$ lie in $\mathbb{F}_{q^2}^*$ and they correspond to elements of order $2$ in $\mathbb{F}_{q^2}^*/\mathbb{F}_{q}^*$. Therefore, whenever $\lambda$ is not a square we see that $g_{\lambda}$ lies in the conjugacy class $v_i$.
Since there are equally many squares and nonsquares in $\mathbb{F}_q^*$, the lemma follows from the character table of $PGL(2,q)$.
\end{proof}
\begin{lemma}\label{lemma9}
For every $\gamma \in \Gamma$ and $d \in \mathbb{F}_q^*\setminus \{1\}$ we have
\[
\sum_{0^g = \infty, 1^g=d} \nu_{\gamma}(g^{-1}) = q P_{\gamma}(2d-1) .
\]
\end{lemma}
\begin{proof}
The elements in $PGL(2,q)$ sending $0$ to $\infty$ and $1$ to $d$ are of the form,
\[
g_{\lambda} := \left( \begin{array}{cc}
0 & \alpha \lambda\\
\alpha &\alpha(d - \lambda)
\end{array}\right) \quad \mbox{ with }\lambda, \alpha \in \mathbb{F}_q^*.
\]
To evaluate the sum in this lemma we need to know to which conjugacy classes these elements belong. However, we need to do this just for those elements which are not derangements, because $\nu_{\gamma}(g)=0$ if $g$ is a derangement.
Note that different values of $\alpha$ correspond to the same element $g_{\lambda}$ in $PGL(2,q)$. Indeed, as was remarked earlier the eigenvalues of $g_{\lambda}$ are defined up to scalar multiplication.
The characteristic polynomial of $g_{\lambda}$ is $p_{\lambda}(t) := t^2 - \alpha(d - \lambda) t - \alpha^2 \lambda$ and its eigenvalues are,
\[
\alpha \left( \frac{(d-\lambda) \pm \sqrt{(d - \lambda)^2 + 4\lambda}}{2} \right).
\]
Thus, if $\sqrt{(d - \lambda)^2 + 4\lambda} \in \mathbb{F}_q^*$ then there exists $\alpha \in \mathbb{F}_q^*$ such that the eigenvalues of $g_{\lambda}$ are $\{1,x\}$ for some $x \in \mathbb{F}_q^*$. This implies that $g_{\lambda}$ is contained in the same conjugacy class as $d_x$ (see Section \ref{psl_ct}). Here, we assume that $d_x$ with $x=1$ corresponds to the element $u \in PGL(2,q)$ defined in Section \ref{psl_ct}.
For a fixed $d \in \mathbb{F}_q^*\setminus \{1\}$ and $x \in \mathbb{F}_q^*$ we want to know for how many $\lambda \in \mathbb{F}_q^*$ there exists some $\alpha$ such that $g_{\lambda}$ has eigenvalues $\{1,x\}$. From the above analysis it is clear that $d,x, \alpha$ and $\lambda$ must satisfy the equation below:
\[
p_{\lambda}(t)= t^2 - \alpha(d - \lambda) t - \alpha^2 \lambda = (t-x)(t-1)=t^2 - (x+1)t +x.
\]
This implies that $\alpha$ satisfies the following quadratic equation,
\begin{equation*}
d\alpha^2 - (x + 1) \alpha + x = 0.
\end{equation*}
Therefore, given $x \in \mathbb{F}_q^*$ and $d \in \mathbb{F}_q^*\setminus \{1\}$, the number of values of $\lambda \in \mathbb{F}_q^*$ such that $g_{\lambda}$ is conjugate to $d_x$ is equal to
\[
1 + \phi((x+1)^2 - 4xd) \mbox{ if }x \neq -1 \quad \mbox{and} \quad
(1 + \phi((x+1)^2 - 4xd))/2 \mbox{ if }x = -1.
\]
Now using the above remarks and the character table of $PGL(2,q)$ we get
\begin{eqnarray}\label{ecu8}
\sum_{0^g = \infty, 1^g=d} \nu_{\gamma}(g) & = & (1+ \phi(1-d))\gamma(1)+ \left( \frac{1 + \phi(d)}{2}\right) (2\gamma(-1)) \\
& & + \frac{1}{2} \sum_{\substack{x \neq 1, -1 \\ x \in \mathbb{F}_q^*} } (1 + \phi((x+1)^2 - 4xd) ) (\gamma(x) + \gamma(x^{-1})) \nonumber
\end{eqnarray}
where the first two terms on the right hand side of Equation (\ref{ecu8}) correspond to $x=1$ and $x=-1$. Furthermore, note that we have included a factor $\frac{1}{2}$ in front of the last expression in Equation (\ref{ecu8}). This occurs because every element $g_{\lambda}$ having eigenvalues $\{1,x\}$ also has eigenvalues $\{1, x^{-1}\}$. Hence, given $d \in \mathbb{F}_q^*\setminus \{1\}$, the elements $x$ and $x^{-1}$ are related to the same values of $\lambda$.
Simplifying the right hand side of Equation (\ref{ecu8}),
\begin{eqnarray*}
\sum_{0^g = \infty, 1^g=d} \nu_{\gamma}(g) & = & \sum_{x \in \mathbb{F}_q^*} \gamma(x) \phi(x^2-2(2d-1)x+1)\\
& = & q P_{\gamma}(2d-1).
\end{eqnarray*}
Finally, applying basic properties of characters and Lemma \ref{lemma21} we obtain
\[
\sum_{0^g = \infty, 1^g=d} \nu_{\gamma}(g^{-1}) = \overline{\sum_{0^g = \infty, 1^g=d} \nu_{\gamma}(g) } =q P_{\gamma^{-1}}(2d-1) = q P_{\gamma}(2d-1).
\]
The proof is now complete.
\end{proof}
\begin{lemma}\label{lemma10}
For every $\beta \in B$ and $d \in \mathbb{F}_q^*\setminus \{1\}$ we have,
\[
\sum_{0^g = \infty, 1^g=d} \eta_{\beta}(g^{-1}) = - q R_{\beta}(2d-1).
\]
\end{lemma}
\begin{proof}
Recall that all the elements in $PGL(2,q)$ sending $0$ to $\infty$ and $1$ to $d$ take the form,
\[
g_{\lambda} := \left( \begin{array}{cc}
0 & \alpha\lambda\\
\alpha & \alpha(d - \lambda)
\end{array}\right) \quad \mbox{ with }\lambda, \alpha \in \mathbb{F}_q^*.
\]
To evaluate the sum in this lemma we have to know to which conjugacy classes these elements belong. However, since $\eta_{\beta}(g)=0$ if $g$ has two fixed points, we only need to pay attention to derangements and to the elements fixing exactly one point (see Section \ref{psl_ct}).
We know that if $r \in \mathbb{F}_{q^2}^* \setminus \mathbb{F}_{q}^*$ is an eigenvalue of $g_{\lambda}$ then $g_{\lambda}$ is a derangement with eigenvalues $\{r, r^q\}$ contained in the same conjugacy class as $v_r$. On the other hand, if $r \in \mathbb{F}_{q}^*$ is the only eigenvalue of $g_{\lambda}$ then $g_{\lambda}$ has exactly one fixed point and it is conjugate to $u$. In fact, when $r \in \mathbb{F}_q^*$ every element of the form $v_r$ is conjugate to $u$.
Fix $r \in \mathbb{F}_{q^2}^*$. We want to know for how many values of $\lambda \in \mathbb{F}_q^*$ there exists $\alpha$ such that $g_{\lambda}$ has eigenvalues $\{r, r^q\}$. From the characteristic polynomial of $g_{\lambda}$ the following equation is obtained
\[
t^2 - \alpha(d - \lambda) t - \alpha^2 \lambda =t^2 - (r + r^q)t +r^{q+1},
\]
which implies that $\alpha \in \mathbb{F}_q^*$ must satisfy the quadratic equation below
\begin{equation}\label{ecu9}
d\alpha^2 - (r + r^q) \alpha + r^{q+1}=0.
\end{equation}
Distinct solutions of Equation (\ref{ecu9}) generate distinct values of $\lambda$ unless $r \in i \mathbb{F}_q^*$, where $i$ is an element of $\mathbb{F}_{q^2}^* \setminus \mathbb{F}_{q}^*$ such that $i^2 \in \mathbb{F}_{q}^*$. Hence, given $r \in \mathbb{F}_{q^2}^*$ and $d \in \mathbb{F}_{q}^*\setminus \{1\}$, the number of $\lambda \in \mathbb{F}_q^*$ such that $g_{\lambda}$ is conjugate to $v_r$ is equal to:
\[
1 + \phi((r +r^q)^2 - 4dr^{q+1}) \mbox{ if } r \in \mathbb{F}_{q^2}^*\setminus i\mathbb{F}_{q}^* \quad \mbox{and} \quad
(1 + \phi((r+r^q)^2 - 4dr^{q+1}))/2 \mbox{ if }r \in i\mathbb{F}_{q}^*.
\]
Moreover, note that every element $g_{\lambda}$ having eigenvalues $\{r,r^q\}$ also has eigenvalues $\{ar, (ar)^q\}$ for any $a \in \mathbb{F}_{q}^*$. Thus, $r$ and $ar$ are related to the same values of $\lambda$ for every $a \in \mathbb{F}_q^*$. Therefore,
\begin{eqnarray*}
\sum_{0^g = \infty, 1^g=d} \eta_{\beta}(g^{-1}) & = & \frac{1}{q-1} \sum_{r \in \mathbb{F}_q^*} (1 + \phi((r +r^q)^2 - 4dr^{q+1}))(-\beta(1)) \\
& & + \frac{1}{q-1} \sum_{r \in i\mathbb{F}_q^*} \left(\frac{1 + \phi((r +r^q)^2 - 4dr^{q+1})}{2} \right) (-2\beta(i)) \\
& & + \frac{1}{2(q-1)} \sum_{r \in \mathbb{F}_{q^2}^*\setminus \{\mathbb{F}_{q}^* , i \mathbb{F}_q^*\} } (1 + \phi((r +r^q)^2 - 4dr^{q+1})) (-\beta(r) - \beta(r^q)) \\
& = & \frac{1}{2(q-1)} \sum_{r \in \mathbb{F}_{q^2}^*} \phi((r +r^q)^2 - 4dr^{q+1}) (-2\beta(r) ) \\
& = & -\frac{1}{q-1} \sum_{r \in \mathbb{F}_{q^2}^*} \phi((r +r^q)^2 - 4dr^{q+1}) \beta(r).
\end{eqnarray*}
Now, the lemma follows from Definition \ref{def1_1} and Lemma \ref{lemma21}.
\end{proof}
\begin{lemma}\label{lemma11}
For every $d \in \mathbb{F}_q^*\setminus \{1\}$ we have,
\[
\sum_{0^g = \infty, 1^g=d} \psi_{-1}(g) = q P_{\phi}(2d-1) .
\]
\end{lemma}
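Before turning to the proof, here is a numerical confirmation of the lemma for $q=5$. This is a verification sketch only: it evaluates $\psi_{-1}$ through the case description given in (\ref{ecu17}), with $PSL(2,q)$ membership detected by the determinant being a square, and uses the expression $qP_{\phi}(c) = \sum_{y \in \mathbb{F}_q^*} \phi(y)\phi(y^2-2cy+1)$ from the proof of Corollary \ref{usefulcor}.

```python
from itertools import product

q = 5
INF = q
pts = list(range(q + 1))  # PG(1,5), with infinity encoded as 5

def phi(a):
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

def act(a, b, c, d, z):
    if z == INF:
        return INF if c == 0 else (a * pow(c, q - 2, q)) % q
    num, den = (a * z + b) % q, (c * z + d) % q
    return INF if den == 0 else (num * pow(den, q - 2, q)) % q

# PGL(2,q) as Moebius permutations, tagged with PSL membership (det a square)
elements = {}
for a, b, c, d in product(range(q), repeat=4):
    det = (a * d - b * c) % q
    if det != 0:
        elements[tuple(act(a, b, c, d, z) for z in pts)] = (phi(det) == 1)

def psi_m1(perm, in_psl):
    # values of psi_{-1} per the five cases of (ecu17)
    f = sum(1 for z in pts if perm[z] == z)
    if f == 1:
        return 0
    if f == 2:
        return 1 if in_psl else -1
    if f == 0:
        return -1 if in_psl else 1
    raise ValueError("the identity never satisfies 0^g = infinity")

def q_P_phi(c):
    return sum(phi(y) * phi(y * y - 2 * c * y + 1) for y in range(1, q))

for d in range(2, q):  # d in F_q^* \ {1}
    s = sum(psi_m1(p, tag) for p, tag in elements.items()
            if p[0] == INF and p[1] == d)
    assert s == q_P_phi(2 * d - 1), d
print("ok")
```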
\begin{proof}
From the character table of $PGL(2,q)$ it follows that
\begin{equation}\label{ecu17}
\psi_{-1}(g) = \left\lbrace \begin{array}{ll}
0, & \mbox{ if } g \in u,\\
1, & \mbox{ if }g \in d_x \mbox{ and } d_x \subset PSL(2,q),\\
-1, & \mbox{ if }g \in d_x \mbox{ and } d_x \subset PGL(2,q) \setminus PSL(2,q),\\
-1, & \mbox{ if }g \in v_r \mbox{ and } v_r \subset PSL(2,q),\\
1, & \mbox{ if }g \in v_r \mbox{ and } v_r \subset PGL(2,q) \setminus PSL(2,q).\\
\end{array} \right.
\end{equation}
Thus, to evaluate the sum $\sum_g \psi_{-1}(g)$ we need to know how many elements sending $0$ to $\infty$ and $1$ to $d$ belong to each of the five categories considered in (\ref{ecu17}). In fact, these counting problems follow from the proof of Case (4) of Lemma \ref{lemma1}.
For the sake of clarity, we recall some simple facts. There are $q-1$ elements in $PGL(2,q)$ sending $0$ to $\infty$ and $1$ to $d$, and half of them are in $PSL(2,q)$. It was proved by Meagher and Spiga \cite{Karen1} that if $1-d$ is a square in $\mathbb{F}_q^*$ then $(q-1)/2$ of these elements are derangements. On the other hand, if $1-d$ is not a square then $(q+1)/2$ of these elements are derangements.
First, assume that $1-d$ is a square. We can divide the $(q-1)/2$ elements of $PSL(2,q)$ sending $0$ to $\infty$ and $1$ to $d$ into three categories:
\begin{itemize}
\item $2$ fix just one point.
\item $\displaystyle \frac{1}{4} \sum_{ x \in \mathbb{F}_q^*, x \neq 1, -1 } (1 + \phi((x + x^{-1})^2 - 4d))$ fix exactly two points.
\item $\displaystyle \frac{q-5}{4} - \frac{1}{4} \sum_{ x \in \mathbb{F}_q^*} \phi((x + x^{-1})^2 - 4d)$ are derangements.
\end{itemize}
A similar analysis can be carried out when $1-d$ is not a square. Specifically, of the $(q-1)/2$ elements of $PSL(2,q)$ sending $0$ to $\infty$ and $1$ to $d$,
\begin{itemize}
\item There are no elements fixing exactly one point.
\item $\displaystyle \frac{1}{4} \sum_{ x \in \mathbb{F}_q^*, x \neq 1, -1 } (1 + \phi((x + x^{-1})^2 - 4d))$ fix two points.
\item $\displaystyle \frac{q-1}{4} - \frac{1}{4} \sum_{ x \in \mathbb{F}_q^*} \phi((x + x^{-1})^2 - 4d)$ are derangements.
\end{itemize}
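For small primes $q$, the counts in both cases can be confirmed by brute force. The sketch below uses our own notation; it assumes the M\"obius action $x^g = (ax+b)/(cx+d)$, so that the elements in question are $z \mapsto (az+b)/z$ with $b \neq 0$ and $a+b=d$, and that such a class lies in $PSL(2,q)$ precisely when its determinant $-b$ is a square in $\mathbb{F}_q^*$.

```python
def legendre(a, q):
    """Quadratic character phi on F_q (q an odd prime), with phi(0) = 0."""
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

def psl_fix_counts(d, q):
    """(one fixed point, two fixed points, derangement) counts among the PSL(2,q)
    elements with 0^g = infty and 1^g = d, i.e. z -> (az + b)/z with a + b = d."""
    one = two = zero = 0
    for b in range(1, q):
        if legendre(-b, q) != 1:                    # det(g) = -b not a square: g outside PSL
            continue
        disc = legendre((d - b) ** 2 + 4 * b, q)    # fixed points of g solve z^2 - az - b = 0
        one += disc == 0
        two += disc == 1
        zero += disc == -1
    return one, two, zero

for q in (5, 7, 11, 13):
    for d in range(2, q):
        S = sum(legendre((x + pow(x, -1, q)) ** 2 - 4 * d, q) for x in range(1, q))
        Sp = sum(1 + legendre((x + pow(x, -1, q)) ** 2 - 4 * d, q)
                 for x in range(2, q - 1))          # x in F_q^*, x != 1, -1
        one, two, zero = psl_fix_counts(d, q)
        assert 4 * two == Sp                        # elements fixing exactly two points
        if legendre(1 - d, q) == 1:                 # 1 - d a square
            assert one == 2 and 4 * zero == q - 5 - S
        else:                                       # 1 - d not a square
            assert one == 0 and 4 * zero == q - 1 - S
```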
Putting all the above remarks together and assuming that $1-d$ is a square we obtain,
\begin{eqnarray*}
\sum_{0^g = \infty, 1^g=d} \psi_{-1}(g) & = & \frac{1}{4} \sum_{ x \in \mathbb{F}_q^*, x \neq 1, -1 } (1 + \phi((x + x^{-1})^2 - 4d))\\
& & - \left( \frac{q-1}{2} -2 - \frac{1}{4} \sum_{x \in \mathbb{F}_q^*, x \neq 1, -1 } (1 + \phi((x + x^{-1})^2 - 4d)) \right)\\
& & - \left( \frac{q-5}{4} - \frac{1}{4} \sum_{ x \in \mathbb{F}_q^*} \phi((x + x^{-1})^2 - 4d) \right)\\
& & + \left( \frac{q-1}{2} - \frac{q-5}{4} + \frac{1}{4} \sum_{ x \in \mathbb{F}_q^*} \phi((x + x^{-1})^2 - 4d) \right) \\
& = & 2 + \sum_{x \in \mathbb{F}_q^*} \phi((x + x^{-1})^2- 4d )\\
& = & 2 + \sum_{x \in \mathbb{F}_q^*} \phi(x^2 - 2(2d-1)x + 1)(1 + \phi(x))\\
& = & q P_{\phi} (2d-1).
\end{eqnarray*}
The last equality follows from Equation (\ref{ecu_extra1}).
The case where $(1-d)$ is not a square can be treated by similar computations. We omit the details.
\end{proof}
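The lemma also lends itself to a direct numerical check. The sketch below (our own notation) assumes the M\"obius action $x^g = (ax+b)/(cx+d)$, so that the $q-1$ elements with $0^g=\infty$ and $1^g=d$ are $z \mapsto (az+b)/z$ with $a+b=d$, evaluates $\psi_{-1}$ through the classification (\ref{ecu17}), and compares with the Legendre sum $qP_{\phi}(2d-1) = \sum_x \phi(x)\phi(x^2-2(2d-1)x+1)$ from Definition \ref{def1}.

```python
def legendre(a, q):
    """Quadratic character phi on F_q (q an odd prime), with phi(0) = 0."""
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

def qP_phi(u, q):
    """q * P_phi(u) = sum_x phi(x) phi(x^2 - 2ux + 1), an integer."""
    return sum(legendre(x, q) * legendre(x * x - 2 * u * x + 1, q) for x in range(1, q))

def psi_sum(d, q):
    """Sum of psi_{-1}(g) over the q - 1 elements with 0^g = infty and 1^g = d."""
    total = 0
    for b in range(1, q):
        a = (d - b) % q
        disc = legendre(a * a + 4 * b, q)   # fixed points of g solve z^2 - az - b = 0
        in_psl = legendre(-b, q) == 1       # det(g) = -b; g lies in PSL iff -b is a square
        if disc == 0:                       # exactly one fixed point: g unipotent, psi = 0
            continue
        if disc == 1:                       # two fixed points: a class d_x
            total += 1 if in_psl else -1
        else:                               # no fixed points: a derangement, class v_r
            total += -1 if in_psl else 1
    return total

for q in (5, 7, 11, 13):
    assert all(psi_sum(d, q) == qP_phi(2 * d - 1, q) for d in range(2, q))
```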
\section{The restriction of $T_N$ onto $V_{\psi_{-1}}$, $V_{\nu_{\gamma}}$ and $V_{\eta_{\beta}}$ }
In this section, we study the restriction of $T_N$ onto the irreducible constituents, $V_{\psi_{-1}}$, $\{V_{\nu_{\gamma}} \}_{\gamma \in \Gamma}$ and $\{V_{\eta_{\beta}}\}_{\beta \in B}$, of $V$. We start with a technical lemma that will be useful for studying the character sums $T_{N,\chi}$ with $\chi \in \{ \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$.
\begin{lemma}\label{lemma_extra}
Let $i$ be an element of $\mathbb{F}_{q^2}^* \setminus \mathbb{F}_q^*$ such that $i^2\in \mathbb{F}_q^*$. Then for all $\gamma \in \Gamma$,
\[
T_{N, \nu_{\gamma}} = \frac{(q-1)}{4} \left[ q^2 - 3q - \left( q + 1 \right)\gamma(-1) \phi(-1) - q^2 \sum_{ b\in \mathbb{F}_q^*, b \neq 1} P_{\gamma}(2b^h-1) P_{\phi}(2b-1) \right].
\]
Also, for all $\beta \in B$,
\[
T_{N,\eta_{\beta}} = \frac{(q-1)}{4} \left[ q^2 + q + \left( q + 1 \right)\beta(i) \phi(-1) + q^2\sum_{ b\in \mathbb{F}_q^*, b \neq 1} R_{\beta}(2b^h-1) P_{\phi}(2b-1) \right],
\]
and
\[
T_{N,\psi_{-1}} = \frac{(q-1)}{4} \left[ q^2 - 2 q -3 - q^2 \sum_{ b\in \mathbb{F}_q^*, b \neq 1} P_{\phi}(2b^h-1) P_{\phi}(2b-1) \right].
\]
\end{lemma}
\begin{proof}
We will prove that the expression for $T_{N, \nu_{\gamma}}$ holds for every $\gamma \in \Gamma$. The proofs for the character sums $T_{N,\eta_{\beta}}$ and $T_{N,\psi_{-1}}$ are similar; we omit those details.
First, assume that $q \equiv 1 \mbox{ mod }4$. It follows from Lemma \ref{lemma7} that
\begin{eqnarray*}
T_{N,\nu_{\gamma}} & = &\frac{(q-1)^3}{4} - \frac{q-1}{2} \sum_{0^g = \infty, \infty^g=0} \nu_{\gamma}(g^{-1}) + (q-1) \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \left[ \sum_{0^g=\infty, 1^g =b^h} \nu_{\gamma}(g^{-1}) \right] N_{(0,\infty),(1,b)}\\
& = & \frac{(q-1)^3}{4} - \frac{(q-1)^2}{2} \gamma(-1) + (q-1) \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \left[ \sum_{0^g=\infty, 1^g =b^h} \nu_{\gamma}(g^{-1}) \right] N_{(0,\infty),(1,b)},
\end{eqnarray*}
where for the last equality we have applied Lemma \ref{lemma8}. Also, recall that $h \in PGL(2,q)$ is the unique element sending $0$ to $0$, $1$ to $\infty$ and $\infty$ to $1$.
Let us define
\[
S := \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \left[ \sum_{0^g=\infty, 1^g =b^h} \nu_{\gamma}(g^{-1}) \right] N_{(0,\infty),(1,b)}.
\]
Applying Corollary \ref{usefulcor} and Lemma \ref{lemma9} we obtain
\begin{eqnarray*}
S & = & \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} qP_{\gamma}(2b^h-1) \left( \frac{q-1}{4} - \frac{\phi(1-b)}{2} - \frac{1}{4} P_{\phi}(2b-1) \right)\\
& = & \frac{q(q-1)}{4} \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} P_{\gamma}(2b^h-1) - \frac{q}{2} \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \phi(1-b) P_{\gamma}(2b^h-1) - \frac{q^2}{4} \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} P_{\gamma}(2b^h-1)P_{\phi}(2b-1).
\end{eqnarray*}
We now simplify the first two character sums in the above expression for $S$.
The following computation uses the connection between Legendre sums and hypergeometric sums given by Lemma \ref{lemma13}. We have
\begin{eqnarray*}
\sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} P_{\gamma}(2b^h-1) & = & \sum_{\substack{a \in \mathbb{F}_q \\ a \neq \pm 1}} P_{\gamma}(a) \\
& = & \sum_{\substack{a \in \mathbb{F}_q \\ a \neq \pm 1}} \hgq{\gamma}{\gamma^{-1}}{\epsilon}{\frac{1-a}{2};q}.
\end{eqnarray*}
Now, using Greene's definition of hypergeometric sums given in Equation (\ref{ecu12}) we get
\begin{eqnarray*}
\sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} P_{\gamma}(2b^h-1) & = & \frac{\gamma^{-1}(-1)}{q} \sum_{\substack{a \in \mathbb{F}_q \\ a \neq \pm 1}} \sum_{x \in \mathbb{F}_q} \gamma^{-1}(x) \gamma(1-x) \gamma^{-1}\left(1-\frac{1}{2}(1-a)x \right) \\
& = & \frac{\gamma^{-1}(-1)}{q} \sum_{x \in \mathbb{F}_q^*} \gamma^{-1}(x) \gamma(1-x) \sum_{\substack{a \in \mathbb{F}_q \\ a \neq \pm 1}} \gamma^{-1}\left(1-\frac{1}{2}(1-a)x \right) \\
& = & \frac{\gamma^{-1}(-1)}{q} \sum_{x \in \mathbb{F}_q^*} \gamma^{-1}(x) \gamma(1-x) (-1 - \gamma^{-1}(1-x) )\\
& = & \frac{1}{q} (1 + \gamma(-1)).
\end{eqnarray*}
On the other hand, to compute the second sum we use the definition of Legendre sums given in Definition \ref{def1} and note that $\phi(-1)=1$ when $q\equiv 1 \mod 4$,
\begin{eqnarray*}
\sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \phi(1-b) P_{\gamma}(2b^h-1) & = & \frac{1}{q} \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \phi(1-b) \sum_{x \in \mathbb{F}_q^*} \gamma(x) \phi(1+(2-4b^h)x + x^2 ) \\
& = & \frac{1}{q} \sum_{x \in \mathbb{F}_q^*} \gamma(x) \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \phi((x+1)^2 - 4b^hx)\phi(b-1) \\
& = & \frac{1}{q} \sum_{x \in \mathbb{F}_q^*} \gamma(x) \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \phi( (x-1)^2 b - (x+1)^2) \\&=& \frac{1}{q} \sum_{x \in \mathbb{F}_q^*, x\neq 1} \gamma(x) \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \phi( (x-1)^2 b - (x+1)^2) +\frac 1q \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} \phi(-4) \\
&=& \frac{1}{q} \sum_{x \in \mathbb{F}_q^*, x\neq 1} \gamma(x) (-\phi(-4x)-\phi(-(x+1)^2)) +\frac {q-2}q \\
& = & 1 + \frac{1}{q} \gamma(-1).
\end{eqnarray*}
Putting all the above results together we have
\[
S= - \frac{(q+1)}{4} + \frac{(q-3)}{4}\gamma(-1) - \frac{q^2}{4} \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} P_{\gamma}(2b^h-1)P_{\phi}(2b-1),
\]
and plugging in $S$ into the expression for $T_{N,\nu_{\gamma}} $ we obtain
\[
T_{N,\nu_{\gamma}} = \frac{q-1}{4} \left[ q^2 - 3q - (q+1)\gamma(-1) - q^2 \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} P_{\gamma}(2b^h-1)P_{\phi}(2b-1)\right].
\]
The computations for the case $q \equiv 3 \mbox{ mod }4$ are very similar. In fact, the following expression is obtained for $T_{N,\nu_{\gamma}} $ assuming that $q \equiv 3 \mbox{ mod }4$,
\[
T_{N,\nu_{\gamma}} = \frac{q-1}{4} \left[ q^2 - 3q + (q+1)\gamma(-1) - q^2 \sum_{\substack{b \in \mathbb{F}_q^* \\ b \neq 1}} P_{\gamma}(2b^h-1)P_{\phi}(2b-1)\right].
\]
Finally, note that $\phi(-1)=1$ when $q \equiv 1 \mbox{ mod }4$ and $\phi(-1)=-1$ when $q \equiv 3 \mbox{ mod }4$. This fact completes the proof of the Lemma.
\end{proof}
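The two auxiliary evaluations obtained in the proof, $\sum_{b \neq 1} P_{\gamma}(2b^h-1) = (1+\gamma(-1))/q$ and, for $q \equiv 1 \bmod 4$ and $\gamma \notin \{\epsilon, \phi\}$, $\sum_{b \neq 1} \phi(1-b)P_{\gamma}(2b^h-1) = 1+\gamma(-1)/q$, can be spot-checked numerically. The sketch below (our own notation) assumes the Legendre-sum formula $P_{\gamma}(u) = \frac{1}{q}\sum_x \gamma(x)\phi(x^2-2ux+1)$ used in the proof and builds all multiplicative characters of $\mathbb{F}_q^*$ from a primitive root.

```python
import cmath

def legendre(a, q):
    """Quadratic character phi on F_q (q an odd prime), with phi(0) = 0."""
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

def make_chars(q):
    """All multiplicative characters of F_q^* (q prime) as dicts x -> complex value."""
    g = next(h for h in range(2, q)
             if len({pow(h, k, q) for k in range(q - 1)}) == q - 1)  # a primitive root
    log = {pow(g, k, q): k for k in range(q - 1)}
    return [{x: cmath.exp(2j * cmath.pi * m * log[x] / (q - 1)) for x in range(1, q)}
            for m in range(q - 1)]

def P(gamma, u, q):
    """Legendre sum P_gamma(u) = (1/q) sum_x gamma(x) phi(x^2 - 2ux + 1)."""
    return sum(gamma[x] * legendre(x * x - 2 * u * x + 1, q) for x in range(1, q)) / q

q = 13                                      # q = 1 mod 4, as in the computation above
chars = make_chars(q)
bh = lambda b: b * pow(b - 1, -1, q) % q    # b^h = b/(b - 1)
for m, gamma in enumerate(chars):
    if m == 0:
        continue                            # gamma must be nontrivial
    s1 = sum(P(gamma, 2 * bh(b) - 1, q) for b in range(2, q))
    assert abs(s1 - (1 + gamma[q - 1]) / q) < 1e-9
    if m != (q - 1) // 2:                   # the second evaluation needs gamma != phi
        s2 = sum(legendre(1 - b, q) * P(gamma, 2 * bh(b) - 1, q) for b in range(2, q))
        assert abs(s2 - (1 + gamma[q - 1] / q)) < 1e-9
```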
From Schur's Lemma we know that the restriction of $T_N$ onto any irreducible module is an isomorphism or the zero map. The next theorem shows that the restriction of $T_N$ onto $V_{\eta_{\beta}}$ is a $PGL(2,q)$-module isomorphism for every $\beta \in B$.
For the proofs below, we will need the following function in $\ell^2(\mathbb{F}_q,m) $,
\[
\begin{array}{cccc}
f : & \mathbb{F}_q & \rightarrow & \mathbb{C} \\
& x & \mapsto & \phi(1-x)P_{\phi}(x)
\end{array}
\]
Note that the norm of $f$ is closely related to the norm of $P_{\phi}$,
\[
\Vert f \Vert^2 = \sum_{x \in \mathbb{F}_q} f(x)^2m(x) = \sum_{\substack{ x \in \mathbb{F}_q \\ x \neq 1 } } P_{\phi}(x)^2m(x) = \Vert P_{\phi}\Vert^2 - \frac{q+1}{q^2} = 1-\frac{1}{q}- \frac{2}{q^2},
\]
where we have used Lemma \ref{lemma20} in the last equality.
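The identity $\Vert f \Vert^2 = 1 - 1/q - 2/q^2$ can be confirmed in exact arithmetic for small primes. The sketch below (our own notation) assumes the weight of $\ell^2(\mathbb{F}_q,m)$, which is defined earlier in the paper, satisfies $m(\pm 1) = q+1$ and $m(x) = 1$ otherwise; these values are consistent with the computations in this section and reproduce $\Vert P_{\phi} \Vert^2 = 1 - 1/q^2$ from Lemma \ref{lemma20}.

```python
from fractions import Fraction

def legendre(a, q):
    """Quadratic character phi on F_q (q an odd prime), with phi(0) = 0."""
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

def P_phi(u, q):
    """Legendre sum P_phi(u) = (1/q) sum_x phi(x) phi(x^2 - 2ux + 1), exact."""
    return Fraction(sum(legendre(x, q) * legendre(x * x - 2 * u * x + 1, q)
                        for x in range(1, q)), q)

def m(x, q):
    """Assumed weight of l^2(F_q, m): m(+-1) = q + 1 and m = 1 elsewhere."""
    return q + 1 if x % q in (1, q - 1) else 1

def norm2(h, q):
    """Squared norm sum_x h(x)^2 m(x) of a rational-valued function h on F_q."""
    return sum(h(x) ** 2 * m(x, q) for x in range(q))

for q in (5, 13):
    f = lambda x, q=q: legendre(1 - x, q) * P_phi(x, q)   # f(x) = phi(1-x) P_phi(x)
    assert norm2(lambda x: P_phi(x, q), q) == 1 - Fraction(1, q ** 2)
    assert norm2(f, q) == 1 - Fraction(1, q) - Fraction(2, q ** 2)
```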
\begin{theorem}\label{teo2}
For every $\beta \in B$ we have
\[
T_N(V_{\eta_{\beta}}) \cong V_{\eta_{\beta}}.
\]
\end{theorem}
\begin{proof}
It suffices to show that $T_{N,\eta_{\beta}} \neq 0$ for all $\beta \in B$. From Lemma \ref{lemma_extra} it follows that
\begin{equation}\label{psl_exp1}
T_{N,\eta_{\beta}} = \frac{(q-1)}{4} \left[ q^2 + q + \left( q + 1 \right)\beta(i) \phi(-1) + q^2\sum_{ b\in \mathbb{F}_q^*, b \neq 1} R_{\beta}(2b^h-1) P_{\phi}(2b-1) \right],
\end{equation}
where $i\in \mathbb{F}_{q^2}^*\setminus \mathbb{F}_q^*$ is such that $i^2\in \mathbb{F}_q^*$. We will show that the expression on the right-hand side of Equation (\ref{psl_exp1}) is not equal to zero.
We claim that the character sum
\begin{equation}\label{ecu11}
\sum_{ b\in \mathbb{F}_q^*, b \neq 1} R_{\beta}(2b^h-1) P_{\phi}(2b-1)
\end{equation}
can be expressed in terms of the function $f$. Recall that $h$ is the unique element in $PGL(2,q)$ sending $0$ to $0$, $1$ to $\infty$ and $\infty$ to $1$. Hence, if $ b\in \mathbb{F}_q^* $ and $b \neq 1$ then $b^h \neq 0, 1, \infty$. Moreover, we have the following formula for $b^h$ when $b\in \mathbb{F}_q^*$ and $b \neq 1$,
\[
b^h= \frac{b}{b-1}
\]
which implies that $(b^h)^h=b$ for any $b \in \mathbb{F}_q$. Thus, we can rewrite the sum in (\ref{ecu11}) as,
\[
\sum_{ b\in \mathbb{F}_q^*, b \neq 1} R_{\beta}(2b^h-1) P_{\phi}(2b-1) = \sum_{ b\in \mathbb{F}_q^*, b \neq 1} P_{\phi}(2b^h-1) R_{\beta}(2b-1).
\]
Using the relation between Legendre sums and hypergeometric sums given by Lemma \ref{lemma13} and the transformation formula in Lemma \ref{lemma12}, the following expression for $P_{\phi}(2b^h-1)$ is obtained
\[
P_{\phi}(2b^h -1) =\hgq{\phi}{\phi}{\epsilon}{\frac{1}{1-b};q} = \phi(1-b) \hgq{\phi}{\phi}{\epsilon}{1-b;q} = \phi(1-b)P_{\phi}(2b-1),
\]
for $b \in \mathbb{F}_q$, $b \neq 0, 1$. Putting all the above remarks together we conclude that
\begin{eqnarray*}
\sum_{ b\in \mathbb{F}_q^*, b \neq 1} R_{\beta}(2b^h-1) P_{\phi}(2b-1) & = & \sum_{ b\in \mathbb{F}_q^*, b \neq 1} \phi(1-b) P_{\phi}(2b-1) R_{\beta}(2b-1) \\
& = & \phi(2) \sum_{ x\in \mathbb{F}_q, x \neq \pm 1} \phi(1-x) P_{\phi}(x) R_{\beta}(x)\\
& = & \phi(2) \left(1 +\frac{1}{q} \right)^{1/2} \langle f , R_{\beta}' \rangle - (q+1) \frac{\beta(i)\phi(-1) }{q^2}
\end{eqnarray*}
where $i$ is an element of $\mathbb{F}_{q^2}^*\setminus \mathbb{F}_q^*$ such that $i^2 \in \mathbb{F}_q^*$.
Therefore, plugging in the above expression into Equation (\ref{psl_exp1}), we can also express $T_{N,\eta_{\beta}}$ in terms of the function $f$,
\begin{equation}\label{ecu12_2}
T_{N,\eta_{\beta}}= \frac{q^2(q-1)}{4} \left[ 1+ \frac{1}{q} + \phi(2) \left(1 +\frac{1}{q} \right)^{1/2} \langle f , R_{\beta}' \rangle \right].
\end{equation}
Note that Equation (\ref{ecu12_2}) implies that if $|\langle f , R_{\beta}' \rangle| \leq 1$ then $T_{N,\eta_{\beta}} \neq 0$. We claim that $|\langle f , R_{\beta}' \rangle| \leq 1$ for every $\beta \in B$; note that the theorem follows from the validity of this claim.
Recall that $\{ P_{\epsilon}', P_{\phi}', P_{\gamma}', R_{\beta}' : \mbox{ } \gamma \in \Gamma, \beta \in B \}$ is an orthonormal basis of $\ell^2(\mathbb{F}_q,m)$. Thus, we can express $f$ in terms of this orthonormal basis,
\[
f= \langle f, P_{\epsilon}'\rangle P_{\epsilon}' + \langle f, P_{\phi}'\rangle P_{\phi}' +\sum_{\gamma} \langle f, P_{\gamma}' \rangle P_{\gamma}' + \sum_{\beta} \langle f, R_{\beta}'\rangle R_{\beta}'.
\]
Analogously, the squared norm of $f$ can also be expressed in terms of this orthonormal basis,
\[
\Vert f \Vert^2 = \langle f, P_{\epsilon}'\rangle^2 + \langle f, P_{\phi}'\rangle^2 + \sum_{\gamma} \langle f, P_{\gamma}' \rangle^2 + \sum_{\beta} \langle f, R_{\beta}'\rangle^2,
\]
where we have used the fact that the coefficients in the expansion of $f$ are all real (cf. Lemma~\ref{lemma21}).
On the other hand, we know that the squared norm of $f$ is $1-1/q-2/q^2$. This implies that the square of every coefficient of the form $\langle f, g\rangle$ is less than 1 for all $g \in \{ P_{\epsilon}', P_{\phi}', P_{\gamma}', R_{\beta}' : \mbox{ } \gamma \in \Gamma, \beta \in B \}$. In particular, $\langle f, R_{\beta}'\rangle^2 \leq 1-1/q-2/q^2 $ for all $\beta \in B$. Thus, our claim is proved.
\end{proof}
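The transformation $P_{\phi}(2b^h-1) = \phi(1-b)P_{\phi}(2b-1)$, which drives the proof above, is easy to spot-check in exact integer arithmetic, since $qP_{\phi}$ is integer valued. The sketch below (our own notation) assumes the Legendre-sum formula for $P_{\phi}$ from Definition \ref{def1} and the formula $b^h = b/(b-1)$.

```python
def legendre(a, q):
    """Quadratic character phi on F_q (q an odd prime), with phi(0) = 0."""
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

def qP_phi(u, q):
    """q * P_phi(u) = sum_x phi(x) phi(x^2 - 2ux + 1), an integer."""
    return sum(legendre(x, q) * legendre(x * x - 2 * u * x + 1, q) for x in range(1, q))

for q in (5, 7, 11, 13):
    for b in range(2, q):                   # b in F_q^*, b != 1
        bh = b * pow(b - 1, -1, q) % q      # b^h = b/(b - 1)
        assert qP_phi(2 * bh - 1, q) == legendre(1 - b, q) * qP_phi(2 * b - 1, q)
```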
Unfortunately, the argument used in the proof of Theorem \ref{teo2} cannot be applied to show that the restriction of $T_N$ onto the irreducible module $V_{\psi_{-1}}$ is a $PGL(2,q)$-module isomorphism. To deal with this case we exploit the connection between Legendre sums and Hypergeometric sums shown by Kable in \cite{Kable}.
\begin{lemma}\label{lem:<f,P_gamma>}Let $\gamma$ be a nontrivial multiplicative character of $\mathbb{F}_q$. Then $$\phi(2) q^2 \langle f , P_{\gamma} \rangle=
q^3 \pFFq{4}{3}{\gamma & \gamma^{-1} & \phi & \phi}{ & \epsilon & \epsilon & \epsilon}{1 ; q} + \phi(-1)\gamma(-1)q.$$\end{lemma}
\begin{proof}Applying Lemmas \ref{lemma15} and \ref{lemma13} we obtain,
\begin{eqnarray*}
\phi(2) q^2 \langle f , P_{\gamma} \rangle & = & \phi(2) q^2 \sum_{\substack{ x \in \mathbb{F}_q \\ x \neq \pm 1}} \phi(1-x) P_{\phi}(x)P_{\gamma}(x) + q^2 P_{\phi}(-1)P_{\gamma}(-1) m(-1) \\
& = & q^2 \sum_{\substack{ y \in \mathbb{F}_q^* \\ y \neq 1}} \phi(y) \hgq{\phi}{\phi}{\epsilon}{y;q} \hgq{\gamma}{\gamma^{-1}}{\epsilon}{y;q} + \phi(-1)\gamma(-1)(q+1) \\
& = & q^2 \sum_{y \in \mathbb{F}_q} \phi(y) \hgq{\phi}{\phi}{\epsilon}{y;q}\hgq{\gamma}{\gamma^{-1}}{\epsilon}{y;q} + \phi(-1)\gamma(-1)q \\
& = & q^3 \pFFq{4}{3}{\gamma & \gamma^{-1} & \phi & \phi}{ & \epsilon & \epsilon & \epsilon}{1 ; q} + \phi(-1)\gamma(-1)q.
\end{eqnarray*}
\end{proof}
\begin{theorem}\label{teo6}
If $q \geq 7$ then,
\[
T_N(V_{\psi_{-1}}) \cong V_{\psi_{-1}}.
\]
\end{theorem}
\begin{proof}
It suffices to show that $T_{N,\psi_{-1}} \neq 0$. It follows from Lemma \ref{lemma_extra} that
\[
T_{N,\psi_{-1}} = \frac{(q-1)}{4} \left[ q^2 - 2 q -3 - q^2 \sum_{ b\in \mathbb{F}_q^*, b \neq 1} P_{\phi}(2b^h-1) P_{\phi}(2b-1) \right].
\]
Let $f$ be the function in $\ell^2(\mathbb{F}_q,m)$ defined before the statement of Theorem \ref{teo2}. By Lemmas \ref{lemma12} and \ref{lemma13} we see that the sum
\[
\sum_{ b\in \mathbb{F}_q^*, b \neq 1} P_{\phi}(2b^h-1) P_{\phi}(2b-1)
\]
can be written in terms of the function $f$. In particular,
\begin{eqnarray*}
\sum_{ b\in \mathbb{F}_q^*, b \neq 1} P_{\phi}(2b^h-1) P_{\phi}(2b-1) & = & \sum_{ b\in \mathbb{F}_q^*, b \neq 1} \phi(1-b) P_{\phi}(2b-1) P_{\phi}(2b-1) \\
& = & \phi(2) \sum_{ x\in \mathbb{F}_q, x \neq \pm 1} \phi(1-x) P_{\phi}(x) P_{\phi}(x)\\
& = & \phi(2) \langle f , P_{\phi} \rangle - \frac{q+1}{q^2}.
\end{eqnarray*}
Thus, $T_{N,\psi_{-1}} $ can be expressed in terms of $f$:
\begin{equation}\label{ecu20}
T_{N,\psi_{-1}} = \frac{(q-1)}{4} \left[ q^2- q - 2 - \phi(2) q^2 \langle f , P_{\phi} \rangle \right].
\end{equation}
We claim that $ \phi(2) q^2 \langle f , P_{\phi} \rangle \leq 2q^{3/2}$. This claim together with Equation (\ref{ecu20}) immediately implies that $T_{N,\psi_{-1}} \neq 0$ for every $q \geq 7$.
To prove our claim we note that the character sum $\phi(2) q^2 \langle f , P_{\phi} \rangle$ can be written in terms of a hypergeometric sum $_4\mathbb{F}_3$. Letting $\gamma=\phi$ in Lemma \ref{lem:<f,P_gamma>}, $$\phi(2) q^2 \langle f , P_{\phi} \rangle= q^3 \pFFq{4}{3}{\phi & \phi & \phi & \phi}{ & \epsilon & \epsilon & \epsilon}{1 ; q} + q.$$ Therefore, our claim follows directly from the final conclusion of Proposition \ref{prop:15}.
\end{proof}
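The key bound $\phi(2) q^2 \langle f , P_{\phi} \rangle \leq 2q^{3/2}$ can be checked directly for small primes. As before, the sketch below (our own notation) assumes the weight $m(\pm 1) = q+1$, $m = 1$ elsewhere, for $\ell^2(\mathbb{F}_q,m)$, together with the Legendre-sum formula for $P_{\phi}$.

```python
from fractions import Fraction

def legendre(a, q):
    """Quadratic character phi on F_q (q an odd prime), with phi(0) = 0."""
    a %= q
    return 0 if a == 0 else (1 if pow(a, (q - 1) // 2, q) == 1 else -1)

def P_phi(u, q):
    """Legendre sum P_phi(u) = (1/q) sum_x phi(x) phi(x^2 - 2ux + 1), exact."""
    return Fraction(sum(legendre(x, q) * legendre(x * x - 2 * u * x + 1, q)
                        for x in range(1, q)), q)

def claim_value(q):
    """phi(2) q^2 < f, P_phi > with f(x) = phi(1-x) P_phi(x) and assumed weight m."""
    m = lambda x: q + 1 if x in (1, q - 1) else 1
    ip = sum(legendre(1 - x, q) * P_phi(x, q) ** 2 * m(x) for x in range(q))
    return legendre(2, q) * q * q * ip

for q in (5, 7, 11, 13, 17):
    assert claim_value(q) <= 2 * q ** 1.5
```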
To study the restriction of $T_N$ onto $V_{\nu_{\gamma}}$ we consider two cases. First, if $\gamma$ is a character whose order is not equal to three, four or six then we can apply arguments similar to the ones used in the proof of Theorem \ref{teo2} to prove that the restriction is an isomorphism. On the other hand, different ideas have to be used to show that the same result holds when $\gamma$ has order three, four or six. The next theorem deals with these cases.
\begin{theorem}\label{teo3}
Assume that $q \geq 11$. If $\gamma \in \Gamma$
then
\[
T_N(V_{\nu_{\gamma}}) \cong V_{\nu_{\gamma}}.
\]
\end{theorem}
\begin{proof}
We proceed as we did in the proof of Theorem \ref{teo2}. Thus, to prove this theorem it is enough to show that $T_{N,\nu_{\gamma}} \neq 0$. It follows from Lemma \ref{lemma_extra} that
\[
T_{N, \nu_{\gamma}} = \frac{(q-1)}{4} \left[ q^2 - 3q - \left( q + 1 \right)\gamma(-1) \phi(-1) - q^2 \sum_{ b\in \mathbb{F}_q^*, b \neq 1} P_{\gamma}(2b^h-1) P_{\phi}(2b-1) \right].
\]
Applying Lemmas \ref{lemma12} and \ref{lemma13} it is possible to write the sum of products of Legendre sums in terms of the function $f$. In fact,
\begin{eqnarray*}
\sum_{ b\in \mathbb{F}_q^*, b \neq 1} P_{\gamma}(2b^h-1) P_{\phi}(2b-1) & = & \phi(2) \left( 1 -\frac{1}{q} \right)^{1/2} \langle f , P_{\gamma}' \rangle - (q+1) \frac{\gamma(-1)\phi(-1) }{q^2}.
\end{eqnarray*}
Therefore, for every $\gamma \in \Gamma$ we have
\begin{equation}\label{ecu18_2}
T_{N, \nu_{\gamma}} = \frac{q^2(q-1)}{4} \left[ 1- \frac{3}{q} - \phi(2) \left( 1 -\frac{1}{q} \right)^{1/2} \langle f , P_{\gamma}' \rangle \right].
\end{equation}
Recall that
\begin{equation}\label{ecu18}
\Vert f \Vert^2=\langle f, P_{\epsilon}'\rangle^2 + \langle f, P_{\phi}'\rangle^2 + \sum_{\gamma} \langle f, P_{\gamma}' \rangle^2 + \sum_{\beta} \langle f, R_{\beta}'\rangle^2 = 1-\frac{1}{q} - \frac{2}{q^2},
\end{equation}
where $\{ P_{\epsilon}', P_{\phi}', P_{\gamma}', R_{\beta}' : \mbox{ } \gamma \in \Gamma, \beta \in B \}$ is an orthonormal basis of $\ell^2(\mathbb{F}_q,m)$. Equation (\ref{ecu18}) implies that at most one of the coefficients $ \langle f, g \rangle$ with $g \in \{ P_{\epsilon}', P_{\phi}', P_{\gamma}', R_{\beta}' : \mbox{ } \gamma \in \Gamma, \beta \in B \}$
can be close to $1$ in absolute value. On the other hand, it is clear from (\ref{ecu18_2}) that $ T_{N, \nu_{\gamma}} = 0$ if and only if the coefficient $\langle f, P_{\gamma}' \rangle$ is close to $\pm 1$.
To prove the theorem we proceed by contradiction. Assume that there exists $\gamma \in \Gamma$ such that $ T_{N, \nu_{\gamma}} = 0$. Hence, it follows from equation (\ref{ecu18_2}) that
\begin{equation}\label{ecu18_3}
\langle f , P_{\gamma}' \rangle^2 =1 -\frac{5}{q} + \frac{4}{q(q-1)}.
\end{equation}
Let $\mbox{Gal}(\mathbb{Q}(\zeta_{q-1})/\mathbb{Q} )$ be the Galois group, where $\zeta_{q-1}$ is a primitive $(q-1)$-th root of unity. If $\gamma$ is a nontrivial character whose order is not equal to three, four or six, then there exists $\sigma \in \mbox{Gal}(\mathbb{Q}(\zeta_{q-1})/\mathbb{Q} )$ such that $\gamma^{\sigma} \neq \gamma$ and $\gamma^{\sigma} \neq \gamma^{-1}$. Now, applying the Galois automorphism $\sigma$ to both sides of (\ref{ecu18_3}) we conclude that
\begin{eqnarray*}
\sigma \left( \langle f , P_{\gamma}' \rangle^2 \right) & = & \sigma\left(1 -\frac{5}{q} + \frac{4}{q(q-1)}\right)\\
\langle f , P_{\gamma^{\sigma}}' \rangle^2 & = & 1 -\frac{5}{q} + \frac{4}{q(q-1)}.
\end{eqnarray*}
Thus, $ \langle f , P_{\gamma}' \rangle^2 $ and $\langle f , P_{\gamma^{\sigma}}' \rangle^2$ are equal to $1 -\frac{5}{q} + \frac{4}{q(q-1)}$ which is a contradiction because at most one of the coefficients $ \langle f, g \rangle$ with $g \in \{ P_{\epsilon}', P_{\phi}', P_{\gamma}', R_{\beta}' : \mbox{ } \gamma \in \Gamma, \beta \in B \}$
can be close to $1$ in absolute value. Assume now that $\gamma \in \Gamma$ is a character of order $3$, $4$ or $6$. From equation (\ref{ecu18_2}) we get the following expression for $T_{N,\nu_{\gamma}}$,
\[
T_{N, \nu_{\gamma}} = \frac{(q-1)}{4} \left[ q^2 - 3q - \phi(2) q^2 \langle f , P_{\gamma} \rangle \right].
\]
By Lemma \ref{lem:<f,P_gamma>}, $$\phi(2) q^2 \langle f , P_{\gamma} \rangle =q^3 \pFFq{4}{3}{\gamma & \gamma^{-1} & \phi & \phi}{ & \epsilon & \epsilon & \epsilon}{1 ; q} + \phi(-1)\gamma(-1)q.$$
Now applying Proposition \ref{prop:15}, we conclude that $T_{N,\nu_{\gamma}}\neq 0$.
\end{proof}
Finally, we are ready to prove Theorem \ref{psl_teo2}.
\begin{proof}[Proof of Theorem \ref{psl_teo2}]
Recall that in Section \ref{psl_im_TN} we proved the following lower and upper bounds on the rank of the derangement matrix $M$ of $PSL(2,q)$ acting on $PG(1,q)$,
\begin{equation}
\sum_{\{ \chi : \mbox{ }T_{N,\chi} \neq 0 \}} \dim(V_{\chi}) \leq \mbox{rank}(M) \leq q(q-1).
\end{equation}
These bounds imply that if $T_{N,\chi}$ is not zero for every $\chi \in \{ \lambda_1, \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$ then the rank of $M$ is $q(q-1)$.
If $q \geq 11$ then it follows from Theorems \ref{teo2}, \ref{teo6} and \ref{teo3} that $T_{N,\chi}\neq 0$ for all $\chi \in \{ \lambda_1, \psi_{-1}, \{\eta_{\beta} \}_{\beta \in B}, \{ \nu_{\gamma}\}_{\gamma \in \Gamma} \}$. Furthermore, for each odd prime power $q$, $3<q<11$, we use a computer to check that the rank of $M$ is exactly $q(q-1)$.
\end{proof}
\section{Conclusions}
In this paper we consider the natural right action of $PSL(2,q)$ on $PG(1,q)$, where $q$ is an odd prime power. Using the eigenvalue method, it was proved in \cite{Karen1, Karen3} that the maximum size of an intersecting family in $PSL(2,q)$ is $q(q-1)/2$. Meagher and Spiga \cite{Karen1} conjectured that the cosets of point stabilizers are the only intersecting families of maximum size in $PSL(2,q)$, when $q>3$ is an odd prime power. Here, we prove their conjecture in the affirmative using tools from representation theory of $PGL(2,q)$ and deep results from number theory.
For future research, one could consider the stability problem concerning intersecting families of $PSL(2,q)$. To present this problem we introduce the notion of stability.
Let $X$ be a finite set and $G$ a finite group acting on $X$. Recall that a subset $S$ of $G$ is said to be an intersecting family if for any $g_1,g_2 \in S$ there exists an element $x\in X$ such that $x^{g_1}= x^{g_2}$. We will refer to intersecting families of maximum size as {\it extremal families}. Moreover, intersecting families whose sizes are close to the maximum are called {\it almost extremal families}. We say that the extremal families of a group $G$ acting on $X$ are {\it stable} if almost extremal families are similar in structure to the extremal ones.
The stability of intersecting families has been studied during the past few years (cf. \cite{Ellis5, eff, Plaza}). Consider the action of $S_n$ on $[n]$. As was remarked in the introduction, the size of extremal families in $S_n$ is $(n-1)!$ and every extremal family is a coset of a point stabilizer. Furthermore, the stability of extremal families in $S_n$ was established by Ellis \cite{Ellis5}, who proved that for any $\epsilon >0$ and $n > N(\epsilon)$, any intersecting family of size at least $(1-1/e +\epsilon) (n-1)!$ must be strictly contained in an extremal family. Analogously, the same problems were solved for the group $PGL(2,q)$ acting on $PG(1,q)$. In fact, the size of extremal families in $PGL(2,q)$ is $q(q-1)$ and every extremal family is a coset of a point stabilizer. Recently, in \cite{Plaza} it was proved that the extremal families in $PGL(2,q)$ are stable.
We conjecture that the extremal families in $PSL(2,q)$ are also stable. The precise statement is given below.
\begin{conjecture}
Let $q>3$ be an odd prime power. Then there exists $\delta > 0$ such that any intersecting family $S$ in $PSL(2,q)$ with $|S| \geq (1-\delta) q(q-1)/2$ is contained in a coset of a point stabilizer.
\end{conjecture}
\section*{Acknowledgment}
The authors would like to thank the reviewers for their helpful comments.
A classical result for $2d$-Ricci flow states that any metric on a $2$-surface converges under renormalized Ricci flow to a metric of constant scalar curvature. This was first proven by Hamilton \cite{hamilton1}. In the case of surfaces of genus $0$ this was initially proven for metrics of strictly positive scalar curvature. This assumption was eventually removed by Chow \cite{chow1}. Their techniques rely heavily on establishing a Harnack inequality and an entropy bound. Their approach also gives an independent proof of the uniformization theorem, cf. \cite{chenlutian}. Due to the difficulty of this approach, the conformally round case was later revisited and several new proofs were given utilizing the uniformization theorem \cite{andrewsbryan,bartzstruweye,struwe}.
In this paper, we gain a new geometric perspective on $2d$-Ricci flow on topological spheres by uniquely embedding any conformally round metric on the $2$-sphere into the past-pointing standard lightcone in the $3+1$-Minkowski spacetime. This idea of identifying any metric conformal to a given Riemannian metric as a unique cross section on a null hypersurface within a spacetime satisfying the Einstein Vacuum Equations was first proposed by Fefferman--Graham \cite{feffermangraham} to study conformal invariants. In the context of Lorentzian geometry, Hamilton's initial restriction to metrics of strictly positive scalar curvature translates to a physically reasonable assumption on the representing codimension-$2$ surface in the Minkowski spacetime. From the Gauß equation, we are moreover able to conclude the equivalence of Ricci flow and null mean curvature flow along the lightcone first studied by Roesch--Scheuer \cite{roeschscheuer}. As defined by Roesch--Scheuer, null mean curvature flow
\begin{align}
\frac{\d}{\d t}x=-\frac{1}{2}\spann{\vec{\mathcal{H}},L}\ul{L}
\end{align}
describes the evolution proportional to the projection of the mean curvature vector with respect to the null generator $\ul{L}$ of the null hypersurface, where $\vec{\mathcal{H}}$ is the codimension-$2$ mean curvature vector in the ambient spacetime and $L$ is a null vector such that $\{\ul{L},L\}$ forms an appropriate null frame of the normal space of the surface. This projection along the null hypersurface greatly reduces the difficulty of the general codimension-$2$ problem and essentially transforms it into a scalar valued parabolic equation. This was utilized by Roesch--Scheuer in the detection of marginally outer trapped surfaces. No such surfaces exist in the Minkowski spacetime as they arise as the cross section between the null hypersurface and a (black hole) horizon. Due to the equivalence to Ricci flow, we can conclude that the flow extinguishes in finite time along the lightcone and provide a full characterization of the singularity models of the flow (Corollary \ref{thm_singularities}), each corresponding to a member of the restricted Lorentz group $\operatorname{SO}^+(3,1)$. Conversely, understanding conformal Ricci flow as an extrinsic curvature flow along the lightcone gives rise to a new proof of Hamilton's classical result (Theorem \ref{thm_mainthm}). The main ingredient to this approach will be a choice of gauge for the null frame giving rise to a scalar valued second fundamental form along the lightcone. Then any conformally round metric has constant scalar curvature if and only if this scalar valued second fundamental form is pure trace (Proposition \ref{prop_codazziminkowski2}). Studying the evolution of this geometric object along the flow will allow us to adopt techniques utilized in the study of $3d$ Ricci flow and mean curvature flow (cf. \cite{andrewschowguentherlangford,chowluni}).
This observation extends to higher dimensions, as it turns out that null mean curvature flow as defined above is equivalent to the Yamabe flow in the conformal class of the round metric for arbitrary dimension. See Section \ref{sec_discussion} for a more detailed discussion and references. \newline\\
This paper is structured as follows:\newline
In \Cref{sec_prelim} we fix some notation and recall the well-known null structure equations in the Minkowski spacetime. In \Cref{sec_nullgeom} we compute all necessary geometric objects on the standard Minkowski lightcone and fix our choice of null gauge. In \Cref{sec_equialence} we establish the equivalence between Ricci flow in the conformal class of the round sphere and null mean curvature flow along the standard Minkowski lightcone, and prove Corollary \ref{thm_singularities}. In \Cref{sec_hamilton} we prove Theorem \ref{thm_mainthm}, establishing a new proof of Hamilton's classical result. We close with some comments on the Yamabe flow in the higher dimensional case in \Cref{sec_discussion}.
\subsection*{Acknowledgements.}
I would like to express my sincere gratitude towards my PhD supervisors Carla Cederbaum and Gerhard Huisken for their continuing guidance and helpful discussions.
\section{Preliminaries}\label{sec_prelim}
Throughout this paper, $\R^{3,1}$ will always refer to the $3+1$-dimensional Minkowski spacetime $(\R^{3,1},\eta)$, where $\R^4$ is equipped with the flat metric $\eta$ of signature $(-+++)$. In polar coordinates, $\R^{3,1}$ is given as $\R\times(0,\infty)\times\Sbb^2$ with
\[
\eta=-\d t^2+ \d r^2+r^2\d\Omega^2,
\]
where $\d\Omega^2$ denotes the standard round metric on $\Sbb^2$. The standard lightcone centered at the origin in the Minkowski spacetime is given as the set
\[
C(0):=\{\btr{t}=r\},
\]
and we denote the components $C(0)_+:=C(0)\cap\{t\ge 0\}$, $C(0)_{-}:=C(0)\cap\{t\le 0\}$ as the future-pointing and past-pointing standard lightcone (centered at the origin and with time-orientation induced by $\partial_t$), respectively.
In the following, $(\Sigma,\gamma)$ will always denote a $2$-surface with Riemannian metric $\gamma$, and we are in particular interested in such surfaces arising as closed, orientable, spacelike codimension-$2$ surfaces in $\R^{3,1}$ (usually restricted to the lightcone). As usual, we define the vector valued second fundamental form $\vec\two$ of $\Sigma$ in $\R^{3,1}$ as
\begin{align*}
\vec\two(V,W)=\left(\overline{\nabla}_VW\right)^\perp,
\end{align*}
where $\overline{\nabla}$ denotes the Levi-Civita connection on the Minkowski spacetime. The codimension-$2$ mean curvature vector of $\Sigma$ is then defined as $\vec{\mathcal{H}}=\tr_\gamma \vec\two$, where $\tr_\gamma$ denotes the metric trace on $\Sigma$ with respect to $\gamma$, and we denote its Lorentzian length by $\mathcal{H}^2:=\eta(\vec{\mathcal{H}},\vec{\mathcal{H}})$. Let $\{\ul{L},L\}$ be a null frame of $\Gamma(T^\perp\Sigma)$, i.e., $\eta(\ul{L},\ul{L})=\eta(L,L)=0$, with $\eta(\ul{L},L)=2$. Then $\vec{\two}$ and $\vec{\mathcal{H}}$ admit the decomposition
\begin{align}
\begin{split}
\vec{\two}&=-\frac{1}{2}\chi\ul{L}-\frac{1}{2}\ul{\chi}L,\\
\vec{\mathcal{H}}&=-\frac{1}{2}\theta\ul{L}-\frac{1}{2}\ul{\theta}L.
\end{split}
\end{align}
Here, the null second fundamental forms $\ul{\chi}$ and $\chi$ with respect to $\ul{L}$ and $L$, respectively, are defined as
\begin{align*}
\ul{\chi}(V,W)&\definedas \spann{\overline{\nabla}_V\ul{L},W}=-\spann{\overline{\nabla}_VW,\ul{L}},\\
{\chi}(V,W)&\definedas \spann{\overline{\nabla}_V{L},W}=-\spann{\overline{\nabla}_VW,L},
\end{align*}
for all tangent vector fields $V,W\in T\Sigma$, and the null expansions $\ul{\theta}$ and $\theta$ with respect to $\ul{L}$ and $L$, respectively, as
\begin{align*}
\ul{\theta}&\definedas \tr_\gamma\ul{\chi},\\
{\theta}&\definedas \tr_\gamma{\chi}.
\end{align*}
In particular,
\begin{align}
\begin{split}\label{eq_secondffnulldecomp}
\newbtr{\vec\two}^2&=\spann{\chi,\ul{\chi}},\\
\mathcal{H}^2&=\ul{\theta}\theta,
\end{split}
\end{align}
and if $\mathcal{H}^2$ is constant along $\Sigma$, i.e., $\vec{\mathcal{H}}$ has constant Lorentzian length along $\Sigma$, we call $\Sigma$ a surface of constant spacetime mean curvature (STCMC), cf. Cederbaum--Sakovich \cite{cederbaumsakovich}.
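The identities \eqref{eq_secondffnulldecomp} follow from bilinearity of $\eta$ and the frame relations alone. As a minimal numerical sanity check, using a sample (hypothetical) null frame in $\R^{3,1}$ and arbitrary sample values for the null expansions:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, signature (-+++)
inner = lambda X, Y: X @ eta @ Y

# a sample null frame: eta(Lbar,Lbar) = eta(L,L) = 0 and eta(Lbar,L) = 2
Lbar = np.array([1.0, -1.0, 0.0, 0.0])
L = np.array([-1.0, -1.0, 0.0, 0.0])
assert inner(Lbar, Lbar) == 0 and inner(L, L) == 0 and inner(Lbar, L) == 2

# H = -1/2 theta Lbar - 1/2 thetabar L  =>  eta(H, H) = thetabar * theta
theta, thetabar = 0.7, -1.3               # arbitrary sample expansions
H = -0.5*theta*Lbar - 0.5*thetabar*L
assert np.isclose(inner(H, H), thetabar*theta)
```

The same bilinearity argument gives the first identity in \eqref{eq_secondffnulldecomp} for $\vec{\two}$.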
Finally, we define the connection one-form $\zeta$ as
\[
\zeta(V)\definedas\frac{1}{2}\spann{\overline{\nabla}_V\ul{L},L}
\]
and together with $\ul{\chi}$, $\chi$, we collect the following well-known identities:
\begin{lem}\label{lem_vectorderivatives}
\begin{align*}
\overline{\nabla}_{\partial_i}\partial_j&=\nabla_{\partial_i}\partial_j-\frac{1}{2}\ul{\chi}_{ij}L-\frac{1}{2}\chi_{ij}\ul{L},\\
\overline{\nabla}_{\partial_i}\ul{L}&=\ul{\chi}_i^j\partial_j+\zeta(\partial_i)\ul{L},\\
\overline{\nabla}_{\partial_i}L&=\chi_i^j\partial_j-\zeta(\partial_i)L.
\end{align*}
\end{lem}
We denote the Riemann curvature tensor, Ricci curvature and scalar curvature on $(\Sigma,\gamma)$ by $\Rm$, $\Ric$, $\operatorname{R}$, respectively, where we use the following conventions:
\begin{align*}
\Rm(X,Y,W,Z)&=\spann{\nabla_X\nabla_YZ-\nabla_Y\nabla_XZ-\nabla_{[X,Y]}Z,W},\\
\Ric(V,W)&=\tr_\gamma \Rm(V,\cdot,W,\cdot),\\
\operatorname{R}&=\tr_\gamma\Ric.
\end{align*}
We use $\nabla$ for the Levi--Civita connection on $(\Sigma,\gamma)$ and $\nabla^k$ for the $k$-th tensor derivative. We use $\btr{\cdot}:=\btr{\cdot}_{\gamma}$ for all respective tensor norms induced by $\gamma$, where we will usually omit $\gamma$ for convenience when it is clear from the context. By slight abuse of notation, we will denote the gradient of a function $f$ on $(\Sigma,\gamma)$ by $\nabla f$. We denote the trace free part of a $(0,2)$-tensor $T$ on the $2$-surface $\Sigma$ as $\accentset{\circ}{T}:=T-\frac{1}{2}\left(\tr_\gamma T\right)\gamma$.
We briefly recall the well-known Gauß and Codazzi Equations in the case of a codimension-$2$ surface in $\R^{3,1}$, which can be directly computed from Lemma \ref{lem_vectorderivatives}. Compare \cite[Theorem 2.2]{wang}\footnote{Note the different sign conventions for $\ul{L}$.}.
\begin{prop}[Gauß Equations]\label{prop_nullgauß}
\begin{align*}
\Rm_{ijkl}&=\frac{1}{2}\chi_{jl}\ul{\chi}_{ik}+\frac{1}{2}\ul{\chi}_{jl}\chi_{ik}-\frac{1}{2}\chi_{jk}\ul{\chi}_{il}-\frac{1}{2}\ul{\chi}_{jk}\chi_{il},\\
\Ric_{ik}&=\frac{1}{2}\theta\ul{\chi}_{ik}+\frac{1}{2}\ul{\theta}\chi_{ik}-\frac{1}{2}(\chi\cdot\ul{\chi})_{ik}-\frac{1}{2}(\ul{\chi}\cdot\chi)_{ik},\\
\operatorname{R}&=\mathcal{H}^2-\newbtr{\vec{\two}}^2.
\end{align*}
\end{prop}
\begin{prop}[Codazzi Equations]\label{prop_nullcodazzi}
\begin{align*}
\nabla_i\ul{\chi}_{jk}-\nabla_j\ul{\chi}_{ik}
&=-\zeta_j\ul{\chi}_{ik}+\zeta_i\ul{\chi}_{jk},\\
\nabla_i\chi_{jk}-\nabla_j\chi_{ik}&=+\zeta_j\chi_{ik}-\zeta_i\chi_{jk}.
\end{align*}
\end{prop}
In the following, we will always consider null vector fields $\ul{L}$ such that their integral curves are geodesics, i.e.,
\begin{align}\label{eq_geodesiccondition}
\overline{\nabla}_{\ul{L}}\ul{L}=0.
\end{align}
Under this additional assumption, the propagation equations, also known as the\linebreak Raychaudhuri optical equations (cf. \cite[Lemma 3.2]{roeschscheuer}), along a (local) deformation
\[
\frac{\d}{\d t}=\varphi \ul{L}
\]
with $\varphi\in C^2(\Sigma)$, simplify in $\R^{3,1}$ to the following:
\begin{lem}[Propagation Equations]\label{lem_propagation}\,
\begin{itemize}
\item[\emph{(i)}] $\frac{\d}{\d t}\gamma_{ij}=2\varphi\ul{\chi}_{ij}$, $\frac{\d}{\d t}\gamma^{ij}=-2\varphi\ul{\chi}^{ij}$, $\frac{\d}{\d t}\d\mu=\varphi\ul{\theta}\d \mu$,
\item[\emph{(ii)}]
$\frac{\d}{\d t}\ul{\chi}_{ij}=\varphi(\ul{\chi})^2_{ij}$,
\item[\emph{(iii)}]
$\frac{\d}{\d t}L=-2\nabla\varphi-2\varphi\zeta^k\partial_k$,
\item[\emph{(iv)}]
$\frac{\d}{\d t}\chi_{ij}=-2\Hess_{ij}\varphi-2(\d\varphi_i\otimes\zeta_j+\d\varphi_j\otimes\zeta_i)-\varphi\left(2\nabla_i\zeta_j+2\zeta_i\otimes\zeta_j-(\chi\ul{\chi})_{ij}\right)$,
\item[\emph{(v)}]
$\frac{\d}{\d t}\ul{\theta}=-\varphi\newbtr{\ul{\chi}}^2$,
\item[\emph{(vi)}]
$\frac{\d}{\d t}\theta=-2\Delta\varphi-2\gamma(\nabla\varphi,\zeta)-\varphi\left(\newbtr{\vec{\two}}^2+2\dive\zeta+2\btr{\zeta}^2\right)$.
\end{itemize}
\end{lem}
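As a consistency check, identity (v) follows from (i) and (ii) by differentiating the trace $\ul{\theta}=\gamma^{ij}\ul{\chi}_{ij}$:
\begin{align*}
\frac{\d}{\d t}\ul{\theta}=\left(\frac{\d}{\d t}\gamma^{ij}\right)\ul{\chi}_{ij}+\gamma^{ij}\frac{\d}{\d t}\ul{\chi}_{ij}=-2\varphi\ul{\chi}^{ij}\ul{\chi}_{ij}+\varphi\gamma^{ij}\left(\ul{\chi}\right)^2_{ij}=-\varphi\newbtr{\ul{\chi}}^2.
\end{align*}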
\section{Null Geometry on the standard Minkowski lightcone}\label{sec_nullgeom}
For the sake of simplicity, we will keep the discussion of null geometry to the standard Minkowski lightcone. For the interested reader, we refer to \cite{marssoria,roeschscheuer} for a more complete and general introduction to null geometry. For our purpose, it is most convenient to introduce the null coordinates $v\definedas r+t$, $u\definedas r-t$ on $\R^{3,1}$. Then $\eta$ can be written as
\[
\eta=\frac{1}{2}\left(\d u\d v+\d v\d u\right)+r^2\d\Omega^2
\]
with $r=r(u,v)=\frac{u+v}{2}$. Now, all past-pointing standard lightcones in the Minkowski spacetime are given by the sets $\{v=\operatorname{const.}\}$ (and similarly all future-pointing lightcones are given by $\{u=\operatorname{const.}\}$). From now on, we will work on the null hypersurface $\mathcal{N}=\{v=0\}=C(0)_-$, i.e., the past-pointing standard lightcone centered at the origin, but all identities derived for $\mathcal{N}$ will also analogously hold on all level sets of $v$ and $u$ respectively. Note that $\mathcal{N}$ has the induced degenerate metric
\[
r^2\d\Omega^2,
\]
and is generated by the geodesic integral curves of $\ul{L}:=2\partial_u$. Note that $\ul{L}$ is past-pointing and consistent with assumption \eqref{eq_geodesiccondition}. Recall that the null generator $\ul{L}$ of a null hypersurface is both tangential and normal to $\mathcal{N}$, and by choice of $\ul{L}$ we have $\ul{L}(r)=1$. Thus, $r$ restricts to an affine parameter along $\mathcal{N}$. In particular, we can represent any spacelike cross section $\Sigma$ of $\mathcal{N}$ (which intersects any integral curve of $\ul{L}$ exactly once) as a graph over $\Sbb^2$, i.e., $\Sigma=\Sigma_\omega=\{\omega=r\}\subseteq\mathcal{N}$. In particular, $\Sigma$ has the induced metric
\[
\gamma=\omega^2\d\Omega^2,
\]
so $(\Sigma,\gamma)$ is conformally round. Conversely, for any conformally round metric $\gamma_\omega=\omega^2\d\Omega^2$ there exists a unique spacelike cross section $\Sigma_\omega$ such that $(\Sigma_\omega,\gamma_\omega)$ embeds into $\mathcal{N}$, where we will omit the subscript $\omega$ in the following when it is clear from the context. This observation is similar to an idea developed by Fefferman--Graham \cite{feffermangraham}, and their construction indeed yields the standard lightcone in the $3+1$-Minkowski spacetime in the case of the round $2$-sphere.
We now want to represent the extrinsic curvature of a spacelike cross section $(\Sigma,\gamma)$ of $\mathcal{N}$ as a codimension-$2$ surface with respect to a particular null frame. Recall that the null generator $\ul{L}$ is both tangent and normal to $\mathcal{N}$, in particular $\ul{L}$ is normal to any spacelike cross section $(\Sigma,\gamma)$. We further consider a normal null vector field $L$ along $\Sigma$ such that $\eta(\ul{L},L)=2$. This uniquely determines $L$ such that $\{\ul{L},L\}$ form a frame of the normal bundle $T^\perp\Sigma$ of $\Sigma$. Note that $L$ is future-pointing.
We now remark that the standard round spheres $\{\Sigma_s\}_{s\in(0,\infty)}$, given as the level sets $\Sigma_s=\{r=s\}$ of the affine parameter $r$ along the integral curves of the null generator $\ul{L}$, form a foliation of $\mathcal{N}$. It is easy to verify that for any leaf $\Sigma_s$, we have $L=2\partial_v$ and find
\begin{align*}
\ul{\chi}_s&=\chi_s=s\d\Omega^2,\\
\ul{\theta}_s&=\theta_s=\frac{2}{s},\\
\zeta_s&=0.
\end{align*}
From this background foliation, we can explicitly compute all extrinsic curvature quantities for any spacelike cross section $\Sigma$, cf. \cite[Proposition 1]{marssoria}\footnote{Note the different sign conventions $k=-\ul{L}$, $s_l=-\zeta$.}:
\begin{prop}\label{prop_minkowskilightcone}
For any spacelike cross section $(\Sigma,\gamma)$ in $\mathcal{N}$, we find
\begin{enumerate}
\item[\emph{(i)}] $\gamma=\omega^2\d\Omega^2$,
\item[\emph{(ii)}] $\ul{\chi}=\frac{1}{\omega}\gamma$,
\item[\emph{(iii)}] $\ul{\theta}=\frac{2}{\omega}$,
\item[\emph{(iv)}] $\chi=\frac{1}{\omega}(1+\btr{\nabla\omega}^2)\gamma-2\Hess\, \omega$,
\item[\emph{(v)}] $\theta=2\left(\frac{1}{\omega}+\frac{\btr{\nabla\omega}^2}{\omega}-\Delta \omega\right)$,
\item[\emph{(vi)}] $\zeta=-\frac{\d\omega}{\omega}$,
\end{enumerate}
where $\Hess$ and $\Delta$ denote the Hessian and Laplace--Beltrami operator on $(\Sigma,\gamma)$, respectively.
\end{prop}
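As an independent symbolic sanity check of (iii) and (v), one can verify for a hypothetical sample conformal factor $\omega$ that the product $\frac{1}{2}\ul{\theta}\theta$ coincides with the scalar curvature of $\gamma=\omega^2\d\Omega^2$; here (v) is rewritten intrinsically on $\Sbb^2$ using the conformal relations $\Delta_\gamma=\omega^{-2}\Delta_{\Sbb^2}$ and $\btr{\nabla\omega}^2_\gamma=\omega^{-2}\btr{\nabla\omega}^2_{\Sbb^2}$:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
w = 2 + sp.sin(th)*sp.cos(ph)   # hypothetical sample conformal factor, w > 0

def lap_s2(f):
    # Laplace-Beltrami operator of the round metric dtheta^2 + sin^2(theta) dphi^2
    return (sp.diff(sp.sin(th)*sp.diff(f, th), th)/sp.sin(th)
            + sp.diff(f, ph, 2)/sp.sin(th)**2)

def grad2_s2(f):
    # squared gradient |df|^2 with respect to the round metric
    return sp.diff(f, th)**2 + sp.diff(f, ph)**2/sp.sin(th)**2

theta = 2*(1/w + grad2_s2(w)/w**3 - lap_s2(w)/w**2)   # (v), rewritten on S^2
H2 = (2/w)*theta                                      # H^2 = thetabar*theta, thetabar = 2/w
R = (2 - 2*lap_s2(sp.log(w)))/w**2                    # scalar curvature of w^2 dOmega^2
diff = sp.simplify(sp.together(R - H2/2))
assert diff == 0
```

For constant $\omega\equiv s$, the same formulas recover the round values $\theta=\frac{2}{s}$ and $\operatorname{R}=\frac{2}{s^2}$.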
\begin{bem}\label{bem_minkowskilightcone}
The fact that the null second fundamental form $\ul{\chi}=\frac{1}{\omega}\gamma$ is pure trace and depends only pointwise on $\omega$, together with the background foliation of round spheres, gives
\[
\newbtr{\vec{\two}}^2=\spann{\ul{\chi},\chi}=\frac{1}{2}\mathcal{H}^2,
\]
and thus
\begin{align}\label{eq_gaußcurvature}
\operatorname{R}=\frac{1}{2}\mathcal{H}^2
\end{align}
by the twice contracted Gauß equation Proposition \ref{prop_nullgauß}, which can also be directly verified from (iii) and (v) in Proposition \ref{prop_minkowskilightcone}. Since $\Sigma$ is $2$-dimensional, we can therefore express the Riemann tensor of the surface as
\begin{align}\label{eq_riemannminkowski}
\Rm_{ijkl}=\frac{1}{4}\mathcal{H}^2\left(\gamma_{ik}\gamma_{jl}-\gamma_{jk}\gamma_{il}\right).
\end{align}
We would like to emphasize here that $\mathcal{H}^2$ refers by definition to the signed Lorentzian length of the mean curvature vector and can therefore be (locally) negative despite the suggestive square in the notation.
\end{bem}
In particular, we always have
\begin{align}
\accentset{\circ}{\vec{\two}}=-\frac{1}{2}\accentset{\circ}{\chi}\ul{L},
\end{align}
so $\newbtr{\accentset{\circ}{\vec{\two}}}^2=0$ although $\accentset{\circ}{\vec{\two}}\not=0$, and the property of $\vec{\two}$ being pure trace is instead more accurately captured by $\newbtr{\accentset{\circ}{\chi}}^2=0$. Along $\mathcal{N}$, this is made precise by the following proposition.
\begin{prop}\label{prop_codazziminkowski}
Let $(\Sigma,\gamma)$ be a spacelike cross section of $\mathcal{N}$ with $\accentset{\circ}{\chi}=0$. Then $\mathcal{H}^2$ is constant and strictly positive along $\Sigma$. In particular, $\gamma$ is a metric of constant scalar curvature.
\end{prop}
\begin{bem}\label{bem_codazzi}
Note that
\begin{align}
\accentset{\circ}{\chi}=-2\accentset{\circ}{\Hess}_\gamma\omega=2\omega^2\accentset{\circ}{\Hess}_{\Sbb^2}\left(\frac{1}{\omega}\right),
\end{align}
where ${\Hess}_{\Sbb^2}$ denotes the Hessian on $(\Sbb^2,\d\Omega^2)$. One can also verify by computation in coordinates that $\accentset{\circ}{\Hess}_{\Sbb^2}\left(\frac{1}{\omega}\right)=0$ if and only if $\omega$ is of the form
\begin{align}\label{eq_metricconstantscalar}
\omega(\vec{x})=\frac{c}{\sqrt{1+\norm{\vec{a}}^2}+\vec{a}\cdot\vec{x}}
\end{align}
for $\vec{x}\in\Sbb^2$ and a fixed vector $\vec{a}\in\R^3$, which are exactly the metrics of constant scalar curvature on $\Sbb^2$. Hence, the converse statement of Proposition \ref{prop_codazziminkowski} is also true. It is a well-known fact that all such metrics can be obtained from the round metric by a suitable M\"obius transformation, cf. \cite[Proposition 6]{marssoria}, \cite[Section 5.2]{wang}. Moreover, the metrics \eqref{eq_metricconstantscalar} describe exactly the images of round spheres after a suitable Lorentz transformation in $\operatorname{SO}^+(3,1)$ in the Minkowski spacetime, which leave the lightcone $\mathcal{N}$ invariant. These observations illustrate once again the well-known fact that the M\"obius group is isomorphic to the restricted Lorentz group $\operatorname{SO}^+(3,1)$.
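The coordinate computation mentioned above can be reproduced symbolically. The following sketch, with symbolic $\vec{a}\in\R^3$ and $c>0$ and spherical coordinates on $\Sbb^2$, verifies that $\accentset{\circ}{\Hess}_{\Sbb^2}\left(\frac{1}{\omega}\right)=0$ for $\omega$ of the form \eqref{eq_metricconstantscalar}:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
a1, a2, a3 = sp.symbols('a1 a2 a3', real=True)
c = sp.Symbol('c', positive=True)
coords = [th, ph]

# round metric on S^2 and its Christoffel symbols
g = sp.diag(1, sp.sin(th)**2)
ginv = g.inv()
Gamma = [[[sp.simplify(sum(ginv[k, l]*(sp.diff(g[l, i], coords[j])
           + sp.diff(g[l, j], coords[i]) - sp.diff(g[i, j], coords[l]))
           for l in range(2))/2) for j in range(2)] for i in range(2)] for k in range(2)]

# f = 1/omega for omega as in the claimed family
x = [sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)]
f = (sp.sqrt(1 + a1**2 + a2**2 + a3**2) + a1*x[0] + a2*x[1] + a3*x[2])/c

# covariant Hessian of f on the round sphere
H = sp.Matrix(2, 2, lambda i, j: sp.diff(f, coords[i], coords[j])
              - sum(Gamma[k][i][j]*sp.diff(f, coords[k]) for k in range(2)))

tr = H[0, 0] + H[1, 1]/sp.sin(th)**2          # metric trace (g is diagonal)
tf = (H - tr/2*g).applyfunc(sp.simplify)      # trace-free part
assert tf == sp.zeros(2, 2)
```

The computation reflects the fact that the restrictions of linear functions to $\Sbb^2$ satisfy $\Hess_{\Sbb^2}x^i=-x^i\,\d\Omega^2$.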
\end{bem}
\begin{proof}[Proof of Proposition \ref{prop_codazziminkowski}]
Combining the Codazzi equation for $\chi$ from Proposition \ref{prop_nullcodazzi} with the explicit form of $\zeta$ from Proposition \ref{prop_minkowskilightcone}, we find
\begin{align}
\nabla_i\chi_{jk}-\frac{\d \omega_i}{\omega}\chi_{jk}=\nabla_j\chi_{ik}-\frac{\d\omega_j}{\omega}\chi_{ik}.
\end{align}
Multiplying the equation by $\ul{\theta}=\frac{2}{\omega}>0$, we get
\begin{align}
\nabla_{i}\left(\ul{\theta}\chi\right)_{jk}=\nabla_{j}\left(\ul{\theta}\chi\right)_{ik}.
\end{align}
Hence $\nabla\left(\ul{\theta}\chi\right)$ is totally symmetric and since $\tr_\gamma \ul{\theta}\chi=\mathcal{H}^2$, we find
\[
\nabla_i\mathcal{H}^2=\dive \left(\ul{\theta}\chi\right)_i=\frac{1}{2}\nabla_i\mathcal{H}^2+\dive\accentset{\circ}{\left(\ul{\theta}\chi\right)}_i=\frac{1}{2}\nabla_i\mathcal{H}^2
\]
by assumption. Therefore $\mathcal{H}^2$ is constant, in particular $\gamma$ is a metric of constant scalar curvature by the Gauß equation \eqref{eq_gaußcurvature}. Finally, the Gauß--Bonnet Theorem ensures the positivity of $\operatorname{R}$ and hence $\mathcal{H}^2$.
\end{proof}
Motivated by this, we will choose the symmetric $(0,2)$-form $A\definedas \ul{\theta}\chi$ as a scalar valued representation of the vector valued second fundamental form $\vec{\two}$. This can be regarded as a choice of gauge. Rephrasing Proposition \ref{prop_codazziminkowski} in terms of $A$, we see that we can prove the following identity in complete analogy to the properties of the scalar valued second fundamental form $h$ of an embedded, orientable surface in $\R^3$.
\begin{prop}\label{prop_codazziminkowski2}
Let $(\Sigma,\gamma)$ be a spacelike cross section of $\mathcal{N}$. Then $\nabla A$ is totally symmetric, i.e.,
\begin{align}
\nabla_iA_{jk}=\nabla_jA_{ik}.
\end{align}
In particular, we find
\begin{align}\label{eq_Agradientestimate}
\btr{\nabla A}^2\ge \frac{3}{4}\btr{\nabla\mathcal{H}^2}^2,
\end{align}
and $\accentset{\circ}{A}=0$ if and only if $\mathcal{H}^2$ is a strictly positive constant.
\end{prop}
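The estimate \eqref{eq_Agradientestimate} is the $2$-dimensional instance of the standard algebraic inequality $\btr{T}^2\ge\frac{3}{n+2}\btr{V}^2$ for totally symmetric $3$-tensors $T$ with trace $V_i=\gamma^{jk}T_{ijk}$, applied to $T=\nabla A$. A minimal numerical sanity check at a point (flat metric, random totally symmetric tensors, including the extremal pure trace configuration):

```python
import numpy as np

def sym3(T):
    # total symmetrization of a 3-tensor
    return (T + T.transpose(0, 2, 1) + T.transpose(1, 0, 2) + T.transpose(1, 2, 0)
              + T.transpose(2, 0, 1) + T.transpose(2, 1, 0)) / 6

def holds(T):
    # |T|^2 >= 3/4 |V|^2 in 2 dimensions, with V_i = T_ijj (flat metric)
    V = np.einsum('ijj->i', T)
    return (T**2).sum() >= 0.75*(V**2).sum() - 1e-12

rng = np.random.default_rng(0)
assert all(holds(sym3(rng.standard_normal((2, 2, 2)))) for _ in range(1000))

# equality for the pure trace configuration T_ijk = (V_i g_jk + V_j g_ik + V_k g_ij)/4
V = np.array([1.0, 2.0])
g = np.eye(2)
T = (np.einsum('i,jk->ijk', V, g) + np.einsum('j,ik->ijk', V, g)
     + np.einsum('k,ij->ijk', V, g)) / 4
assert np.isclose((T**2).sum(), 0.75*(V**2).sum())
```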
We further derive the propagation equations for the geometric objects $A$ and $\mathcal{H}^2$ from Lemma \ref{lem_propagation} and Proposition \ref{prop_minkowskilightcone}:
\begin{lem}\label{lem_Apropagation}
\begin{align*}
\frac{\d}{\d t}A_{ij}
=&-2\Hess_{ij}(\ul{\theta}\varphi),\\
\frac{\d}{\d t}\mathcal{H}^2
=&-2\Delta(\ul{\theta}\varphi)-(\ul{\theta}\varphi)\mathcal{H}^2.
\end{align*}
\end{lem}
\begin{proof}
From Lemma \ref{lem_propagation} (iv) and (v), and $\ul{\chi}=\frac{1}{2}\ul{\theta}\gamma$, we compute
\begin{align*}
\frac{\d}{\d t}A_{ij}&=-2\ul{\theta}\Hess_{ij}\varphi-2\ul{\theta}(\d\varphi_i\otimes\zeta_j+\d\varphi_j\otimes\zeta_i)-\varphi\ul{\theta}\left(2\nabla_i\zeta_j+2\zeta_i\otimes\zeta_j-\frac{1}{2}A_{ij}\right)-\frac{1}{2}\varphi\ul{\theta}A_{ij}\\
&=-2\ul{\theta}\Hess_{ij}\varphi-2\ul{\theta}(\d\varphi_i\otimes\zeta_j+\d\varphi_j\otimes\zeta_i)-\varphi\ul{\theta}\left(2\nabla_i\zeta_j+2\zeta_i\otimes\zeta_j\right).
\end{align*}
We now observe that the remaining terms on the right hand side exactly combine into $-2\Hess_{ij}\left(\ul{\theta}\varphi\right)$: by the explicit formulas for $\ul{\theta}$ and $\zeta$ listed in Proposition \ref{prop_minkowskilightcone}, we have $\d\ul{\theta}=\ul{\theta}\zeta$ and hence $\Hess_{ij}\ul{\theta}=\nabla_i\left(\ul{\theta}\zeta_j\right)=\ul{\theta}\left(\nabla_i\zeta_j+\zeta_i\otimes\zeta_j\right)$, so the product rule for $\Hess_{ij}\left(\ul{\theta}\varphi\right)$ produces exactly these terms. Taking a trace, where $\frac{\d}{\d t}\gamma^{ij}=-\varphi\ul{\theta}\gamma^{ij}$, completes the proof.
\end{proof}
We close this section by establishing a null version of the Simons' identity for $A$ in the $3+1$-Minkowski lightcone $\mathcal{N}$, which will be crucial for our later analysis.
\begin{lem}[Null Simons' Identity]\label{lem_nullsimon}
\begin{align*}
\Delta A_{ij}=\Hess_{ij}\mathcal{H}^2+\frac{1}{2}\mathcal{H}^2\accentset{\circ}{A}_{ij}.
\end{align*}
\end{lem}
\begin{proof}
In the following lines, we will make frequent use of the Codazzi property of $A$ from Proposition \ref{prop_codazziminkowski2}, and of the fact that for any symmetric $(0,2)$-tensor $T$, we have
\[
\nabla_k\nabla_lT_{ij}-\nabla_l\nabla_kT_{ij}=\Rm_{kljm}T^m_i+\Rm_{klim}T^m_j.
\]
Thus, we compute
\begin{align*}
\nabla_k\nabla_lA_{ij}
=&\nabla_k\left(\nabla_iA_{lj}\right)\\
=&\nabla_i\nabla_k A_{jl}+\Rm_{kilm}A^m_j+\Rm_{kijm}A^m_l\\
=&\nabla_i(\nabla_j A_{kl})+\Rm_{kilm}A^m_j+\Rm_{kijm}A^m_l\\
=&\nabla_i\nabla_j A_{kl}+\frac{1}{4}\mathcal{H}^2\left(\left(\gamma_{kl}\gamma_{im}-\gamma_{il}\gamma_{km}\right)A^m_j+\left(\gamma_{kj}\gamma_{im}-\gamma_{ij}\gamma_{km}\right)A^m_l\right)\\
=&\nabla_i\nabla_j A_{kl}+\frac{1}{4}\mathcal{H}^2\left(A_{ij}\gamma_{kl}+A_{il}\gamma_{kj}-A_{kj}\gamma_{il}-A_{kl}\gamma_{ij}\right),
\end{align*}
where we have used \eqref{eq_riemannminkowski} in the second to last line. Taking a trace with respect to the $kl$ entries yields the claim.
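Explicitly, with $\tr_\gamma A=\mathcal{H}^2$ and $\accentset{\circ}{A}=A-\frac{1}{2}\mathcal{H}^2\gamma$, the trace reads
\begin{align*}
\Delta A_{ij}=\Hess_{ij}\mathcal{H}^2+\frac{1}{4}\mathcal{H}^2\left(2A_{ij}-\mathcal{H}^2\gamma_{ij}\right)=\Hess_{ij}\mathcal{H}^2+\frac{1}{2}\mathcal{H}^2\accentset{\circ}{A}_{ij}.
\end{align*}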
\end{proof}
\section{$2d$-Ricci flow along the standard Minkowski lightcone}\label{sec_equialence}
We will now investigate null mean curvature flow restricted to the (past-pointing) standard lightcone in the $3+1$-Minkowski spacetime. Recall that null mean curvature flow along null hypersurfaces is defined as
\[
\frac{\d}{\d t}x=\frac{1}{2}\spann{\vec{\mathcal{H}},L}\ul{L}=-\frac{1}{2}\theta\ul{L},
\]
as first studied by Roesch--Scheuer \cite{roeschscheuer}. Note that since $\ul{L}(r)=1$, the above is equivalent to the following evolution equation for the graph function $\omega$:
\begin{align}\label{eq_nullmeancurvatureflow}
\frac{\d}{\d t}\omega=-\frac{1}{2}\theta.
\end{align}
Recall that $\accentset{\circ}{\Ric}=0$ in dimension $2$, so $2d$-Ricci flow agrees with the Yamabe flow \cite{hamilton2} and naturally preserves the conformal class of the metric. More explicitly, in our case
\begin{align*}
\frac{\d}{\d t}\gamma_{ij}=-2\Ric_{ij}&\Leftrightarrow \frac{\d}{\d t}\left(\omega^2\d\Omega^2\right)=-2\mathcal{K}\omega^2\d\Omega^2\\
&\Leftrightarrow \frac{2}{\omega}\left(\frac{\d}{\d t}\omega\right)\gamma_{ij}=-2\mathcal{K}\gamma_{ij}\\
&\Leftrightarrow \frac{\d}{\d t}\omega=-\omega\mathcal{K},
\end{align*}
where $\mathcal{K}$ denotes the Gauß curvature. Note that by the twice contracted Gauß equation \eqref{eq_gaußcurvature} and the explicit form of $\ul{\theta}$, we have that $\theta=2\omega\mathcal{K}$. Therefore, $2$-dimensional Ricci flow in the conformal class of the round sphere is equivalent to null mean curvature flow on the past-pointing standard lightcone in the $3+1$-Minkowski spacetime. Since $2d$-Ricci flow and its renormalized equation are fully understood in this case, we find the following corollary as a consequence of the Gauß equation \eqref{eq_gaußcurvature}, Remark \ref{bem_codazzi}, and a classical result of $2d$-Ricci flow first proven by Hamilton \cite{hamilton1}, where the initial restriction to metrics of strictly positive scalar curvature in the case of surfaces of genus $0$ was later removed by Chow \cite{chow1}:
\begin{thm}[Hamilton and Chow, {\cite[Corollary 1.3]{chow1}}]
If $g$ is any metric on a Riemann surface, then under Hamilton's Ricci flow, $g$ converges to a metric of constant curvature.
\end{thm}
\begin{kor}\label{thm_singularities}
Let $(\Sigma_0,\gamma_0)$ be a spacelike cross section of the past-pointing standard lightcone $\mathcal{N}$ in the $3+1$-Minkowski spacetime. Then the solution of null mean curvature flow starting from $\Sigma_0$ extinguishes in the tip of the cone in finite time, and its renormalization by volume converges to a surface of constant spacetime mean curvature. These limit surfaces arise exactly as the images of round spheres under Lorentz transformations in $\operatorname{SO}^+(3,1)$ consisting of a Lorentz boost with boost vector
\[
\vec{v}=\begin{pmatrix}
\sqrt{1+\norm{\vec{a}}^2}\\\vec{a}
\end{pmatrix}
\]
for a vector $\vec{a}\in \R^3$ and a rotation determined by the choice of coordinates on $\Sbb^2$.
\end{kor}
Conversely, we will show in the next section that studying null mean curvature flow along the standard Minkowski lightcone yields a new proof of Hamilton's convergence theorem for renormalized $2d$-Ricci flow.
\section{A new proof of Hamilton's Theorem}\label{sec_hamilton}
With this approach to $2d$-Ricci flow, we give a new proof of Hamilton's classical result:
\begin{thm}[cf. \cite{hamilton1}]\label{thm_mainthm}
Let $(\Sigma_0,\gamma_0)$ be a surface with conformally round metric $\gamma_0$ and strictly positive scalar curvature. Then a solution of renormalized Ricci flow exists for all time and the metrics $\gamma_t$ converge to a smooth metric $\gamma_\infty$ of constant scalar curvature in $C^k$ for all $k\in\N$ as $t\to\infty$.
\end{thm}
Note that the assumption of strictly positive scalar curvature translates by the Gauß equation \eqref{eq_gaußcurvature} to the assumption that the mean curvature vector $\vec{\mathcal{H}}$ is everywhere spacelike. Throughout this section, we will use the extrinsic objects $A$, $\mathcal{H}^2$ evolving under null mean curvature flow on the standard lightcone in the $3+1$-Minkowski spacetime, but will frequently exploit the equivalence to $2d$-Ricci flow to switch freely between the two frameworks. A key tool in the proof will be to first study the evolution of $\newbtr{\accentset{\circ}{A}}^2$ along the unnormalized flow, which can then be combined with the evolution of $\mathcal{H}^2$ to yield a crucial gradient estimate. Translating these estimates to the renormalized flow will then yield the proof of Theorem \ref{thm_mainthm}.
\begin{bem}
Note that there does not seem to be a direct connection between $\accentset{\circ}{A}$ and the auxiliary term $M=\accentset{\circ}{\Hess}f$ in the modified renormalized flow in \cite{bartzstruweye, hamilton1}, where $f$ solves
\[
\Delta f=\left(\operatorname{R}-\fint_\Sigma \operatorname{R}\right).
\]
To see this, consider any stationary point of the renormalized flow where $f$ is necessarily constant while $\accentset{\circ}{A}=0$ holds for all functions $\omega$ of the form \eqref{eq_metricconstantscalar} arising from Lorentz transformations.
\end{bem}
We start by computing the relevant evolution equations for the unnormalized flow.
\begin{prop}\label{prop_nullmeancurvature1} For a smooth solution to null mean curvature flow, we find
\begin{align*}
\frac{\d}{\d t}\btr{A}^2&=\Delta\btr{A}^2-2\btr{\nabla A}^2+\frac{1}{2}\left(\mathcal{H}^2\right)^3,\\
\frac{\d}{\d t}\mathcal{H}^2&=\Delta\mathcal{H}^2+\frac{1}{2}\left(\mathcal{H}^2\right)^2.
\end{align*}
\end{prop}
\begin{proof}
For $\varphi=-\frac{1}{2}\theta$, the evolution equation for $\mathcal{H}^2$ is immediate from Lemma \ref{lem_Apropagation}. Combining the evolution equation for $A$ from Lemma \ref{lem_Apropagation} with the null Simons' identity, Lemma \ref{lem_nullsimon}, we have
\[
\frac{\d}{\d t}A_{ij}=\Delta A_{ij}-\frac{1}{2}\mathcal{H}^2\accentset{\circ}{A}_{ij}.
\]
Hence
\begin{align*}
\frac{\d}{\d t}\btr{A}^2
&=\frac{\d}{\d t}\left(\gamma^{ik}\gamma^{jl}A_{ij}A_{kl}\right)\\
&=2\gamma^{ik}\gamma^{jl}A_{ij}\frac{\d}{\d t}A_{kl}+2\left(\frac{\d}{\d t}\gamma^{ik}\right)\gamma^{jl}A_{ij}A_{kl}\\
&=2\spann{A,\Delta A-\frac{1}{2}\mathcal{H}^2\accentset{\circ}{A}}+\mathcal{H}^2\btr{A}^2\\
&=\Delta\btr{A}^2-2\btr{\nabla A}^2-\mathcal{H}^2\newbtr{\accentset{\circ}{A}}^2+\mathcal{H}^2\btr{A}^2\\
&=\Delta\btr{A}^2-2\btr{\nabla A}^2+\frac{1}{2}\left(\mathcal{H}^2\right)^3.
\end{align*}
Here we used $\spann{A,\accentset{\circ}{A}}=\newbtr{\accentset{\circ}{A}}^2$ and $\btr{A}^2=\newbtr{\accentset{\circ}{A}}^2+\frac{1}{2}\left(\mathcal{H}^2\right)^2$ in the last step.
\end{proof}
Therefore, as we already know from Ricci flow, cf. \cite[Corollary 2.11]{chowluni}, the positivity of $\mathcal{H}^2$ is preserved under the flow by the parabolic maximum principle \cite[Proposition 2.9]{chowluni}. In particular, the flow exists only for a finite time $T_{max}$, since comparison with the ODE $\frac{\d}{\d t}y=\frac{1}{2}y^2$ forces $\mathcal{H}^2_{max}\to\infty$ in finite time.
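For orientation, consider the hypothetical example of round initial data $\omega_0\equiv s_0$: by Proposition \ref{prop_minkowskilightcone} (v), equation \eqref{eq_nullmeancurvatureflow} then reduces to the ODE $\dot{s}=-\frac{1}{s}$ with explicit solution $s(t)=\sqrt{s_0^2-2t}$, so that $\mathcal{H}^2=\frac{4}{s(t)^2}\to\infty$ as $t\to T_{max}=\frac{s_0^2}{2}$. A minimal numerical sketch:

```python
import math

# dot s = -1/s  =>  s(t) = sqrt(s0^2 - 2t), extinction at T_max = s0^2/2,
# and H^2 = 4/s(t)^2 blows up as t -> T_max
def s_exact(s0, t):
    return math.sqrt(s0*s0 - 2.0*t)

def s_euler(s0, T, n):
    # forward Euler integration of dot s = -1/s
    s, dt = s0, T/n
    for _ in range(n):
        s -= dt/s
    return s

s0 = 2.0
T = 0.9*(s0**2/2)                 # stop shortly before the extinction time
assert abs(s_euler(s0, T, 200_000) - s_exact(s0, T)) < 1e-3
```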
\begin{prop}\label{prop_nullmeancurvature2}
\begin{align*}
\frac{\d}{\d t}\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}
&=\Delta\left(\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}\right)+\frac{2}{\mathcal{H}^2}\spann{\nabla \mathcal{H}^2,\nabla\left(\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}\right)}-\mathcal{H}^2\frac{\newbtr{\accentset{\circ}{A}}^2}{\left(\mathcal{H}^2\right)^2}-2\btr{\nabla\frac{A}{\mathcal{H}^2}}^2.
\end{align*}
\end{prop}
\begin{bem}\label{bem_blowup}
In particular, any upper bound on $\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}$ is preserved under the flow (and $\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}\ge\frac{1}{2}$ always holds), so along any sequence $(p_i,t_i)$ of points and times, $\btr{A}^2\to\infty$ if and only if $\mathcal{H}^2\to\infty$.
\end{bem}
\begin{proof}
By Proposition \ref{prop_nullmeancurvature1}, we have that
\begin{align*}
\frac{\d}{\d t}\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}
&=\frac{\frac{\d}{\d t}\btr{A}^2}{\left(\mathcal{H}^2\right)^2}-\frac{2}{\left(\mathcal{H}^2\right)^3}\btr{A}^2\frac{\d}{\d t}\mathcal{H}^2\\
&=\frac{\Delta\btr{A}^2}{\left(\mathcal{H}^2\right)^2}-\frac{2\btr{\nabla A}^2}{\left(\mathcal{H}^2\right)^2}+\frac{1}{2}\mathcal{H}^2-\frac{2}{\left(\mathcal{H}^2\right)^3}\btr{A}^2\Delta\mathcal{H}^2-\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}\mathcal{H}^2\\
&=\frac{\Delta\btr{A}^2}{\left(\mathcal{H}^2\right)^2}-\frac{2\btr{\nabla A}^2}{\left(\mathcal{H}^2\right)^2}-\frac{2}{\left(\mathcal{H}^2\right)^3}\btr{A}^2\Delta\mathcal{H}^2-\mathcal{H}^2\frac{\newbtr{\accentset{\circ}{A}}^2}{\left(\mathcal{H}^2\right)^2}.
\end{align*}
Note that
\begin{align*}
\Delta\left(\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}\right)
&=\frac{\Delta \btr{A}^2}{\left(\mathcal{H}^2\right)^2}-\frac{4}{\left(\mathcal{H}^2\right)^3}\spann{\nabla\btr{A}^2,\nabla \mathcal{H}^2}+\frac{6}{\left(\mathcal{H}^2\right)^4}\btr{A}^2\btr{\nabla \mathcal{H}^2}^2-\frac{2}{\left(\mathcal{H}^2\right)^3}\btr{A}^2\Delta\mathcal{H}^2.
\end{align*}
Thus
\begin{align*}
\frac{\d}{\d t}\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}
&=\Delta\left(\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}\right)+\frac{4}{\left(\mathcal{H}^2\right)^3}\spann{\nabla\btr{A}^2,\nabla \mathcal{H}^2}-\frac{6}{\left(\mathcal{H}^2\right)^4}\btr{A}^2\btr{\nabla \mathcal{H}^2}^2-\frac{2\btr{\nabla A}^2}{\left(\mathcal{H}^2\right)^2}-\mathcal{H}^2\frac{\newbtr{\accentset{\circ}{A}}^2}{\left(\mathcal{H}^2\right)^2}.
\end{align*}
Moreover,
\begin{align*}
\spann{\nabla\left(\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}\right),\nabla\mathcal{H}^2}=\frac{1}{\left(\mathcal{H}^2\right)^2}\spann{\nabla\btr{A}^2,\nabla\mathcal{H}^2}-\frac{2}{\left(\mathcal{H}^2\right)^3}\btr{A}^2\btr{\nabla\mathcal{H}^2}^2.
\end{align*}
Therefore, we conclude that
\begin{align*}
\frac{\d}{\d t}\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}
=&\,\Delta\left(\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}\right)+\frac{2}{\mathcal{H}^2}\spann{\nabla\left(\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}\right),\nabla\mathcal{H}^2} -\mathcal{H}^2\frac{\newbtr{\accentset{\circ}{A}}^2}{\left(\mathcal{H}^2\right)^2}\\
&-\frac{2}{\left(\mathcal{H}^2\right)^4}\btr{A}^2\btr{\nabla \mathcal{H}^2}^2-\frac{2\btr{\nabla A}^2}{\left(\mathcal{H}^2\right)^2}+\frac{2}{\left(\mathcal{H}^2\right)^3}\spann{\nabla\btr{A}^2,\nabla\mathcal{H}^2}.
\end{align*}
Now notice that
\begin{align*}
\nabla_i\frac{A_{jk}}{\mathcal{H}^2}&=\frac{1}{\mathcal{H}^2}\nabla_iA_{jk}-\frac{1}{\left(\mathcal{H}^2\right)^2}A_{jk}\nabla_i\mathcal{H}^2\\
&=\frac{1}{\left(\mathcal{H}^2\right)^2}(\mathcal{H}^2\nabla_iA_{jk}-A_{jk}\nabla_i\mathcal{H}^2),
\end{align*}
so that
\begin{align*}
\btr{\nabla\frac{A}{\mathcal{H}^2}}^2
&=\frac{1}{\left(\mathcal{H}^2\right)^4}\left(\left(\mathcal{H}^2\right)^2\btr{\nabla A}^2+\btr{A}^2\btr{\nabla \mathcal{H}^2}^2-2\mathcal{H}^2A^{jk}\nabla^iA_{jk}\nabla_i\mathcal{H}^2\right)\\
&=\frac{1}{\left(\mathcal{H}^2\right)^4}\left(\left(\mathcal{H}^2\right)^2\btr{\nabla A}^2+\btr{A}^2\btr{\nabla \mathcal{H}^2}^2-\mathcal{H}^2\spann{\nabla\btr{A}^2,\nabla\mathcal{H}^2}\right).
\end{align*}
The claim follows from using the last identity in the evolution equation.
\end{proof}
We would like to point out the similarity to the evolution of the corresponding quantities for mean curvature flow in Euclidean space, where the roles of the second fundamental form $h$ and the mean curvature $H$ are played here by $A$ and $\mathcal{H}^2$ in the evolution equations. However, the presence of the additional good term $-\mathcal{H}^2\frac{\newbtr{\accentset{\circ}{A}}^2}{\left(\mathcal{H}^2\right)^2}$ will allow us to prove a scale-breaking estimate similar to work of Huisken (cf. \cite[Theorem 8.6]{andrewschowguentherlangford}) for mean curvature flow, without any pinching condition and without the need to employ a Stampacchia iteration.
\begin{thm}\label{thm_nullmeancurvature1}
Let $\{\Sigma_t\}_{t\in[0,T_{max})}$ be a family of closed, topological $2$-spheres with strictly positive scalar curvature evolving under Ricci flow in the form of null mean curvature flow. Then, for any $\sigma\in[0,1]$, there exists $C=C(\Sigma_0,\sigma)$, such that
\[
\frac{\newbtr{\accentset{\circ}{A}}^2}{\left(\mathcal{H}^2\right)^2}\le C(\mathcal{H}^2)^{-\sigma}
\]
for all $0\le t<T_{max}$.
\end{thm}
\begin{proof}
We define $f_0\definedas \frac{\newbtr{\accentset{\circ}{A}}^2}{\left(\mathcal{H}^2\right)^2}$, and hence $f_0=\frac{\btr{A}^2}{\left(\mathcal{H}^2\right)^2}-\frac{1}{2}$ as $\tr_\gamma A=\mathcal{H}^2$. By Proposition \ref{prop_nullmeancurvature2} the evolution of $f_0$ is given by
\[
\frac{\d}{\d t}f_0=\Delta f_0+\frac{2}{\mathcal{H}^2}\spann{\nabla\mathcal{H}^2,\nabla f_0}-\mathcal{H}^2f_0-2\btr{\nabla\frac{A}{\mathcal{H}^2}}^2.
\]
We now consider $f_\sigma\definedas (\mathcal{H}^2)^\sigma f_0$ for some $\sigma\in[0,1]$. Then
\begin{align*}
\frac{\d}{\d t}f_\sigma
=&\,(\mathcal{H}^2)^\sigma\frac{\d}{\d t}f_0+\sigma(\mathcal{H}^2)^{\sigma-1}f_0\frac{\d}{\d t}\mathcal{H}^2\\
=&\,(\mathcal{H}^2)^\sigma\Delta f_0+\sigma(\mathcal{H}^2)^{\sigma-1}f_0\Delta \mathcal{H}^2+2(\mathcal{H}^2)^{\sigma-1}\spann{\nabla f_0,\nabla \mathcal{H}^2}\\
&-\left(\mathcal{H}^2\right)^{\sigma+1}\left(1-\frac{1}{2}\sigma\right)f_0-2\left(\mathcal{H}^2\right)^{\sigma}\btr{\nabla\frac{A}{\mathcal{H}^2}}^2.
\end{align*}
Similar to before, we compute that
\begin{align*}
\Delta f_\sigma
=&\,(\mathcal{H}^2)^\sigma \Delta f_0+2\sigma(\mathcal{H}^2)^{\sigma-1}\spann{\nabla f_0,\nabla \mathcal{H}^2}\\
&+\sigma(\sigma-1)f_0(\mathcal{H}^2)^{\sigma-2}\btr{\nabla\mathcal{H}^2}^2+\sigma(\mathcal{H}^2)^{\sigma-1}f_0\Delta\mathcal{H}^2,
\end{align*}
and
\begin{align*}
\spann{\nabla f_\sigma,\nabla\mathcal{H}^2}&=(\mathcal{H}^2)^\sigma\spann{\nabla f_0,\nabla\mathcal{H}^2}+\sigma(\mathcal{H}^2)^{\sigma-1}f_0\btr{\nabla\mathcal{H}^2}^2.
\end{align*}
Therefore, we have that
\begin{align*}
\frac{\d}{\d t}f_\sigma
=&\,\Delta f_\sigma+2(1-\sigma)(\mathcal{H}^2)^{\sigma-1}\spann{\nabla f_0,\nabla\mathcal{H}^2}+\sigma(1-\sigma)f_0(\mathcal{H}^2)^{\sigma-2}\btr{\nabla\mathcal{H}^2}^2\\
&-\left(\mathcal{H}^2\right)^{\sigma+1}\left(1-\frac{1}{2}\sigma\right)f_0-2\left(\mathcal{H}^2\right)^{\sigma}\btr{\nabla\frac{A}{\mathcal{H}^2}}^2\\
=&\,\Delta f_\sigma+\frac{2(1-\sigma)}{\mathcal{H}^2}\spann{\nabla f_\sigma,\nabla \mathcal{H}^2}-\sigma(1-\sigma)f_\sigma(\mathcal{H}^2)^{-2}\btr{\nabla\mathcal{H}^2}^2\\
&-\mathcal{H}^2\left(1-\frac{1}{2}\sigma\right)f_\sigma-2\left(\mathcal{H}^2\right)^{\sigma}\btr{\nabla\frac{A}{\mathcal{H}^2}}^2\\
\le&\,\Delta f_\sigma+\frac{2(1-\sigma)}{\mathcal{H}^2}\spann{\nabla f_\sigma,\nabla \mathcal{H}^2}
\end{align*}
for $\sigma\in[0,1]$. Thus the claim follows by the parabolic maximum principle.
\end{proof}
Using this estimate, we establish a gradient bound that will allow us to conclude that the ratio between the minimum and maximum of $\mathcal{H}^2$, and therefore of $\operatorname{R}$, converges to $1$ as $t\to T_{max}$. We will state this in the framework of Ricci flow.
\begin{thm}\label{thm_gradientestiamte}
Let $\{\Sigma_t\}_{t\in[0,T_{max})}$ be a family of closed, topological $2$-spheres with strictly positive scalar curvature evolving under Ricci flow. For any $\eta>0$, there exists $C_{\eta}>0$ only depending on $\eta$ and $\Sigma_0$, such that
\begin{align*}
\btr{\nabla \operatorname{R}}\le \eta^2\operatorname{R}^{\frac{3}{2}}+C_\eta
\end{align*}
for all $t\in[0,T_{max})$.
\end{thm}
\begin{bem}\label{bem_gradientestimate}
As $\mathcal{H}^2=2\operatorname{R}$ by the Gauß Equation \eqref{eq_gaußcurvature}, it suffices to prove
\[
\btr{\nabla \mathcal{H}^2}\le \eta^2\left(\mathcal{H}^2\right)^{\frac{3}{2}}+C_\eta.
\]
From this, we get the crucial gradient estimate
\[
\btr{\nabla \operatorname{R}}\le \eta^2 \operatorname{R}^{\frac{3}{2}}_{max}
\]
for any $\eta>0$ and $t$ sufficiently close to $T_{max}$. This estimate allowed Hamilton to conclude that the ratio of $\operatorname{R}_{min}$ and $\operatorname{R}_{max}$ converges to $1$ in the $3$-dimensional case using the Theorem of Bonnet--Myers, cf. \cite[Lemma 3.22]{chowluni}. We have now established this estimate for $2$-dimensional Ricci flow and can argue analogously. Compare also the corresponding result by Huisken for mean curvature flow of convex surfaces, cf. \cite[Corollary 8.16]{andrewschowguentherlangford}.
\end{bem}
\begin{kor}\label{kor_gradientestimate}
As $t\to T_{max}$,
\begin{align*}
\frac{\mathcal{H}^2_{max}}{\mathcal{H}^2_{min}}=\frac{\operatorname{R}_{max}}{\operatorname{R}_{min}}&\to 1,\\
\operatorname{diam}(\Sigma_t)&\to 0.
\end{align*}
\end{kor}
\begin{proof}[Proof of Theorem \ref{thm_gradientestiamte}]
We fix any $\sigma\in(0,1]$, and by Theorem \ref{thm_nullmeancurvature1}, there exists $C$ only depending on $\Sigma_0$ (and our fixed choice of $\sigma$), such that
\begin{align}\label{eq_thmgradientestimate1}
\frac{\newbtr{\accentset{\circ}{A}}^2}{\left(\mathcal{H}^2\right)^2}\le C(\mathcal{H}^2)^{-\sigma}.
\end{align}
Using Young's Inequality, we find that for any $\varepsilon>0$, there exists $C_\varepsilon>0$ (only depending on $\varepsilon$ and $\Sigma_0$), such that
\begin{align}\label{eq_thmgradietestiamte2}
\newbtr{\accentset{\circ}{A}}^2\le \varepsilon\left(\mathcal{H}^2\right)^2+C_\varepsilon.
\end{align}
We define
\begin{align}\label{eq_thmgradietestiamte3}
G_\varepsilon\definedas 2C_\varepsilon+(\varepsilon+\frac{1}{2})\left(\mathcal{H}^2\right)^2-\btr{A}^2=2 C_\varepsilon+\varepsilon\left(\mathcal{H}^2\right)^2-\newbtr{\accentset{\circ}{A}}^2\ge C_\varepsilon>0.
\end{align}
Computing the evolution of \eqref{eq_thmgradietestiamte3} and abbreviating $\frac{\d}{\d t}$ as $\partial_t$, we find that
\begin{align*}
\left(\partial_t-\Delta\right)G_\varepsilon
&=\varepsilon\left(\mathcal{H}^2\right)^3+2\btr{\nabla A}^2-(1+2\varepsilon)\btr{\nabla\mathcal{H}^2}^2.
\end{align*}
Recall that $\btr{\nabla A}^2\ge \frac{3}{4}\btr{\nabla\mathcal{H}^2}^2$ as proven in Proposition \ref{prop_codazziminkowski2}. Hence, for any $0<\varepsilon\le \frac{1}{8}$ we find
\[
\btr{\nabla A}^2-\left(\frac{1}{2}+\varepsilon\right)\btr{\nabla\mathcal{H}^2}^2\ge \frac{1}{8}\btr{\nabla\mathcal{H}^2}^2,
\]
and thus we have
\begin{align}\label{eq_thmgradientestimate4}
\left(\partial_t-\Delta\right)G_\varepsilon\ge \frac{1}{4}\btr{\nabla \mathcal{H}^2}^2
\end{align}
for any $0<\varepsilon\le\frac{1}{8}$.
Also recall that by Proposition \ref{prop_nullmeancurvature1}, we have
\begin{align}\label{eq_thmgradientestimate5}
\left(\partial_t-\Delta\right)\mathcal{H}^2\ge 0,
\end{align}
and since $\mathcal{H}^2=2\operatorname{R}$ and we are evolving the surface under Ricci flow, we have that
\begin{align}\label{eq_thmgradientestimate6}
\left(\partial_t-\Delta\right)\btr{\nabla \mathcal{H}^2}^2&\le C\mathcal{H}^2\btr{\nabla \mathcal{H}^2}^2-2\btr{\nabla^2\mathcal{H}^2}^2,
\end{align}
cf. \cite[Chapter 3]{chowluni}. We now look at a new maximum of $\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}$ attained at a point and time $(p,t)$. In particular, we have that
\[
0=\nabla\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}=2\frac{\nabla^i\mathcal{H}^2\nabla\nabla_i\mathcal{H}^2}{G_\varepsilon\mathcal{H}^2}-\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\left(\frac{\nabla G_\varepsilon}{G_\varepsilon}+\frac{\nabla \mathcal{H}^2}{\mathcal{H}^2}\right),
\]
so at $(p,t)$ it holds that
\begin{align}\label{eq_thmgradientestimate7}
4\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\spann{\frac{\nabla G_\varepsilon}{G_\varepsilon},\frac{\nabla\mathcal{H}^2}{\mathcal{H}^2}}\le \frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\btr{\frac{\nabla G_\varepsilon}{G_\varepsilon}+\frac{\nabla \mathcal{H}^2}{\mathcal{H}^2}}^2\le 4\frac{\btr{\nabla^2\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}.
\end{align}
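For the reader's convenience, we spell out how \eqref{eq_thmgradientestimate7} arises. Since the gradient of the quotient vanishes at $(p,t)$,
\[
\frac{\nabla G_\varepsilon}{G_\varepsilon}+\frac{\nabla \mathcal{H}^2}{\mathcal{H}^2}=\frac{2\nabla^i\mathcal{H}^2\nabla\nabla_i\mathcal{H}^2}{\btr{\nabla\mathcal{H}^2}^2},
\]
so the first inequality in \eqref{eq_thmgradientestimate7} is the elementary estimate $4\spann{u,v}\le\btr{u+v}^2$, while the second follows from the Cauchy--Schwarz inequality $\btr{\nabla^i\mathcal{H}^2\nabla\nabla_i\mathcal{H}^2}\le\btr{\nabla^2\mathcal{H}^2}\btr{\nabla\mathcal{H}^2}$.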
Moreover, direct computation yields
\begin{align*}
\left(\partial_t-\Delta\right)\left(\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\right)
=&\,\frac{(\partial_t-\Delta)\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}-\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\left(\frac{(\partial_t-\Delta)G_\varepsilon}{G_\varepsilon}+\frac{(\partial_t-\Delta)\mathcal{H}^2}{\mathcal{H}^2}\right)\\
&-\frac{2}{G_\varepsilon\mathcal{H}^2}\spann{\nabla\left(\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\right),\nabla(G_\varepsilon\mathcal{H}^2)}-\frac{2\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\spann{\frac{\nabla G_\varepsilon}{G_\varepsilon},\frac{\nabla\mathcal{H}^2}{\mathcal{H}^2}}.
\end{align*}
Hence, using \eqref{eq_thmgradientestimate4}, \eqref{eq_thmgradientestimate5}, \eqref{eq_thmgradientestimate6}, and \eqref{eq_thmgradientestimate7}, we see that at $(p,t)$
\begin{align*}
0&\le \frac{1}{G_\varepsilon\mathcal{H}^2}\left(C(n)\mathcal{H}^2\btr{\nabla\mathcal{H}^2}^2-2\btr{\nabla^2\mathcal{H}^2}^2\right)-\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\left(\frac{1}{4}\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon}\right)+\frac{2\btr{\nabla^2\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\\
&=\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\left(C(n)\mathcal{H}^2-\frac{1}{4}\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon}\right),
\end{align*}
so after rearranging, we have that
\begin{align*}
\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\le 4C(n)
\end{align*}
at any new maximum. So we find that
\begin{align*}
\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}\le \max\left(\max\limits_{\Sigma_0}\frac{\btr{\nabla\mathcal{H}^2}^2}{G_\varepsilon\mathcal{H}^2}, 4C(n)\right)=:\widetilde{C},
\end{align*}
and in particular
\begin{align*}
\btr{\nabla\mathcal{H}^2}^2\le\widetilde{C}G_\varepsilon\mathcal{H}^2\le \widetilde{C}\mathcal{H}^2\left(\varepsilon\left(\mathcal{H}^2\right)^2+2C_\varepsilon\right).
\end{align*}
After taking a square root, the proof now follows for any $\eta>0$ using Young's inequality again and choosing $0<\varepsilon<\frac{1}{8}$ sufficiently small.
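In more detail, with $\widetilde{C}$ and $C_\varepsilon$ as above and using $\sqrt{a+b}\le\sqrt{a}+\sqrt{b}$ for $a,b\ge0$, taking the square root yields
\[
\btr{\nabla\mathcal{H}^2}\le \sqrt{\widetilde{C}\varepsilon}\left(\mathcal{H}^2\right)^{\frac{3}{2}}+\sqrt{2\widetilde{C}C_\varepsilon}\left(\mathcal{H}^2\right)^{\frac{1}{2}},
\]
and Young's inequality gives $\left(\mathcal{H}^2\right)^{\frac{1}{2}}\le\delta\left(\mathcal{H}^2\right)^{\frac{3}{2}}+C_\delta$ for any $\delta>0$, so that first choosing $\varepsilon$ and then $\delta$ sufficiently small yields the claimed estimate via Remark \ref{bem_gradientestimate}.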
\end{proof}
We briefly recall some properties of $n$-dimensional Ricci flow renormalized by volume (cf. \cite[Chapter 3.6]{chowluni}), i.e.,
\begin{align}\label{eq_renormricciflow}
\frac{\d} {\d\widetilde{t}}\widetilde{\gamma}(\,\widetilde{t}\,)&=-\widetilde{\Ric}(\,\widetilde{t}\,)+\frac{2}{n}\widetilde{r}\widetilde{\gamma}(\,\widetilde{t}\,),
\end{align}
where $\widetilde{r}=\fint\widetilde{\operatorname{R}}$, such that along any solution we have that $\operatorname{Vol}(\widetilde{\gamma}(\,\widetilde{t}\,))=\operatorname{Vol}(\widetilde{\gamma}(0))=\operatorname{const}$. Given a solution of Ricci flow $\gamma(t)$, $t\in[0,T)$, the metrics $\widetilde{\gamma}(\,\widetilde{t}\,)\definedas c(t)\gamma(t)$, with
\[
c(t)\definedas\exp\left(\frac{2}{n}\int\limits_0^tr(\tau)\d\tau\right),\text{ }\widetilde{t}(t)\definedas\int\limits_0^tc(\tau)\d\tau,
\]
satisfy \eqref{eq_renormricciflow} with initial condition $\widetilde{\gamma}(0)=\gamma(0)$, so we can always renormalize a given solution of Ricci flow. Moreover, we have the following transformation laws for evolution equations by Hamilton:
\begin{lem}[{Hamilton, see \cite[Lemma 3.26]{chowluni}}]\label{lemma_renormalizedflow}
If an expression $X=X(\gamma)$ formed algebraically from the metric and the Riemann curvature tensor has degree $k$, i.e.,\linebreak $X(c\gamma)=c^kX(\gamma)$, and if under the Ricci flow
\[
\partial_tX=\Delta X+Y,
\]
then the degree of $Y$ is $k-1$ and the evolution under the normalized Ricci flow \linebreak
{$\frac{\partial} {\partial\widetilde{t}}\widetilde{\gamma}_{ij}=-\widetilde{\Ric}_{ij}+\frac{2}{n}\widetilde{r}\widetilde{\gamma}_{ij}$}
of $\widetilde{X}:=X(\widetilde{\gamma})$ is given by
\[
\partial_{\widetilde{t}}\widetilde{X}=\widetilde{\Delta}\widetilde{X}+\widetilde{Y}+k\frac{2}{n}\widetilde{r}\widetilde{X}.
\]
\end{lem}
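To illustrate the notion of degree in our setting: the metric itself has degree $1$, and since $\operatorname{R}(c\gamma)=c^{-1}\operatorname{R}(\gamma)$, the scalar curvature (and hence $\mathcal{H}^2=2\operatorname{R}$ by the Gauß equation \eqref{eq_gaußcurvature}) has degree $-1$. More generally,
\[
\btr{\nabla^m\mathcal{H}^2}^2(c\gamma)=c^{-(m+2)}\btr{\nabla^m\mathcal{H}^2}^2(\gamma),
\]
as the squared norm contracts $m$ copies of the inverse metric against two factors of degree $-1$; for instance, $\frac{\btr{\nabla\mathcal{H}^2}^2}{\mathcal{H}^2}$ is of degree $-2$.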
Recall that by \cite[Remark 3.27]{chowluni}, this lemma also extends to the corresponding partial differential inequalities if $Y$ is of degree $k-1$, and is further also applicable to arbitrary tensor derivatives of such expressions as used frequently throughout \cite[Chapter 3]{chowluni}.
\begin{bem}
In this section, as before, we will only look at the case when $n=2$ and $\gamma(t)$ is conformal to the round sphere for each $t$. Thus $\widetilde{\gamma}(\,\widetilde{t}\,)$ is conformally round, and by the Gauß--Bonnet Theorem
\[
\widetilde{r}(t)=\frac{8\pi}{\operatorname{Vol}(\widetilde{\gamma}(\,\widetilde{t}\,))}=\frac{8\pi}{\operatorname{Vol}(\widetilde{\gamma}(0))}
\]
is positive and remains constant along the flow.
\end{bem}
From now on, we will assume without loss of generality that
\begin{align}\label{eq_scalarcurvatureratio}
\frac{1}{2}\operatorname{R}_{max}(t)\le \operatorname{R}(x,t)
\end{align}
for all $t\in[0,T_{max}), x\in\Sigma_t$ (this is ultimately satisfied for $t$ sufficiently close to $T_{max}$ due to Corollary \ref{kor_gradientestimate}). Note that due to the relation between $\mathcal{H}^2$ and $\operatorname{R}$ via the Gauß equation \eqref{eq_gaußcurvature}, combining Proposition \ref{prop_nullmeancurvature1} with the fact that $\operatorname{R}_{max}\to\infty$ as $t\to T_{max}$, we find that $\operatorname{R}_{max}\ge (T_{max}-t)^{-1}$. In particular,
\begin{align}\label{eq_scalarcurvlowerbound}
\operatorname{R}(x,t)\ge \frac{1}{2(T_{max}-t)}.
\end{align}
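The lower bound on $\operatorname{R}_{max}$ follows from a standard ODE comparison: assuming the usual evolution equation $\partial_t\operatorname{R}=\Delta\operatorname{R}+\operatorname{R}^2$ for the scalar curvature under $2$-dimensional Ricci flow (cf. \cite[Chapter 3]{chowluni}), the maximum principle yields $\frac{\d}{\d t}\operatorname{R}_{max}\le\operatorname{R}_{max}^2$ in the sense of difference quotients, so that
\[
\frac{1}{\operatorname{R}_{max}(t)}-\frac{1}{\operatorname{R}_{max}(s)}\le s-t\quad\text{ for all }t\le s<T_{max},
\]
and letting $s\to T_{max}$, where $\operatorname{R}_{max}(s)\to\infty$, gives $\operatorname{R}_{max}(t)\ge(T_{max}-t)^{-1}$.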
In the following, we will switch freely between the framework of (renormalized) $2$-d Ricci flow and null mean curvature flow along the past-pointing standard lightcone. Recall that most importantly, bounds for $\mathcal{H}^2$ and its derivatives correspond directly to bounds on the scalar curvature and its derivatives via the Gauß equation \eqref{eq_gaußcurvature}.
In our analysis of the renormalized flow we will closely follow the outline of Hamilton's strategy presented in \cite[Chapter 3]{chowluni} for $3$-dimensional Ricci flow, and include the proofs for the sake of completeness. We start by establishing the following lemma:
\begin{lem}\label{lem_renormalizedflow}
For the renormalized flow \eqref{eq_renormricciflow}, we have that
\begin{enumerate}
\item[\emph{(i)}] $\widetilde{T}=\infty$,
\item[\emph{(ii)}] $
\lim\limits_{\widetilde{t}\to\infty}\frac{\widetilde{\mathcal{H}}^2_{min}}{\widetilde{\mathcal{H}}^2_{max}}=1
$,
\item[\emph{(iii)}] There exists $C>0$ such that $\frac{1}{C}\le \widetilde{\mathcal{H}}^2_{min}(\,\widetilde{t}\,)\le \widetilde{\mathcal{H}}^2_{max}(\,\widetilde{t}\,)\le C$ for all $\widetilde{t}$,
\item[\emph{(iv)}] $\operatorname{diam}(\widetilde{\gamma}(\,\widetilde{t}\,))\le C$,
\item [\emph{(v)}] $\newbtr{\accentset{\circ}{\widetilde{A}}}^2\le Ce^{-\delta \widetilde{t}}$,
\item [\emph{(vi)}] $\btr{\widetilde{\nabla}\widetilde{\mathcal{H}}^2}^2\le Ce^{-\delta \widetilde{t}}$,
\item [\emph{(vii)}] $\widetilde{\mathcal{H}}^2_{max}-\widetilde{\mathcal{H}}^2_{min}\le Ce^{-\delta \widetilde{t}}$.
\end{enumerate}
\end{lem}
\begin{proof}
In the following, $C,\widetilde{C}$ will always denote positive constants independent of $t$ that may vary from line to line.
\begin{enumerate}
\item[(i)] By substitution rule, we find that
\[
\int\limits_0^{\widetilde{t}(t_0)}\widetilde{r}(\widetilde{\tau})\d\widetilde{\tau}=\int\limits_0^{t_0}r(\tau)\d\tau.
\]
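Indeed, since $\widetilde{\gamma}(\,\widetilde{t}(\tau))=c(\tau)\gamma(\tau)$ with $c$ spatially constant, the scalar curvature scales as $\widetilde{\operatorname{R}}=c^{-1}\operatorname{R}$ and hence $\widetilde{r}(\,\widetilde{t}(\tau))=c(\tau)^{-1}r(\tau)$, while
\[
\d\widetilde{\tau}=\widetilde{t}\,'(\tau)\d\tau=c(\tau)\d\tau,
\]
so the two integrands agree, $\widetilde{r}\,\d\widetilde{\tau}=r\,\d\tau$.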
Letting $t_0\to T_{max}$, the right-hand side diverges by \eqref{eq_scalarcurvlowerbound}; since $\widetilde{r}$ is a positive constant, this forces $\widetilde{t}(t_0)\to\infty$, which proves (i).
\item[(ii)] Follows immediately from Corollary \ref{kor_gradientestimate}.
\item[(iii)] By the Bishop--Gromov volume comparison, we have that
\[
\operatorname{Vol}(\widetilde{\gamma}(0))=\operatorname{Vol}(\widetilde{\gamma}(\,\widetilde{t}\,))\le C\operatorname{diam}(\widetilde{\gamma}(\,\widetilde{t}\,))^2,
\]
and recall that by the Bonnet--Myers Theorem
\begin{align}\label{eq_renormalizedflow_bonnetmyers}
\operatorname{diam}(\widetilde{\gamma}(\,\widetilde{t}\,))\le C\left(\widetilde{\mathcal{H}}_{max}^2\right)^{-\frac{1}{2}}.
\end{align}
Thus, $\widetilde{\mathcal{H}}_{max}^2$ is uniformly bounded from above. Now note that $\widetilde{\Sigma}_{\widetilde{t}}$ is a topological sphere, in particular simply connected. Hence, Klingenberg's injectivity radius estimate yields that
\[
\operatorname{inj}(\widetilde{\gamma}(\,\widetilde{t}\,))\ge C\left(\widetilde{\mathcal{H}}_{max}^2\right)^{-\frac{1}{2}},
\]
and therefore
\[
\operatorname{Vol}(\widetilde{\gamma}(0))=\operatorname{Vol}(\widetilde{\gamma}(\,\widetilde{t}\,))\ge C\left(\widetilde{\mathcal{H}}_{max}^2\right)^{-1},
\]
so $\widetilde{\mathcal{H}}_{max}^2$ is also uniformly bounded from below. Since the inequality \eqref{eq_scalarcurvatureratio} is preserved under rescaling, we have that the scalar curvature is uniformly bounded and we can therefore pick some constant $C>0$ such that (iii) is satisfied.
\item[(iv)] Follows directly from (iii) via \eqref{eq_renormalizedflow_bonnetmyers}.
\end{enumerate}
We prove (v) and (vi) simultaneously. We define $\Psi\definedas \frac{\btr{\nabla\mathcal{H}^2}^2}{\mathcal{H}^2}+K\newbtr{\accentset{\circ}{A}}^2$ for some positive constant $K$ to be determined later. Recall that along the unnormalized flow, we have that
\[
\left(\partial_t-\Delta\right)\btr{\nabla \mathcal{H}^2}^2\le C\mathcal{H}^2\btr{\nabla \mathcal{H}^2}^2-2\btr{\nabla^2\mathcal{H}^2}^2
\]
for some fixed constant $C$, and we find that
\[
\left(\partial_t-\Delta\right)\newbtr{\accentset{\circ}{A}}^2\le -\frac{1}{2}\btr{\nabla\mathcal{H}^2}^2
\]
by direct computation via Proposition \ref{prop_nullmeancurvature1} and $\btr{\nabla A}^2\ge \frac{3}{4}\btr{\nabla\mathcal{H}^2}^2$ proven in Proposition \ref{prop_codazziminkowski2}. Thus $\Psi$ is of degree $-2$ and satisfies the evolution equation
\begin{align*}
\partial_t\Psi&\le \frac{\Delta\btr{\nabla\mathcal{H}^2}^2}{\mathcal{H}^2}-\frac{\btr{\nabla\mathcal{H}^2}^2}{\left(\mathcal{H}^2\right)^2}\Delta\mathcal{H}^2+K\Delta \newbtr{\accentset{\circ}{A}}^2-2\frac{\btr{\nabla^2\mathcal{H}^2}^2}{\mathcal{H}^2}+\left(C-\frac{1}{2}-\frac{K}{2}\right)\btr{\nabla\mathcal{H}^2}^2\\
&=\Delta\Psi+2\frac{\spann{\nabla\btr{\nabla\mathcal{H}^2}^2,\nabla\mathcal{H}^2}}{\left(\mathcal{H}^2\right)^2}-2\frac{\btr{\nabla^2\mathcal{H}^2}^2}{\mathcal{H}^2}-2\frac{\btr{\nabla\mathcal{H}^2}^4}{\left(\mathcal{H}^2\right)^3}+\left(C-\frac{1}{2}-\frac{K}{2}\right)\btr{\nabla\mathcal{H}^2}^2.
\end{align*}
Note that
\begin{align*}
\frac{\spann{\nabla\btr{\nabla\mathcal{H}^2}^2,\nabla\mathcal{H}^2}}{\left(\mathcal{H}^2\right)^2}
=2\frac{\nabla_k\nabla_i\mathcal{H}^2\nabla^i\mathcal{H}^2\nabla^k\mathcal{H}^2}{\left(\mathcal{H}^2\right)^2}\le 2\frac{\btr{\nabla^2\mathcal{H}^2}\btr{\nabla\mathcal{H}^2}^2}{\left(\mathcal{H}^2\right)^2}\le \frac{\btr{\nabla^2\mathcal{H}^2}^2}{\mathcal{H}^2}+\frac{\btr{\nabla\mathcal{H}^2}^4}{\left(\mathcal{H}^2\right)^3}.
\end{align*}
We now choose $K\definedas 2C>0$, so we can conclude that $(\partial_t-\Delta)\Psi\le 0$. In particular, $\widetilde{\Psi}$ satisfies
\[
(\partial_{\widetilde{t}}-\widetilde{\Delta})\widetilde{\Psi}\le -\widetilde{C}\widetilde{\Psi}.
\]
By the maximum principle, we can now conclude that
\[
\frac{\btr{\widetilde{\nabla}\widetilde{\mathcal{H}}^2}^2}{\widetilde{\mathcal{H}}^2}+K\newbtr{\accentset{\circ}{\widetilde{A}}}^2\le Ce^{-\delta\widetilde{t}},
\]
so (v) and (vi) follow since $\widetilde{\mathcal{H}}^2$ is uniformly bounded. Lastly, (vii) follows from (iv) and (vi).
\end{proof}
In particular $\btr{\widetilde{\operatorname{R}}-\widetilde{r}}\le Ce^{-\widetilde{\delta}\,\widetilde{t}}$, so we know that the evolution speed of the renormalized flow \eqref{eq_renormricciflow} is integrable. We thus acquire uniform bounds and $C^0$ convergence of the metric due to a lemma by Hamilton (cf. \cite[Lemma 6.10]{chowluni}):
\begin{kor}\label{kor_uniformbound}
Let $\gamma(t)$ be a solution of $2d$-Ricci flow with $\operatorname{R}>0$. Then the renormalized flow \eqref{eq_renormricciflow} exists for all time, and there exists a constant $C>0$ such that
\[
\frac{1}{C}\widetilde{\gamma}(0)\le \widetilde{\gamma}(\widetilde{t})\le C\widetilde{\gamma}(0),
\]
and $\widetilde{\gamma}(\widetilde{t})$ converges uniformly to a limiting metric $\widetilde{\gamma}(\infty)$ on compact sets as $\widetilde{t}\to\infty$.
\end{kor}
\begin{bem}\label{bem_uniformbound}
Since the renormalized metrics are also conformally round, i.e.,\linebreak $\widetilde{\gamma}(\,\widetilde{t}\,)=\widetilde{\omega}^2(\,\widetilde{t}\,)\d\Omega^2$, Corollary \ref{kor_uniformbound} also yields a uniform bound on the conformal factors $\widetilde{\omega}(\,\widetilde{t}\,)$ only depending on $\widetilde\omega(0)$.
\end{bem}
To complete the proof of Theorem \ref{thm_mainthm}, it remains to show that $\widetilde{\gamma}(\infty)$ is in fact smooth and that the renormalized flow converges in $C^k$ for any $k$. In particular, due to Lemma \ref{lem_renormalizedflow} (vii), $\widetilde{\gamma}(\infty)$ is then a metric of constant scalar curvature.
We thus require bounds for the derivatives of the renormalized metrics, and by a standard argument it suffices to bound the derivatives of the Riemann tensor. However, for $n=2$, the Riemann tensor and its derivatives are fully determined by the scalar curvature and its derivatives. By the Gauß equation \eqref{eq_gaußcurvature}, it thus suffices to find appropriate bounds for $\widetilde{\mathcal{H}}^2$ and its derivatives.
\begin{lem}\label{lem_renormflowhigherderivatives}
For all $k\in\N$, there exist $C_k, \delta_k>0$, such that
\[
\btr{\widetilde{\nabla}^k\widetilde{\mathcal{H}}^2}^2\le C_ke^{-\delta_k\widetilde{t}}.
\]
\end{lem}
\begin{proof}
Since $n=2$, there exists a fixed constant $C$, such that $\btr{\nabla^k\mathcal{H}^2}^2=C \btr{\nabla^k\Rm}^2$, and thus the evolution of $\btr{\nabla^k\mathcal{H}^2}^2$ along the unnormalized flow can be estimated by
\begin{align}\label{eq_lem_higherderivatives1}
\partial_t\btr{\nabla^k\mathcal{H}^2}^2\le \Delta \btr{\nabla^k\mathcal{H}^2}^2-2\btr{\nabla^{k+1}\mathcal{H}^2}^2+C(k)\sum_{l=0}^k\btr{\nabla^l\mathcal{H}^2}\btr{\nabla^{k-l}\mathcal{H}^2}\btr{\nabla^k\mathcal{H}^2},
\end{align}
where $C(k)$ denotes a constant only depending on $k$, cf. \cite[Chapter 3]{chowluni}.\newline
We will prove the statement by strong induction, where in the following $C,C_k,C_{k+1}$ will be constants only depending on $k$ which may vary from line to line.\newline
The statement is true for $k=1$ as proven in Lemma \ref{lem_renormalizedflow} (vi).\newline
We now assume that the statement is true for all $1\le l\le k$, and proceed from $k$ to $k+1$. We define $f\definedas\btr{\nabla^{k+1}\mathcal{H}^2}^2+K\mathcal{H}^2\btr{\nabla^{k}\mathcal{H}^2}^2$, where $K$ is a positive constant to be determined later. Then $f$ is of degree $-k-3$ and according to \eqref{eq_lem_higherderivatives1} its evolution under Ricci flow is given by
\begin{align*}
\partial_t f
\le &\,\Delta \btr{\nabla^{k+1}\mathcal{H}^2}^2-2\btr{\nabla^{k+2}\mathcal{H}^2}^2+C_{k+1}\sum\limits_{l=0}^{k+1}\btr{\nabla^{l}\mathcal{H}^2}\btr{\nabla^{k+1-l}\mathcal{H}^2}\btr{\nabla^{k+1}\mathcal{H}^2}\\
&+K\btr{\nabla^{k}\mathcal{H}^2}^2\Delta\mathcal{H}^2+\frac{K}{2}\left(\mathcal{H}^2\right)^2\btr{\nabla^{k}\mathcal{H}^2}^2\\
&+K\mathcal{H}^2\Delta \btr{\nabla^{k}\mathcal{H}^2}^2-2K\mathcal{H}^2\btr{\nabla^{k+1}\mathcal{H}^2}^2+K\mathcal{H}^2C_k\sum\limits_{l=0}^k\btr{\nabla^{l}\mathcal{H}^2}\btr{\nabla^{k-l}\mathcal{H}^2}\btr{\nabla^{k}\mathcal{H}^2}\\
=&\,\Delta f-2K\spann{\nabla \mathcal{H}^2,\nabla\btr{\nabla^k\mathcal{H}^2}^2}-2K\mathcal{H}^2\btr{\nabla^{k+1}\mathcal{H}^2}^2\\
&+C_{k+1}\sum\limits_{l=0}^{k+1}\btr{\nabla^{l}\mathcal{H}^2}\btr{\nabla^{k+1-l}\mathcal{H}^2}\btr{\nabla^{k+1}\mathcal{H}^2}+K\mathcal{H}^2C_k\sum\limits_{l=0}^k\btr{\nabla^{l}\mathcal{H}^2}\btr{\nabla^{k-l}\mathcal{H}^2}\btr{\nabla^{k}\mathcal{H}^2}\\
\le\,& \Delta f+C_k(K,\mathcal{H}^2,\nabla^{1\le l \le k}\mathcal{H}^2)+(C_{k+1}-2K)\mathcal{H}^2\btr{\nabla^{k+1}\mathcal{H}^2}^2,
\end{align*}
where we have used Young's inequality in the last line and collected all remaining terms in $C_k(K,\mathcal{H}^2,\nabla^{1\le l \le k}\mathcal{H}^2)$.
We now choose $K\definedas C_{k+1}$, and thus we find that
\[
(\partial_t-\Delta)f\le C_k(\mathcal{H}^2,\nabla^{1\le l \le k}\mathcal{H}^2),
\]
where $C_k(\mathcal{H}^2,\nabla^{1\le l \le k}\mathcal{H}^2)$ denotes a sum of products of derivatives with at least order $1$ and at most order $k$ such that the factors only depend on $k$ and possibly $\mathcal{H}^2$, and $C_k(\mathcal{H}^2,\nabla^{1\le l \le k}\mathcal{H}^2)$ is of degree $-k-4$. Hence, the evolution of $\widetilde{f}$ along the renormalized flow is given by
\[
(\partial_{\widetilde{t}}-\widetilde{\Delta})\widetilde{f}\le C_k(\widetilde{\mathcal{H}}^2,\widetilde{\nabla}^{1\le l\le k}\widetilde{\mathcal{H}}^2)-C\widetilde{f}\le \widetilde{C}e^{-\widetilde{\delta}\,\widetilde{t}}-C\widetilde{f}
\]
for some $\widetilde{C},\widetilde{\delta}>0$ by induction, as $\widetilde{\mathcal{H}}^2$ is uniformly bounded by Lemma \ref{lem_renormalizedflow}. Now choosing $\delta<\min(\widetilde{\delta},C)$, we find that
\[
(\partial_{\widetilde{t}}-\widetilde{\Delta})\left(e^{\delta\widetilde{t}}\widetilde{f}-\widetilde{C}\widetilde{t}\right)\le0.
\]
So by the maximum principle, there exists $C_0>0$ such that
\[
e^{\delta\widetilde{t}}\widetilde{f}-\widetilde{C}\widetilde{t}\le C_0\Leftrightarrow\widetilde{f}\le e^{-\delta\widetilde{t}}(C_0+\widetilde{C}\,\widetilde{t}).
\]
Since exponential decay wins over linear growth, there exists an appropriate constant $C_k>0$ for any choice $0<\delta_k<\delta$ such that
\[
\widetilde{f}\le C_ke^{-\delta_k\widetilde{t}}.
\]
This concludes the proof.
\end{proof}
From this, we can conclude the uniform convergence in $C^k$ for any $k\in\N$, and Theorem \ref{thm_mainthm} is proven.
\section{Comments}\label{sec_discussion}
We close with some comments on the higher dimensional case. Since the general structure of the standard lightcone derived in \Cref{sec_nullgeom} extends directly to higher dimensions (up to some possibly dimension-dependent constants), the geometric intuition developed in \Cref{sec_nullgeom} also holds for the standard lightcone in the $(n+1)$-dimensional Minkowski spacetime, $n\ge 3$. In particular, the Gauß equation yields
\begin{align}\label{eq_higherdim1}
R=\frac{n-1}{n}\mathcal{H}^2.
\end{align}
From this, we can similarly establish that null mean curvature flow is proportional to the Yamabe flow \cite{hamilton2} for the conformal class of the round metric on $\Sbb^{n-1}$ in all dimensions $n-1\ge 2$. More precisely, the metrics evolve under renormalized null mean curvature flow as
\begin{align}
\frac{\d}{\d\widetilde{t}}\widetilde{g}(\widetilde{t})=-\frac{1}{n-1}\left(\widetilde{R}-\fint\widetilde{R}\right)\widetilde{g}(\widetilde{t}).
\end{align}
Since not all metrics on $\Sbb^{n-1}$ are necessarily conformally round in higher dimensions $n-1\ge 3$, not all of them can be embedded isometrically into the standard Minkowski lightcone.
Similar to the $2$-dimensional case, renormalized Yamabe flow has been the subject of thorough investigation using various methods. The case of the conformal class of the round sphere was first treated separately by Chow \cite{chow2} under the additional assumption of positive Ricci curvature, which is preserved under the flow. A uniform approach for locally conformally flat metrics was later provided by Ye \cite{ye}. Schwetlick--Struwe \cite{schwetlickstruwe} performed a precise blow-up analysis and showed that singularities arising in the blow-up procedure can be ruled out by employing the positive mass theorem (cf. \cite{schoenyau}) if the initial energy is less than some uniform bound depending on the Yamabe invariant of the initial metric and the Yamabe energy of the round sphere. The general approach by Brendle \cite{brendle,brendle3} leads to a short proof of the conformally round case \cite{brendle2}. We suspect that the techniques developed in this paper could be applied to obtain a new proof of this result, possibly under restrictions similar to those of Chow \cite{chow2}. This is the subject of future work.
\nocite{*}
\section{INTRODUCTION}
Though it is typically the first proof-based course most students experience, linear algebra is also \edits{an important topic that is used to solve problems in a wide variety of fields}.
This is why, at many universities, the course is a requirement for a large number of degree programs in engineering and the natural sciences.
However, since many colleges, especially those with a technical focus, possess significantly more of these majors than those within mathematics, the majority of enrolled students often find difficulty with the abstract nature of the subject, especially when the question\edits{s} ``How does this apply to my major?'' \edits{and ``What are the real-world applications?''} can largely go unanswered.
One valuable application that can be included in a first Linear Algebra course is the method of Principal Component Analysis (PCA).
PCA is a good choice for an applied example to which linear algebra is crucial because it arises in so many different contexts, as we will demonstrate within subsequent sections.
The method arises in countless \edits{disciplines}, including \edits{but not limited to} statistics \cite{J}, electrical engineering \cite{PCAEA},
genetics \cite{RSA}, neuroscience \cite{Peyrache}, facial recognition \cite{Turk}, control theory \cite{PCAEA}, and mechanical and systems engineering \cite{PCAME}.
\edits{Perhaps the largest obstacle to including PCA within a first semester Linear Algebra course is that it is often performed using the Singular Value Decomposition (SVD), which is a topic usually reserved for either a second course in the subject, a course in computational linear algebra, or a graduate linear algebra course (see \cite{Remski,Kalman} for more information on the SVD). Of course, many universities do not offer these follow-on courses, and hence students are not exposed to PCA, even though it is such a widely-utilized tool. However, we note that PCA can be both understood and performed without knowledge of the SVD, using a topic that is usually included within a first-semester course, namely the Spectral Theorem. We will elaborate further on how this is accomplished later on. }
\edits{In addition,} though PCA is sometimes included as an applied topic within linear algebra texts, most notably \cite{Lay} and \cite{Strang}, it is
often overlooked as an important application within other sources.
Even when \edits{discussed in these texts}, it is often difficult to describe the utility or impact of the
method because examples within textbooks \edits{are typically} computed by hand, while real-world implementation is always performed using a computer.
To address these difficulties, we have created resources, both theoretical and computational, for instructional use within the classroom of a first course in linear algebra. These resources require a computer in the classroom, but will not require students to understand numerical aspects of linear algebra.
We also provide the code that generates the data sets, numerics, and images used within our examples. Instructors who are comfortable with {\it MATLAB} can alter our code to make the presentation more interactive, while those with less computational interest or experience need not do so in order to generate all of the material we present.
While we use {\it MATLAB} to perform computations, the principles and algorithms can be applied using other (perhaps open-source) software, such as {\it Octave}.
For instructors without access to {\it MATLAB}, we also provide a number of programs on the first author's website:
\begin{center} \url{http://inside.mines.edu/~pankavic/PCAcode/Octave}
\end{center}
that are similar to those in the appendix but written using {\it Octave}.
\edits{These computational tools arise from a first-semester Linear Algebra course conducted at the Colorado School of Mines,
and have had a strong impact on the interest of non-majors and, subsequently, the increase in mathematics minors at our institution. Both authors have implemented portions of this material in their classrooms and have received positive feedback from students regarding their interest in PCA applications, especially concerning its use in image compression and statistics. In addition, the inclusion of PCA has benefitted the mathematical breadth and depth of our applied majors and increased their exposure to new and interesting applications of linear algebra.}
Hence, we believe the resources included herein can be used to introduce students to a variety of real-world applications of the subject without the need for a course in scientific computing.
We also mention that the implementation of these materials is fairly robust. Instead of a Linear Algebra course, they could be included within an introductory or second course in statistics for students who have already taken linear algebra, or within a scientific computing course, assuming students possess the necessary background.
In general, this article provides a number of teaching resources and examples to engage students
in learning about this essential application of linear algebra.
\section{BACKGROUND}
We begin by recalling \edits{a crucial} theorem - one of the most important results in all of linear algebra - on which \edits{we base our discussion of PCA}. \edits{Before stating the result, we note that a stronger version of the theorem extends to a more general class of matrices over the complex numbers. }
\begin{theorem*} (Spectral Theorem \cite{Lay})
Let $n \in \mathbb{N}$ be given and $A$ be an $n \times n$ matrix of real numbers with $A^T$ denoting its transpose. \edits{Then, $A$ is symmetric (i.e., $A^T = A$) if and only if $A$ is orthogonally diagonalizable. In this case, $A$ possesses $n$ real eigenvalues, counting multiplicities.}
\end{theorem*}
This is an extremely powerful result and precisely guarantees \newedits{for such $A$} the existence of
$\lambda_k \in \mathbb{R}$ and orthonormal column vectors $\vect{v}_k \in \mathbb{R}^n$ for every $k=1,...,n$ such that
\begin{equation}
\label{ST}
A = \lambda_1 \vect{v}_1 \vect{v}_1^T + \cdots + \lambda_n \vect{v}_n \vect{v}_n^T
\end{equation}
arising from the orthogonal diagonalization.
Because the $\vect{v}_k$ vectors are orthonormal, each associated matrix $\vect{v}_k\vect{v}_k^T$ \newedits{is an orthogonal projection},
and thus $A$ can be decomposed into a sum wherein each term is an eigenvalue multiplied by a rank one matrix \newedits{generated by a unit vector}.
Hence, the eigenvalues alone determine the magnitude of each term in the sum, while the eigenvectors
determine the directions.
These eigenvectors are called \emph{principal components} or \emph{principal directions} and we will expand upon this further in the next section.
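As a brief numerical illustration of \eqref{ST} (an aside in Python/NumPy rather than the {\it MATLAB} used for the course materials; the matrix below is an arbitrary example), the decomposition into rank-one terms can be verified directly:

```python
import numpy as np

# A small symmetric matrix; by the Spectral Theorem it has real
# eigenvalues and an orthonormal basis of eigenvectors.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# np.linalg.eigh is tailored to symmetric matrices: it returns real
# eigenvalues (ascending) and orthonormal eigenvectors as columns of V.
eigenvalues, V = np.linalg.eigh(A)

# Rebuild A as the sum of rank-one terms lambda_k v_k v_k^T.
A_rebuilt = sum(lam * np.outer(v, v) for lam, v in zip(eigenvalues, V.T))

print(np.allclose(V.T @ V, np.eye(3)))  # True: columns are orthonormal
print(np.allclose(A, A_rebuilt))        # True: the rank-one sum recovers A
```

Here each `np.outer(v, v)` is the matrix $\vect{v}_k\vect{v}_k^T$, so the final check confirms the expansion in \eqref{ST} term by term.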
We begin the discussion of the specifics of PCA with an introductory example.
\subsection{Introductory Height \& Weight Problem}
Consider a study in which we want to determine whether or not the heights and weights of a group of individuals are correlated. That is, we want to know
whether the known value of a person's height seems to dictate whether they tend to be heavier or lighter, and thus influences their weight. Assume we are given data for $30$ specific people, displayed within Table~\ref{tab:1}. For this example, our data set originates from a commonly available study \cite{SOCR}. In practice and for large enough class sizes,
the data can be obtained in the classroom, directly from students. For instance, \edits{during} the class period before implementation of this example, the instructor can collect anonymous information regarding student height and weight, compile and store this data, and integrate it into the project for the following class period. \edits{As an alternative, this problem could serve as a project for a group of students within the course, wherein the group collects the data, performs PCA using the code provided in the appendix, interprets the results, and presents their findings in a brief class seminar.} For those with smaller class sizes or concerns regarding anonymity, data can be taken from our source \cite{SOCR} that is readily available on the \newedits{Internet}.
\begin{table}[h]
\begin{tabular}{|l |*{6}{c}|}
\hline
Person & 1 & 2 & 3 & 4 & 5 & 6\\
\hline
\hline
Height & 67.78 & 73.52 & 71.40 & 70.22 & 69.79 & 70.70\\
Weight & 132.99 & 176.49 & 173.03 & 162.34 & 164.30 & 143.30\\
\hline
Person & 7 & 8 & 9 & 10 & 11& 12\\
\hline
\hline
Height & 71.80& 72.01& 69.90 & 68.78 & 68.49 & 69.62 \\
Weight & 161.49 & 166.46 & 142.37 & 150.67 & 147.45 & 144.14\\
\hline
Person & 13 & 14 & 15 & 16 & 17 & 18\\
\hline
\hline
Height & 70.30 & 69.12 & 70.28 & 73.09 & 68.46 & 70.65\\
Weight & 155.61 & 142.46 & 146.09 & 175.00 & 149.50 & 162.97\\
\hline
Person & 19 & 20 & 21 & 22 & 23 & 24\\
\hline
\hline
Height & 73.23 & 69.13 & 69.83 & 70.88 & 65.48 & 70.42\\
Weight & 177.90 & 144.04 & 161.28 & 163.54 & 126.90 & 149.50\\
\hline
Person & 25 & 26 & 27 & 28 & 29 & 30\\
\hline
\hline
Height & 69.63 & 69.21 & 72.84 & 69.49 & 68.53 & 67.44\\
Weight & 161.85 & 149.72 & 172.42 & 151.55 & 138.33 & 133.89\\
\hline
\end{tabular}
\centering \caption { \footnotesize Heights (in.) and Weights (lbs.) for $30$ young adults \cite{SOCR}.}
\label{tab:1}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{PCA_Ht_Wt_1}
\vspace{-0.2in}
\caption{ \label{fig:data} \footnotesize \edits{Plot} of Height/Weight \edits{datapoints}.}
\end{figure}
\begin{figure}[t]
\begin{subfigure}[t]{0.45\textwidth}
\hspace{-0.3in}
\includegraphics[scale=0.33]{PCA_Ht_Wt_Comp1}
\vspace{-0.2in}
\end{subfigure}
\hspace{0.1in}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[scale=0.33]{PCA_Ht_Wt_Comp2}
\vspace{-0.2in}
\end{subfigure}
\caption{ \label{fig:PCintro} \footnotesize Height/Weight data projected onto the principal components (left - $v_1$; right - $v_2$). By the Spectral Theorem, the data represented in Fig.~\ref{fig:data}
is exactly the sum of the projections onto these two components.}
\end{figure}
Since the question of interest is whether the two measured variables, height and weight, seem to change together, the relevant quantity to consider is the covariance of the two characteristics within the data set.
This can be formed in the following way.
First, the data is stored in $X$, a $2 \times 30$ matrix.
\edits{Then, the entries are used to compute the mean in each row, which will be used to center \edits{or ``mean-subtract''} the data}. \edits{This latter step is essential, as many of the results concerning PCA are only valid upon centering the data at the origin.} Computing the means of our measurements (Table~\ref{tab:1}), we find
$$\mbox{\boldmath${\mu}$} = \left[\begin{array}{r} 70.06 \\ 154.25 \end{array}\right].$$
Using $x_{ij}$, the entries of the data matrix $X$, the associated $2 \times 2$ covariance matrix $S$ is constructed with entries
$$s_{ik} = \frac{1}{30-1} \sum_{j=1}^{30} (x_{ij} - \mu_i) (x_{kj} - \mu_k)$$
so that
$$S = \left[\begin{array}{rr} 3.26 & 21.72 \\ 21.72 & 188.96 \end{array}\right]. $$
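The construction of $S$ takes only a few lines. The snippet below is an illustrative Python sketch (the article's own code, given in the appendix, is written in {\it MATLAB}) and uses a small made-up $2 \times 4$ sample rather than the full table:

```python
def covariance_matrix(X):
    """Sample covariance of an m x n data matrix X (rows = variables, columns = samples)."""
    m, n = len(X), len(X[0])
    mu = [sum(row) / n for row in X]                      # mean of each variable
    S = [[sum((X[i][j] - mu[i]) * (X[k][j] - mu[k]) for j in range(n)) / (n - 1)
          for k in range(m)] for i in range(m)]
    return mu, S

# Hypothetical heights/weights for four people (illustration only).
X = [[68.0, 70.0, 72.0, 74.0],
     [140.0, 150.0, 165.0, 175.0]]
mu, S = covariance_matrix(X)
assert abs(S[0][1] - S[1][0]) < 1e-12     # S is symmetric by construction
```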
Notice that this matrix is necessarily symmetric, so using the Spectral Theorem it can be orthogonally diagonalized. Upon computing the eigenvalues and eigenvectors \newedits{of $S$}, we find $\lambda_1 = 191.46$, $\lambda_2 = 0.76$, and
$$ \vect{v}_1 = \left[\begin{array}{r} 0.11 \\ 0.99 \end{array}\right]
\qquad
\vect{v}_2 = \left[\begin{array}{r} -0.99 \\ 0.11 \end{array}\right]. $$
Here, $\vect{v}_1$ and $\vect{v}_2$ are the \emph{principal components} of the \edits{covariance matrix $S$ generated by the data matrix $X$,}
as previously described.
Thus, we see from (\ref{ST}) that
$$S = \lambda_1 \vect{v}_1 \vect{v}_1^T + \lambda_2 \vect{v}_2 \vect{v}_2^T$$
and because the difference in eigenvalues is so large, it appears
that the first term is responsible for most of the information encapsulated
within $S$.
Regardless, we can re-express the given data in the new orthonormal basis generated by $\vect{v}_1$ and $\vect{v}_2$
by computing the coordinates $P^T X$ where
$$P = \left[\begin{array}{rr} 0.11 & -0.99 \\ 0.99 & 0.11 \end{array}\right]$$
is the orthogonal matrix whose columns are $\vect{v}_1$ and $\vect{v}_2$.
In fact, we could left multiply the data matrix by each component separately, namely $\vect{v}_1^T X$
and $\vect{v}_2^T X$, to project the data onto each principal direction (Fig.~\ref{fig:PCintro}).
Hence, the data can be separated into projections along $\vect{v}_1$ and $\vect{v}_2$, respectively.
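As an illustration (Python rather than the appendix's {\it MATLAB}, and on just the first two data columns), the projections $\vect{v}_1^T X$ and $\vect{v}_2^T X$ amount to one dot product per column. Because $\vect{v}_1$ and $\vect{v}_2$ are rounded to two decimals here, recombining the projections recovers the columns only approximately.

```python
def project(v, X):
    """Coordinates v^T X of each column of X along the vector v."""
    return [sum(v[i] * X[i][j] for i in range(len(v))) for j in range(len(X[0]))]

v1 = (0.11, 0.99)                            # rounded principal components
v2 = (-0.99, 0.11)
X = [[67.78, 73.52],                         # first two columns of the data matrix
     [132.99, 176.49]]
y1 = project(v1, X)                          # coordinates along v1
y2 = project(v2, X)                          # coordinates along v2
print(y1, y2)
```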
\edits{We see from looking at the scales} in Figure~\ref{fig:PCintro} that the heights and
weights along $\vect{v}_2$ are significantly less than those along $\vect{v}_1$, which tells us that the majority of the information contained within $X$ lies along $\vect{v}_1$.
Computing the slope of the line in the direction of $\vect{v}_1$ and choosing a point through which
it passes, we can represent it by
$$y - 154.25 = 9 (x - 70.06),$$ where $x$ represents the height of a given individual and $y$ is
their corresponding weight.
Hence, we see that height and weight appear to be strongly correlated,
and PCA has determined the direction with optimal correlation between the variables.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{HtWt_biplot}
\vspace{-0.2in}
\caption{ \footnotesize \label{fig:HtWt_biplot} Biplot of Height/Weight data with $2$ principal components. \edits{The blue Height and Weight vectors are displayed as linear combinations of the principal components.} Note that \edits{these principal components} effectively rotate the \edits{height and weight} data in the plane.}
\end{figure}
The principal component analysis for this example took a small set of data and identified
a new orthonormal basis in which to re-express it.
In two dimensions the data are effectively rotated to lie along the line of best fit (Fig.~\ref{fig:HtWt_biplot}), with the second principal
direction merely representing the associated unit orthogonal complement of the first.
This mirrors the general aim of PCA: to obtain a new orthonormal basis that organizes the data optimally, in the sense that the variance contained within the vectors is maximized along \edits{successive} principal component(s).
\subsection{Summary of PCA}
\label{sec:steps}
In short, PCA can be performed to compute an optimal\newedits{, ordered} orthonormal basis of a given set of vectors, or data set, in the following steps.
\begin{enumerate}
\item Gather $n$ samples of $m$-dimensional data, i.e. vectors $\vect{d}_1, ..., \vect{d}_n \in \mathbb{R}^m$ stored in the
\edits{$m \times n$} matrix $X$ with columns $\vect{d}_1, ..., \vect{d}_n$, so that $x_{ij}$ represents
the $i^{th}$ entry of the $j^{th}$ sample vector, and compute
the mean vector (in $\mathbb{R}^m$)
$$ \mbox{\boldmath${\mu}$} = \frac{1}{n}\sum_{k=1}^n \vect{d}_k,$$
\item Build the corresponding mean-centered data matrix $B$ with columns given by $\vect{d}_j - \mbox{\boldmath${\mu}$}$ so that the entries are
$$b_{ij} = x_{ij} - \mu_i$$
for every $i=1,...,m$ and $j=1,...,n$.
\item Use $B$ to compute the symmetric, $m \times m$ covariance matrix
$$ S = \frac{1}{n-1}B B^T.$$
\item Find the eigenvalues $\lambda_1,..., \lambda_m$ of $S$ (arranged in decreasing order including multiplicity) and an orthonormal set of \edits{corresponding} eigenvectors
$\vect{v}_1, ..., \vect{v}_m$. These create a new basis for $\mathbb{R}^m$ in which the data can be expressed.
\item Finally, the data is represented in terms of the new basis vectors $\vect{v}_1, ..., \vect{v}_m$ using the coordinates $\vect{y}_1 = \vect{v}_1^T X, ..., \vect{y}_m = \vect{v}_m^T X$. This can also be represented as the matrix $Y = P^TX$ where $P$ is the matrix with columns $\vect{v}_1, ..., \vect{v}_m$. Should we wish to convert the data back to the original basis, we merely utilize the orthogonality of $P$ and compute $PY$ to find $X$.
\end{enumerate}
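The five steps above can be sketched end to end. The snippet below is an illustrative Python stand-in for the {\it MATLAB} code in the appendix; in place of a full eigendecomposition it approximates only the leading eigenpair of $S$ by power iteration, which suffices to recover the first principal component and its coordinates.

```python
import math
import random

def pca_leading_component(X, iters=500, seed=0):
    """Steps 1-5 for the leading principal component of an m x n data matrix X.

    The leading eigenpair of the covariance matrix is approximated by power
    iteration (an illustrative stand-in for a full eigendecomposition)."""
    m, n = len(X), len(X[0])
    mu = [sum(row) / n for row in X]                                   # step 1: means
    B = [[X[i][j] - mu[i] for j in range(n)] for i in range(m)]        # step 2: center
    S = [[sum(B[i][j] * B[k][j] for j in range(n)) / (n - 1)
          for k in range(m)] for i in range(m)]                        # step 3: covariance
    rng = random.Random(seed)
    v = [rng.random() for _ in range(m)]
    for _ in range(iters):                                             # step 4: power iteration
        w = [sum(S[i][k] * v[k] for k in range(m)) for i in range(m)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(S[i][k] * v[k] for k in range(m)) for i in range(m))
    y = [sum(v[i] * X[i][j] for i in range(m)) for j in range(n)]      # step 5: v1^T X
    return lam, v, y

# Perfectly correlated toy data: the leading direction is (1, 2)/sqrt(5).
lam, v, y = pca_leading_component([[1.0, 2.0, 3.0, 4.0],
                                   [2.0, 4.0, 6.0, 8.0]])
```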
In the final sections, we will extend our introductory example while presenting additional applications in which PCA appears prominently \edits{and can} be implemented within a classroom environment. \edits{For each of the subsequent examples, an instructor could integrate the content into handouts or group worksheets, assign group projects with brief presentations during class, or merely use the given materials to present the information in an interactive lecture format.}
\section{Application - Data Analysis}
\label{data}
In the previous section, we developed a method for principal component analysis which determined \edits{a} basis with maximal variance. Notice that the first component really encapsulated the majority of the information
embedded within the data.
Since the eigenvalues can be ordered, we might also
be able to truncate the sum in (\ref{ST}) to reduce the amount of stored data.
For instance, in our previous example, we might only keep the first principal component since
the data can be mostly explained just by knowing this characteristic, rather than every height and weight.
\edits{In this case, each data point would then be represented by its projection onto the first principal component.}
Upon performing the final step, we might also interpret the results: are a small number of the eigenvalues $\lambda_k$ \edits{much less (perhaps by an order of magnitude)} than the others? If so, this indicates a reduction in the dimension of the data is possible without losing much information, \edits{while}
if this does not occur then the dimension of the data may not be easily reduced \edits{in this way}.
Suppose that in addition to computing the components, we were to truncate
the \edits{new} basis matrix, $P$, so that we keep only the first $r$ columns with $r < m$. We would thus have a matrix
$\tilde{P} \in \mathbb{R}^{m\times r}$\edits{, and this would give rise to the $r \times n$ matrix $\tilde{Y} = \tilde{P}^T X$ containing
a truncation of the data represented by the first $r$ principal components.
From this we could also create the $m \times n$ matrix $\tilde{X} = \tilde{P}\tilde{P}^T X$, which represents the projection of $X$ onto the first $r$ principal directions. This reduced representation of the data would then possess less information than $X$, but retain the most information when compared to any other matrix of the same rank.}
In fact, an error estimate is obtained from the Spectral Theorem - namely, the amount of information retained is given
by the {\em spectral ratio} \edits{of the associated covariance matrix}
\begin{equation}
\label{sigma}
\sigma^2 \newedits{:=} \frac{ \sum_{k=1}^r \lambda_k}{ \sum_{k=1}^m \lambda_k}.
\end{equation}
Thus, in our first example, we can keep only the first component of each data point (a $1 \times 30$ matrix)
rather than the full data set ($2 \times 30$ matrix) and still retain $99\% $ of the information contained
within because
$$ \sigma^2 = \frac{191.46}{191.46 + 0.76} > 0.99.$$
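The spectral ratio in (\ref{sigma}) is a one-line computation, sketched here in illustrative Python (the article's own code is in {\it MATLAB}):

```python
def spectral_ratio(eigenvalues, r):
    """Fraction of total variance retained by the r largest eigenvalues."""
    lams = sorted(eigenvalues, reverse=True)
    return sum(lams[:r]) / sum(lams)

# Height/weight example: one component retains over 99% of the variance.
assert spectral_ratio([191.46, 0.76], 1) > 0.99
```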
In situations where the dimension of the input vector is large, but the components of the vectors are highly correlated, it is beneficial to reduce the dimension of the data matrix using PCA. This has three effects - it orthogonalizes the \edits{basis vectors} (so that they are uncorrelated), orders the resulting orthogonal components so that those with the largest \edits{variance appear} first, and eliminates dimensions that contribute the least to the variation in the data set.
\begin{figure}[t]
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[scale=0.3]{Iris_biplot1}
\vspace{-0.1in}
\caption{\footnotesize Biplot of Iris data. }
\label{fig:BP2D}
\end{subfigure}
\begin{subfigure}[t]{\textwidth}
\centering
\includegraphics[scale=0.3]{Iris_biplot2}
\vspace{-0.1in}
\caption{\footnotesize Enlarged portion of (a). Notice that Petal Width and Petal Length point in nearly identical directions.}
\label{fig:BP2DZoom}
\end{subfigure}
\caption{Biplots of Fisher's Iris data projected onto $r = 2$ principal components with $\sigma^2 = 97.77\%$}
\label{Biplot}
\end{figure}
We now extend our introductory example to larger data sets with many other characteristics.
For our first example, we'll utilize the built-in {\it MATLAB} data set \texttt{fisheriris.mat}.
This famous collection of data arises from Fisher's 1936 paper \cite{Fisher} describing $50$ different samples of $4$ characteristics from each of three species of Iris. Hence, the data set contains $150$ points and $4$ variables: \textbf{Sepal length}, \textbf{Sepal width}, \textbf{Petal length}, and \textbf{Petal width}.
With the $4 \times 150$ data matrix loaded, we perform the steps outlined within Section \ref{sec:steps} and compute the first two \edits{principal} components, i.e. those corresponding to the largest eigenvalues $\lambda_1 = 4.23$ and $\lambda_2 = 0.24$.
The others $\lambda_3 = 0.08$ and $\lambda_4 = 0.02$ are omitted.
It may be difficult to visualize even the first two principal components in this example because they are vectors in $\mathbb{R}^4$, but we can list them:
\begin{table}[H]
\centering
\begin{tabular}{l | c c}
Characteristic & PC1 & PC2\\
\hline
Sepal Length & 0.3614 & 0.6566\\
Sepal Width & -0.0845 & 0.7302\\
Petal Length & 0.8567 & -0.1734\\
Petal Width & 0.3583 & -0.0755
\end{tabular}
\label{tab:3}
\end{table}
\vspace{-0.2in}
\noindent Alternatively, we can visualize each data point projected onto the first two principal components as in Fig.~\ref{fig:BP2D}.
This figure is actually a biplot, containing both the projected data points and
the proportion of each characteristic which accounts for the respective principal component. For instance, because the Petal length accounts for
a large proportion of the first principal component, it points in nearly the same direction as the $x$ axis within the biplot. Similarly, Sepal width and Sepal length seem
to account for a large amount of the second principal component.
\begin{figure}[t]
\begin{subfigure}[t]{0.45\textwidth}
\hspace{-0.3in}
\includegraphics[scale=0.33]{BPcities2D}
\vspace{-0.3in}
\caption{\footnotesize $r=2$ components}
\label{fig:BPcities2D}
\end{subfigure}
\hspace{0.1in}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[scale=0.33]{BPcities3D}
\vspace{-0.3in}
\caption{\footnotesize $r=3$ components}
\label{fig:BPcities3D}
\end{subfigure}
\caption{Biplots of \texttt{cities} data, both with $\sigma^2 \approx 94.43\%$}
\label{Biplot_cities}
\end{figure}
Notice in Fig.~\ref{fig:BP2D} that most of the data lie along the first component, which is mostly determined by the length of the Iris petals. Additionally, Petal length and Petal width appear to be strongly correlated because they point in nearly identical directions within the new basis. This can be seen in the enlarged portion of the biplot shown in Fig.~\ref{fig:BP2DZoom}. Contrastingly, Sepal width and Sepal length seem only mildly correlated and neither appears to be correlated to the petal characteristics since \edits{these vectors point in somewhat orthogonal directions}.
Of course, the dimension and complexity of this example can be increased by using a larger set of data, such as another built-in {\it MATLAB} sample called \texttt{cities.mat}. For completeness, we've included biplots (Fig.~\ref{Biplot_cities}) of the first few principal components of the \texttt{cities} set, which contains $m=9$ different attributes (i.e., climate, crime, education, etc.) for $n=329$ cities, again using the code contained in the appendix.
As a final observation, we note that due to the \edits{separation between eigenvalues of the covariance matrix (sometimes called the ``spectral gap'')}, the majority of data points lie along the first principal component in Fig.~\ref{Biplot_cities}(a) and within the plane generated by the first two principal components in Fig.~\ref{Biplot_cities}(b). \edits{This indicates that the majority of the variance within the data exists in these two directions, and hence one might safely eliminate the other dimensions, which contribute the least to the variation in the data set.}
\section{Application - Neuroscience}
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{Neuro_potentials}
\vspace{-0.2in}
\caption{ \footnotesize \label{fig:ap} Examples of action potentials}
\end{figure}
For another field in which PCA is quite useful, we turn to Neuroscience. In electrophysiological recordings of neural activity, one electrode typically records the action potentials (or spikes) of several neurons.
Before one can use the recorded spikes to study the coding of information in the brain,
they must first be associated with the neuron(s) from which the signal arose.
This is often accomplished by a procedure called \textit{spike sorting}, which can be accomplished because the recorded spikes of each neuron often have characteristic shapes \edits{\cite{spikes}}.
For example, Fig.~\ref{fig:ap} shows three different shapes of action potentials recorded with an electrode.
These potentials are plotted by connecting measurements at $64$ different points in time using linear interpolation.
\newedits{Owing to distinctions in their peaks and oscillations}, the spikes in Fig.~\ref{fig:ap} are due to three different neurons, and PCA can be used to identify the principal variations of these spike shapes.
\begin{figure}[t]
\hspace{-0.7in}
\includegraphics[scale=0.38]{Neuro_cluster}
\vspace{-0.4in}
\caption{ \footnotesize \label{fig:cluster} Spike data projected onto principal components}
\end{figure}
Let's consider a data matrix $X$ of $9195$ recorded spikes, sampled at $64$ different points in time as in the previous figure
so that $X \in \mathbb{R}^{64 \times 9195}$.
For this example, we've used real human data that is freely-available online \cite{spikes}\newedits{, but note that this data is not directly connected to the examples of action potentials in Fig.~\ref{fig:ap}}.
Using PCA we compute the eigenvalues of the covariance matrix to find that the first two
($\lambda_1 = 2.9 \times 10^4$ and $\lambda_2 = 4.7\times 10^3$) account for $83\%$ of the total information.
\newedits{Of course, additional components can be included to increase this percentage.}
A plot of the data projected along the first two principal components is given in Fig.~\ref{fig:cluster}.
We notice that two distinct clusters have formed within the data, and thus it appears that the spikes are formed from two
different neurons.
If we denote the first two principal vectors by $\vect{v}_1$ and $\vect{v}_2$, then Cluster \#1 (right) and Cluster \#2 (left) appear to center around
$100\vect{v}_1 + 25\vect{v}_2$ and $-275\vect{v}_1$, respectively. The spike shapes corresponding to these vectors are displayed in
Fig.~\ref{fig:cluster_comp} and represent averaged activity from the two neurons. Hence, we see
that PCA can be used both to determine the degree of correlation amongst certain characteristics and to identify clustering patterns
within data.
Additionally, it provides us with a lower-dimensional picture that is often very useful when attempting
to visualize high-dimensional data sets.
With the action potentials determined, they can be associated to specific neurons and analyzed within neuroscience studies.
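A minimal sketch of the sorting step, in Python for illustration: once spikes are projected onto the plane of $(\vect{v}_1, \vect{v}_2)$, each spike can be assigned to the nearest cluster center. The centers below are the approximate values quoted above; the projected spike coordinates are hypothetical.

```python
def assign_cluster(point, centers):
    """Index of the nearest center under squared Euclidean distance."""
    return min(range(len(centers)),
               key=lambda c: sum((p - q) ** 2 for p, q in zip(point, centers[c])))

centers = [(100.0, 25.0), (-275.0, 0.0)]                 # approximate cluster centers
spikes = [(90.0, 30.0), (-260.0, -10.0), (120.0, 10.0)]  # hypothetical projected spikes
labels = [assign_cluster(s, centers) for s in spikes]
print(labels)  # → [0, 1, 0]
```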
\begin{figure}[t]
\centering
\includegraphics[scale=0.36]{Cluster_comp}
\vspace{-0.2in}
\caption{ \footnotesize \label{fig:cluster_comp} Representative action potentials for clusters in projected data}
\end{figure}
\section{Application - Image Compression}
Another important application of PCA is Image Compression. Because images are stored as large matrices with real entries, one can reduce their storage requirements by keeping only the essential portions of the image \edits{\cite{Lay}}. Of course, information (in this case, fine-grained detail of the image) is naturally lost in this process, but it is done so in an optimal manner, so as to maintain the most essential characteristics.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{Durer_1}
\vspace{-0.2in}
\caption{ \footnotesize \label{fig:Durer} Albrecht D\"{u}rer's {\em Melancolia} displayed as a $648 \times 509$ pixelated image, taken from {\it MATLAB}'s built-in ``Durer'' file}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{eigen_hist}
\vspace{-0.2in}
\caption{ \label{fig:eigen} \footnotesize The first $35$ eigenvalues of \edits{the covariance matrix $S$ generated from $X$ in the image compression example and} arranged in decreasing order}
\end{figure}
\edits{In this section we detail a specific example for the use of PCA to compress an image.
Since the effects of keeping a lower dimensional projection of the image will be visually clear, this particular example
is a great candidate for an interactive, in-class activity.
More specifically, one can provide students with the code given in the appendix and ask them to determine the number of principal components $r$ that they must preserve in order to visually identify the image. Additionally, since the variance in the projection of the original image onto the reduced basis is computed in the code, students could be asked to identify the value of $r$ that is needed to capture a certain percentage of the total variance. For instance, Fig.~\ref{Durer} shows that $r$ must be at least $60$ in order to capture $99.78\%$ of the image detail.}
\edits{Throughout the example} we will work with a built-in test image -
Albrecht D\"{u}rer's {\em Melancolia} displayed in Fig.~\ref{fig:Durer}.
{\it MATLAB} considers greyscale images like this as objects consisting of two portions - a matrix of
pixels and a colormap.
Our image is stored in a $648 \times 509$ pixel matrix, and
thus contains $648 \times 509 = 329,832$ total pixels.
The colormap is a $648 \times 3$ matrix, which we will ignore for the current study.
Each element of the pixel matrix contains a real number representing the intensity of grey
scale for the corresponding pixel.
{\it MATLAB} displays all of the pixels simultaneously
with the correct intensity, and the greyscale image that we see is produced.
The $648 \times 509$ matrix containing the pixel information is our data matrix, $X$.
Because the most important information in the reduced matrix $\tilde{X}$, described in Section \ref{data}, is captured by the first few principal components, this suggests a
way to compress the image by \edits{using the lower-rank approximation} $\tilde{X}$.
Computing the distribution of associated eigenvalues as in previous examples, we see the formation of a large spectral gap, as shown in Fig.~\ref{fig:eigen}. Hence, the truncation of the sum of principal components in $\tilde{X}$ should still contain a large amount of the total information of the original image $X$.
In Fig.~\ref{Durer}, we've represented $\tilde{X}$ for four choices of $r$ (i.e., the number of principal components \edits{used}), and the associated
spectral ratio, $\sigma^2$, retained by those reduced descriptions
is also listed.
Notice that the detail of the image improves as $r$ is increased,
and that a fairly suitable representation can be obtained with around $90$ components rather than the full $648$-component description.
Hence, PCA has again served the useful purpose of reducing the dimension of the original \newedits{data set} while preserving its most essential features.
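The compression itself is a projection: $\tilde{X} = \tilde{P}\tilde{P}^T X$ accumulates one rank-one contribution per retained component. The sketch below (illustrative Python on a toy matrix, standing in for the appendix's {\it MATLAB} code) shows the mechanics; since both columns of the toy matrix already lie along the single retained direction, the projection reproduces them exactly.

```python
import math

def rank_r_projection(X, components):
    """Project the columns of X onto the span of the given orthonormal vectors,
    i.e. compute X_tilde = P_tilde P_tilde^T X."""
    m, n = len(X), len(X[0])
    Xt = [[0.0] * n for _ in range(m)]
    for v in components:
        for j in range(n):
            coeff = sum(v[i] * X[i][j] for i in range(m))   # v^T x_j
            for i in range(m):
                Xt[i][j] += coeff * v[i]
    return Xt

# Toy 3 x 2 "image" whose columns are multiples of (1, 2, 3).
X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
v = [c / math.sqrt(14.0) for c in (1.0, 2.0, 3.0)]          # retained unit direction
Xt = rank_r_projection(X, [v])
```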
\begin{figure}[t]
\begin{subfigure}[t]{0.45\textwidth}
\hspace{-0.3in}
\includegraphics[scale=0.3]{Durer_PC_3}
\vspace{-0.3in}
\caption{ \label{figure:PC3} \footnotesize $r = 3$, $\sigma^2 = 88.89\%$}
\end{subfigure}
\hspace{0.1in}
\begin{subfigure}[t]{0.45\textwidth}
\hspace{-0.2in}
\includegraphics[scale=0.3]{Durer_PC_30}
\vspace{-0.3in}
\caption{ \label{figure:PC30} \footnotesize $r = 30$, $\sigma^2 = 99.61\%$}
\end{subfigure}
\\
\begin{subfigure}[t]{0.45\textwidth}
\hspace{-0.3in}
\includegraphics[scale=0.3]{Durer_PC_60}
\vspace{-0.3in}
\caption{ \label{figure:PC60} \footnotesize $r = 60$, $\sigma^2 = 99.78\%$}
\end{subfigure}
\hspace{0.1in}
\begin{subfigure}[t]{0.45\textwidth}
\hspace{-0.2in}
\includegraphics[scale=0.3]{Durer_PC_90}
\vspace{-0.3in}
\caption{ \label{figure:PC90} \footnotesize $r = 90$, $\sigma^2 = 99.92\%$}
\end{subfigure}
\caption{The D\"{u}rer image with varying \edits{numbers of} principal components.}
\label{Durer}
\end{figure}
\section{CONCLUSION}
The aim in writing this article is to generate and present clear, concise resources for the
instruction of Principal Component Analysis (PCA) within \edits{an introductory} Linear Algebra course.
Students should be led to understand that PCA is a powerful, useful tool that is utilized \edits{in data analysis and} throughout
the sciences.
The visual components of the Image Compression examples in particular are designed to display the utility of PCA
and linear algebra, in general.
\edits{Within a more advanced setting (perhaps a second semester course in linear algebra),} new material regarding the Singular Value Decomposition can also be included to \edits{introduce additional details concerning the implementation of} PCA \edits{and provide an elegant structure for performing the method} \cite{Kalman, TB}. That being said, the SVD can often be a technical and
time-consuming topic that one first encounters in a computational\edits{, honors, or graduate} linear algebra course
rather than within an introductory setting.
In general, we believe that these instructional resources should be helpful in seamlessly integrating one of the most essential current
applications of linear algebra, PCA, into a standard undergraduate course.
\section*{ACKNOWLEDGEMENTS}
The authors would like to thank the College of Engineering and Computational Sciences at the Colorado School of Mines for partial support to develop and implement the educational tools included within this article.
Additionally, the first author thanks the National Science Foundation for support under award DMS-1211667.
\label{sec:intro}
ChemXSeer is a digital library and data repository for the Chemoinformatics and Computational Chemistry domains~\cite{Mitra:2007:CDL:1317353.1317356}. It currently offers search functionalities on papers and formulae, CHARMM calculation data and Gaussian computation data, and also features a comprehensive search facility on chemical databases. A table search functionality~\cite{Liu:2007:TAT:1255175.1255193}, similar in spirit to the one featured in CiteSeerX\footnote{http://citeseerx.ist.psu.edu/}, is currently under development. Gaussian document search has been a key component of ChemXSeer from its inception. The alpha version of Gaussian search featured a simple query box and an SQL back-end. Here we describe the next generation of Gaussian search\footnote{http://cxs05.ist.psu.edu:8080/ChemXSeerGaussianSearch} which includes a customized user interface for Computational Chemistry researchers, boolean query functionality on a pre-specified set of attributes, and a faceted browsing option over three key attribute types. The current version of Gaussian search is powered by Apache Solr\footnote{http://lucene.apache.org/solr/}, a state-of-the-art open-source enterprise search engine indexer.
The organization of this paper is as follows. In Section~\ref{sec:file_structure}, we give a brief overview of the Gaussian software and Gaussian files, emphasizing the need for a customized search interface rather than a simple one. Description of the search interface appears in Section~\ref{sec:system_description}, followed by a brief sketch of related work in Section~\ref{sec:related_work}. We conclude in Section~\ref{sec:conclusion}, outlining our contributions and providing directions for future improvement.
\section{Gaussian Files}
\label{sec:file_structure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{gaussianfile.png}
\end{center}
\caption{Screenshot of a Gaussian document.}
\label{fig:gaussianfile}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.6\linewidth]{oldcxs.png}
\end{center}
\caption{First-generation Gaussian query interface.}
\label{fig:oldcxs}
\end{figure}
Computational chemists perform Gaussian calculations to determine properties of a chemical system using a wide array of computational methods. The methods include molecular mechanics, ground state semi-empirical, self-consistent field, and density functional calculations. Computational methods such as these are key to the upsurge of interest in chemical calculations, partly because they allow fast, reliable, and reasonably easy analysis, modeling, and prediction of known and proposed systems (e.g., atoms, molecules, solids, proposed drugs, etc.) under a wide range of physical constraints, and partly because of the availability of well-tested, comprehensive software packages like Gaussian that implement many of these methods with good tradeoff between accuracy and processing time.
The Gaussian software is actually a suite of several different chemical computation models, including packages for molecular mechanics, Hartree-Fock methods, and semi-empirical calculations. While the exact details of the functionalities of this software are beyond the scope of this paper\footnote{For details, please see\\ http://www.gaussian.com/g\_tech/g\_ur/g09help.htm}, we would like the reader to note that each run of the Gaussian software is equivalent to conducting a chemical experiment with certain inputs and under certain physico-chemical conditions. The output of the software consists of a large amount of information returned to the user via the computer console and usually redirected to a suitably-named output file. We are interested in these output files, henceforth referred to as ``Gaussian files'' or ``Gaussian documents''.
The Gaussian files contain detailed information about the calculations being performed on the system of interest. Although the details of the calculations are essential for the analysis of the system being studied, the output file can be cumbersome to a new user. Each Gaussian file begins with the issued command that initiated a particular calculation, followed by copyright information, memory and hard disk specification, basis set, job type, method used, and several different matrices (e.g., Z-matrix, distance matrix, orientation matrix, etc.). It may also contain other information like rotational constants, trust radius, maximum number of steps, and steps in a particular run. Gaussian files are semi-structured (Figure~\ref{fig:gaussianfile}) in the sense that these parameters tend to appear in a particular order or with explicit markups.
Since Gaussian files are important to the design, testing and prediction of new chemical systems, ChemXSeer had integrated a search functionality on these files. The alpha version of Gaussian search interface only consisted of a simple query box (Figure~\ref{fig:oldcxs}), and the back-end of the search engine was an SQL database that stored data extracted from the Gaussian files. Although simple, the interface allowed users to type in fielded queries and view results in an easy-to-understand format. In the current version, we have retained many aspects of the alpha version, including parts of the search results page and visual representation of individual Gaussian files.
However, our domain experts argued that a more complex interface including faceted search was justified, partly because it eases the task of a researcher by limiting the number of search results to examine, and partly because such interfaces have already been successfully implemented~\cite{doi:10.1021/ci600510j}. A computational chemist usually knows what kinds of parameters he/she is looking for in a Gaussian files database, and therefore it makes sense to refine search results using this information. We identified three important parameters towards this end - Job Type, Method Used and Basis Set. There are other parameters and metadata that we can extract from the Gaussian files, but they are not as important from a domain expert's point of view. These are Charge, Degree of Freedom, Distance Matrix, Energy, Input Orientation, Mulliken Atomic Charge, Multiplicity, Optimized Parameters, Frequencies, Thermo-chemistry, Thermal Energy, Shielding Tensors, Reaction Path, PCM, and Variational Results. Metadata like ID, Title and File Path are used in organizing the search results.
\section{System Description}
\label{sec:system_description}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{blockdiagram.png}
\end{center}
\caption{Gaussian search system architecture.}
\label{fig:blockdiagram}
\end{figure}
The basic query to the Gaussian search system is an atom (i.e., element) or a collection of atoms. The system returns all Gaussian files containing those atoms. However, as experienced by researchers, such basic queries often return a large number of search results, many of which are not relevant. While we can think of improving the ranking of search results in line with traditional information retrieval research, domain experts have informed us that since Gaussian files are semi-structured, a faceted browsing option would be more appropriate. It remains open, however, whether ranking within each facet could be improved. Currently we rank the search results by their external IDs, because our domain experts were not overly concerned with the ranking.
The system architecture is given in Figure~\ref{fig:blockdiagram}. It comprises three principal components - the query interface, the search results page and the Gaussian file description page. The user supplies a query using the query interface, consisting of atoms (mandatory field), method used, job type and basis set. The last three fields are optional, and can be combined in boolean AND/OR fashion. The boolean query goes to the Gaussian document index, which in turn returns on the search results page all Gaussian files satisfying the boolean query. The search results page contains links to individual Gaussian file descriptions, which in turn link to the actual Gaussian documents. Figure~\ref{fig:blockdiagram} also indicates that the index was generated from Gaussian documents using Apache Solr.
The lower section of Figure~\ref{fig:blockdiagram} explains the faceted browsing part. Facets are created based on three attributes - job type, method used, and basis set. Each facet link consists of an attribute, its value, and the number of search results under the current set that satisfy this value. The search results page contains links to different values of the attributes. When the user clicks on such a link, a refined query is sent to the Gaussian document index and the resulting smaller set of search results is returned.
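The boolean-plus-facets request described above can be sketched as a small helper that assembles Solr query parameters. This is an illustrative sketch only: the field names (\texttt{atoms}, \texttt{job\_type}, \texttt{method}, \texttt{basis\_set}) are hypothetical stand-ins for the actual ChemXSeer index schema.

```python
# Sketch of assembling a faceted boolean Solr query from the form fields
# described above. All field names are hypothetical placeholders.

def build_solr_params(atoms, job_type=None, method=None, basis_set=None,
                      operator="AND"):
    """Return Solr query parameters for a faceted boolean search."""
    clauses = ["atoms:%s" % a for a in atoms]   # atoms are the mandatory field
    for field, value in (("job_type", job_type), ("method", method),
                         ("basis_set", basis_set)):
        if value:                               # optional attribute clauses
            clauses.append("%s:%s" % (field, value))
    return {
        "q": (" %s " % operator).join(clauses),
        "facet": "true",
        # one facet count list per attribute shown on the results page
        "facet.field": ["job_type", "method", "basis_set"],
        "rows": 10,                             # ten results per page
    }

params = build_solr_params(["C", "H", "O"], method="dft")
```

Refining the result set via a facet link then amounts to re-issuing the same query with an added filter parameter, e.g. \texttt{fq=job\_type:opt}.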
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{newcxs.png}
\end{center}
\caption{Gaussian query interface.}
\label{fig:newcxs}
\end{figure}
\begin{table}
\caption{Gaussian Attribute Categories}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Job Type} & \textbf{Method Used} & \textbf{Basis Set}\\
\hline
Any & Any & Any \\
Single Point & Semi-empirical & gen\\
Opt & Molecular Mechanics &\\
Freq & Hartree-Fock &\\
IRC & MP Methods &\\
IRCMax & DFT Methods &\\
Force & Multilevel Methods &\\
ONIOM & CI Methods &\\
ADMP & Coupled Cluster Methods &\\
BOMD & CASSCF &\\
Scan & BD &\\
PBC & OVGF &\\
SCRF & Huckel &\\
NMR & Extended Huckel &\\
& GVB &\\
& CBS Methods &\\
\hline
\end{tabular}
\end{center}
\label{tab:g_att_cat}
\end{table}
The implementation of our query interface (Figure~\ref{fig:newcxs}) was inspired partly by the EMSL Basis Set Exchange interface\footnote{https://bse.pnl.gov/bse/portal}, and partly by the requirements mentioned by our domain experts. Our interface features a periodic table of elements, where users can click to select and de-select each element (atom) individually. The selected elements appear together in the textbox at the bottom of the table. Users can specify whether they want search results that contain only the selected elements - no more and no less, or whether they want search results that contain the selected elements as well as other elements. After selecting elements, users can optionally select Job Type, Method Used, and Basis Set from the drop-down menus provided. They can also directly type in the desired values for these attributes in the textboxes. Finally, they can specify AND/OR from another drop-down menu. The default option is AND. Fourteen Job Type categories (values), sixteen Method Used categories, and two Basis Set categories are provided in the drop-down menus. These categories are given in Table~\ref{tab:g_att_cat}. Each category has several sub-categories that are dealt with by the search system. For example, if a user specifies ``Hartree-Fock'' as the Method Used category, the system will search for four sub-categories of Hartree-Fock - hf, rhf, rohf and uhf. These sub-categories were specified by our domain experts. A sample of Method Used sub-categories is given in Table~\ref{tab:sub_cat}. Table~\ref{tab:sub_cat} shows the sub-categories for three Method Used categories - Molecular Mechanics, CI Methods, and CBS Methods. For the Basis Set attribute there are many categories, but only two options are provided in the drop-down menu to keep it short and simple. Users can type in the category (e.g., 3-21G*) in the textbox provided.
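The category-to-sub-category expansion described above can be sketched as a lookup followed by an OR clause. The mapping below reproduces only the examples quoted in the text and in Table~\ref{tab:sub_cat}, not the full list used by the system, and the \texttt{method} field name is a placeholder.

```python
# Expanding a drop-down Method Used category into its sub-categories, as
# described above (partial mapping; the "method" field name is illustrative).

METHOD_SUBCATEGORIES = {
    "Hartree-Fock": ["hf", "rhf", "rohf", "uhf"],
    "Molecular Mechanics": ["amber", "drieding", "uff"],
    "CBS Methods": ["cbs-4m", "cbs-lq", "cbs-q", "cbs-qb3", "cbs-apno"],
}

def expand_method(category):
    """Turn a category into an OR clause over its sub-categories."""
    subs = METHOD_SUBCATEGORIES.get(category, [category.lower()])
    return "(" + " OR ".join("method:%s" % s for s in subs) + ")"

clause = expand_method("Hartree-Fock")
# clause == "(method:hf OR method:rhf OR method:rohf OR method:uhf)"
```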
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{searchresultspage.png}
\end{center}
\caption{A search results page.}
\label{fig:searchresultspage}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{gaussiandetails.png}
\end{center}
\caption{A Gaussian file description.}
\label{fig:gaussiandetails}
\end{figure}
Ten search results are shown per search results page (Figure~\ref{fig:searchresultspage}), with the total number of results shown at the top. Note that the left part of the search results page (Figure~\ref{fig:searchresultspage}) contains links for faceted browsing, and the right part contains the actual results. Each search result consists of a link to the corresponding Gaussian file description and a one-line summary of the file containing attribute information. The Gaussian file description (Figure~\ref{fig:gaussiandetails}) consists of a Jmol~\cite{Hanson:kk5066} rendering of the system being studied, followed by a summary of the Gaussian job and information about attributes extracted from the file. The summary contains a link to the Gaussian document (Figure~\ref{fig:gaussiandetails}). Currently we have indexed 2148 documents.
The faceted browsing section (left half of Figure~\ref{fig:searchresultspage}) follows the architectural specification of Figure~\ref{fig:blockdiagram}. Users can refine search results any time simply by clicking on a particular attribute category. An ``All Results'' link has been provided to help users quickly find the original set of results. Anecdotal evidence from our domain experts suggests that the faceted browsing feature has been able to significantly cut down on the number of search results to examine, thereby saving a considerable amount of time on the part of a Computational Chemistry researcher. Moreover, since each facet link gives the number of search results to examine for a particular attribute category, a user can readily obtain a visual appreciation of the distribution of search results across different attribute categories for a single query.
The core search and indexing functionality of Gaussian search is currently provided by Apache Solr, an open-source state-of-the-art enterprise search server designed to handle, among other things, faceted search, boolean queries, and multivalued attributes. In our case, atoms (chemical elements) in a Gaussian document comprise a multivalued attribute. Each Gaussian document was converted by our home-grown metadata extractor into an XML-style file suitable for ingestion to Solr. The selection of Solr as the back-end platform for this system was partly motivated by the need to integrate ChemXSeer architecture with SeerSuite\footnote{http://sourceforge.net/projects/citeseerx/}, a package of open-source software tools that powers the CiteSeerX digital library.
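The last step of the metadata extractor, wrapping extracted attributes in Solr's XML update format, can be sketched as follows. The attribute names are illustrative rather than the extractor's actual schema; Solr treats a repeated \texttt{<field>} element as a multivalued attribute, which is how the atom list is represented.

```python
# Sketch of emitting a Solr-ingestible XML document from extracted metadata.
# Field names are hypothetical; a repeated <field> element makes the atom
# list a multivalued attribute in the index.

from xml.sax.saxutils import escape

def to_solr_xml(doc_id, atoms, **fields):
    """Wrap extracted metadata in Solr's XML update (add/doc) format."""
    parts = ['<add><doc>', '<field name="id">%s</field>' % escape(doc_id)]
    for atom in atoms:                     # repeated element -> multivalued field
        parts.append('<field name="atom">%s</field>' % escape(atom))
    for name, value in sorted(fields.items()):
        parts.append('<field name="%s">%s</field>' % (name, escape(str(value))))
    parts.append('</doc></add>')
    return "".join(parts)

xml = to_solr_xml("g0001", ["C", "H"], job_type="opt", method="b3lyp")
```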
\begin{table}
\caption{A sample of Method Used sub-categories}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Molecular Mechanics} & \textbf{CI Methods} & \textbf{CBS Methods}\\
\hline
amber & cis & cbs-4m \\
drieding & cis(d) & cbs-lq\\
uff & cid & cbs-q\\
& cisd & cbs-qb3\\
& qcisd & cbs-apno\\
& qcisd(t) &\\
& sac-ci &\\
\hline
\end{tabular}
\end{center}
\label{tab:sub_cat}
\end{table}
\section{Related Work}
\label{sec:related_work}
In this section we give a brief sketch of the related work. The importance of using large databases to support chemistry calculations has been illustrated by Feller~\cite{DBLP:journals/jcc/Feller96}. Schuchardt et al.\ describe such a database, the Basis Set Exchange~\cite{doi:10.1021/ci600510j}. Basis Set Exchange helps users find particular basis sets that work on certain collections of atoms, while ChemXSeer lets users search Gaussian files with basis sets as boolean query components.
Among other purely chemistry-domain digital libraries, OREChem ChemXSeer by Li et al.~\cite{Li:2010:OCS:1816123.1816160} integrates semantic web technology with the basic ChemXSeer framework. The Chemical Education Digital Library~\cite{chemeddl} and the JCE (Journal of Chemical Education) Digital Library~\cite{jcedl} focus on organizing instructional and educational materials in Chemistry. Both these projects are supported by NSF under the National Science Digital Library (NSDL). In contrast with these studies, our focus here is to design a search functionality on Gaussian files that helps domain experts locate attribute information more easily.
\section{Conclusion}
\label{sec:conclusion}
In this paper our contributions are two-fold:
\begin{itemize}
\item design of a new search engine for Computational Chemistry research on documents produced by the widely used Gaussian software, and
\item design of a metadata extractor that sieves out several important attributes from the Gaussian documents, and exports them into Solr-ingestible XML format.
\end{itemize}
Future work consists of integration of documents from the ChemXSeer Digital Library with Gaussian files so that users can have an integrated view of calculations, results, and analysis. The metadata extractor could also be improved. There are a few cases where our metadata extractor could not locate certain attribute values, mainly due to the anomalous placement of those attributes in the Gaussian output files. The structure of these documents appeared inconsistent in certain places. Information extraction techniques may be useful for handling these cases. Another area of potential research is improving the ranking of search results. Although our domain experts were not concerned with ranking, it remains to be seen if combining attribute information can help pull up more relevant files earlier in the ranking. Finally, Section~\ref{sec:file_structure} indicates the presence of several other attributes in the Gaussian documents. It would be interesting to explore whether these attributes are useful and can be leveraged to produce additional relevant information.
\section{Acknowledgments}
\label{sec:ack}
This work was partially supported by the National Science Foundation award CHE-0535656.
\bibliographystyle{abbrv}
\small
\section{Introduction}
The presence of heavy elements in the atmosphere of many cool white dwarfs is attributed
to circumstellar debris that is continuously or intermittently accreted onto the white dwarf surface \citep[see][]{zuc2003}.
Questions remain as to the phase and composition of the accreted material and its effect on the white dwarf abundance pattern.
\citet{zuc2003} and \citet{koe2005} estimate that approximately 25\% of cool, H-rich white dwarfs show the presence of metal lines
in their spectra, while \citet{zuc2010} found that a similar fraction of He-rich white dwarfs show metal lines.
\citet{kil2006} explored the link between heavy element pollution in white dwarfs and the presence of
a warm, near-infrared (IR) debris disk feeding the photosphere as in the case of the proto-typical dusty,
polluted white dwarf G29-38 \citep{jur2003}.
However, \citet{far2009} reported
that only 21\% of polluted white dwarfs have mid-IR excess consistent with a circumstellar disk
and they also noted
an increased likelihood of a mid-IR excess for objects with a higher calcium abundance
($\log{\rm Ca/H(e)} \ga -8.0$). Actually,
\citet{far2010} observed several polluted H-rich (DAZ) and He-rich (DZ)
white dwarfs with {\it Spitzer} and found that the debris disks have varying
thickness and, therefore, some of these may be narrow
enough to escape detection. Moreover,
\citet{jur2008} showed that polluted white dwarfs
that do not exhibit an IR excess may still be experiencing
gas accretion that originated from the tidal destruction of a large number
of small asteroids. Indeed, gaseous disks around white dwarfs were detected via
near-IR calcium triplet emission \citep{gae2006,gae2007}.
Various accretion scenarios or many types of sources may be involved. For example, the immediate environment of white dwarf stars
is likely composed of remnant bodies that survived post-asymptotic giant branch evolution. These bodies may be
low-mass stellar \citep[see, e.g.,][]{kaw2008} or sub-stellar companions \citep[see, e.g.,][]{ste2013}, or disrupted planetary systems, i.e., objects that are otherwise quite common. The formation of
a debris disk could mix material and produce an abundance pattern averaged over several constituents. Consequently,
the observed abundance would be representative of the stellar neighborhood. However, single
body accretion could deliver a greater diversity of abundance patterns \citep[see, e.g.,][]{zuc2011}.
The accretion history and diffusion time-scales mingle in a complex manner \citep{dup1992}. \citet{koe2009} considered three
possible sequences of events: one of continuous accretion and build-up toward diffusive equilibrium, one
at diffusive equilibrium, and, after extinction of the accretion source, one of declining photospheric abundances.
Therefore, the observed abundance pattern, i.e., the photospheric abundance ratios, may differ considerably from the source pattern,
and a case-by-case study of DAZ white dwarfs may yet reveal considerable abundance variations.
At last count \citep[see, e.g.,][]{zuc2003,koe2005}, cool DAZ white dwarfs ($T_{\rm eff} \lesssim 8000$ K) are outnumbered at least 1:3
by the DZ white dwarfs \citep[see, e.g.,][]{duf2007}.
Interestingly, the number of identified DAZ white dwarfs in the local sample of \citet{gia2012}
is comparable to that of DZ white dwarfs (11 versus 13 stars),
although the DAZ count corresponds to a much smaller fraction of all H-rich white dwarfs
than that of their He-rich counterparts, i.e., one tenth versus one third, respectively.
Clearly, many polluted H-rich white dwarfs remain to be recognized as such in the local sample.
\citet{koe2009} cites difficulties in detecting weak metal lines in the opaque, neutral hydrogen environment of cool DA white dwarfs.
However, recent
spectroscopic surveys of local white dwarf candidates \citep[see, e.g.,][]{kaw2006,kaw2012a} have uncovered several new cool DAZ white dwarfs
\citep{kaw2011,kaw2012b}.
In this context, we present an analysis of the polluted, otherwise hydrogen-rich white dwarf NLTT~25792 from the revised
New Luyten Two-Tenth catalog \citep{sal2003}.
\citet{kaw2011b} and \citet{gia2011} reported the detection of a strong Ca~K line but without other identifiable elements.
This object is also known in the Edinburgh-Cape catalog as EC~10542$-$2236 \citep{kil1997} and in the Luyten-Palomar catalog as LP~849-31.
NLTT~25792 is located above the Galactic plane ($l=271.1687,\,b=+32.8146$) and
toward a relatively tenuous line of sight in the interstellar medium \citep[$E_{B-V}=0.0595$,][]{sch1998}.
The proper motion quoted in the rNLTT catalog
is $(\mu_\alpha\,\cos{\delta}, \mu_\delta)=(-72, 293)$ mas yr$^{-1}$.
Section 2 details our recent observations of this object using the X-shooter spectrograph at the European Southern Observatory,
and in Section 3 we present an analysis of this and other related polluted white dwarfs.
We conclude in Section 4.
\section{Observations}
\begin{deluxetable}{llc}
\tablecaption{Photometry\label{tbl_phot}}
\tablewidth{0pt}
\tablehead{
\colhead{Survey and band\tablenotemark{a}} & \colhead{$\lambda$ effective} & \colhead{$m$} }
\startdata
{\it GALEX} NUV & 2271 \AA & $17.78\pm0.03$ \\
SDSS $u$ & 3551 \AA & $16.540\pm0.007$ \\
SDSS $g$ & 4686 \AA & $16.079\pm0.004$ \\
SDSS $r$ & 6116 \AA & $15.974\pm0.004$ \\
SDSS $i$ & 7481 \AA & $15.984\pm0.005$ \\
SDSS $z$ & 8931 \AA & $16.062\pm0.007$ \\
2MASS $J$ & 1.235 $\mu$m & $15.521\pm0.050$ \\
2MASS $H$ & 1.662 $\mu$m & $15.401\pm0.111$ \\
2MASS $K$ & 2.159 $\mu$m & $15.943\pm0.255$ \\
{\it WISE} $W1$ & 3.353 $\mu$m & $15.268\pm0.046$ \\
{\it WISE} $W2$ & 4.603 $\mu$m & $15.051\pm0.103$
\enddata
\tablenotetext{a}{{\it GALEX} GR6/GR7 photometry obtained at galex.stsci.edu/GalexView/;
SDSS Photometric Catalog, Release 7 \citep{aba2009};
2MASS photometry \citep{skr2006};
{\it WISE} photometry \citep{cut2012}.
}
\end{deluxetable}
We obtained two consecutive sets of echelle spectra using the X-shooter spectrograph \citep{ver2011}
attached to the UT2 (Kueyen) at Paranal Observatory on
UT 2013 May 10.
The observations were conducted in clear sky conditions at an average airmass of 1.24 in the first set and 1.60 in the second.
The seeing conditions were
1.20 arcsec on average ($\sigma=0.11$ arcsec) during the first observation and 1.36 arcsec
($\sigma=0.13$ arcsec) during the second.
The spectra were obtained at the parallactic angle and in the ``stare'' mode,
with the slit width set to 0.5, 0.9
and 0.6 arcsec for the UVB, VIS and NIR arms, respectively.
This arrangement delivered
a resolution of $R=\lambda/\Delta\lambda = 9100$, 8800 and 6200 for the UVB, VIS and NIR arms, respectively, for
nominal wavelength ranges of 2940--5930 \AA\ with UVB, 5250--10490 \AA\ with VIS,
and 0.98--2.48 $\mu$m with NIR. For each set,
the exposure times were 2940 and 3000 s for the UVB and VIS arms, respectively.
For the NIR arm we obtained five exposures of 600 s each. An analysis of the NIR observations will be presented elsewhere.
We reduced the observations using the X-shooter reduction pipeline under the ESO Recipe Flexible Execution
Workbench (Reflex). Details of the X-shooter pipeline and Reflex are available in the ESO documents
VLT-MAN-ESO-14650-4840 (issue 12.0) and VLT-MAN-ESO-19540-5899 (issue 2.0). The extracted spectra were resampled with 0.2 \AA\ bins, corresponding
to half of a resolution element in the UVB spectra, and one third of a resolution element in the VIS spectra.
The signal-to-noise ratio (SNR) achieved in the co-added UVB spectrum at $\lambda=$3100 \AA\ was SNR$\sim 10$ per bin
thereby setting the
lowest usable wavelength. The measured SNR reached $\sim$30 per bin at 3500 \AA, $\sim$80 at 3900 and 4200 \AA, and $\sim$90 at 5000 \AA.
The measured SNR reached $\sim$54 per bin in the co-added VIS spectrum at 5900 \AA, $\sim$70 at 6600 \AA, and $\sim$90 at 8600 \AA.
Table~\ref{tbl_phot} lists available photometric measurements from the Galaxy Evolution Explorer ({\it GALEX}) sky survey,
the Sloan Digital Sky Survey (SDSS), the Two Micron All Sky Survey (2MASS), and the Wide-field Infrared Survey Explorer
({\it WISE}).
\subsection{Comparison Sample}
We also obtained a series of archival HIRES spectra ($R=25000$ to 40000) for a set of closely
related DAZ white dwarfs: WD~0208$+$396 (G74-7), WD~0354$+$463, WD~1257$+$278, and WD~1455$+$298 \citep{zuc2003,zuc2011}.
\citet{zuc2003} published the spectra for WD~0208$+$396 (with the Keck Observatory Archive, KOA, label HI.19990813.45167),
WD~0354$+$463 (HI.19980123.32497), and WD~1455$+$298 (HI.19980624.32907). We supplemented the published data set with
spectra obtained from the KOA for WD~1257$+$278 (HI.20100327.33502, HI.20100327.35353, and HI.20100328.32028)
and WD~1455$+$298 (HI.20060617.24712, HI.20060617.27218, HI.20060617.29698, and HI.20060617.32157).
We used the weighted average of the spectra in our abundance analysis.
A comparative abundance analysis should help us identify
common properties of the sample, or, alternatively, features that are peculiar to NLTT~25792.
The spectral energy distributions (SED) of the comparison stars (see Appendix~1) are well reproduced by
a single star model or in the case of WD~0354$+$463 by a binary star template.
The SED of WD~1455$+$298 shows a weak IR excess in the WISE bands ($\lambda \gtrsim 3\mu$m) suggesting the presence of warm dust; \citet{far2008}
noted a weak excess in {\it Spitzer} IRAC measurements at $\lambda \gtrsim 5\mu$m and inferred a dust temperature of 400~K.
The prototypical DAZ white dwarf G74-7 \citep[WD~0208$+$396,][]{lac1983,bil1997,zuc2003} lies in the Galactic plane ($l=139.2,\,b=-20.4$)
at a distance of $\sim17$ pc \citep[$\pi=0.0598\pm0.0035$,][]{van1995} and toward a relatively tenuous
line-of-sight ($E_{\rm B-V} = 0.0485$).
The DAZ white dwarf WD~0354$+$463 \citep{zuc2003} and its dM7 companion form an unresolved,
possibly close pair \citep[sep.$<0.8$ AU,][]{far2006}. \citet{zuc2003} noted emission in the H$\beta$ line core,
but no radial velocity variations have been reported to date. The presence
of heavy elements in the atmosphere of the white dwarf may be attributed to
a wind-accretion mechanism \citep{zuc2003,deb2006}. The binary lies in the Galactic plane ($l=153.2,\,b=-5.1$)
and in a relatively dense line-of-sight ($E_{\rm B-V} = 0.642$).
The SED of WD~0354$+$463 is well reproduced by the combination of a DAZ white dwarf
model and a M7 template constructed using optical/IR spectroscopy of VB~8 \citep[$=$GJ~644~C;][]{tin1998,ray2009,cus2005}.
The star VB~8 is located at a distance of $6.5\pm0.1$ pc \citep{tho2003}.
This exercise demonstrates that the optical/ultraviolet part of the
spectrum is dominated by the DAZ white dwarf. Therefore, the Balmer line analysis is probably correct although weak emission
line cores should be excluded from the analysis. However, the spectral decomposition based on the M7 template implies
a larger distance ($d=43.2\pm0.7$ pc) than estimated using the white dwarf absolute magnitude ($d=33.4\pm2.5$ pc).
The distance estimates are reconciled by adopting for the companion an absolute magnitude
$M_K=10.32$, or a sub-type between M8 and M9 \citep{kir1994} and 0.56 mag fainter than the template itself \citep[$M_K ({\rm VB~8}) = 9.78$ mag,][]{kir1994}.
Alternatively, the white dwarf itself could be 0.56 mag brighter implying a lower surface gravity than measured spectroscopically.
Finally, both WD~1257$+$278 \citep{zuc2003,zuc2011} and WD~1455$+$298 \citep{zuc2003} are relatively more distant at $\sim35$ pc \citep{van1995},
but at high Galactic latitudes of $b=88.1$ and $62.1$, respectively, and correspondingly low
extinction in the line-of-sight ($E_{B-V} = 0.0095$ and 0.0177, respectively).
No IR calcium triplet emission has been detected in the HIRES spectra of WD~1257$+$278 and WD~1455$+$298.
\begin{figure*}
\epsscale{1.0}
\plotone{f1.eps}
\caption{Sections of the X-shooter spectra from 3500 to 4420 \AA\ and showing many spectral features listed
in Table~\ref{tbl_line}. The data are compared to a representative model with $[{\rm X/H}]=-2.5$.
\label{fig1}}
\end{figure*}
\section{Analysis}
We based our analysis of the Balmer and heavy element line profiles on a grid of model atmospheres in local thermodynamic
equilibrium that include convective energy transport. We employed the mixing-length formalism with parameters ML2 and
$\alpha=0.6$. Details of Balmer line profiles are provided by \citet{kaw2006}. Heavy-element contributions to the electron density
are included with the metallicity varying from $[{\rm X/H}]\equiv \log{\rm X/H}-\log{\rm X/H}_\odot=-4.0$ to $-2.0$, where X includes the 18 most abundant
species from carbon to zinc. Based on the model atmosphere structure we computed detailed heavy element line profiles using Voigt functions and state-of-the-art oscillator strengths
and broadening parameters \citep[see details in][]{kaw2012b}; in most cases, collisions with neutral hydrogen atoms and, to a lesser extent, with electrons dominate the line profiles.
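The Voigt function used for the metal-line profiles can be evaluated from the Faddeeva function; the following is a generic sketch of this standard approach (not the authors' code), assuming \texttt{scipy} is available, with \texttt{sigma} the Gaussian width and \texttt{gamma} the Lorentzian half-width in the same wavelength units.

```python
# Generic evaluation of the Voigt function via the Faddeeva function w(z):
# V(x; sigma, gamma) = Re[w((x + i*gamma)/(sigma*sqrt(2)))] / (sigma*sqrt(2*pi)).

import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Area-normalized Voigt profile centered at x = 0."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

# Sanity checks: the profile integrates to ~1, and for gamma -> 0 it tends
# to a Gaussian with peak value 1 / (sigma * sqrt(2*pi)).
x = np.linspace(-50.0, 50.0, 20001)
area = voigt(x, 0.5, 0.1).sum() * (x[1] - x[0])
peak = voigt(np.array([0.0]), 1.0, 1e-8)[0]
```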
The solar abundance scale employed in the present work was built using the compilations of
\citet{asp2009} and \citet{lod2009}. \citet{lod2009} provide a critical compilation of meteoritic, i.e.,
the CI carbonaceous chondrites, and solar photospheric abundances. Employing somewhat different criteria,
\citet{asp2009} list solar photospheric abundances that differ on average by only 0.005 dex with a dispersion
of 0.04 dex from those of \citet{lod2009} for a group of abundant elements comprising Na, Mg, Al, Si, Ca, and Fe.
Note that for that same group of elements, the CI-chondrites and solar photospheric abundances are nearly
identical with differences no larger than 0.02 dex \citep{lod2009}.
Therefore, in this work, we use the straight average of the solar photospheric abundances of \citet{asp2009}
and the CI-chondrites and solar photospheric abundances of \citet{lod2009}.
We will refer to this joint scale as the ``solar abundances'':
$\log{\rm Na/H}_\odot = -5.72$,
$\log{\rm Mg/H}_\odot = -4.44$,
$\log{\rm Al/H}_\odot = -5.54$,
$\log{\rm Si/H}_\odot = -4.48$,
$\log{\rm Ca/H}_\odot = -5.67$,
$\log{\rm Fe/H}_\odot = -4.53$.
Finally, \citet{lod2009} proposes to scale proto-solar (or ``solar system'') abundances from solar abundances using the logarithmic
relation $X_0=X+0.053$. Abundance ratios are not affected by this scaling and the proto-solar abundances
will not be considered further.
We fitted the Balmer line profiles and extracted the effective temperature and surface gravity (Section 3.2) and constrained the abundance of
individual elements (Section 3.3) using $\chi^2$ minimization techniques.
\subsection{NLTT~25792: Line Identifications and Radial Velocity}
\begin{deluxetable}{lrc}
\tablecaption{Equivalent widths and line velocities\label{tbl_line}}
\tablewidth{0pt}
\tablehead{
\colhead{Ion,$\lambda$ (\AA) \tablenotemark{a}} & \colhead{E.W.(m\AA)} & \colhead{$v$ (km s$^{-1}$) \tablenotemark{b}}
}
\startdata
\ion{Fe}{1} 3440.606 & 110. & 33.9 \\
\ion{Fe}{1} 3565.379 & 37. & 24.1 \\
\ion{Fe}{1} 3570.097 & 99. & 25.6 \\
\ion{Fe}{1} 3581.193 & 97. & 25.2 \\
\ion{Fe}{1} 3719.935 & 105. & 24.7 \\
\ion{Fe}{1} 3722.563 & 23. & 22.5 \\
\ion{Fe}{1} 3733.317 & 18. & 28.8 \\
\ion{Fe}{1} 3734.864 & 110. & 24.7 \\
\ion{Fe}{1} 3737.131 & 107. & 22.8 \\
\ion{Fe}{1} 3745.561 & 76. & 30.0 \\
\ion{Fe}{1} 3748.262 & 30. & 25.2 \\
\ion{Fe}{1} 3749.485 & 74. & 25.3 \\
\ion{Fe}{1} 3758.233 & 62. & 23.3 \\
\ion{Fe}{1} 3763.789 & 37. & 25.3 \\
\ion{Fe}{1} 3767.191 & 33. & 27.6 \\
\ion{Fe}{1} 3795.002 & 22. & 26.4 \\
\ion{Fe}{1} 3804.791 & 19. & 17.4 \\
\ion{Fe}{1} 3815.840 & 24. & 24.9 \\
\ion{Fe}{1} 3820.425 & 76. & 23.6 \\
\ion{Fe}{1} 3824.444 & 32. & 22.8 \\
\ion{Fe}{1} 3825.881 & 47. & 19.8 \\
\ion{Fe}{1} 3827.823 & 45. & 23.7 \\
\ion{Fe}{1} 3829.452 \tablenotemark{c} & 37. & 20.7 \\
\ion{Mg}{1} 3832.300 & 79. & 22.7 \\
\ion{Fe}{1} 3834.222 & 44. & 28.2 \\
\ion{Mg}{1} 3838.292 & 116. & 24.2 \\
\ion{Fe}{1} 3856.371 & 25. & 26.3 \\
\ion{Fe}{1} 3859.911 & 70. & 23.5 \\
\ion{Ca}{2} 3933.660 & 1215. & 19.6 \\
\ion{Al}{1} 3944.006 & 23. & 21.3 \\
\ion{Al}{1} 3961.520 & 48. & 27.0 \\
\ion{Ca}{2} 3968.470 & ... & 22.1 \\
\ion{Fe}{1} 4045.812 & 64. & 21.5 \\
\ion{Fe}{1} 4063.594 & 26. & 23.4 \\
\ion{H}{1} 4101.734 & ... & 17.0 \\
\ion{Ca}{1} 4226.730 & 96. & 26.0 \\
\ion{Fe}{1} 4271.760 & 37. & 22.3 \\
\ion{H}{1} 4340.462 & ... & 19.0 \\
\ion{Fe}{1} 4383.545 & 46. & 23.9 \\
\ion{Fe}{1} 4404.750 & 32. & 23.3 \\
\ion{Fe}{1} 4415.122 & 9. & 23.5 \\
\ion{H}{1} 4861.323 & ... & 22.6 \\
\ion{Fe}{1} 5167.488 & 22. & 33.5 \\
\ion{Mg}{1} 5172.684 & 40. & 27.0 \\
\ion{Mg}{1} 5183.604 & 66. & 25.4 \\
\ion{H}{1} 6562.797 & ... & 26.9 \\
\ion{Ca}{2} 8498.020 & 58. & 31.7 \\
\ion{Ca}{2} 8542.090 & 273. & 29.5 \\
\ion{Ca}{2} 8662.140 & 158. & 29.2
\enddata
\tablenotetext{a}{Laboratory wavelength from the server \url{http://www.nist.gov/pml/data/asd.cfm} at the National Institute of Standards and Technology (NIST).}
\tablenotetext{b}{Heliocentric velocities.}
\tablenotetext{c}{Blended with \ion{Mg}{1} $\lambda$3829.3549\AA.}
\end{deluxetable}
\begin{figure*}
\epsscale{1.15}
\plottwo{f2a.eps}{f2b.eps}
\caption{Comparing the Ca~H\&K doublet (lower panel) and ultraviolet \ion{Mg}{1}/\ion{Fe}{1} lines (top panel) in the X-shooter spectra of NLTT~25792 (black lines)
with Keck/HIRES spectra of other DAZ white dwarfs (grey lines): (left panels) G74-7 (WD~0208$+$396) and (right panels) WD~0354$+$463 (DAZ+dM). The HIRES spectra have been
degraded to the X-shooter resolution. \label{fig2}}
\end{figure*}
Figure~\ref{fig1} shows segments of the UVB X-shooter spectrum. Notable features include the Ca~H\&K doublet and the upper Balmer lines
(H$\gamma$ to H8) and numerous \ion{Fe}{1} lines.
The average velocity of 49 spectral lines (Table~\ref{tbl_line}) found between $\sim3440$ and $\sim8662$\AA\ is
24.7 km~s$^{-1}$\ with a dispersion of only 3.6 km~s$^{-1}$.
Estimating the gravitational redshift of the white dwarf at $31.7\pm1.5$ km~s$^{-1}$\ (Section 3.2), the radial velocity of the white dwarf
is $v_r = -7.0\pm3.9$ km~s$^{-1}$. The velocity difference between the two consecutive ($\Delta t\approx 1$ hr) exposures is $v_1-v_2 = -2.2$ km~s$^{-1}$\
with a dispersion of 4.1 km~s$^{-1}$.
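The quoted numbers can be checked with a short calculation (a sketch, not the reduction pipeline): the gravitational redshift follows from $v_g = GM/(Rc)$ with the mass and radius of Section 3.2, and the $\pm3.9$ km~s$^{-1}$ uncertainty is consistent with adding the line-velocity dispersion and the redshift error in quadrature.

```python
# Check of the radial velocity: apparent line velocity = true radial
# velocity + gravitational redshift v_g = GM/(Rc).

import math

G = 6.674e-11                                  # m^3 kg^-1 s^-2
M_SUN, R_SUN, C = 1.989e30, 6.957e8, 2.998e8   # SI units

def grav_redshift(m_wd, r_wd):
    """Gravitational redshift in km/s for mass and radius in solar units."""
    return G * m_wd * M_SUN / (r_wd * R_SUN * C) / 1e3

v_g = grav_redshift(0.618, 0.0124)   # ~31.7 km/s (mass, radius of Section 3.2)
v_obs, disp = 24.7, 3.6              # mean and dispersion of the 49 lines
v_r = v_obs - v_g                    # ~ -7.0 km/s
err = math.hypot(disp, 1.5)          # dispersion + redshift error, in quadrature
```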
Figure~\ref{fig2} compares the main spectral features in NLTT~25792 and the comparison stars. These objects share many important spectral features,
most notably dominant Ca~H\&K doublets and rich iron line spectra.
A shift of the Ca~K line in NLTT~25792 relative to the other two stars is apparent when lining up the spectra on other narrow metal lines.
The IR calcium triplet in NLTT~25792 is in absorption with no evidence of an emission component.
\subsection{Atmospheric Parameters and Spectral Energy Distribution (SED) of NLTT~25792}
We fitted the Balmer line profiles, H$\beta$ to H$_{10}$ excluding H$\epsilon$, in the X-shooter and FORS1 \citep{kaw2011b} spectra independently.
Our new measurements, ($T_{\rm eff}$$,\log{g})=(7900\pm20\,{\rm K},8.09\pm0.04$) with X-shooter and ($7910\pm20\,{\rm K},7.96\pm0.05$) with FORS1, are
in excellent agreement with the measurements of \citet{gia2011}: ($T_{\rm eff}$$,\log{g})=(7910\pm118\,{\rm K},8.05\pm0.08$).
We adopted the weighted averages of the measurements: ($T_{\rm eff}$$,\log{g})= (7903\pm16\,{\rm K},8.04\pm0.03$).
Based on these parameters, the mass of the white dwarf is $0.618\pm0.018\,M_\odot$ and the radius is $0.0124\pm0.0002\,R_\odot$ with
a cooling age of 1.2 Gyr. The absolute magnitude in the SDSS $r$ band, $M_r=13.18\pm0.04$ mag, locates the star at
a distance of $d=36.2\pm0.7$ pc.
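The quoted distance follows from the standard distance modulus $m - M = 5\log_{10}(d/10\,{\rm pc})$; a minimal check using the SDSS $r$ magnitude of Table~\ref{tbl_phot}:

```python
# Distance from the distance modulus, using the SDSS r magnitude from the
# photometry table and the absolute magnitude from the adopted parameters.

import math

m_r = 15.974                            # apparent SDSS r magnitude
M_r = 13.18                             # absolute r magnitude (+/- 0.04 mag)
d = 10.0 ** ((m_r - M_r + 5.0) / 5.0)   # distance in parsecs, ~36.2 pc
d_err = d * math.log(10.0) / 5.0 * 0.04 # ~0.7 pc from the magnitude error
```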
\begin{figure}
\epsscale{1.15}
\plotone{f3.eps}
\caption{Spectral energy distribution, $f_\lambda$ (erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$) vs $\lambda$, of the DAZ NLTT~25792 from the
near UV to the near-IR (Table~\ref{tbl_phot}). The data are compared to the best-fit model to the Balmer lines including the
effect of interstellar extinction (full line) or excluding it (dashed line). The K-band shows a possible flux deficit.
The {\it GALEX} NUV and SDSS $u$ magnitudes are best fit with the model including interstellar extinction.
\label{fig3}}
\end{figure}
The SED of NLTT~25792 (Fig.~\ref{fig3}) is characteristic of a 7900~K white dwarf without a low-mass stellar companion. Also,
the SED does not show the IR excess from a warm debris disk that is often encountered in polluted white dwarfs \citep{kil2006,far2009}.
In fact, the measured IR flux shows an unexplained dip in the K-band.
However, we noted the possible effect of interstellar reddening on the ultraviolet part of the SED with $E_{B-V} = 0.016$ using the
models of \citet{car1989} and $R_V=3.1$.
\subsection{Heavy Element Abundance Pattern in NLTT~25792}
We measured the abundance of magnesium, aluminum, calcium and iron using spectral lines listed in Table~\ref{tbl_line}. We fitted
the line profiles rather than the equivalent widths; the latter are provided only as an indication of the relative line strengths.
Following a procedure adopted in the past \citep{kaw2012b}, we employed the broadening parameter $\Gamma$ from
\citet{bar2000} added to Stark and natural broadening parameters. The resulting Voigt profiles are convolved with Gaussian profiles set to match the
spectral resolution.
The measured abundances are listed in Table~\ref{tbl_abun}. We noted a discrepancy between the calcium abundance based
on the Ca~K line (${\rm [Ca/H]} = -2.14\pm0.06$) and that
based on \ion{Ca}{1} and the \ion{Ca}{2} triplet (${\rm [Ca/H]} = -2.40\pm0.06$). We excluded the Ca~H\&K lines from the abundance measurement
and we investigate the question further in Section 3.5.
The upper limit to the equivalent width of \ion{Si}{1}$\lambda$3905.523 line is $\sim10$\,m\AA\ and the upper limits to
the \ion{Na}{1}$\lambda\lambda$5889.951,5895.924 lines are $\sim15$\,m\AA. These estimates correspond to
$3\sigma$ upper limits, i.e.,
$3\times \Delta\lambda/{\rm SNR}$, where $\Delta\lambda$ is the full width of a resolution element and the
SNR is measured within bins the size of a resolution element. For example, $\Delta\lambda = 0.50$ \AA\ near \ion{Fe}{1}$\lambda$4415 and the measured SNR is
90 per 0.2 \AA\ bin, or 142 per resolution element, corresponding to a minimum equivalent width of $\sim10$ m\AA.
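The quoted limit can be reproduced with a short numerical sketch; the helper function below is illustrative only, and the SNR rescaling between bin sizes assumes photon-noise ($\sqrt{N}$) scaling.

```python
import math

# Illustrative reconstruction of the 3-sigma equivalent-width upper limit,
# EW_min = 3 * dlambda / SNR, with the SNR rescaled from the tabulated
# bin size to a full resolution element (sqrt(N) photon-noise scaling).

def ew_upper_limit(dlambda, snr_per_bin, bin_width):
    """Return the 3-sigma equivalent-width limit, in the units of dlambda."""
    snr_element = snr_per_bin * math.sqrt(dlambda / bin_width)
    return 3.0 * dlambda / snr_element

# Values quoted near Fe I 4415: 0.50 A resolution element,
# SNR = 90 per 0.2 A bin, i.e. ~142 per resolution element.
limit_mA = ew_upper_limit(0.50, 90.0, 0.2) * 1000.0
print(round(limit_mA))  # ~11 mA, consistent with the quoted ~10 mA
```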
The corresponding silicon and sodium abundance upper limits are $\log{\rm Si/H} \lesssim -7.5$
([Si$/$H]$\lesssim -3.0$) and
$\log{\rm Na/H} \lesssim -8.8$
([Na$/$H]$\lesssim -3.1$).
Silicon and sodium are markedly depleted relative to all other elements (Mg, Al, Ca, and Fe) on the solar abundance scale.
Also, we noted a possible ISM component to the \ion{Na}{1}$\lambda$5889.951 line at $-13$\,km~s$^{-1}$\ and with an equivalent width of 35\,m\AA.
We did not attempt to constrain the CNO abundance because of a lack of practical abundance diagnostics at a temperature of $\sim7900$ K for NLTT~25792.
For example, the \ion{O}{1}$\lambda$7773 triplet remains extremely weak under these conditions.
The systematic effects of effective temperature and surface gravity variations have been investigated.
The abundance shifts corresponding to surface gravity shifts of $\pm0.1$ dex
for a reference model at $T_{\rm eff}$$=7900$~K and $\log{g}=8.0$ are negligible when measuring calcium or aluminum abundances, but amount to
$\mp 0.01-0.02$ in the logarithm of the magnesium abundance and $\pm0.02-0.04$ for iron.
The effect of temperature shifts of $\pm100$~K
for the same reference model are $\pm0.06$ for calcium and aluminum, and $\pm0.04$ for magnesium and iron.
In summary, the temperature uncertainty dominates the errors in abundances relative to hydrogen, but because the relevant elements
follow the same trends it would not affect abundance ratios.
\subsection{Comparative Analysis}
\begin{deluxetable*}{ccccccc}
\tablecaption{Properties and abundances\label{tbl_abun}}
\tablewidth{0pt}
\tablehead{
\colhead{} & \colhead{NLTT~25792} \tablenotemark{a} & \colhead{G74-7} & \colhead{WD~1455+298} & \colhead{WD~0354+463} & \multicolumn{2}{c}{WD~1257+278 \tablenotemark{a}}}
\startdata
$T_{\rm eff}$ (K) & $7903\pm16$ & $7306\pm22$ & $7383\pm19$ & $8240\pm120$ & $8609\pm20$ & (8600) \\
$\log{g}({\rm cm\,s^{-2}})$ & $8.04\pm0.03$ & $8.06\pm0.02$ & $7.97\pm0.03$ & $7.96\pm0.10$ & $8.24\pm0.02$ & (8.0) \\
& & & & & \\
$\log{\rm Mg/H}$ & $-$7.24$\pm$0.05 & $-$7.79$\pm$0.06 & $-$8.03$\pm$0.06 & $-$6.70$\pm$0.05 & $-$7.49$\pm$0.08 & $-$7.51$\pm$0.09 \\
${\rm [Mg/H]}$ \tablenotemark{b} & $-$2.80$\pm$0.05 & $-$3.35$\pm$0.06 & $-$3.59$\pm$0.06 & $-$2.26$\pm$0.05 & $-$3.05$\pm$0.08 & $-$3.07$\pm$0.09 \\
& & & & & \\
$\log{\rm Al/H}$ & $-$8.16$\pm$0.11 & $-$8.90$\pm$0.20 & ... & $-$7.98$\pm$0.13 & $-$8.50$\pm$0.25 & $-$8.50$\pm$0.25 \\
${\rm [Al/H]}$ \tablenotemark{b} & $-$2.62$\pm$0.11 & $-$3.36$\pm$0.20 & ... & $-$2.44$\pm$0.13 & $-$2.96$\pm$0.25 & $-$2.96$\pm$0.25 \\
& & & & & \\
$\log{\rm Ca/H}$ & $-$8.07$\pm$0.06 & $-$9.05$\pm$0.04 & $-$9.51$\pm$0.03 & $-$8.20$\pm$0.03 & $-$8.38$\pm$0.06 & $-$8.39$\pm$0.06 \\
${\rm [Ca/H]}$ \tablenotemark{b} & $-$2.40$\pm$0.06 & $-$3.38$\pm$0.04 & $-$3.84$\pm$0.03 & $-$2.53$\pm$0.03 & $-$2.71$\pm$0.06 & $-$2.72$\pm$0.06 \\
& & & & & \\
$\log{\rm Fe/H}$ & $-$7.16$\pm$0.04 & $-$8.03$\pm$0.09 & $-$8.40$\pm$0.08 & $-$7.13$\pm$0.11 & $-$7.47$\pm$0.09 & $-$7.45$\pm$0.10 \\
${\rm [Fe/H]}$ \tablenotemark{b} & $-$2.63$\pm$0.04 & $-$3.50$\pm$0.09 & $-$3.87$\pm$0.08 & $-$2.60$\pm$0.11 & $-$2.94$\pm$0.09 & $-$2.92$\pm$0.10
\enddata
\tablenotetext{a}{The calcium abundance measurement excludes Ca H\&K.}
\tablenotetext{b}{$[{\rm X/H}] = \log{\rm X/H}-\log{\rm X/H}_\odot$.}
\end{deluxetable*}
The abundance analysis of the four related stars was performed with the adopted atmospheric parameters (Table~\ref{tbl_abun}).
We collected published effective temperature and surface gravity
measurements based on Balmer line profile analyses or joint line profile and parallax analyses.
For G74-7 we averaged the temperature and gravity measurements of \citet{bil1997}, \citet{gia2004}, \citet{gia2005}, \cite{hol2008},
\citet{gia2011}, and \citet{gia2012} that are based on a Balmer line profile analysis (method 1), and compared the results to
the average of the measurements of \citet{ber1997}, \citet{leg1998}, and \citet{ber2001} that
are based on the parallax of \citet{van1995}, optical/IR SEDs, and high-dispersion H$\alpha$ spectroscopy (method 2).
Some of these measurements may well be redundant, but, in general,
they should reflect differing data sources or model atmosphere generations.
In this case, the two methods delivered consistent results: ($T_{\rm eff}$, $\log{g})=(7305\pm22\,{\rm K},\,8.07\pm0.03)$ with method 1
and ($T_{\rm eff}$, $\log{g})=(7316\pm103\,{\rm K},\,8.02\pm0.05)$ with method 2. Therefore, we adopted the weighted average of all
temperature and surface gravity measurements (Table~\ref{tbl_abun}).
The corresponding abundance measurements differ slightly from published values: we obtain
a lower calcium abundance ($-0.2$ dex)
but a higher magnesium abundance (0.3 and 0.1 dex) than in \citet{bil1997} and \citet{zuc2003}, although our iron and aluminum abundance
measurements are in good agreement (within $\approx 0.1$ dex) with those of \citet{zuc2003}.
The adopted atmospheric parameters in either study are similar to those adopted in this work. Variations may be
attributed to different fitting techniques or model generations.
The abundance pattern is very nearly the solar pattern scaled by $[{\rm X/H}]\approx -3.4$, where X represents Mg, Al, Ca, and Fe.
Next, for WD~0354+463 we used the Balmer line analysis of \citet{gia2011}. The effective temperature adopted by \citet{zuc2003} was
close to 500~K cooler and should affect abundance measurements. Predictably, because of the higher temperature adopted
in this work our abundance measurements are on average 0.19 dex higher with a dispersion of only 0.08 dex. However, the patterns are
similar, except for a slight magnesium enrichment in our data.
In addition, and as in the case of G74-7, the calcium abundance measurements based on \ion{Ca}{1}$\lambda$4226 and Ca~K are formally
in agreement.
For WD~1257$+$278 we averaged the measurements of \citet{gia2011}, \citet{lim2010}, \citet{hol2008}, and \citet{lie2005}
to which we compounded our own analysis of two available SDSS spectra. We excluded a notably defective H$\alpha$ line from
the analysis of one of the SDSS spectra.
We obtained similar parameters (Table~\ref{tbl_abun}) to those adopted by \citet{zuc2011}.
The surface gravity (hence mass) obtained in the joint astrometric/photometric/spectroscopic analysis of \citet{ber2001}
is lower than estimated in the cited spectroscopic analyses. Comparing the last two columns of Table~\ref{tbl_abun} we conclude
that the effect on the abundance analysis
of the surface gravity uncertainty is negligible.
However, \citet{zuc2011} list the following abundances for WD~1257$+$278:
$[{\rm Mg/H}] = -2.80$, $[{\rm Al/H}] = -2.63$, $[{\rm Ca/H}] = -2.37$, and $[{\rm Fe/H}] = -2.88$.
Their adopted stellar parameters are nearly identical to ours ($T_{\rm eff}$$=8600\pm100$ K, $\log{g}=8.10\pm0.15$).
Apart from similar iron abundances, the abundances of Mg, Al, and Ca are approximately
a factor of two lower in our work although we employed the same data set.
Such discrepancies may, in part, be caused by differences in the model atmospheres or by
different line measurement techniques. The equivalent width measurements of weak lines are notably affected by
the choice of the integration window, which may inadvertently include
neighboring lines or uncalibrated continuum variations. Abundance measurements based on a few lines may suffer from
such systematic effects that, however, would tend to average out when including many spectral lines in the abundance measurement.
Spectral line fitting may still suffer from poor continuum placement but the line integration is necessarily
confined to the width of the synthetic line profile. We noted that the present iron abundance measurement and that
of \citet{zuc2011} are based on numerous lines, possibly averaging out systematic effects, and are formally in agreement.
Overall, and in agreement with \citet{zuc2011}, we found that the atmosphere of WD~1257$+$278 appears relatively rich in calcium, particularly
relative to magnesium.
The calcium abundance measured from the Ca~K line, $[{\rm Ca/H}] = -2.53\pm0.04$, is
0.16 dex higher than measured using the \ion{Ca}{1}$\lambda$4226 line and the Ca~K line
core is poorly fitted, although the overall line profile is well matched. This slight abundance discrepancy
may suggest an effect similar to that observed in NLTT~25792 although the Ca~K line in WD~1257$+$278 does
not show a notable blue shift or asymmetry.
Finally, we averaged the effective temperature and surface gravity measurements of \citet{gia2011}, \citet{hol2008}, \citet{lie2005}, and our own measurement
of WD~1455$+$298 based
on SDSS spectroscopy. These measurements, based on Balmer line profiles, are only marginally consistent with the parallax, which implies
a larger stellar radius and hence a lower gravity:
WD~1455$+$298 is a suspected double degenerate \citep{ber2001}.
However, H$\beta$ and other lines only show single components. Moreover, the SED (Appendix 1) does not show evidence of a cool
companion. This discrepancy remains unresolved. The atmosphere of WD~1455$+$298 is the cleanest of the sample with an
average metallicity index of only $[{\rm X/H}]=-3.7$ where X represents Mg, Ca, and Fe.
Our new abundance measurements differ on average from those of \citet{zuc2003} by only $-0.1$ dex, but with a dispersion of 0.2 dex.
The increased signal to noise ratio in recent KOA data resulted in more accurate abundance measurements and a clear detection of the \ion{Ca}{1}$\lambda4226$ line.
The present analysis indicates a modest enrichment in magnesium relative to calcium and iron.
The present calcium and iron abundance measurements in WD~1455$+$298 supersede those presented earlier in \citet{kaw2011b}.
Figure~\ref{fig4} shows the abundance patterns in the five objects analyzed. The patterns of G74-7, WD~0354$+$463, and WD~1455$+$298 do not suggest
calcium enhancement, but those of NLTT~25792 and WD~1257$+$278 show a clear enhancement relative to all other elements.
The calcium to magnesium ratio is the most revealing with an enhancement relative to solar
of $+0.40$ in NLTT~25792 and $+0.34$ in WD~1257$+$278.
Interestingly, the patterns for WD~0354+463 and WD~1455+298 show a reversed trend with calcium at its lowest
abundance relative to magnesium in the sample, $\approx -0.27$ and $-0.25$ below solar. Overall, the abundance pattern in G74-7 is flat, with
a slightly lower iron abundance than the average pattern. Diffusion effects, i.e., the effects of varying diffusion
time scales on the observed abundances, are likely to alter the observed abundance pattern relative to the supplied, i.e., accreted
pattern (see a discussion in Section 4).
\begin{figure}
\epsscale{1.15}
\plotone{f4.eps}
\caption{Comparative analysis of magnesium, aluminum, calcium, and iron abundances in the five selected stars.
The Ca abundance refers to abundance indicators other than Ca~H\&K, i.e., \ion{Ca}{1}$\lambda$4226 or the \ion{Ca}{2} IR triplet.
The pattern observed in NLTT~25792 is similar to that of WD~1257$+$278 and notably dissimilar to the other patterns (see Section 3.4).
\label{fig4}}
\end{figure}
\subsection{A \ion{Ca}{2} Nebular Component?}
Modelling of the Ca~K line remains unsatisfactory: the line profile appears deeper and blue-shifted
relative to the best fit model to all calcium lines (Fig.~\ref{fig5}). Incidentally,
the line did not vary in strength between the FORS1 and X-shooter observations.
The line width is dominated by collisions with hydrogen atoms. At the temperature and
density conditions prevalent in the atmosphere of NLTT~25792, we estimate that 80\%
of the total width $\Gamma_{\rm tot}$ is contributed by hydrogen atoms and only 20\% by electrons.
Increasing $\Gamma_{\rm tot}$ by a factor of two does increase the equivalent width
by 33\% although it does not displace the line further toward the blue. On the other hand,
strong lines, such as the Balmer or the Ca~H\&K lines, are visibly affected by the atmospheric
structure over a wide range of depths, but we noted that the
H$\alpha$ and H$\beta$ line wings and deep cores measured with X-shooter are
well modeled. An accumulation of calcium above the convection zone could result in
a stronger line core than predicted by our homogeneous models.
Alternatively, the excess absorption in the Ca~K line and the velocity offset could be
interpreted as evidence of a nebular component to the observed profile
with an equivalent width of $\approx 480\,$m\AA\ and at a relative velocity of $-20$ km~s$^{-1}$. Correcting for the gravitational redshift ($\sim30$ km~s$^{-1}$), the
average velocity of the gas relative to the surface is $+10$ km~s$^{-1}$.
Adopting a velocity dispersion appropriate for local interstellar gas at $T\sim7000$\,K, i.e., $\sigma=1.7$\,km~s$^{-1}$,
an exceedingly large column density $\log{N}$(\ion{Ca}{2}\,cm$^{-2})\gtrsim 15$ would be required to fill in the observed absorption at
a relative velocity of $-20$ km~s$^{-1}$. However, allowing a larger velocity dispersion of $\sigma=30$\,km~s$^{-1}$, i.e., typical of the range of projected orbital velocity for gas
transiting in front of the stellar disk \citep[see][]{deb2012}, a lower density of $\log{N}$(\ion{Ca}{2}\,cm$^{-2})\approx 12.8$ is found.
This simple geometrical effect helps locate the gas at a radius $r = (R_{wd}/\sigma)^{2/3} \,(G\,M_{wd})^{1/3}\approx 20\,R_{wd}$ well
within a tidal disruption radius of 100\,$R_{wd}$.
Note that in this analysis it would be more appropriate to adopt a rotational broadening function rather than a Gaussian velocity distribution.
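The radius estimate above can be checked numerically with the stellar parameters of Section 3.1; the sketch below uses standard SI reference values for the constants.

```python
import math

# Numerical check of r = (R_wd/sigma)^(2/3) * (G*M_wd)^(1/3) using the
# parameters derived for NLTT 25792; constants and solar values are
# standard reference numbers in SI units.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m

M_wd = 0.618 * M_SUN
R_wd = 0.0124 * R_SUN
sigma = 30.0e3       # m/s, assumed projected velocity dispersion

r = (R_wd / sigma) ** (2.0 / 3.0) * (G * M_wd) ** (1.0 / 3.0)
print(round(r / R_wd))  # ~22, i.e. roughly 20 stellar radii
```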
An origin in the interstellar medium is unlikely.
The absorption
largely exceeds measurements of interstellar \ion{Ca}{2} K line widths ($<300$ m\AA) up
to distances of 400 pc \citep{wel2010}.
Assuming a maximum \ion{Ca}{2} volume density of $10^{-8}$ cm$^{-3}$, the total
column density at the distance of NLTT~25792 ($d\approx 36$ pc) would be $10^{12}$ ions cm$^{-2}$.
In conditions typical of the local ISM, the corresponding line equivalent width would not
exceed 50 m\AA. At a lower, typical volume density of $10^{-9}$ cm$^{-3}$ the column density
is $10^{11}$ cm$^{-2}$ and the corresponding equivalent width is only a few m\AA.
It is therefore unlikely that the excess \ion{Ca}{2} absorption would originate in the
interstellar medium. We conclude that it probably originates in a gaseous circumstellar environment.
The presence of ionized gas in the circumstellar environment of DAZ white dwarfs is well documented \citep[see, e.g.,][]{gae2006,mel2012,deb2012} and revealed mostly by
IR calcium triplet emission, although \citet{gae2012} noted excess \ion{Si}{4} absorption in ultraviolet spectra of
the hot DAZ PG~0843$+$516 that could also originate in a hot circumstellar environment.
The X-shooter spectra of NLTT~25792 show the IR calcium triplet in absorption.
The DAZ WD~1257$+$278 also shows excess absorption in the Ca~H\&K lines although the HIRES spectrum does not show
an obvious line shift or asymmetry. Again, the absorption profile resulting from transiting gas follows a
rotational broadening function that may be hidden within the combined profile.
The presence of a circumstellar Ca~K line in DAZ spectra is analogous to \ion{C}{4} absorption in the circumstellar environment of hot white dwarfs.
Ionized species of carbon and silicon are indeed found in the circumstellar environments of hot white dwarfs.
\citet{ban2003} ascribed those features to a Str\"omgren sphere in the interstellar medium excited by
ultraviolet radiation emanating from the white dwarf, although \citet{dic2012} also cite the possibility
of evaporating circumstellar debris in the intense ultraviolet radiation field. Clearly, young white dwarfs
may be surrounded by even denser material than their older, DAZ counterparts.
\begin{figure}
\epsscale{1.15}
\plotone{f5.eps}
\caption{The calcium lines in the X-shooter spectra (grey lines) are compared to spectral line syntheses (full lines) with $[{\rm Ca/H}] = -2.4$ (Table~\ref{tbl_abun}).
The Ca~K line is stronger than
predicted by the model and shows a possible nebular component shifted by $-20$ km~s$^{-1}$\ (short dashed line) that we added to the total profile (long dashed line).
\label{fig5}}
\end{figure}
\section{Summary and Discussion}
We measured the metallicity in the atmosphere of the DAZ white dwarf NLTT~25792 based on the detection
of magnesium, aluminum, calcium and iron lines in X-shooter spectra of this object.
The average abundance of these elements relative to solar on a logarithmic scale is $[{\rm X/H}]\approx -2.5$ dex.
On the same scale, the upper limits to the sodium and silicon abundances are $[{\rm Na/H}]\lesssim -3.1$ and $[{\rm Si/H}]\lesssim -3.0$.
The atmosphere of NLTT~25792 appears relatively rich in calcium, but relatively poor in sodium, magnesium, and silicon.
Also, the Ca~K line appeared both deeper and more blue-shifted than predicted by our models, suggesting the presence of circumstellar
gas. The absence of IR calcium triplet emission implies a lack of ionizing radiation concordant with the relatively low
effective temperature of NLTT~25792 ($T_{\rm eff}$ $\approx 7900$\,K) compared to other DAZ white dwarfs with gaseous
disks \citep[$T_{\rm eff}$ $\gtrsim 20,000$\,K, see, e.g.,][]{gae2006}, or their He-rich counterparts with comparable
ionizing flux \citep[e.g., SDSS~J0738+1835 with $T_{\rm eff}$$\approx14,000$\, K,][]{duf2012}.
Disk modeling by \citet{har2011} shows that emission lines, such as the IR calcium triplet,
occur within the gaseous disk at temperatures of $T_{\rm disk}\lesssim 7000$\,K
and inside the tidal radius.
Chemical diversity in the atmosphere of accreting white dwarfs may be attributed to diversity in source compositions.
For example, the DAZ NLTT~43806 \citep{kaw2006} appears to be
iron deficient prompting \citet{zuc2011} to propose that this deficiency along with a calcium/aluminum
enrichment implies that the white dwarf may
be accreting predominantly ``earth-type lithosphere" material.
Moreover, \citet{gae2012} found evidence of chemical diversity in a sample of
warm ($\sim 20\,000$ K) white dwarfs, particularly in the form of an
overabundance of iron that implies differentiation in the accretion flow.
Another explanation for chemical diversity is the time dependence of the accretion flow and atmospheric diffusion.
Diffusion time-scales at the bottom of the convection zone \citep[see][]{koe2009}
\footnote{See also updated time-scale calculations on-line at {\tt http://www1.astrophysik.uni-kiel.de/$\sim$koester/astrophysics/} and dated January 2013.}
range from $\tau\sim 10^3$ to $10^4$ yrs for a hydrogen-rich white dwarf with an effective temperature close to 8000\,K such as NLTT~25792.
This time-scale is nearly instantaneous relative to the age of the white dwarf ($t_{\rm cool} \approx 10^9$ yrs). However, we may well speculate that a time
scale of $10^3$ yrs is comparable to or longer than the time lapse between single accretion events, which would result in time-variable
abundances.
This complicated history may be summarized with two extreme examples related to calcium and iron: in the first, our observations took place shortly
($t\ll\tau$) after the accretion event and the abundances simply reflect the accretion source. In this case, the body accreted onto NLTT~25792
must have been moderately enriched in calcium relative to iron.
In the second example, the observations took place long after the accretion event ($t>\tau$) and, following
\citet{koe2009}, the abundances follow an exponential decline $X/H \propto e^{-t/\tau}$, so that the
abundance ratio of element X relative to Y follows:
\begin{displaymath}
\log{\rm X/Y}-\log{\rm X/Y}_{\rm source} = -\frac{t}{\ln{10}}\,\Big{(}\frac{1}{\tau_{\rm X}} - \frac{1}{\tau_{\rm Y}} \Big{)}.
\end{displaymath}
If $\tau_{\rm X} > \tau_{\rm Y}$, as in the case of calcium (X) versus iron (Y), then X (calcium) would gradually dominate
over Y (iron) on a time-scale comparable to diffusion time-scales. Numerically, $\tau_{\rm Ca}/\tau_{\rm Fe}\approx 1.3-1.5$ in
conditions appropriate for NLTT~25792, so that before all elements disappear from the atmosphere at, say, $t\approx\tau_{\rm Ca}$
the calcium abundance would be enhanced by a factor of 1.4 to 1.6 relative to initial conditions. Assuming initially solar abundances
this calcium-to-iron abundance ratio is nearly the ratio observed in NLTT~25792 ($\approx 1.5$, see Table~\ref{tbl_abun}) and there would be no need to assume
a calcium-rich accretion source. A similar exercise involving the calcium to magnesium abundance ratio necessarily implies a shortage of magnesium in the
accretion source because their respective diffusion time-scales are nearly equal. The same situation holds for sodium.
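The enhancement factor quoted above follows directly from the exponential decline; a short numerical sketch, with time expressed in units of $\tau_{\rm Ca}$:

```python
import math

# Growth of the Ca/Fe ratio after accretion stops: abundances decay as
# exp(-t/tau), so X/Y grows by exp(t*(1/tau_Fe - 1/tau_Ca)). Here t is
# expressed in units of tau_Ca and tau_Ca/tau_Fe spans the quoted
# range 1.3-1.5.

def ca_fe_enhancement(t_in_tau_ca, tau_ratio):
    """Factor by which Ca/Fe exceeds the accreted ratio at time t."""
    return math.exp(t_in_tau_ca * (tau_ratio - 1.0))

print(round(ca_fe_enhancement(1.0, 1.3), 2),
      round(ca_fe_enhancement(1.0, 1.5), 2))
# factors of about 1.35 and 1.65, bracketing the quoted 1.4-1.6
```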
In steady state accretion, the observed abundance ratios are simply given by
\begin{displaymath}
\frac{\rm X}{\rm Y} = \frac{\rm X}{\rm Y}_{\rm source} \,\frac{\tau_{\rm X}}{\tau_{\rm Y}}
\end{displaymath}
In this case we would have expected an excess of 0.11 to 0.17 dex of the calcium to iron ratio relative to a source assumed
to be solar, i.e., close to the excess observed in NLTT~25792 ($0.23\pm0.10$).
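In steady state the expected excess is simply $\log(\tau_{\rm Ca}/\tau_{\rm Fe})$, which can be evaluated directly:

```python
import math

# Steady-state excess of the Ca/Fe ratio over the source ratio:
# X/Y = (X/Y)_source * tau_Ca/tau_Fe, i.e. an excess of
# log10(tau_Ca/tau_Fe) dex, with tau_Ca/tau_Fe in the quoted
# range 1.3-1.5.
for tau_ratio in (1.3, 1.5):
    print(round(math.log10(tau_ratio), 2))
# 0.11 and 0.18 dex, i.e. the quoted 0.11-0.17 window up to rounding
```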
However, examining other abundance ratios should help constrain the abundance pattern of the accretion source.
For example, sodium, magnesium, and calcium have nearly identical diffusion time scales but we found
a significant deficit in sodium and magnesium relative to solar with $[{\rm Na/Ca}] \lesssim -0.7$ and $[\rm Mg/Ca] = -0.4$.
Diffusion time scales for silicon are uncertain. The calculations of \cite{koe2009} indicate a longer time scale for silicon
than calcium, although the recent on-line data indicate the opposite. In either case, the silicon time scale is longer
than that of iron for conditions found in NLTT~25792 implying that the silicon deficit ($[{\rm Si/Fe}] \lesssim -0.4$, $[{\rm Si/Ca}] \lesssim -0.6$) can only be explained by its
absence in the accretion source.
Assuming steady state, we conclude that the accretion source shows a deficit
in sodium, magnesium and silicon relative to calcium and iron. The aluminum abundance does not significantly depart from
solar abundance ratios.
Interestingly, \citet{koe2011} finds that a deficit in sodium relative to solar in some DZ stars would occur while accreting ``bulk Earth'' material, although
an explanation for a similar deficit in silicon is not forthcoming.
In summary, and
following \citet{koe2009}, the estimated steady-state accretion rates onto the white dwarf NLTT~25792 are $\dot{M}_{\rm Mg} = 4.6\times10^7$\,g\,s$^{-1}$,
$\dot{M}_{\rm Ca} = 1.2\times10^7$\,g\,s$^{-1}$, and $\dot{M}_{\rm Fe} = 1.7\times10^8$\,g\,s$^{-1}$, and
using on-line data from D. Koester we estimate $\dot{M}_{\rm Al} = 5.3\times10^6$\,g\,s$^{-1}$.
This limited DAZ sample already suggests that the circumstellar environment varies significantly. Abundance ratios, more particularly
$[\rm Mg/Ca]$, vary considerably within this sample, from $[\rm Mg/Ca] = -0.4$ in NLTT~25792
to $[\rm Mg/Ca] = +0.27$ in WD~0354$+$463. The reasons for these variations are not known.
The ratios $[{\rm Na/Ca}]$ and $[{\rm Si/Ca}]$ in NLTT~25792 fall well
below solar ratios and this deficit must originate in the accretion source, although
the precise circumstances surrounding accretion of circumstellar material onto cool white dwarfs remain uncertain.
In this context, a correlation found between condensation temperature of accreted constituents and corresponding
photospheric abundances in a helium-rich polluted white dwarf (DBZ) may offer some clues to the exact nature of
the accretion mechanism \citep{duf2012}.
In a broader context, the calcium to iron abundance ratio in DZ and DAZ stars alike is known to vary by well over
an order of magnitude \citep[see][]{jur2013} although we find this ratio to be rather homogeneous within our sample.
On the other hand, \citet{koe2011} measured large dispersions (0.4 to 0.6 dex) in the abundance ratio distributions
(Na, Mg, Ca, and Fe)
of a large DZ sample.
The present study shows that such variations occur between magnesium and other elements within a sample of
closely related DAZ white dwarfs.
New spectroscopy with a higher dispersion than achieved with X-shooter ($R\approx 9000$) would be useful in
resolving the Ca~K line profile into its photospheric and circumstellar components, if any.
More importantly,
the existence of polluted, magnetic white dwarfs
\citep[e.g., NLTT~10480 and NLTT~43806;][]{kaw2011,zuc2011} may be linked to field-generating
accretion or interaction events \citep{tou2008,nor2011}. The detection of magnetic fields weaker than 20 kG is
not possible at the resolution achieved with X-shooter but high-dispersion spectroscopy of cool
DAZ white dwarfs may help reinforce a link between accretion events and magnetic field generation in
compact objects.
\acknowledgments
A.K. and S.V. acknowledge support from the Grant Agency of the Czech Republic
(P209/12/0217 and 13-14581S). This work was also supported by the project
RVO:67985815 in the Czech Republic. We thank the referee for suggesting several
improvements to the paper.
This research has made use of the Keck Observatory Archive (KOA), which is operated
by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI),
under contract with the National Aeronautics and Space Administration.
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which
is a joint project of the University of California, Los Angeles, and the Jet Propulsion
Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration,
and from the Two Micron All Sky Survey, which is a joint project of
the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of
Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
{\it Facilities:} \facility{VLT:Kueyen}
We start with the definition of an infinitely divisible random measure, cf. \cite{RR89}, p.~454. Let $E$ be an arbitrary non-empty set and $\mathcal{D}$ be a $\delta$-ring (i.~e. a ring which is closed under countable intersections) of subsets of $E$ such that there exists an increasing sequence $\{E_n\}_{n \in \ensuremath{\mathbb{N}}} \subset \mathcal{D}$ with $\bigcup_{n \in \ensuremath{\mathbb{N}}} E_n = E$. Recall that a ring of sets is a non-empty class of sets which is closed under the formation of unions and differences of sets, see e.~g. \cite{Hal74}, p.~19.
Let $\Lambda = \{\Lambda(A): A \in \mathcal{D}\}$ be a real stochastic process defined on some probability space $(\Omega,\mathcal{F}, \mathbb{P})$ such that for each sequence of pairwise disjoint sets $\{A_n\}_{n \in \ensuremath{\mathbb{N}}} \subset \mathcal{D}$, the following properties hold:
\begin{itemize}
\item $\Lambda$ is \textit{independently scattered}, i.~e. the random variables $\Lambda(A_n)$, $n=1,2,\ldots$, are independent,
\item $\Lambda$ is \textit{$\sigma$-additive}, i.~e. $\Lambda(\bigcup_{n \in \ensuremath{\mathbb{N}}} A_n) = \sum_{n \in \ensuremath{\mathbb{N}}} \Lambda(A_n)$ almost surely if $\bigcup_{n \in \ensuremath{\mathbb{N}}} A_n \in \mathcal{D}$,
\item $\Lambda(A)$ is an \textit{infinitely divisible} (\textbf{ID}) random variable for each $A \in \mathcal{D}$, i.~e. $\Lambda(A)$ has the law of the sum of $n$ independent and identically distributed random variables for any natural number $n \in \ensuremath{\mathbb{N}}$.
\end{itemize}
Then $\Lambda$ is called \textit{infinitely divisible random measure}.
Let $\Psi_{\Lambda(A)}$ be the characteristic function of $\Lambda(A)$. Since $\Lambda(A)$ is \textbf{ID}, its characteristic function is given by the L\'evy-Khintchine representation
\begin{equation}
\Psi_{\Lambda(A)}(t) = \exp\left\{it\nu_0(A) - \frac{1}{2} t^2\nu_1(A) + \int_{\ensuremath{\mathbb{R}}}\left(e^{itx} - 1- it\tau(x)\right)F_A(dx)\right\}, \label{eq:cf_Lambda}
\end{equation}
where $\nu_0: \mathcal{D} \to \ensuremath{\mathbb{R}}$ is a signed measure, $\nu_1: \mathcal{D} \to [0,\infty)$ is a measure, and $F_A$ is a L\'evy measure on $\mathcal{B}(\ensuremath{\mathbb{R}})$, i.~e.
$$\int_\ensuremath{\mathbb{R}} \min(1,z^2)F_A(dz) < \infty$$
and
$$ \tau(z) = \begin{cases}
z,& \vert z \vert \leq 1, \\
\frac{z}{\vert z \vert}, & \vert z \vert > 1.
\end{cases}$$
Define the measure $\lambda$ by
$$\lambda(A) := \vert \nu_0 \vert(A) + \nu_1(A) + \int_\ensuremath{\mathbb{R}} \min(1,z^2) F_A(dz), \quad A \in \mathcal{D}.$$
We call $\lambda$ the \textit{control measure} of the \textbf{ID} random measure $\Lambda$.
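For instance, in the purely Gaussian case $\nu_0 \equiv 0$ and $F_A \equiv 0$ for all $A \in \mathcal{D}$, the representation (\ref{eq:cf_Lambda}) reduces to
$$\Psi_{\Lambda(A)}(t) = \exp\left\{-\frac{1}{2}t^2\nu_1(A)\right\},$$
so that $\Lambda(A) \sim \mathcal{N}(0,\nu_1(A))$ and the control measure is simply $\lambda = \nu_1$; if, in addition, $\nu_1$ is the Lebesgue measure, $\Lambda$ is Gaussian white noise.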
Let $\sigma(\mathcal{D})$ be the $\sigma$-algebra generated by $\mathcal{D}$ and $I_A: E \to \{0,1\}$ the indicator function of a set $A \subset E$ with
\begin{equation*}
I_A(x) := \begin{cases}
1, & x \in A,\\
0, & x \notin A.
\end{cases} \label{eq:ind}
\end{equation*}
For disjoint sets $A_j \in \mathcal{D}$, real numbers $x_j$, $j=1,\ldots,n$, $n \in \ensuremath{\mathbb{N}}$, and simple functions of the form $f = \sum_{j=1}^n x_j I_{A_j}$, we define for every $A \in \sigma(\mathcal{D})$
$$\int_A f d\Lambda := \sum_{j=1}^n x_j \Lambda(A \cap A_j).$$
Let $f_t:E \to \ensuremath{\mathbb{R}}$, $t \in \ensuremath{\mathbb{R}}^d$, $d \in \ensuremath{\mathbb{N}}$, be a $\sigma(\mathcal{D})$-measurable function which is \textit{$\Lambda$-integrable}, that is, there exists a sequence of simple functions $\{\tilde{f}_t^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$ such that
\begin{enumerate}
\item $\tilde{f}_t^{(n)} \to f_t \quad \lambda-\text{a.e.},$
\item for every set $A \in \sigma(\mathcal{D})$, the sequence $\{\int\limits_{A} \tilde{f}_t^{(n)}(x) \Lambda(dx)\}_{n \in \ensuremath{\mathbb{N}}}$ converges in probability as $n \to \infty$.
\end{enumerate}
A family $X = \{X(t), t \in \ensuremath{\mathbb{R}}^d\}$ of real-valued random variables $X(t)$ defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ is called \textit{random field}. For each $t \in \ensuremath{\mathbb{R}}^d$, we define
\begin{equation}
\int\limits_{E} f_t(x) \Lambda(dx) := \underset{n \to \infty}{\text{plim}} \int\limits_{E} \tilde{f}_t^{(n)}(x) \Lambda(dx), \label{eq:def_int}
\end{equation}
where $\text{plim}$ denotes the limit in probability, and consider random fields of the form
\begin{equation}
X(t) = \int\limits_{E} f_t(x) \Lambda(dx), \quad t \in \ensuremath{\mathbb{R}}^d. \label{eq:spectralRepresentation}
\end{equation}
In \cite{UW67}, it is shown that (\ref{eq:def_int}) does not depend on the approximating sequence $\{\tilde{f}_t^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$ and is thus well-defined.
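A standard example of (\ref{eq:spectralRepresentation}) is obtained for $d=1$ by taking $E = [0,\infty)$, $\mathcal{D}$ the bounded Borel subsets of $E$, and $f_t = I_{[0,t]}$, $t \geq 0$. Then
$$X(t) = \int\limits_{E} I_{[0,t]}(x) \Lambda(dx) = \Lambda([0,t]), \quad t \geq 0,$$
is a process with independent increments; if, moreover, the characteristics $(\nu_0,\nu_1,F)$ of $\Lambda$ are translation invariant, $X$ is a L\'evy process.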
Notice that by Lemma~2.3. in \cite{RR89}, there exists a $\sigma$-finite measure $F$ on $\sigma(\mathcal{D}) \times \mathcal{B}(\ensuremath{\mathbb{R}})$ such that
\begin{eqnarray*}
F_A(B) = F(A \times B) \quad \text{and} \quad F(dx,ds) = \rho(x,ds)\lambda(dx),
\end{eqnarray*}
where $\rho: E \times \mathcal{B}(\ensuremath{\mathbb{R}}) \to [0,\infty]$ is a function such that $\rho(x,\cdot)$ is a L\'evy measure on $\mathcal{B}(\ensuremath{\mathbb{R}})$ for every $x \in E$ and $\rho(\cdot,B)$ is a Borel measurable function for every $B \in \mathcal{B}(\ensuremath{\mathbb{R}})$. Moreover, $\nu_0$ and $\nu_1$ are absolutely continuous with respect to $\lambda$. We set $a := d\nu_0 / d\lambda$ and $\sigma^2 := d\nu_1 / d\lambda$.
Let us introduce a certain type of dependence structure, namely (positive or negative) association, which we will consider in the following section. Let $\vert I \vert$ denote the cardinality of a finite set $I \subset T$ and $X_I:= \{X(t), t \in I\}$.
\begin{definition}\label{def:association}
Let $\mathcal{M}(n)$ be the class of real-valued bounded coordinate-wise nondecreasing Borel functions on $\ensuremath{\mathbb{R}}^n$, $n \in \ensuremath{\mathbb{N}}$, and $T$ be an index set.
\begin{enumerate}
\item[(a)] A family $\{X(t), t \in T\}$ is called associated (\textbf{A}) if for every finite set $I \subset T$ and any functions $f,g \in \mathcal{M}(\vert I\vert)$, one has
$$ \text{Cov}(f(X_I),g(X_I)) \geq 0.$$
\item[(b)] A family $\{X(t), t \in T\}$ is called positively associated (\textbf{PA}) if for any disjoint finite sets $I,J \subset T$ and all functions $f \in \mathcal{M}(\vert I\vert)$, $g \in \mathcal{M}(\vert J\vert)$, one has
$$ \text{Cov}(f(X_I),g(X_J)) \geq 0.$$
\item[(c)] A family $\{X(t), t \in T\}$ is called negatively associated (\textbf{NA}) if for any disjoint finite sets $I,J \subset T$ and all functions $f \in \mathcal{M}(\vert I\vert)$, $g \in \mathcal{M}(\vert J\vert)$, one has
$$ \text{Cov}(f(X_I),g(X_J)) \leq 0.$$
\end{enumerate}
\end{definition}
In the above definition, $X_K$ for a finite set $K=\{t_1,\ldots,t_n\} \subset T$ is understood as the random vector $(X(t_1),\ldots,X(t_n))^\mathsf{T}$; the definition does not depend on the chosen ordering of the coordinates.
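As a hedged numerical aside, the covariance inequality in part (a) can be verified exhaustively in the simplest nontrivial situation: a single random variable is always associated, and every bounded nondecreasing $\{0,1\}$-valued function of it is an indicator of an upper set. The discrete distribution below is our own illustrative choice.

```python
import itertools
import numpy as np

# Illustration only: a single random variable X is associated, i.e.
# Cov(f(X), g(X)) >= 0 for bounded nondecreasing f, g.  For X uniform on
# {0,...,4}, every {0,1}-valued nondecreasing function is an indicator
# x -> 1{x >= k}, so all pairs can be checked exhaustively.
support = np.arange(5)
p = np.full(5, 0.2)                      # uniform probabilities

def cov(f_vals, g_vals):
    # covariance of f(X) and g(X) under the discrete law p
    return np.dot(p, f_vals * g_vals) - np.dot(p, f_vals) * np.dot(p, g_vals)

steps = [np.where(support >= k, 1.0, 0.0) for k in range(6)]
min_cov = min(cov(f, g) for f, g in itertools.product(steps, repeat=2))
print(min_cov >= -1e-12)  # True: all covariances are nonnegative
```

For indicators $f = 1\{x \ge j\}$, $g = 1\{x \ge k\}$ with $j \le k$ one has $\text{Cov}(f,g) = \mathbb{P}(X \ge k)(1-\mathbb{P}(X \ge j)) \ge 0$, which the exhaustive check confirms.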
Finally, we provide the definition of stochastic continuity.
\begin{definition}
A random field $X=\{X(t), t \in \ensuremath{\mathbb{R}}^d\}$ is called \textit{stochastically continuous} at $t \in \ensuremath{\mathbb{R}}^d$ if $\underset{s \to t}{\text{plim}}X(s) = X(t)$.
\end{definition}
\section{Results}
\begin{result}
Let $X=\{X(t), t \in \ensuremath{\mathbb{R}}^d\}$ be a random field of the form (\ref{eq:spectralRepresentation}). Then $X$ is \textbf{ID}, that is, the law of the random vector $(X(t_1),\ldots,X(t_n))^\mathsf{T}$, $n \in \ensuremath{\mathbb{N}}$, is an \textbf{ID} probability measure on $\ensuremath{\mathbb{R}}^n$ for all $t_1,\ldots,t_n \in \ensuremath{\mathbb{R}}^d$.
\end{result}
\begin{proof}
Let $\varphi_{(t_1,\ldots,t_n)}$ be the characteristic function of $(X(t_1),\ldots,X(t_n))^\mathsf{T}$. It is enough to show that $\varphi_{(t_1,\ldots,t_n)}^\gamma$ is a characteristic function for all $\gamma > 0$.
Due to the linearity of the spectral representation (\ref{eq:spectralRepresentation}) and the fact that any linear combination of $\Lambda$-integrable functions is $\Lambda$-integrable (cf. \cite{JW94}, p.~81), we have
\begin{eqnarray*}
\varphi_{(t_1,\ldots,t_n)}^\gamma(x) &=& \left(\varphi_{\sum_{j=1}^n x_j X(t_j)}(1)\right)^\gamma \\
&=& \exp\left\{\int_E \left[i\sum_{j=1}^n x_j f_{t_j}(y) a(y) - \frac{1}{2} \left(\sum_{j=1}^n x_j f_{t_j}(y)\right)^2 \sigma^2(y)\right.\right.\\
&&\left.\left. + \int_\ensuremath{\mathbb{R}} \left(e^{i \sum_{j=1}^n x_j f_{t_j}(y) s} - 1 - i \sum_{j=1}^n x_j f_{t_j}(y) \tau(s) \right) \rho(y,ds)\right] \gamma \lambda(dy)\right\},
\end{eqnarray*}
where the last equality follows from Proposition~2.6. in \cite{RR89}. Define $\nu_0^*:\mathcal{D} \to \ensuremath{\mathbb{R}}$, $\nu_1^*: \mathcal{D} \to [0,\infty)$, $F_A^*:\mathcal{B}(\ensuremath{\mathbb{R}}) \to [0,\infty)$ by $\nu_0^*(ds) := a(s) \gamma \lambda(ds) = \gamma \nu_0(ds)$, $\nu_1^*(ds):= \sigma^2(s) \gamma \lambda(ds) = \gamma \nu_1(ds)$ and
$$F_A^*(B) := \int_E \int_\ensuremath{\mathbb{R}} I_{A \times B}(s,x) \rho(s,dx) \gamma \lambda(ds) = \gamma \int_{A \times B} F(ds,dx) = \gamma F(A \times B) = \gamma F_A(B)$$
for all $A \in \mathcal{D}$ and $B \in \mathcal{B}(\ensuremath{\mathbb{R}})$, cf. Lemma~2.3. in \cite{RR89}. Since $\nu_0^*$ is a signed measure, $\nu_1^*$ is a measure, $F_A^*$ is a L\'evy measure on $\ensuremath{\mathbb{R}}$ for all $A \in \mathcal{D}$ and $F^*_\cdot(B)$ is a measure for all $B \in \mathcal{B}(\ensuremath{\mathbb{R}})$ whenever $0 \notin \bar{B}$, there exists an \textbf{ID} random measure $\Lambda^*$ with characteristic function (\ref{eq:cf_Lambda}) (and control measure $\lambda^*=\gamma \lambda$), where $\nu_0$, $\nu_1$ and $F_A$ in (\ref{eq:cf_Lambda}) are replaced by $\nu_0^*$, $\nu_1^*$ and $F_A^*$, respectively, see Proposition~2.1.~(b) in \cite{RR89}. Therefore, $\varphi_{(t_1,\ldots,t_n)}^\gamma$ is the characteristic function of $(Y(t_1),\ldots,Y(t_n))^\mathsf{T}$ with
$$Y(t) := \int_E f_t(x) \Lambda^*(dx).$$
\end{proof}
The following result provides sufficient conditions for $\Lambda$-integrability, cf. Lemma~1 in \cite{HPVJ08}.
\begin{result}\label{result:integrability}
Let $f:E \to \ensuremath{\mathbb{R}}$ be a $\sigma(\mathcal{D})$-measurable function. If
\begin{itemize}
\item[(i)] $\int_E \vert f(x) a(x) \vert \lambda(dx) < \infty$,
\item[(ii)] $\int_E f^2(x) \sigma^2(x) \lambda(dx) < \infty$,
\item[(iii)] $\int_E \int_\ensuremath{\mathbb{R}} \vert f(x) s \vert \rho(x,ds) \lambda(dx) < \infty$,
\end{itemize}
then $f$ is $\Lambda$-integrable and the characteristic function of $\int_E f(x) \Lambda(dx)$ is given by
\begin{eqnarray*}
&&\hspace*{-0.5cm}\Psi_{\int_E f(x) \Lambda(dx)}(t) \\
&&\hspace*{-0.5cm}= \exp\left\{ it \int_E f(x) \nu_0(dx) - \frac{1}{2} t^2 \int_E f^2(x) \nu_1(dx) + \int_E \int_\ensuremath{\mathbb{R}} \left(e^{itf(x) s}-1-itf(x) \tau(s)\right) F(dx,ds)\right\}.
\end{eqnarray*}
\end{result}
\begin{proof}
By Theorem~2.7 in \cite{RR89}, it suffices to show
\begin{itemize}
\item[(a)] $\int_E \vert U(f(x),x)\vert \lambda(dx) < \infty$,
\item[(b)] $\int_E \vert V_0(f(x),x)\vert \lambda(dx) < \infty$,
\end{itemize}
where
\begin{eqnarray*}
U(u,x) &=& u a(x) + \int_\ensuremath{\mathbb{R}} \left(\tau(su) - u\tau(s)\right)\rho(x,ds) \\
V_0(u,x) &=& \int_\ensuremath{\mathbb{R}} \min\{1,\vert su\vert^2\} \rho(x,ds).
\end{eqnarray*}
We follow the proof of Lemma~1 in \cite{HPVJ08}. We have $\vert \tau(su) \vert \leq \vert su \vert$ and $\vert u\tau(s) \vert \leq \vert us \vert$. This implies
\begin{equation*}
\vert U(f(x),x)\vert \leq \vert f(x) a(x) \vert + \int_\ensuremath{\mathbb{R}} \left(\vert f(x)s \vert + \vert f(x)s\vert \right)\rho(x,ds) = \vert f(x) a(x) \vert + 2 \int_\ensuremath{\mathbb{R}} \vert f(x)s\vert\rho(x,ds)
\end{equation*}
so that condition (a) is satisfied by (i) and (iii). Since $\min\{1,(s f(x))^2\} \leq \vert s f(x) \vert$, condition (b) is satisfied by (iii).
We now derive the formula for the characteristic function of $\int_E f(x) \Lambda(dx)$. By Proposition 2.6. in \cite{RR89}, it is given by
\begin{eqnarray*}
&&\hspace*{-0.5cm}\Psi_{\int_E f(x) \Lambda(dx)}(t) \\
&&\hspace*{-0.5cm}= \exp\left\{ \int_E \left[itf(x) a(x) - \frac{1}{2} t^2 f^2(x) \sigma^2(x) + \int_\ensuremath{\mathbb{R}} \left(e^{itf(x) s}-1-itf(x) \tau(s)\right) \rho(x,ds)\right] \lambda(dx)\right\}.
\end{eqnarray*}
We have $\int_E \vert f(x) a(x) \vert \lambda(dx) < \infty$ and $\int_E f^2(x) \sigma^2(x) \lambda(dx) < \infty$ by (i) and (ii). It remains to show that
$$\int_E \left\vert \int_\ensuremath{\mathbb{R}} \left(e^{itf(x) s}-1-itf(x) \tau(s)\right) \rho(x,ds) \right\vert \lambda(dx) < \infty.$$
Let $y \neq 0$. By using the mean value theorem, we get
\begin{eqnarray*}
\left\vert \frac{\sin(y) - \sin(0)}{y-0} \right\vert &=& \vert \sin(\xi_1)\vert \leq 1, \\
\left\vert \frac{\cos(y) - \cos(0)}{y-0} \right\vert &=& \vert \cos(\xi_2)\vert \leq 1,
\end{eqnarray*}
where $\xi_1,\xi_2 \in [0,y]$ if $y>0$ and $\xi_1,\xi_2 \in [y,0]$ if $y<0$. Therefore, we have for each $y \in \ensuremath{\mathbb{R}}$
\begin{eqnarray*}
\vert e^{iy} - 1\vert &\leq& \vert e^{iy} - e^{i0} \vert = \vert \cos(y) + i \sin(y) - \cos(0) - i \sin(0) \vert \\
&=& \sqrt{(\cos(y) - \cos(0))^2+(\sin(y)-\sin(0))^2} \leq \sqrt{y^2 + y^2} = \sqrt{2} \vert y \vert.
\end{eqnarray*}
This implies
\begin{eqnarray*}
&&\int_E \left\vert \int_\ensuremath{\mathbb{R}} \left(e^{itf(x) s}-1-itf(x) \tau(s)\right) \rho(x,ds) \right\vert \lambda(dx) \\
&\leq& \int_E\int_\ensuremath{\mathbb{R}}\left(\left\vert e^{itf(x) s}-1\right\vert + \vert tf(x) \tau(s)\vert\right) \rho(x,ds)\lambda(dx) \\
&\leq& \int_E\int_\ensuremath{\mathbb{R}}\left( \sqrt{2} \vert t f(x) s \vert + \vert t f(x) s\vert \right) \rho(x,ds)\lambda(dx) \\
&\leq& \vert t\vert (\sqrt{2}+1) \int_E\int_\ensuremath{\mathbb{R}} \vert f(x) s \vert \rho(x,ds)\lambda(dx) < \infty,
\end{eqnarray*}
where the last inequality follows from (iii).
\end{proof}
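The elementary estimate $\vert e^{iy}-1\vert \leq \sqrt{2}\,\vert y\vert$ used in the proof above can also be checked numerically; the following snippet is only a sanity check (in fact $\vert e^{iy}-1\vert = 2\vert\sin(y/2)\vert \leq \vert y\vert$, so the constant $\sqrt{2}$ is not sharp).

```python
import numpy as np

# Sanity check (illustration only) of the bound |e^{iy} - 1| <= sqrt(2)*|y|
# derived via the mean value theorem, over a fine grid of y-values.
y = np.linspace(-50.0, 50.0, 100_001)
lhs = np.abs(np.exp(1j * y) - 1.0)
print(bool(np.all(lhs <= np.sqrt(2) * np.abs(y) + 1e-12)))  # True
print(bool(np.all(lhs <= np.abs(y) + 1e-12)))               # True (sharper bound)
```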
We now provide a sufficient condition for the independence of two families of random variables taken from the random field (\ref{eq:spectralRepresentation}). We denote the support of a function $f$ by $supp(f)$.
\begin{result}\label{lemma:IDindependence}
Let $X$ be a random field of the form (\ref{eq:spectralRepresentation}). Let $K,L \subset T=\{t_1,\ldots,t_k\}$, $T \subset \ensuremath{\mathbb{R}}^d$, $k\in\ensuremath{\mathbb{N}}$, with $K\cup L = T$, $K,L \neq \emptyset$ and $K\cap L = \emptyset$. If
\begin{equation}
\left(\bigcup_{t_i \in K} supp(f_{t_i})\right)\bigcap\left(\bigcup_{t_j \in L} supp(f_{t_j})\right) = \emptyset, \label{eq:support}
\end{equation}
then the families of random variables $\{X(t_i), t_i \in K\}$ and $\{X(t_j), t_j \in L\}$ are independent.
\end{result}
\begin{proof}
Let $\varphi_K$, $\varphi_L$ and $\varphi_{T}$ be the characteristic functions of a fixed permutation of the random vectors constructed from the families of random variables $\{X(t_i), t_i \in K\}$, $\{X(t_j), t_j \in L\}$ and $\{X(t), t \in T\}$, respectively. Furthermore, let $x_K \in \ensuremath{\mathbb{R}}^{\vert K \vert}$, $x_L \in \ensuremath{\mathbb{R}}^{\vert L \vert}$ and $x_T \in \ensuremath{\mathbb{R}}^{\vert T \vert}$, and define $\psi:\ensuremath{\mathbb{R}}\times E \to \ensuremath{\mathbb{C}}$ by
$$\psi(t,s) := it a(s) - \frac{1}{2} t^2 \sigma^2(s) + \int_\ensuremath{\mathbb{R}}\left(e^{itx} - 1 - it\tau(x)\right) \rho(s,dx).$$
We have
\begin{eqnarray*}
\varphi_{K} (x_K) &=& \varphi_{\sum\limits_{t_i \in K} x_{t_i} X(t_i)}(1) = \exp\left\{\int_E \psi\left(\sum_{t_i \in K} x_{t_i} f_{t_i}(s),s\right) \lambda(ds)\right\}, \\
\varphi_{L} (x_L) &=& \exp\left\{\int_E \psi\left(\sum_{t_j \in L} x_{t_j} f_{t_j}(s),s\right) \lambda(ds)\right\}, \\
\varphi_{T} (x_T) &=& \exp\left\{\int_E \psi\left(\sum_{t \in T} x_{t} f_{t}(s),s\right) \lambda(ds)\right\},
\end{eqnarray*}
see Proposition~2.6. in \cite{RR89}. By using condition (\ref{eq:support}), it is not difficult to check that
\begin{equation}
\psi\left(\sum_{t_i \in K} x_{t_i} f_{t_i}(s),s\right) + \psi\left(\sum_{t_j \in L} x_{t_j} f_{t_j}(s),s\right) = \psi\left(\sum_{t \in T} x_{t} f_{t}(s),s\right) \label{eq:help28}
\end{equation}
for each $s \in \bigcup_{t_i \in K} supp(f_{t_i})$ and each $s \in \bigcup_{t_j \in L} supp(f_{t_j})$; for $s$ outside both unions all three terms equal $\psi(0,s)=0$.
Thus, (\ref{eq:help28}) holds for all $s \in E$. This implies
$$\varphi_K(x_K) \varphi_L(x_L) = \varphi_T(x_T) $$
such that $\{X(t_i), t_i \in K\}$ and $\{X(t_j), t_j \in L\}$ are independent (cf.~Theorem~4 in \cite{Shi96}, p.~286, and its proof).
\end{proof}
The following result provides a sufficient condition for a random field of the form (\ref{eq:spectralRepresentation}) to be associated.
\begin{result}\label{result:association}
Suppose that for all $t \in \ensuremath{\mathbb{R}}^d$, either $f_t(x) \geq 0$ for all $x \in E$ or $f_t(x) \leq 0$ for all $x \in E$. Then (\ref{eq:spectralRepresentation}) is an associated random field.
\end{result}
\begin{proof}
Let $f$ be $\Lambda$-integrable and non-negative. In the proof of Theorem~2.7. in \cite{RR89}, the corresponding approximating sequence $\{\tilde{f}^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$ for $f$ is selected in the following way.
Let $A_n = \{x \in E: \vert f(x) \vert \leq n \} \cap E_n$. Choose a sequence of simple $\mathcal{D}$-measurable functions $\{\tilde{f}^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$ such that
\begin{equation}
\tilde{f}^{(n)}(x) = 0 \text{ if }x \notin A_n, \quad \vert \tilde{f}^{(n)}(x) - f(x) \vert \leq \frac{1}{n} \text{ if } x \in A_n, \quad \vert \tilde{f}^{(n)}(x) \vert \leq \vert f(x) \vert\,\, \forall x \in E. \label{eq:approx_props}
\end{equation}
We now define a simple function $f_*^{(n)}$ by
$$f_*^{(n)}(x):=\begin{cases}
\tilde{f}^{(n)}(x), & \tilde{f}^{(n)}(x) \geq 0, \\
0, & \tilde{f}^{(n)}(x) < 0,
\end{cases} \quad x \in E.$$
Since $f$ is non-negative, it is easy to see that the sequence $\{f_*^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$ fulfills the same properties (\ref{eq:approx_props}) as $\{\tilde{f}^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$. So $\{f_*^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$ is an approximating sequence for $f$ which is additionally non-negative. As $f_*^{(n)}$ is simple for all $n \in \ensuremath{\mathbb{N}}$, we can write
$$f_*^{(n)} = \sum_{j=1}^{m(n)} x_j I_{B_j}$$
for some $m(n) \in \ensuremath{\mathbb{N}}$, $x_j \geq 0$ and disjoint $B_j \subset A_n$, $j=1,\ldots,m(n)$.
Assume now that for all $t \in \ensuremath{\mathbb{R}}^d$, $f_t(x) \geq 0$ for all $x \in E$. Let $t_1,\ldots,t_r \in \ensuremath{\mathbb{R}}^d$, $r \in \ensuremath{\mathbb{N}}$, and $\{f_{t_i}^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$ be the approximating sequences of the kernel functions $f_{t_i}$ in the spectral representation
$$X(t_i) = \int_E f_{t_i}(x) \Lambda(dx).$$
Consider
$$X^{(n)}(t_i) = \int_E f_{t_i}^{(n)}(x) \Lambda(dx) = \int_E \sum_{j=1}^{m(n,i)} x_j^{(i)} I_{B_j^{(i)}}(x)\Lambda(dx) = \sum_{j=1}^{m(n,i)} x_j^{(i)} \Lambda(B_j^{(i)})$$
and, as we have just seen, we can assume without loss of generality that $x_j^{(i)} \geq 0$. By further decomposing $B_j^{(i)}$ if necessary, we can find a set of disjoint sets $\{B_j,j=1,\ldots,m(n)\}$ for some $m(n) \in \ensuremath{\mathbb{N}}$ which does not depend on $i$ such that
$$X^{(n)}(t_i) = \sum_{j=1}^{m(n)} x_j^{(i)} \Lambda(B_j), \quad \forall i=1,\ldots,r.$$
The sets $B_j$, $j=1,\ldots,m(n)$, can be obtained by intersecting and subtracting the sets $B_j^{(i)}$ appropriately. This implies that $B_j \in \mathcal{D}$, $j=1,\ldots,m(n)$, by using the properties of rings of sets.
We now show that the random vector $(X^{(n)}(t_1),\ldots,X^{(n)}(t_r))^\mathsf{T}$ is \textbf{A} for all $n \in \ensuremath{\mathbb{N}}$. Consider the set $I=\{t_1,\ldots,t_r\}$ and let $f \in \mathcal{M}(r)$ and $g \in \mathcal{M}(r)$. We write $X^{(n)}(I)$ for the random vector consisting of an arbitrary permutation of the corresponding elements $X^{(n)}(s)$, $s \in I$. Consider the functions $k,l:\ensuremath{\mathbb{R}}^{m(n)} \to \ensuremath{\mathbb{R}}$ defined by
\begin{eqnarray*}
k(y_1,\ldots,y_{m(n)}) &:=& f\left(\sum_{j=1}^{m(n)} x_j^{(1)} y_j,\ldots,\sum_{j=1}^{m(n)} x_j^{(r)} y_j\right), \quad (y_1,\ldots,y_{m(n)})^\mathsf{T} \in \ensuremath{\mathbb{R}}^{m(n)}, \\
l(y_1,\ldots,y_{m(n)}) &:=& g\left(\sum_{j=1}^{m(n)} x_j^{(1)} y_j,\ldots,\sum_{j=1}^{m(n)} x_j^{(r)} y_j\right), \quad (y_1,\ldots,y_{m(n)})^\mathsf{T} \in \ensuremath{\mathbb{R}}^{m(n)}.
\end{eqnarray*}
Since the coefficients $x_j^{(i)}$ are non-negative for all $i=1,\ldots,r$ and $f \in \mathcal{M}(r)$, $g \in \mathcal{M}(r)$, we conclude that $k,l \in \mathcal{M}(m(n))$.
By definition, $\Lambda(B_1),\ldots,\Lambda(B_{m(n)})$ are independent and therefore \textbf{A}, cf. Theorem 1.8~(c) in \cite{BS07}, p.~6. This implies
$$\text{Cov}\left(f(X^{(n)}(I)),g(X^{(n)}(I))\right) = \text{Cov}\left(k(\Lambda(B_1),\ldots,\Lambda(B_{m(n)})),l(\Lambda(B_1),\ldots,\Lambda(B_{m(n)}))\right) \geq 0 $$
such that $(X^{(n)}(t_1),\ldots,X^{(n)}(t_r))^\mathsf{T}$ is associated.
For a vector $x=(x_1,\ldots,x_r)^\mathsf{T} \in \ensuremath{\mathbb{R}}^r$, set $\Vert x \Vert_1 := \sum_{i=1}^r \vert x_i \vert$. Since $X^{(n)}(t_i)$ converges in probability to $X(t_i)$ as $n \to \infty$ for each $i=1,\ldots,r$, we have for every $\varepsilon > 0$ that $\mathbb{P}(\Vert (X^{(n)}(t_1),\ldots,X^{(n)}(t_r))^\mathsf{T} - (X(t_1),\ldots,X(t_r))^\mathsf{T} \Vert_1 > \varepsilon) \leq \sum_{i=1}^r \mathbb{P}(\vert X^{(n)}(t_i) - X(t_i) \vert > \varepsilon/r) \to 0$, so the random vector $(X^{(n)}(t_1),\ldots,X^{(n)}(t_r))^\mathsf{T}$ converges in probability, and hence in distribution, to $(X(t_1),\ldots,X(t_r))^\mathsf{T}$, see \cite{Bil68}, p.~18. Therefore, $(X(t_1),\ldots,X(t_r))^\mathsf{T}$ is \textbf{A}, see \cite{BS07}, p.~7. Thus $X$ is an associated random field.
If for all $t \in \ensuremath{\mathbb{R}}^d$, the functions $f_t$ are non-positive for all $x \in E$, the proof is completely analogous by considering the fact that in Definition~\ref{def:association} (a) one can use coordinate-wise non-increasing functions instead of coordinate-wise non-decreasing functions, see Remark~1.4. in \cite{BS07}, p.~4. In this case, the approximating sequences $\{f_{t_i}^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$ of the kernel functions $f_{t_i}$, $i=1,\ldots,r$, are chosen in such a way that $f_{t_i}^{(n)}$ is non-positive for each $n \in \ensuremath{\mathbb{N}}$.
\end{proof}
The following result provides sufficient conditions such that a random field of the form (\ref{eq:spectralRepresentation}) is stochastically continuous.
\begin{result}\label{lemma:stochCont}
Assume that the following conditions hold for a random field $X$ with spectral representation (\ref{eq:spectralRepresentation}). For each $t \in \ensuremath{\mathbb{R}}^d$,
\begin{itemize}
\item[(a)] $f_s \to f_t$ $\lambda$-almost everywhere as $s \to t$,
\item[(b)] there exist some $\varepsilon > 0$ and a $\Lambda$-integrable function $g$ such that $\vert f_s - f_t \vert \leq g$ $\lambda$-almost everywhere for all $s \in \ensuremath{\mathbb{R}}^d$ with $\Vert s-t \Vert_2 \leq \varepsilon$,
\end{itemize}
where $\lambda$ is the control measure of the \textbf{ID} random measure $\Lambda$. Then $X$ is stochastically continuous.
\end{result}
\begin{proof}
In \cite{RR89}, the discussion before Theorem~3.3. and Theorem~3.3. itself imply that there is a function $\Phi_0:\ensuremath{\mathbb{R}} \times E \to [0,\infty)$ such that for a sequence of $\Lambda$-integrable functions $\{f_n\}_{n \in \ensuremath{\mathbb{N}}}$ we have the following implication:
\begin{equation}
\int_E \Phi_0(\vert f_n(x) \vert, x)\lambda(dx) \to 0, \quad n \to \infty \quad \Rightarrow\quad \underset{n \to \infty}{\text{plim}}\int_E f_n(x) \Lambda(dx) = 0.\label{eq:help26}
\end{equation}
Furthermore, for every $x \in E$, $\Phi_0(\cdot,x)$ is a continuous non-decreasing function on $[0,\infty)$ with $\Phi_0(0,x)=0$, see Lemma~3.1. in \cite{RR89}. Since $g$ in assumption (b) is $\Lambda$-integrable, we have
$$\int_E \Phi_0(\vert g(x) \vert, x)\lambda(dx) < \infty,$$
cf. the definition of the Musielak-Orlicz space $L_{\Phi_0}(E,\lambda)$ on p.~466 and again Theorem~3.3 in \cite{RR89}.
Let $t \in \ensuremath{\mathbb{R}}^d$. As $\Phi_0(\cdot,x)$ is non-decreasing, we get
$$\int_E \Phi_0(\vert f_s(x)-f_t(x) \vert, x)\lambda(dx) \leq \int_E \Phi_0(\vert g(x) \vert, x)\lambda(dx)$$
for all $s \in \ensuremath{\mathbb{R}}^d$ such that $\Vert s-t \Vert_2 \leq \varepsilon$ by assumption (b). Furthermore
$$ \Phi_0(\vert f_s(x)-f_t(x) \vert, x) \to \Phi_0(0,x) = 0 \quad \lambda\text{-a.e.}$$
as $s \to t$ due to the continuity of $\Phi_0(\cdot,x)$ and assumption (a). Therefore, we can apply the dominated convergence theorem and get
$$\int_E \Phi_0(\vert f_s(x)-f_t(x) \vert, x)\lambda(dx) \to 0, \quad s \to t.$$
Then (\ref{eq:help26}) implies
$$\underset{s \to t}{\text{plim}} \int_E (f_s(x)-f_t(x))\Lambda(dx) = \underset{s \to t}{\text{plim}} \left(\int_E f_s(x)\Lambda(dx) - \int_E f_t(x)\Lambda(dx)\right) = 0,$$
that is
$$\underset{s \to t}{\text{plim}} X(s) = \underset{s \to t}{\text{plim}}\int_E f_s(x)\Lambda(dx) = \int_E f_t(x)\Lambda(dx) = X(t).$$
\end{proof}
If $\Lambda=M$ is an $\alpha$-stable random measure (cf. \cite{ST94}, pp. 118), then
$$X(t) = \int_E f_t(x) M(dx), \quad t \in \ensuremath{\mathbb{R}}^d,$$
is an $\alpha$-stable random field since the random vector $(X(t_1),\ldots,X(t_n))^\mathsf{T}$ has a multivariate $\alpha$-stable distribution for all $t_1,\ldots,t_n \in \ensuremath{\mathbb{R}}^d$, $n \in \ensuremath{\mathbb{N}}$, cf. Proposition 3.4.3 in \cite{ST94}, p.~125.
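As an illustrative aside not taken from \cite{ST94}, symmetric $\alpha$-stable variables, the building blocks of such fields, can be simulated by the standard Chambers--Mallows--Stuck recipe; the parameter values below are our own choices.

```python
import numpy as np

# Illustration only: Chambers-Mallows-Stuck simulation of a symmetric
# alpha-stable random variable (alpha in (0,2), alpha != 1):
#   X = sin(alpha*V)/cos(V)**(1/alpha)
#       * (cos((1-alpha)*V)/W)**((1-alpha)/alpha),
# where V ~ Uniform(-pi/2, pi/2) and W ~ Exp(1) are independent.
def sym_alpha_stable(alpha, size, rng):
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(1)
x = sym_alpha_stable(alpha=1.5, size=100_000, rng=rng)
# The distribution is symmetric about 0, so the empirical median is near 0;
# the heavy tails manifest themselves in occasional very large values.
print(bool(np.isfinite(x).all()))
print(abs(float(np.median(x))) < 0.05)
```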
Recall that the characteristic function of a stable random vector $\text{\boldmath{$X$}}=(X_1,\ldots,X_n)^\mathsf{T}$, $n \in \ensuremath{\mathbb{N}}$, is given by
\begin{eqnarray*}
\varphi_{\text{\boldmath{$X$}}}(\text{\boldmath{$\theta$}})=E\left(e^{i \text{\boldmath{$\theta$}}^\mathsf{T} \text{\boldmath{$X$}}}\right) =\begin{cases}
\exp\left\{-\int_{S_n}|\text{\boldmath{$\theta$}}^\mathsf{T} \text{\boldmath{$s$}}|^{\alpha} \left(1-i\,\text{sign}\left(
\text{\boldmath{$\theta$}}^\mathsf{T} \text{\boldmath{$s$}}\right)\tan \frac{\pi \alpha}{2}\right)\Gamma(d\text{\boldmath{$s$}}) + i
\text{\boldmath{$\theta$}}^\mathsf{T} \text{\boldmath{$\mu$}} \right\} & \text{if } \alpha \ne 1,\\[1ex]
\exp\left\{-\int_{S_n}|\text{\boldmath{$\theta$}}^\mathsf{T} \text{\boldmath{$s$}}| \left(1+i\frac{2}{\pi}\,\text{sign}\left(
\text{\boldmath{$\theta$}}^\mathsf{T} \text{\boldmath{$s$}}\right)\ln |\text{\boldmath{$\theta$}}^\mathsf{T}
\text{\boldmath{$s$}}|\right)\Gamma(d\text{\boldmath{$s$}}) + i \text{\boldmath{$\theta$}}^\mathsf{T} \text{\boldmath{$\mu$}} \right\} &
\text{if } \alpha = 1,
\end{cases}\quad \forall \text{\boldmath{$\theta$}} \in \ensuremath{\mathbb{R}}^n,
\end{eqnarray*}
where $\Gamma$ is a finite measure on the unit sphere $S_n$ of $\ensuremath{\mathbb{R}}^n$ and $\text{\boldmath{$\mu$}} \in \ensuremath{\mathbb{R}}^n$.
An $\alpha$-stable random vector $\text{\boldmath{$X$}}=(X_1,\ldots,X_n)^\mathsf{T}$, $n \in \ensuremath{\mathbb{N}}$, is \textbf{A} if and only if $\Gamma(S_-) = 0$,
where $S_- := \{ (s_1,\ldots,s_n) \in S_n: s_i s_j < 0 \text{ for some } i \neq j\}$. It is \textbf{NA} if and only if $\Gamma(S_+) = 0$, where $S_+ := \{ (s_1,\ldots,s_n) \in
S_n: s_i s_j > 0 \text{ for some } i \neq j\}$, see Theorem 4.6.1, p.~204, and Theorem 4.6.3, p.~208, in \cite{ST94}. In the $\alpha$-stable case, the following result yields the same sufficient condition for association as Result \ref{result:association}, but with a more straightforward proof.
\begin{result} \label{result:posNeg}
Suppose that for all $t \in \ensuremath{\mathbb{R}}^d$, either $f_t(x) \geq 0$ for all $x \in E$ or $f_t(x) \leq 0$ for all $x \in E$. Then
$$ X(t) = \int_E f_t(x) M(dx), \quad t \in \ensuremath{\mathbb{R}}^d,$$
is an associated $\alpha$-stable random field.
\end{result}
\begin{proof}
Consider the random vector $(X(t_1),\ldots,X(t_n))^\mathsf{T}$, $n \in \ensuremath{\mathbb{N}}$, and define the set $E_+$ and the function
$g=(g_1,\ldots,g_n):E_+ \to \ensuremath{\mathbb{R}}^n$ by
\begin{eqnarray*}
E_+ &:=& \{x \in E: \sum_{k=1}^n f_{t_k}(x)^2 > 0\}, \\
g_j(x) &:=& \frac{f_{t_j}(x)}{(\sum_{k=1}^n f_{t_k}(x)^2)^{1/2}}, \quad j=1,\ldots,n.
\end{eqnarray*}
Then, for any Borel set $A$ in $S_n$, we have with $-A:=\{-a: a \in A\}$
$$\Gamma(A) = \int_{g^{-1}(A)} \frac{1+\beta(x)}{2} m_1(dx) + \int_{g^{-1}(-A)} \frac{1-\beta(x)}{2} m_1(dx),$$
where
\begin{eqnarray*}
m_1(dx) &=& \left(\sum_{k=1}^n f_{t_k}(x)^2\right)^{\alpha/2} m(dx), \\
g^{-1}(A) &=& \{x \in E_+: (g_1(x),\ldots,g_n(x)) \in A\},
\end{eqnarray*}
see \cite{ST94}, pp.~115. Since $g_i(x) g_j(x) \geq 0$ for all $x \in E_+$ and $i,j \in \{1,\ldots,n\}$, we get
$g^{-1}(S_-) = \emptyset$ and $g^{-1}(-S_-) = \emptyset$ and therefore $\Gamma(S_-) = 0$. Thus, $X$ is \textbf{A}.
\end{proof}
\begin{result}\label{result:as_null}
Let $M$ be an $\alpha$-stable random measure with control measure $m$ and let $f$ be $M$-integrable. If $\int_E \vert f(x)\vert^\alpha m(dx) = 0$, then $\int_E f(x) M(dx) = 0$ almost surely.
\end{result}
\begin{proof}
Let $g \in F$, where $F$ is the set of all $M$-integrable functions. Since $\alpha$-stable integrals are linear (see~\cite{ST94}, p.~125), we have
$$\int_E 0 M(dx) = \int_E 0 g(x) M(dx) = 0 \int_E g(x) M(dx) = 0 \quad a.s.$$
Notice also that for the null function $h:E \to \ensuremath{\mathbb{R}}$ with $h(x):=0$ for all $x \in E$, we have $h \in F$ since $F$ is a linear space (see \cite{ST94}, p.~122). The assumption $\int_E \vert f(x)\vert^\alpha m(dx) = 0$ implies that $f=0$ $m$-almost everywhere. We can therefore use $\{f^{(n)}\}_{n \in \ensuremath{\mathbb{N}}}$ with $f^{(n)} = h$ as an approximating sequence for $f$ which has the properties (3.4.7) and (3.4.8) in \cite{ST94}, p.~122. We have
$$\underset{n \to \infty}{\text{plim}}\int_E f^{(n)}(x) M(dx) = \underset{n \to \infty}{\text{plim}} 0 = 0,$$
but also
$$\underset{n \to \infty}{\text{plim}}\int_E f^{(n)}(x) M(dx) = \int_E f(x) M(dx),$$
see \cite{ST94}, p.~124. Since convergence in probability implies convergence in distribution and the corresponding limit distribution is unique (see \cite{Bil68}, p.~11 and p.~18), the result is proven.
\end{proof}
The representation theory of the De Concini-Kac type specialization of a quantized enveloping algebra at a root of unity was initiated by De Concini-Kac \cite{DK}.
It is quite different from, and much more complicated than, that of the generic parameter case.
A special feature at a root of unity is that
the center of the quantized enveloping algebra becomes much larger than in the generic parameter case. An explicit description of the center of the De Concini-Kac type specialization at a root of unity was given by De Concini-Procesi \cite{DP} when the order of the root of unity is odd.
In this paper we give a similar description of the center in the even order case.
We point out that there already exist partial results in the even order case in Beck \cite{Beck}.
Let $U_q=U_q(\Delta)$ be the simply-connected quantized enveloping algebra associated to a finite irreducible root system $\Delta$
(the Cartan part is isomorphic to the group algebra of the weight lattice).
For $z\in{\mathbb{C}}^\times$ we denote by $U_z=U_z(\Delta)$ the specialization at $q=z$ of the De Concini-Procesi form of $U_q$.
Set $d=1$ (resp.\ 2, resp.\ 3) when $\Delta$ is of type $A, D, E$ (resp.\ $B, C, F$, resp.\ $G_2$).
We note that $U_z$ coincides with the specialization of the more standard De Concini-Kac form if $z^{2d}\ne1$.
Let $\ell$ be a positive integer, and let $\zeta\in{\mathbb{C}}^\times$ be a primitive $\ell$-th root of 1.
We assume that the order of $\zeta^2$ is greater than $d$.
Assume that $\ell$ is odd.
If $\Delta$ is of type $G_2$, we also assume that $\ell$ is prime to 3.
In this case De Concini-Procesi \cite{DP} gave an explicit description of the center $Z(U_\zeta)$ as explained in the following.
Denote by $Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)$ the subalgebra of $Z(U_\zeta)$ consisting of reductions of central elements of $U_q$ contained in the De Concini-Procesi form.
Then we have a Harish-Chandra type isomorphism $Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)\cong{\mathbb{C}}[2P]^W$, where $P$ is the weight lattice, $W$ is the Weyl group, and the action of $W$ on the group algebra ${\mathbb{C}}[2P]$ is a twisted one.
On the other hand we have a Frobenius homomorphism $F:U_1\to U_\zeta$, which is an injective Hopf algebra homomorphism whose image is contained in $Z(U_\zeta)$.
Set $Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)=\mathop{\rm Im}\nolimits(F)$.
Then De Concini-Procesi proved that the canonical homomorphism
\[
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)\otimes_
{Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)\cap Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)}
Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)
\to
Z(U_\zeta)
\]
is an isomorphism.
They have also given the following geometric description of $Z(U_\zeta)$.
Denote by $G$ the connected simply-connected simple algebraic group over ${\mathbb{C}}$ with root system $\Delta$.
Take Borel subgroups $B^+$ and $B^-$ of $G$ such that $B^+\cap B^-$ is a maximal torus of $G$.
We set $H=H(\Delta)=B^+\cap B^-$.
Denote by $N^\pm$ the unipotent radical of $B^\pm$.
Define a subgroup $K=K(\Delta)$ of $B^+\times B^-$ by
\[
K=\{(tx,t^{-1}y)\in B^+\times B^-\mid
t\in H, x\in N^+, y\in N^-\}.
\]
Then we have
\begin{align*}
&Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)
\cong U_1\cong{\mathbb{C}}[K],\qquad
Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)
\cong
{\mathbb{C}}[H/W],\\
&
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)
\cap
Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)
\cong
{\mathbb{C}}[H/W],
\end{align*}
and the morphisms
$K\to H/W$, $H/W\to H/W$ corresponding to the embeddings $Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)
\cap
Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)
\subset
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)$
and
$Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)
\cap
Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)
\subset
Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)$
are given by $(g_1,g_2)\mapsto {\mathop{\rm Ad}\nolimits}(G)((g_1g_2^{-1})_s)\cap H$, and $[t]\mapsto[t^\ell]$, respectively.
Here, $g_s$ for $g\in G$ denotes the semisimple part of $g$ in its Jordan decomposition.
In conclusion, we obtain
\[
Z(U_\zeta)\cong{\mathbb{C}}[K\times_{H/W}{H/W}].
\]
Now assume that $\ell$ is even, or $\Delta$ is of type $G_2$ and $\ell$ is an odd multiple of 3.
We can similarly define $Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta(\Delta))$ as a subalgebra of $Z(U_\zeta(\Delta))$ isomorphic to ${\mathbb{C}}[2P]^W\cong{\mathbb{C}}[H(\Delta)/W]$.
However, it is a more delicate problem to define $Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta(\Delta))$.
We have an injective Hopf algebra homomorphism $F:U_\varepsilon(\Delta')\to U_\zeta(\Delta)$,
where $\varepsilon\in\{\pm1\}$, $\Delta'\in\{\Delta,\Delta^\vee\}$ are determined from $\Delta$ and $\ell$.
Here, $\Delta^\vee$ denotes the set of coroots.
This $F$ is a dual version of the Frobenius homomorphism for the Lusztig forms defined in \cite{Lbook}.
In the case $\Delta$ is of type $G_2$ and $\ell$ is an odd multiple of 3 we have $\varepsilon=1$, $\Delta'=\Delta^\vee$ and $\mathop{\rm Im}\nolimits(F)\subset Z(U_\zeta(\Delta))$.
In the case $\ell$ is even and $\varepsilon=1$, $U_1(\Delta')$ is commutative, but $\mathop{\rm Im}\nolimits(F)$ is not a subalgebra of $Z(U_\zeta(\Delta))$.
In the case $\varepsilon=-1$, $U_{-1}(\Delta')$ is non-commutative.
We define $Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta(\Delta))$ to be the intersection $\mathop{\rm Im}\nolimits(F)\cap Z(U_\zeta(\Delta))$.
Then the conclusion is similar to the odd order case.
Namely,
the canonical homomorphism
\[
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta(\Delta))\otimes_
{Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta(\Delta))\cap Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta(\Delta))}
Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta(\Delta))
\to
Z(U_\zeta(\Delta))
\]
turns out to be an isomorphism.
Moreover,
we have
\begin{align*}
&Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta(\Delta))
\cong {\mathbb{C}}[K(\Delta')/\Gamma],\qquad
Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta(\Delta))
\cong
{\mathbb{C}}[H(\Delta)/W],\\
&
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta(\Delta))
\cap
Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta(\Delta))
\cong
{\mathbb{C}}[H(\Delta')/W],
\end{align*}
where,
$\Gamma$ is a certain finite group acting on the algebraic variety $K(\Delta')$, and
the morphism
$K(\Delta')/\Gamma\to H(\Delta')/W$ is induced by $K(\Delta')\to H(\Delta')/W$.
The definition of
$H(\Delta)/W\to H(\Delta')/W$ is more involved and omitted here.
In conclusion, we obtain
\[
Z(U_\zeta(\Delta))\cong{\mathbb{C}}[(K(\Delta')/\Gamma)\times_{H(\Delta')/W}{H(\Delta)/W}].
\]
The proof is partially similar to that for the odd order case in De Concini-Procesi \cite{DP}.
However, some arguments are simplified using certain bilinear forms arising from the Drinfeld pairing.
We also note that we have avoided the usage of quantum coadjoint orbits in this paper.
We hope to investigate the quantum coadjoint orbits in the even order case in the near future
since they should be indispensable in developing the representation theory.
In dealing with the case $\varepsilon=-1$ we use $U_{-1}(\Delta')^\Gamma\cong U_{1}(\Delta')^\Gamma$.
We establish it using a result of \cite{KKO} relating $U_{-q}$ with $U_{q}$.
I would like to thank Masaki Kashiwara for explaining it to me.
\section{Quantized enveloping algebras}
\subsection{}
Let $\Delta$ be a (finite) reduced irreducible root system in a vector space ${\mathfrak{h}}^*_{\mathbb{Q}}$ over ${\mathbb{Q}}$ (we assume that ${\mathfrak{h}}^*_{\mathbb{Q}}$ is spanned by the elements of $\Delta$).
We denote by $W$ the Weyl group.
We fix a $W$-invariant positive definite symmetric bilinear form
\begin{equation}
\label{eq:bilinear}
(\;,\;):{\mathfrak{h}}_{\mathbb{Q}}^*\times{\mathfrak{h}}_{\mathbb{Q}}^*\to{\mathbb{Q}}.
\end{equation}
For $\alpha\in\Delta$ we set $\alpha^\vee=2\alpha/(\alpha,\alpha)\in{\mathfrak{h}}^*_{\mathbb{Q}}$.
Then $\Delta^\vee=\{\alpha^\vee\mid\alpha\in\Delta\}$ is also an irreducible root system in ${\mathfrak{h}}^*_{\mathbb{Q}}$.
Set
\begin{align*}
&Q=\sum_{\alpha\in\Delta}{\mathbb{Z}}\alpha,\qquad
Q^\vee=\sum_{\alpha\in\Delta}{\mathbb{Z}}\alpha^\vee,\\
&P=\{\lambda\in{\mathfrak{h}}_{\mathbb{Q}}^*\mid(\lambda,\alpha^\vee)\in{\mathbb{Z}}\;\;(\alpha\in\Delta)\},\\
&P^\vee=\{\lambda\in{\mathfrak{h}}_{\mathbb{Q}}^*\mid(\lambda,\alpha)\in{\mathbb{Z}}\;\;(\alpha\in\Delta)\}.
\end{align*}
Take a set $\Pi=\{\alpha_i\}_{i\in I}$ of simple roots of $\Delta$, and denote by $\Delta^+$ the corresponding set of positive roots of $\Delta$.
Then $\Pi^\vee=\{\alpha_i^\vee\}_{i\in I}$ is a set of simple roots of $\Delta^\vee$, and
$\Delta^{\vee +}=\{\alpha^\vee\mid\alpha\in\Delta^+\}$ is the corresponding set of positive roots of $\Delta^{\vee}$.
We set
\begin{align*}
&Q^+=\sum_{\alpha\in\Delta^+}{\mathbb{Z}}_{\geqq0}\alpha,\\
&
P^+=\{\lambda\in{\mathfrak{h}}_{\mathbb{Q}}^*\mid(\lambda,\alpha^\vee)\in{\mathbb{Z}}_{\geqq0}\;\;(\alpha\in\Delta^+)\}.
\end{align*}
For $i\in I$ let $s_i\in W$ be the corresponding simple reflection.
We denote the standard partial order on $W$ by $\geqq$.
We denote by $\Delta_{\mathop{\rm short}\nolimits}$ (resp.\ $\Delta_{\mathop{\rm long}\nolimits}$) the set of short (resp.\ long) roots.
In our convention we have $\Delta_{\mathop{\rm short}\nolimits}=\Delta_{\mathop{\rm long}\nolimits}=\Delta$ if $\Delta$ is of type $A, D, E$.
We set
\begin{align*}
&
d=\frac{(\alpha,\alpha)}{(\beta,\beta)}
\quad(\alpha\in\Delta_{{\mathop{\rm long}\nolimits}},\; \beta\in\Delta_{{\mathop{\rm short}\nolimits}}),\\
&
d_\alpha=\frac{(\alpha,\alpha)}{(\beta,\beta)}
\quad(\alpha\in\Delta,\; \beta\in\Delta_{{\mathop{\rm short}\nolimits}}),
\qquad
d_i=d_{\alpha_i}\quad(i\in I).
\end{align*}
Define $\rho\in P\cap \frac12Q$ by
$(\rho,\alpha_i^\vee)=1\;(i\in I)$.
Define $\tilde{\rho}\in\frac12Q^\vee$ by
$
\tilde{\rho}=\frac12\sum_{\alpha\in\Delta^+}d_\alpha\alpha^\vee.
$
We have $\rho=\frac{(\alpha,\alpha)}2\tilde{\rho}$ for $\alpha\in\Delta_{{\mathop{\rm short}\nolimits}}$.
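For instance, in type $A_1$ with $\Delta^+=\{\alpha\}$ we have $d_\alpha=1$ and
\[
\rho=\tfrac12\alpha,\qquad
\tilde{\rho}=\tfrac12\alpha^\vee=\frac{\alpha}{(\alpha,\alpha)},
\]
so the relation $\rho=\frac{(\alpha,\alpha)}2\tilde{\rho}$ can be checked directly in this case.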
For $n\in{\mathbb{Z}}_{\geqq0}$ we set
\[
[n]_t=\frac{t^n-t^{-n}}{t-t^{-1}}\in{\mathbb{Z}}[t, t^{-1}],\qquad
[n]_t!=[n]_t[n-1]_t\cdots[1]_t\in{\mathbb{Z}}[t,t^{-1}].
\]
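For example, we have
\[
[3]_t=t^2+1+t^{-2},\qquad
[3]_t!=[3]_t[2]_t[1]_t=(t^2+1+t^{-2})(t+t^{-1})=t^3+2t+2t^{-1}+t^{-3}.
\]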
\subsection{}
Let ${\mathbb{F}}={\mathbb{Q}}(q)$ be the rational function field in the variable $q$, and set
\[
q_\alpha=q^{d_\alpha}
\quad(\alpha\in\Delta),
\qquad
q_i=q_{\alpha_i}\quad(i\in I).
\]
We denote by $U=U(\Delta)$ the corresponding simply-connected quantized enveloping algebra over ${\mathbb{F}}$, i.e., $U$ is an associative algebra over ${\mathbb{F}}$ generated by the elements
$k_\lambda\;(\lambda\in P)$, $e_i, f_i\;(i\in I)$ satisfying the fundamental relations
\begin{align*}
&k_0=1, \qquad k_\lambda k_\mu=k_{\lambda+\mu}\quad(\lambda, \mu\in P),
\\
&k_\lambda e_ik_\lambda^{-1}=q_i^{(\lambda,\alpha_i^\vee)}e_i
\qquad(\lambda\in P,\;i\in I),
\\
&k_\lambda f_ik_\lambda^{-1}=q_i^{-(\lambda,\alpha_i^\vee)}f_i
\qquad(\lambda\in P,\;i\in I),
\\
&e_if_j-f_je_i=\delta_{ij}
(k_i-k_i^{-1})/(q_i-q_i^{-1})
\qquad(i, j\in I),
\\
&\sum_{n=0}^{1-a_{ij}}(-1)^ne_i^{(1-a_{ij}-n)}e_je_i^{(n)}=0
\qquad(i,j\in I,\,i\ne j),
\\
&\sum_{n=0}^{1-a_{ij}}(-1)^nf_i^{(1-a_{ij}-n)}f_jf_i^{(n)}=0
\qquad(i,j\in I,\,i\ne j),
\end{align*}
where $k_i=k_{\alpha_i}\;(i\in I)$,\quad $a_{ij}=(\alpha^\vee_i,\alpha_j)\;\;(i, j\in I)$, \quad$e_i^{(n)}=e_i^n/[n]_{q_i}!,\;f_i^{(n)}=f_i^n/[n]_{q_i}!\;\;(i\in I, n\in{\mathbb{Z}}_{\geqq0})$.
Note that the above definition of $U(\Delta)$ does not depend on the choice of the symmetric bilinear form $(\,,\,)$.
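For instance, if $\Delta$ is of type $A_1$ then $P={\mathbb{Z}}\varpi$ with $\varpi=\frac12\alpha_1$, the Serre relations are vacuous, and $U$ is generated by $k_\varpi^{\pm1}, e_1, f_1$ with the relations
\[
k_\varpi e_1k_\varpi^{-1}=qe_1,\qquad
k_\varpi f_1k_\varpi^{-1}=q^{-1}f_1,\qquad
e_1f_1-f_1e_1=\frac{k_1-k_1^{-1}}{q-q^{-1}},
\]
where $k_1=k_\varpi^2$; this is the simply-connected quantized enveloping algebra of ${\mathfrak{sl}}_2$.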
We regard $U$ as a Hopf algebra by
\begin{align*}
&\Delta(k_\lambda)=k_\lambda\otimes k_\lambda\quad(\lambda\in P),\\
&\Delta(e_i)=e_i\otimes1+k_i\otimes e_i,\qquad
\Delta(f_i)=f_i\otimes k_i^{-1}+1\otimes f_i\quad(i\in I),\\
&\varepsilon(k_\lambda)=1\quad(\lambda\in P),\qquad
\varepsilon(e_i)=
\varepsilon(f_i)=0\quad(i\in I),\\
&S(k_\lambda)=k_\lambda^{-1}\quad(\lambda\in P),\qquad
S(e_i)=-k_i^{-1}e_i,\quad
S(f_i)=-f_ik_i\quad(i\in I).
\end{align*}
Define subalgebras $U^0, U^+, U^-, U^{\geqq0}, U^{\leqq0}$ of $U$ by
\begin{align*}
&U^0=
\langle k_\lambda \mid
\lambda\in P\rangle,\qquad
U^+=
\langle e_i \mid
i\in I\rangle,\qquad
U^-=
\langle f_i \mid
i\in I\rangle,\\
&
U^{\geqq0}=
\langle k_\lambda,\; e_i \mid
\lambda\in P,\;i\in I\rangle,\qquad
U^{\leqq0}=
\langle k_\lambda,\; f_i \mid
\lambda\in P,\;i\in I\rangle.
\end{align*}
We have $U^0=\bigoplus_{\lambda\in P}{\mathbb{F}} k_\lambda$, and the multiplication of $U$ induces isomorphisms
\begin{align*}
&
U^+\otimes U^0\otimes U^-\cong U^-\otimes U^0\otimes U^+\cong U,\\
&
U^+\otimes U^0\cong U^0\otimes U^+\cong U^{\geqq0},\qquad
U^-\otimes U^0\cong U^0\otimes U^-\cong U^{\leqq0}
\end{align*}
of vector spaces.
We denote by $U_{{\mathop{\rm ad}\nolimits}}$ the ${\mathbb{F}}$-subalgebra of $U$ generated by
$k_\lambda\;(\lambda\in Q)$, $e_i, f_i\;(i\in I)$.
We also set
\begin{align*}
&U_{{\mathop{\rm ad}\nolimits}}^0=
\langle k_\lambda \mid
\lambda\in Q\rangle,\qquad
\\
&
U_{{\mathop{\rm ad}\nolimits}}^{\geqq0}=
\langle k_\lambda,\; e_i \mid
\lambda\in Q,\;i\in I\rangle,\qquad
U_{{\mathop{\rm ad}\nolimits}}^{\leqq0}=
\langle k_\lambda,\; f_i \mid
\lambda\in Q,\;i\in I\rangle.
\end{align*}
Then we have
\begin{align*}
&
U^+\otimes U_{{\mathop{\rm ad}\nolimits}}^0\otimes U^-\cong U^-\otimes U_{{\mathop{\rm ad}\nolimits}}^0\otimes U^+\cong U_{{\mathop{\rm ad}\nolimits}},\\
&
U^+\otimes U_{{\mathop{\rm ad}\nolimits}}^0\cong U_{{\mathop{\rm ad}\nolimits}}^0\otimes U^+\cong U_{{\mathop{\rm ad}\nolimits}}^{\geqq0},\qquad
U^-\otimes U_{{\mathop{\rm ad}\nolimits}}^0\cong U_{{\mathop{\rm ad}\nolimits}}^0\otimes U^-\cong U_{{\mathop{\rm ad}\nolimits}}^{\leqq0}.
\end{align*}
\subsection{}
The modified quantized enveloping algebra $\dot{U}=\dot{U}(\Delta)$ is defined as follows (see Lusztig \cite{Lbook}).
For $\gamma\in Q$ set
$U_{{\mathop{\rm ad}\nolimits},\gamma}=\{u\in U_{\mathop{\rm ad}\nolimits}\mid k_iuk_i^{-1}=q_i^{(\gamma,\alpha_i^\vee)}u\;(i\in I)\}$.
For $\lambda, \mu\in P$ we set
\[
{}_\lambda \overline{U}_\mu=
U_{\mathop{\rm ad}\nolimits}/
(
\sum_{i\in I}(k_i-q_i^{(\lambda,\alpha_i^\vee)})U_{\mathop{\rm ad}\nolimits}
+
\sum_{i\in I}U_{\mathop{\rm ad}\nolimits}(k_i-q_i^{(\mu,\alpha_i^\vee)})
)
\]
(note ${}_\lambda \overline{U}_\mu=0$ unless $\lambda-\mu\in Q$), and let
${}_\lambda p_\mu:U_{\mathop{\rm ad}\nolimits}\to{}_\lambda \overline{U}_\mu$ be the natural map.
For $\lambda\in P$ set $1_\lambda={}_\lambda p_\lambda(1)$.
Set
\[
\dot{U}=\bigoplus_{\lambda,\mu\in P}{}_\lambda \overline{U}_\mu.
\]
Then $\dot{U}$ is an associative algebra (without 1) by
\[
{}_\lambda p_\mu(x){}_{\lambda'} p_{\mu'}(y)=
\begin{cases}
{}_\lambda p_{\mu'}(xy)
\quad&(\mu=\lambda')\\
0&(\mu\ne\lambda')
\end{cases}
\]
for $x\in U_{{\mathop{\rm ad}\nolimits},\lambda-\mu},\;y\in U_{{\mathop{\rm ad}\nolimits},\lambda'-\mu'}$.
Moreover, $\dot{U}$ is a $U_{\mathop{\rm ad}\nolimits}$-bimodule by
\[
u\cdot
{}_\lambda p_\mu(x)
\cdot u'
={}_{\lambda+\gamma} p_{\mu-\gamma'}(uxu')
\qquad
(x\in U_{{\mathop{\rm ad}\nolimits},\lambda-\mu}, u\in U_{{\mathop{\rm ad}\nolimits},\gamma}, u'\in U_{{\mathop{\rm ad}\nolimits},\gamma'}).
\]
Then we have an isomorphism
\[
\bigoplus_{\lambda\in P} (U^-\otimes U^+)\cong\dot{U}
\qquad
((u_\lambda\otimes u'_\lambda)_{\lambda\in P}\longleftrightarrow
\sum_{\lambda\in P}u_\lambda1_\lambda u'_\lambda).
\]
We denote by $\mathop{\rm Mod}\nolimits(\dot{U})$ the category of
finite-dimensional $\dot{U}$-modules $M$ with weight space decomposition
$
M=\bigoplus_{\lambda\in P}1_\lambda M$.
For each $\lambda\in P^+$ there exists a unique (up to isomorphism) finite-dimensional irreducible $\dot{U}$-module $L(\lambda)$ such that
\[
L(\lambda)=\bigoplus_{\mu\in\lambda-Q^+}1_\mu L(\lambda),\qquad
\dim 1_\lambda L(\lambda)=1.
\]
Moreover, any $M\in\mathop{\rm Mod}\nolimits(\dot{U})$ is isomorphic to a direct sum of $L(\lambda)$'s for $\lambda\in P^+$.
\subsection{}
We denote by $V=V(\Delta)$
the associative algebra over ${\mathbb{F}}$ generated by the elements
$t_\lambda\;(\lambda\in P)$, $x_i, y_i\;(i\in I)$ satisfying the fundamental relations
\begin{align*}
&t_0=1, \qquad t_\lambda t_\mu=t_{\lambda+\mu}\quad(\lambda, \mu\in P),
\\
&t_\lambda x_it_\lambda^{-1}=q_i^{(\lambda,\alpha_i^\vee)}x_i
\qquad(\lambda\in P,\;i\in I),
\\
&t_\lambda y_it_\lambda^{-1}=q_i^{(\lambda,\alpha_i^\vee)}y_i
\qquad(\lambda\in P,\;i\in I),
\\
&x_iy_j-y_jx_i=0
\qquad(i, j\in I),
\\
&\sum_{n=0}^{1-a_{ij}}(-1)^nx_i^{(1-a_{ij}-n)}x_jx_i^{(n)}=0
\qquad(i,j\in I,\,i\ne j),
\\
&\sum_{n=0}^{1-a_{ij}}(-1)^ny_i^{(1-a_{ij}-n)}y_jy_i^{(n)}=0
\qquad(i,j\in I,\,i\ne j),
\end{align*}
where $x_i^{(n)}=x_i^n/[n]_{q_i}!,\;y_i^{(n)}=y_i^n/[n]_{q_i}!\;\;(i\in I, n\in{\mathbb{Z}}_{\geqq0})$.
We set $t_i=t_{\alpha_i}$ for $i\in I$.
Define subalgebras $V^0, V^+, V^-, V^{\geqq0}, V^{\leqq0}$ of $V$ by
\begin{align*}
&V^0=
\langle t_\lambda \mid
\lambda\in P\rangle,\qquad
V^+=
\langle x_i \mid
i\in I\rangle,\qquad
V^-=
\langle y_i \mid
i\in I\rangle,\\
&
V^{\geqq0}=
\langle t_\lambda,\; x_i \mid
\lambda\in P,\;i\in I\rangle,\qquad
V^{\leqq0}=
\langle t_\lambda,\; y_i \mid
\lambda\in P,\;i\in I\rangle.
\end{align*}
We have $V^0=\bigoplus_{\lambda\in P}{\mathbb{F}} t_\lambda$, and the multiplication of $V$ induces isomorphisms
\begin{align*}
&
V^+\otimes V^0\otimes V^-\cong V^-\otimes V^0\otimes V^+\cong V,\\
&
V^+\otimes V^0\cong V^0\otimes V^+\cong V^{\geqq0},\qquad
V^-\otimes V^0\cong V^0\otimes V^-\cong V^{\leqq0}
\end{align*}
of vector spaces.
Moreover, we have algebra isomorphisms
\[
\jmath^+:V^+\to U^+\quad(x_i\mapsto e_i),
\qquad
\jmath^-:V^-\to U^-\quad(y_i\mapsto f_i).
\]
\begin{remark}
{\rm
$V$ is a $q$-analogue of the enveloping algebra of a certain solvable Lie subalgebra of ${\mathfrak{g}}\oplus{\mathfrak{g}}$, where ${\mathfrak{g}}$ is a simple Lie algebra with root system $\Delta$ (see \ref{subsec:K} below).
}
\end{remark}
\subsection{}
The modified version $\dot{V}=\dot{V}(\Delta)$ is defined similarly to $\dot{U}$ as follows.
Denote by $V_{{\mathop{\rm ad}\nolimits}}$ the ${\mathbb{F}}$-subalgebra of $V$ generated by
$t_\lambda\;(\lambda\in Q)$, $x_i, y_i\;(i\in I)$.
For $\gamma\in Q^+$ set
$V_{{\mathop{\rm ad}\nolimits},\gamma}=\{v\in V_{\mathop{\rm ad}\nolimits}\mid t_ivt_i^{-1}=q_i^{(\gamma,\alpha_i^\vee)}v\;(i\in I)\}$.
For $\lambda, \mu\in P$ we set
\[
{}_\lambda \overline{V}_\mu=
V_{\mathop{\rm ad}\nolimits}/
(
\sum_{i\in I}(t_i-q_i^{(\lambda,\alpha_i^\vee)})V_{\mathop{\rm ad}\nolimits}
+
\sum_{i\in I}V_{\mathop{\rm ad}\nolimits}(t_i-q_i^{(\mu,\alpha_i^\vee)})
)
\]
(note ${}_\lambda \overline{V}_\mu=0$ unless $\lambda-\mu\in Q^+$), and let
${}_\lambda \pi_\mu:V_{\mathop{\rm ad}\nolimits}\to{}_\lambda \overline{V}_\mu$ be the natural map.
For $\lambda\in P$ set $1_\lambda={}_\lambda \pi_\lambda(1)$.
Set
\[
\dot{V}=\bigoplus_{\lambda,\mu\in P}{}_\lambda \overline{V}_\mu.
\]
Then $\dot{V}$ is an associative algebra (without 1) by
\[
{}_\lambda \pi_\mu(x){}_{\lambda'} \pi_{\mu'}(y)=
\begin{cases}
{}_\lambda \pi_{\mu'}(xy)
\quad&(\mu=\lambda')\\
0&(\mu\ne\lambda')
\end{cases}
\]
for $x\in V_{{\mathop{\rm ad}\nolimits},\lambda-\mu},\;y\in V_{{\mathop{\rm ad}\nolimits},\lambda'-\mu'}$.
Moreover, $\dot{V}$ is a $V_{\mathop{\rm ad}\nolimits}$-bimodule by
\[
v\cdot
{}_\lambda \pi_\mu(x)
\cdot v'
={}_{\lambda+\gamma} \pi_{\mu-\gamma'}(vxv')
\qquad
(x\in V_{{\mathop{\rm ad}\nolimits},\lambda-\mu},\; v\in V_{{\mathop{\rm ad}\nolimits},\gamma},\; v'\in V_{{\mathop{\rm ad}\nolimits},\gamma'}).
\]
Then we have an isomorphism
\[
\bigoplus_{\lambda\in P} (V^-\otimes V^+)\cong\dot{V}
\qquad
((v_\lambda\otimes v'_\lambda)_{\lambda\in P}\longleftrightarrow
\sum_{\lambda\in P}v_\lambda v'_\lambda1_\lambda).
\]
\subsection{}
We denote by
\[
\tau:U_{{\mathop{\rm ad}\nolimits}}^{\geqq0}\times U_{{\mathop{\rm ad}\nolimits}}^{\leqq0}\to{\mathbb{F}}
\]
the Drinfeld pairing.
It is a bilinear form uniquely determined by the properties
\begin{align}
&\tau(1,1)=1,\\
&\tau(x,y_1y_2)=(\tau\otimes\tau)(\Delta(x),y_1\otimes y_2)
&(x\in U_{{\mathop{\rm ad}\nolimits}}^{\geqq0},\,y_1,y_2\in U_{{\mathop{\rm ad}\nolimits}}^{\leqq0}),\\
&\tau(x_1x_2,y)=(\tau\otimes\tau)(x_2\otimes x_1,\Delta(y))
&(x_1, x_2\in U_{{\mathop{\rm ad}\nolimits}}^{\geqq0},\,y\in U_{{\mathop{\rm ad}\nolimits}}^{\leqq0}),\\
&\tau(k_i,k_j)=q_i^{-(\alpha_i^\vee,\alpha_j)}
&(i, j\in I),\\
&\tau(k_\lambda, f_i)=\tau(e_i,k_\lambda)=0
&(\lambda\in Q,\,i\in I),\\
&\tau(e_i,f_j)=\delta_{ij}/(q_i^{-1}-q_i)
&(i,j\in I).
\end{align}
We define a bilinear form
\[
\sigma:U\times \dot{V}\to{\mathbb{F}}
\]
by
\begin{align*}
&\sigma(u_+k_\mu (Su_-), v_-v_+1_\lambda)
=
\tau(u_+,\jmath^-(v_-))\delta_{\lambda,\mu}\tau(\jmath^+(v_+), u_-)\\
&\qquad\qquad
(u_{\pm}\in U^{\pm},\; v_{\pm}\in V^{\pm},\;\lambda,\mu\in P).
\end{align*}
The following result is a consequence of Gavarini \cite[Theorem 6.2]{Gav} (see also
\cite[Proposition 3.4]{TM}).
\begin{proposition}
\label{prop:sigma-inv}
We have
\[
\sigma(u,vv')=
(\sigma\otimes\sigma)(\Delta(u),v\otimes v')
\qquad
(u\in U,\;v, v'\in \dot{V}).
\]
\end{proposition}
\subsection{}
For a Hopf algebra $H$ we define a left action of $H$ on $H$
by
\[
{\mathop{\rm ad}\nolimits}(h)(h')=\sum_{j}h_{0j}h'(Sh_{1j})
\quad(h, h'\in H,\;\Delta(h)=\sum_jh_{0j}\otimes h_{1j}).
\]
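For instance, from the Hopf algebra structure of $U$ we obtain
\[
{\mathop{\rm ad}\nolimits}(k_\lambda)(u)=k_\lambda uk_\lambda^{-1},\qquad
{\mathop{\rm ad}\nolimits}(e_i)(u)=e_iu-k_iuk_i^{-1}e_i
\qquad(u\in U,\;\lambda\in P,\;i\in I).
\]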
We define a right action of $U_{\mathop{\rm ad}\nolimits}$ on $\dot{U}$ by
\[
x\cdot\widetilde{{\mathop{\rm ad}\nolimits}}(u)=\sum_{j}(Su_{0j})xu_{1j}
\quad(x\in \dot{U},\; u\in U_{\mathop{\rm ad}\nolimits},\;\Delta(u)=\sum_ju_{0j}\otimes u_{1j}).\]
We set
\[
{}^eU=\sum_{\lambda\in P}U^+k_{2\lambda}(SU^-)\subset U.
\]
Then ${}^eU$ is a subalgebra of $U$ satisfying
$
{\mathop{\rm ad}\nolimits}(U)({}^eU)\subset {}^eU.
$
Define a bilinear form
\[
\omega:{}^eU\times \dot{U}\to{\mathbb{F}}
\]
by
\begin{align*}
&\omega(u_+k_{2\mu} (Su_-), w_-1_{\lambda}(Sw_+))
=
\tau(u_+,w_-)
\delta_{\lambda,-\mu}
\tau(w_+, u_-)\\
&\qquad\qquad
(u_{\pm}, w_\pm\in U^{\pm},\; \lambda,\mu\in P).
\end{align*}
The following result is a consequence of \cite[Proposition 2.2.1]{T0}.
\begin{proposition}
\label{prop:omega-inv}
We have
\[
\omega({\mathop{\rm ad}\nolimits}(u')(u),x)=
\omega(u,x\cdot\widetilde{{\mathop{\rm ad}\nolimits}}(u'))
\qquad
(u\in{}^eU, u'\in U,\;x\in \dot{U}).
\]
\end{proposition}
Set
\[
{}^fU=\{u\in U\mid\dim{\mathop{\rm ad}\nolimits}(U)(u)<\infty\}.
\]
Then ${}^fU$ is a subalgebra of ${}^eU$ and we have
\[
{}^fU=\sum_{\lambda\in P^+}{\mathop{\rm ad}\nolimits}(U)(k_{-2\lambda})
\]
(see \cite{JL}).
\subsection{}
We denote Lusztig's braid group action on $U$ by $T_i\;(i\in I)$.
Namely, $T_i:U\to U$ is
the algebra automorphism given by
\begin{align*}
&T_i(k_\lambda)=k_{s_i(\lambda)}\quad(\lambda\in P),\\
&T_i(e_j)=
\begin{cases}
-f_ik_i&(i=j)\\
\sum_{r=0}^{-a_{ij}}(-1)^{-a_{ij}-r}q_i^{-r}e_i^{(-a_{ij}-r)}e_je_i^{(r)}
\qquad&(i\ne j),
\end{cases}\\
&T_i(f_j)=
\begin{cases}
-k_i^{-1}e_i&(i=j)\\
\sum_{r=0}^{-a_{ij}}(-1)^{r}q_i^{-a_{ij}-r}f_i^{(-a_{ij}-r)}f_jf_i^{(r)}
\qquad&(i\ne j).
\end{cases}
\end{align*}
We denote by $w_0$ the longest element of $W$.
We fix a reduced expression
$w_0=s_{i_1}\cdots s_{i_N}\;(i_1,\dots, i_N\in I)$ in the following.
For $j=1,\dots, N$ set $\beta_j=s_{i_1}\cdots s_{i_{j-1}}(\alpha_{i_j})$, and
\begin{align*}
&e_{\beta_j}=T_{i_1}\cdots T_{i_{j-1}}(e_{i_j}),
\qquad
f_{\beta_j}=T_{i_1}\cdots T_{i_{j-1}}(f_{i_j}),\\
&e_{\beta_j}^{(n)}=T_{i_1}\cdots T_{i_{j-1}}(e_{i_j}^{(n)}),
\qquad
f_{\beta_j}^{(n)}=T_{i_1}\cdots T_{i_{j-1}}(f_{i_j}^{(n)})\qquad(n\in{\mathbb{Z}}_{\geqq0}).
\end{align*}
Then we have $\Delta^+=\{\beta_j\mid j=1,\dots, N\}$, and
$e_\beta\in U^+,\; f_\beta\in U^-\;(\beta\in\Delta^+)$.
Moreover, the set
$\{
e_{\beta_N}^{m_N}\cdots e_{\beta_1}^{m_1}
\mid m_j\in{\mathbb{Z}}_{\geqq0}
\}$
(resp.\
$\{
f_{\beta_N}^{m_N}\cdots f_{\beta_1}^{m_1}
\mid m_j\in{\mathbb{Z}}_{\geqq0}
\}$
)
is known to be a basis of $U^+$ (resp.\ $U^-$).
\subsection{}
We set ${\mathcal G}={\mathcal G}(\Delta)=P/P_0$, where
\[
P_0=\{\lambda\in P
\mid
d_i(\lambda,\alpha_i^\vee)\in 2{\mathbb{Z}}\;(i\in I)\}.
\]
Note that ${\mathcal G}$ is a 2-elementary finite group.
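For example, in type $A_1$ we have $P={\mathbb{Z}}\varpi$ with $\varpi=\frac12\alpha_1$ and $d_1(m\varpi,\alpha_1^\vee)=m$, so that
\[
P_0={\mathbb{Z}}\alpha_1=Q,\qquad
{\mathcal G}=P/P_0\cong{\mathbb{Z}}/2{\mathbb{Z}}.
\]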
For $\lambda\in P$ we denote by $\delta_\lambda\in{\mathcal G}$ the element represented by $\lambda$.
We define an action of ${\mathcal G}$ on the algebra $U$ by
\begin{align*}
\delta_\lambda(k_\mu)=k_\mu,\quad
\delta_\lambda(e_i)=(-1)^{d_i(\lambda,\alpha_i^\vee)}e_i,\quad
\delta_\lambda(f_i)=(-1)^{d_i(\lambda,\alpha_i^\vee)}f_i
\end{align*}
for $\lambda, \mu\in P, i\in I$.
We define an ${\mathbb{F}}$-algebra structure of $\widetilde{U}=\widetilde{U}(\Delta)=U\otimes{\mathbb{F}}[{\mathcal G}]$ by
\[
(u\otimes \delta)(v\otimes \delta')=u \delta(v)\otimes \delta\delta'
\qquad(u, v\in U,\quad\delta, \delta'\in{\mathcal G}).
\]
We will identify $U$ and ${\mathbb{F}}[{\mathcal G}]$ with the subalgebras
$U\otimes 1$ and $1\otimes {\mathbb{F}}[{\mathcal G}]$
of $\widetilde{U}$ respectively.
We extend the ${\mathcal G}$-action on $U$ to that on $\widetilde{U}$ by
$\delta(x)=\delta x\delta^{-1}\;(\delta\in{\mathcal G}, x\in\widetilde{U})$.
Set
\[
U^{\mathcal G}=\{u\in U\mid \delta(u)=u\;(\delta\in{\mathcal G})\},\quad
\widetilde{U}^{\mathcal G}=\{x\in \widetilde{U}\mid \delta(x)=x\;(\delta\in{\mathcal G})\}.
\]
Then we see easily that
$
\widetilde{U}^{\mathcal G}={U}^{\mathcal G}{\mathbb{F}}[{\mathcal G}].
$
\subsection{}
Let $\theta$ be the automorphism of the field ${\mathbb{F}}$ sending $q$ to $-q$.
For an ${\mathbb{F}}$-algebra $R$ we
denote by ${}^\theta R$ the ${\mathbb{F}}$-algebra obtained by twisting the ${\mathbb{F}}$-module structure of $R$ via $\theta$.
Namely, ${}^\theta R$ is isomorphic to $R$ as a ring via the correspondence
$R\ni x\leftrightarrow{}^\theta x\in{}^\theta R$, and the ${\mathbb{F}}$-module structure is given by
$c\,{}^\theta x={}^\theta(\theta(c)x)\;(c\in{\mathbb{F}}, x\in R)$.
Now we are going to define an embedding of ${}^\theta U$ into $\widetilde{U}$ following \cite{KKO}.
We can take a subset $J$ of $I$ such that for $i, j\in I$ with $i\ne j$ we have
\[
d_i(\alpha_i^\vee,\alpha_j)\not\in 2{\mathbb{Z}}
\Longrightarrow
|\{i, j\}\cap J|=1.
\]
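For example, if $\Delta$ is of type $A_2$ then
\[
d_1(\alpha_1^\vee,\alpha_2)=d_2(\alpha_2^\vee,\alpha_1)=-1\not\in2{\mathbb{Z}},
\]
so we may take $J=\{1\}$; more generally, for simply-laced $\Delta$ one may take $J$ to be one of the two classes of a bipartition of the Dynkin diagram.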
For $i\in I$ set
\[
\varphi_i=
\begin{cases}
\delta_{\alpha_i}\qquad&(i\in J)\\
1&(i\not\in J),
\end{cases}
\qquad
\psi_i=(-1)^{d_i}\varphi_i\delta_{\alpha_i}.
\]
For $\gamma=\sum_{i\in I}m_i\alpha_i\in Q$ we further set
\[
\varphi_\gamma=\prod_{i\in I}\varphi_i^{m_i},\qquad
\psi_\gamma=\prod_{i\in I}\psi_i^{m_i}.
\]
\begin{proposition}[\cite{KKO}]
\label{prop:kashiwara1}
An embedding ${}^\theta U\to\widetilde{U}$ of ${\mathbb{F}}$-algebras is given by
\[
{}^\theta k_\lambda\mapsto k_\lambda \delta_\lambda,\qquad
{}^\theta e_i\mapsto e_i\varphi_i,\qquad
{}^\theta f_i\mapsto f_i\psi_i.
\]
\end{proposition}
\begin{remark}
{\rm
In \cite{KKO} Kashiwara-Kang-Oh established, using Proposition \ref{prop:kashiwara1},
the equivalence $\mathop{\rm Mod}\nolimits(U)\cong\mathop{\rm Mod}\nolimits({}^\theta U)$, where
$\mathop{\rm Mod}\nolimits(U)$ (resp.\ $\mathop{\rm Mod}\nolimits({}^\theta U)$) denotes the category of $U$-modules (resp.\ ${}^\theta U$-modules) with weight space decompositions (see also Andersen \cite{Andersen}).
}
\end{remark}
We will identify ${}^\theta U$ with a subalgebra of $\widetilde{U}$.
We can easily check the following.
\begin{lemma}
\label{lem:kashiwara0}
\begin{itemize}
\item[(i)]
The multiplication of $\widetilde{U}$ gives an isomorphism
${}^\theta U\otimes {\mathbb{F}}[{\mathcal G}]\cong\widetilde{U}$ of ${\mathbb{F}}$-modules.
\item[(ii)]
For any $\delta\in{\mathcal G}$ and ${}^\theta u\in{}^\theta U$ we have
$
\delta \,{}^\theta u \delta^{-1}={}^\theta(\delta(u))
$.
\end{itemize}
\end{lemma}
\begin{proposition}
\label{prop:kashiwara2}
For any $\beta\in\Delta^+$ we have
\[
{}^\theta e_\beta=\pm e_\beta\varphi_\beta,\qquad
{}^\theta(Sf_\beta)=\pm (Sf_\beta)\varphi_\beta.
\]
\end{proposition}
\begin{proof}
For $i\in I$ define ${}^\theta T_i:{}^\theta U\to{}^\theta U$ by
${}^\theta T_i({}^\theta u)={}^\theta(T_i(u))\;(u\in U)$.
For $\gamma\in Q^+$ set
\[
U^+_\gamma=\{x\in U^+\mid k_ixk_i^{-1}=q_i^{(\alpha_i^\vee,\gamma)}x\;(i\in I)\}.
\]
In order to show the statement for $e_\beta$, it is sufficient to show that for $\gamma\in Q^+$ and $i\in I$ there exists $c_{i,\gamma}\in\{\pm1\}$ satisfying
\begin{equation}
\label{eq:p1}
{}^\theta T_i(x\varphi_\gamma)=c_{i,\gamma}T_i(x)\varphi_{s_i\gamma}
\qquad(x\in U^+_\gamma).
\end{equation}
The proof of \eqref{eq:p1} is reduced to showing that
for $i, j\in I$ there exists $c_{i,j}\in\{\pm1\}$ satisfying
\begin{equation}
\label{eq:p2}
{}^\theta T_i(e_j\varphi_j)=c_{i,j}T_i(e_j)\varphi_{s_i\alpha_j}.
\end{equation}
In fact
for $\gamma=\sum_{p=1}^n\alpha_{j_p}\in Q^+$
we have
\[
c_{i,\gamma}
=\left(\prod_{p}c_{i,j_p}\right)
\left(\prod_{p<p'}(-1)^{d_i(\alpha_i^\vee,\alpha_{j_p})(\alpha_i^\vee,\alpha_{j_{p'}})}\right).
\]
The verification of \eqref{eq:p2} in the case $i=j$ is easy.
In the case $i\ne j$ one needs a case-by-case calculation according to the relative position of $\alpha_i$ and $\alpha_j$.
Details are omitted.
The proof of the assertion for $f_\beta$ is similar.
\end{proof}
\subsection{}
Set ${\mathcal H}={\mathcal H}(\Delta)=Q^\vee/2Q^\vee$.
For $\nu\in Q^\vee$ we denote by $\gamma_\nu$ the element of ${\mathcal H}$ represented by $\nu$.
Define an action of ${\mathcal H}$ on
the ${\mathbb{F}}$-algebra $U^0=\bigoplus_{\lambda\in P}{\mathbb{F}} k_\lambda\cong{\mathbb{C}}[P]$ by
\[
{\gamma}_{\nu}\cdot k_\lambda
=(-1)^{(\nu,\lambda)}
k_\lambda
\qquad
(\nu\in Q^\vee,\; \lambda\in P).
\]
We can extend this ${\mathcal H}$-action on $U^0$ to
that on the algebra
$U\cong
U^+\otimes
SU^-
\otimes
U^0$
by
\[
\gamma\cdot (ut)=u(\gamma\cdot t)
\qquad(\gamma\in{\mathcal H}, u\in U^+(SU^-), t\in U^0).
\]
Since this action commutes with that of ${\mathcal G}$, we get an action of
${\mathcal G}\times {\mathcal H}$ on $U$.
\subsection{}
Set ${\mathbb{A}}={\mathbb{Q}}[q^{\pm1}]$.
Following De Concini-Procesi \cite{DP} we define $U_{\mathbb{A}}$ to be the smallest ${\mathbb{A}}$-subalgebra of $U$ that contains $k_\lambda\;(\lambda\in P),\; (q_i-q_i^{-1})e_i, \; (q_i-q_i^{-1})f_i\;(i\in I)$ and is stable under the action of $T_i\;(i\in I)$.
It is a Hopf algebra over ${\mathbb{A}}$.
Set
\[
U_{\mathbb{A}}^0=U_{\mathbb{A}}\cap U^0,\quad
U_{\mathbb{A}}^\pm=U_{\mathbb{A}}\cap U^\pm,\quad
U_{\mathbb{A}}^{\geqq0}=U_{\mathbb{A}}\cap U^{\geqq0},\quad
U_{\mathbb{A}}^{\leqq0}=U_{\mathbb{A}}\cap U^{\leqq0}.
\]
Then we have $U_{\mathbb{A}}^0=\bigoplus_{\lambda\in P}{\mathbb{A}} k_\lambda$, and the multiplication of $U_{\mathbb{A}}$ induces isomorphisms
\begin{align*}
&
U_{\mathbb{A}}^+\otimes U_{\mathbb{A}}^0\otimes U_{\mathbb{A}}^-\cong U_{\mathbb{A}}^-\otimes U_{\mathbb{A}}^0\otimes U_{\mathbb{A}}^+\cong U_{\mathbb{A}},\\
&
U_{\mathbb{A}}^+\otimes U_{\mathbb{A}}^0\cong U_{\mathbb{A}}^0\otimes U_{\mathbb{A}}^+\cong U_{\mathbb{A}}^{\geqq0},\qquad
U_{\mathbb{A}}^-\otimes U_{\mathbb{A}}^0\cong U_{\mathbb{A}}^0\otimes U_{\mathbb{A}}^-\cong U_{\mathbb{A}}^{\leqq0}
\end{align*}
of ${\mathbb{A}}$-modules.
For $\beta\in \Delta^+$ we define $a_\beta\in U_{\mathbb{A}}^+,\;b_\beta\in U_{\mathbb{A}}^-$ by
\[
a_{\beta}=(q_{\beta}-q_{\beta}^{-1})e_{\beta},\qquad
b_{\beta}=(q_{\beta}-q_{\beta}^{-1})f_{\beta}.
\]
Then
$\{
a_{\beta_N}^{m_N}\cdots a_{\beta_1}^{m_1}
\mid m_j\in{\mathbb{Z}}_{\geqq0}\}$
(resp.\
$\{
b_{\beta_N}^{m_N}\cdots b_{\beta_1}^{m_1}
\mid m_j\in{\mathbb{Z}}_{\geqq0}\}$
)
is a free ${\mathbb{A}}$-basis of $U_{\mathbb{A}}^+$
(resp.\ $U_{\mathbb{A}}^-$).
Set
\[
U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}=U_{\mathbb{A}}\cap U_{\mathop{\rm ad}\nolimits},\qquad
U^\flat_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}=U_{\mathbb{A}}\cap U^\flat_{\mathop{\rm ad}\nolimits}\quad(\flat=0,\geqq0, \leqq0).
\]
Then we have $U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^0=\bigoplus_{\lambda\in Q}{\mathbb{A}} k_\lambda$, and
\begin{align*}
&
U_{\mathbb{A}}^+\otimes U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^0\otimes U_{\mathbb{A}}^-\cong U_{\mathbb{A}}^-\otimes U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^0\otimes U_{\mathbb{A}}^+\cong U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}},\\
&
U_{\mathbb{A}}^+\otimes U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^0\cong U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^0\otimes U_{\mathbb{A}}^+\cong U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{\geqq0},\qquad
U_{\mathbb{A}}^-\otimes U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^0\cong U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^0\otimes U_{\mathbb{A}}^-\cong U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{\leqq0}.
\end{align*}
Denote by $U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L}$ the ${\mathbb{A}}$-subalgebra of $U$ generated by
the elements $\{e_i^{(n)}, f_i^{(n)}, k_\lambda\mid i\in I, n\in{\mathbb{Z}}_{\geqq0}, \lambda\in Q\}$, and set
\[
U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,\flat}=U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L}\cap U_{\mathop{\rm ad}\nolimits}^\flat\quad
(\flat=0,\geqq0, \leqq0),\qquad
U_{{\mathbb{A}}}^{L,\pm}=U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L}\cap U^\pm.
\]
Then we have
\begin{align*}
&
U_{\mathbb{A}}^{L,+}\otimes U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,0}\otimes U_{\mathbb{A}}^{L,-}\cong U_{\mathbb{A}}^{L,-}\otimes U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,0}\otimes U_{\mathbb{A}}^{L,+}\cong U^L_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}},\\
&
U_{\mathbb{A}}^{L,+}\otimes U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,0}\cong U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,0}\otimes U_{\mathbb{A}}^{L,+}\cong U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,\geqq0},\\
&
U_{\mathbb{A}}^{L,-}\otimes U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,0}\cong U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,0}\otimes U_{\mathbb{A}}^{L,-}\cong U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,\leqq0}.
\end{align*}
Moreover, $U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,0}$ is generated by the elements of the form
$k_\lambda\; (\lambda\in Q)$,
\[
\begin{bmatrix}
k_i\\m
\end{bmatrix}
=\prod_{s=0}^{m-1}
\frac{q_i^{-s}k_i-q_i^sk_i^{-1}}{q_i^{s+1}-q_i^{-s-1}}
\qquad(i\in I, m\geqq0),
\]
and
$\{
e_{\beta_N}^{(m_N)}\cdots e_{\beta_1}^{(m_1)}
\mid m_j\in{\mathbb{Z}}_{\geqq0}\}$
(resp.\
$\{
f_{\beta_N}^{(m_N)}\cdots f_{\beta_1}^{(m_1)}
\mid m_j\in{\mathbb{Z}}_{\geqq0}\}$
)
is a free ${\mathbb{A}}$-basis of $U_{\mathbb{A}}^{L,+}$
(resp.\ $U_{\mathbb{A}}^{L,-}$).
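For instance, taking $m=1$ in the definition of $\begin{bmatrix}k_i\\m\end{bmatrix}$ above gives
\[
\begin{bmatrix}
k_i\\1
\end{bmatrix}
=\frac{k_i-k_i^{-1}}{q_i-q_i^{-1}}=e_if_i-f_ie_i,
\]
so this element indeed lies in $U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L}$.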
We define $\dot{U}_{\mathbb{A}}$ to be the ${\mathbb{A}}$-subalgebra of $\dot{U}$ consisting of elements of the form
\[
\sum_{\lambda\in P}u_\lambda1_\lambda u'_\lambda
\qquad
(u_\lambda\in U_{\mathbb{A}}^{L,-}, \;u'_\lambda\in U_{\mathbb{A}}^{L,+}).
\]
For $\lambda\in P^+$ we define an ${\mathbb{A}}$-form $L_{\mathbb{A}}(\lambda)$ of $L(\lambda)$ by
\[
L_{\mathbb{A}}(\lambda)=\dot{U}_{\mathbb{A}} v
\qquad(1_\lambda L(\lambda)={\mathbb{F}} v).
\]
We define $\dot{V}_{\mathbb{A}}$ to be the ${\mathbb{A}}$-subalgebra of $\dot{V}$ consisting of elements of the form
\[
\sum_{\lambda\in P}v_\lambda v'_\lambda1_\lambda
\qquad
(v_\lambda\in (\jmath^-)^{-1}(U_{\mathbb{A}}^{L,-}), \;v'_\lambda\in (\jmath^+)^{-1}(U_{\mathbb{A}}^{L,+})).
\]
We set
\[
{}^eU_{\mathbb{A}}={}^eU\cap U_{\mathbb{A}},\qquad
{}^fU_{\mathbb{A}}={}^fU\cap U_{\mathbb{A}}.
\]
By \cite{KR}, \cite{KT}, \cite{LS}
we have
\begin{align}
\label{eq:Drinfeld-value}
&\tau(
e_{\beta_N}^{(m_N)}\cdots e_{\beta_1}^{(m_1)},
b_{\beta_N}^{n_N}\cdots b_{\beta_1}^{n_1})
=
\tau(
a_{\beta_N}^{m_N}\cdots a_{\beta_1}^{m_1},
f_{\beta_N}^{(n_N)}\cdots f_{\beta_1}^{(n_1)})\\
\nonumber
=&
\prod_{s=1}^N
\delta_{m_s,n_s}(-1)^{m_s}q_{\beta_s}^{m_s(m_s-1)/2},
\end{align}
and hence $\tau$ induces bilinear forms
\[
\tau^{\emptyset,L}_{\mathbb{A}}:
U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{\geqq0}\times U^{L,\leqq0}_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}\to{\mathbb{A}},\qquad
\tau^{L,\emptyset}_{\mathbb{A}}:
U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,\geqq0}\times U^{\leqq0}_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}\to{\mathbb{A}}.
\]
It follows that $\sigma$ and $\omega$ also induce perfect bilinear forms
\begin{align*}
\sigma_{\mathbb{A}}:U_{\mathbb{A}}\times \dot{V}_{\mathbb{A}}\to{\mathbb{A}},\qquad
\omega_{\mathbb{A}}:{}^eU_{\mathbb{A}}\times \dot{U}_{\mathbb{A}}\to{\mathbb{A}}.
\end{align*}
Set $\widetilde{U}_{\mathbb{A}}=U_{\mathbb{A}}\otimes{\mathbb{A}}[{\mathcal G}]$.
It is an ${\mathbb{A}}$-subalgebra of $\widetilde{U}$.
We also have an obvious ${\mathbb{A}}$-form ${}^\theta U_{\mathbb{A}}$ of ${}^\theta U$.
By Proposition \ref{prop:kashiwara2} the embedding
${}^\theta U\subset\widetilde{U}$
induces
${}^\theta U_{\mathbb{A}}\to \widetilde{U}_{\mathbb{A}}$.
\subsection{}
Let $z\in{\mathbb{C}}^\times$, and set
\begin{equation}
\label{eq:kappa-beta}
z_\beta=z^{d_\beta}\quad(\beta\in\Delta),\qquad
z_i=z_{\alpha_i}\quad(i\in I).
\end{equation}
Set
\[
U_z=U_z(\Delta)={\mathbb{C}}\otimes_{\mathbb{A}} U_{\mathbb{A}},
\]
where ${\mathbb{A}}\to{\mathbb{C}}$ is given by $q\mapsto z$.
We also set
\begin{align*}
&U^\flat_z={\mathbb{C}}\otimes_{\mathbb{A}} U^\flat_{\mathbb{A}}\qquad
(\flat=\emptyset, +, -, 0, \geqq0, \leqq0),\\
&U_{{\mathop{\rm ad}\nolimits},z}^{\flat}={\mathbb{C}}\otimes_{\mathbb{A}} U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^\flat,\quad
U_{{\mathop{\rm ad}\nolimits},z}^{L,\flat}={\mathbb{C}}\otimes_{\mathbb{A}} U_{{\mathop{\rm ad}\nolimits},{\mathbb{A}}}^{L,\flat}\qquad
(\flat=\emptyset, 0, \geqq0, \leqq0),\\
&U^{L,\pm}_z={\mathbb{C}}\otimes_{\mathbb{A}} U^{L,\pm}_{\mathbb{A}},
\\
&
\dot{U}_z={\mathbb{C}}\otimes_{\mathbb{A}} \dot{U}_{\mathbb{A}}, \qquad
\dot{V}_z={\mathbb{C}}\otimes_{\mathbb{A}} \dot{V}_{\mathbb{A}},\\
&{}^eU_z={\mathbb{C}}\otimes_{\mathbb{A}}{}^eU_{\mathbb{A}},\qquad
{}^fU_z={\mathbb{C}}\otimes_{\mathbb{A}}{}^fU_{\mathbb{A}}.
\end{align*}
Then we have
\[
U_z\cong U_z^-\otimes U_z^0\otimes U_z^+,\qquad
\dot{U}_z\cong \bigoplus_{\lambda\in P}U_z^{L,-}1_\lambda U_z^{L,+}.
\]
Since $U_{\mathbb{A}}$ is a free ${\mathbb{A}}$-module, we have
$
{}^fU_z\subset{}^eU_z\subset U_z.
$
We denote by $\mathop{\rm Mod}\nolimits(\dot{U}_z)$ the category of
finite-dimensional $\dot{U}_z$-modules $M$ with weight space decomposition
$
M=\bigoplus_{\lambda\in P}1_\lambda M$.
For $\lambda\in P^+$ we define $L_z(\lambda)\in\mathop{\rm Mod}\nolimits(\dot{U}_z)$ by
\[
L_z(\lambda)={\mathbb{C}}\otimes_{\mathbb{A}} L_{\mathbb{A}}(\lambda).
\]
Note that $\tau^{\emptyset,L}_{\mathbb{A}}$, $\tau^{L,\emptyset}_{\mathbb{A}}$, $\sigma_{\mathbb{A}}$ and $\omega_{\mathbb{A}}$ induce bilinear forms
\begin{align*}
&\tau^{\emptyset,L}_z:
U_{{\mathop{\rm ad}\nolimits},z}^{\geqq0}\times U^{L,\leqq0}_{{\mathop{\rm ad}\nolimits},z}\to{\mathbb{C}},\qquad
\tau^{L,\emptyset}_z:
U_{{\mathop{\rm ad}\nolimits},z}^{L,\geqq0}\times U^{\leqq0}_{{\mathop{\rm ad}\nolimits},z}\to{\mathbb{C}},
\\
&\sigma_z:U_z\times \dot{V}_z\to{\mathbb{C}},\qquad
\omega_z:{}^eU_z\times \dot{U}_{z}\to{\mathbb{C}}.
\end{align*}
By \eqref{eq:Drinfeld-value}
$\tau^{\emptyset,L}_z|_{U_{z}^{+}\times U^{L,-}_{z}}$,
$\tau^{L,\emptyset}_z|_{U_{z}^{L,+}\times U^{-}_{z}}$, $\sigma_z$, $\omega_z$ are perfect.
Set $\widetilde{U}_z={\mathbb{C}}\otimes_{\mathbb{A}}\widetilde{U}_{\mathbb{A}}=U_z\otimes{\mathbb{C}}[{\mathcal G}]$.
Then we have a natural embedding ${U}_z\subset\widetilde{U}_z$, which is compatible with the ${\mathcal G}$-actions.
Note that the embedding ${}^\theta U_{\mathbb{A}}\to \widetilde{U}_{\mathbb{A}}$ also induces an embedding $U_{-z}\subset \widetilde{U}_z$, which is compatible with ${\mathcal G}$-actions.
Hence setting
\begin{align*}
U_{z}^{\mathcal G}&=
\{u\in U_{z}\mid \delta(u)=u\;(\delta\in{\mathcal G})\},\\
\widetilde{U}_z^{\mathcal G}&=
\{x\in \widetilde{U}_z\mid \delta(x)=x\;(\delta\in{\mathcal G})\},
\end{align*}
we obtain embeddings
\[
U_{-z}\subset \widetilde{U}_z\supset U_z,\qquad
U_{-z}^{\mathcal G}\subset \widetilde{U}_z^{\mathcal G}\supset U_z^{\mathcal G}.
\]
We denote by
$
\widetilde{\Xi}_z:U_{-z}\to U_z
$
the restriction of the linear map $\widetilde{U}_z\to U_z$,
which sends
$u\delta_\lambda$ for $u\in U_z,\; \lambda\in P$ to $u$.
\begin{proposition}
\label{prop:Xi}
The linear map
$\widetilde{\Xi}_z$ induces an isomorphism
\begin{equation}
\Xi_z:U_{-z}^{\mathcal G}\to U_{z}^{\mathcal G}
\end{equation}
of ${\mathbb{C}}$-algebras, which is compatible with the ${\mathcal H}$-actions.
\end{proposition}
\begin{proof}
Since $\widetilde{\Xi}_z$ is a linear isomorphism compatible with ${\mathcal G}$-actions,
it induces a
linear isomorphism
$
\Xi_z:U_{-z}^{\mathcal G}\to U_{z}^{\mathcal G}
$.
Note that
$U_{-z}^{\mathcal G}\subset \widetilde{U}_z^{\mathcal G}=U_z^{\mathcal G}{\mathbb{C}}[{\mathcal G}]$.
For $u, u'\in U_z^{\mathcal G}$, $\delta, \delta'\in{\mathcal G}$
we have
\[
\widetilde{\Xi}_z((u\delta)(u'\delta'))
=\widetilde{\Xi}_z(uu'\delta\delta')
=uu'
=\widetilde{\Xi}_z(u\delta)
\widetilde{\Xi}_z(u'\delta').
\]
Hence $\widetilde{\Xi}_z|_{U_z^{\mathcal G}{\mathbb{C}}[{\mathcal G}]}:U_z^{\mathcal G}{\mathbb{C}}[{\mathcal G}]\to U_z^{\mathcal G}$ is an algebra homomorphism.
It follows that its restriction $
\Xi_z:U_{-z}^{\mathcal G}\to U_{z}^{\mathcal G}
$
is also an algebra homomorphism.
The remaining statement about the action of ${\mathcal H}$ is obvious.
\end{proof}
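The untwisting in the proof above can be modeled in a toy situation. The sketch below is an illustration only (assumptions: $U$ is replaced by the polynomial ring ${\mathbb{C}}[x]$ and ${\mathcal G}$ by ${\mathbb{Z}}/2$ acting by $x\mapsto-x$); it shows that the analogue of $\widetilde{\Xi}_z$, which forgets the group part, is multiplicative precisely on invariants.

```python
# Toy model of the untwisting map: U = C[x], Gamma = Z/2 acting by x -> -x.
# An element of the smash product U # C[Gamma] is a dict {(deg, s): coeff},
# s in {0, 1}, with multiplication (x^a d^s)(x^b d^t) = (-1)^{s b} x^{a+b} d^{s+t}.
def smash_mul(u, v):
    out = {}
    for (a, s), c in u.items():
        for (b, t), e in v.items():
            key = (a + b, (s + t) % 2)
            out[key] = out.get(key, 0) + c * e * (-1) ** (s * b)
    return {k: c for k, c in out.items() if c}

def forget(u):
    # analogue of the map sending u*delta to u (the map tilde{Xi})
    out = {}
    for (a, _s), c in u.items():
        out[a] = out.get(a, 0) + c
    return {k: c for k, c in out.items() if c}
```

On invariants (here: even polynomials) `forget` is multiplicative, e.g.\ `forget(smash_mul({(2,1): 1}, {(4,1): 2}))` equals `{6: 2}`, the product of the images; on the non-invariant $x$ it is not, since `forget(smash_mul({(1,1): 1}, {(1,0): 1}))` is `{2: -1}` rather than `{2: 1}`.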
\subsection{}
\label{subsec:K}
Let $G=G(\Delta)$ be a connected, simply-connected semisimple algebraic group over ${\mathbb{C}}$ with root system $\Delta$.
Take a maximal torus $H=H(\Delta)$ of $G$ and Borel subgroups $B^+, B^-$ of $G$ such that $B^+\cap B^-=H$.
Set $N^\pm=[B^\pm,B^\pm]$, and define a closed subgroup $K=K(\Delta)$ of $B^+\times B^-$ by
\[
K=\{(gh,g'h^{-1})\mid h\in H, g\in N^+, g'\in N^-\}.
\]
Then $\dot{V}_1$ is identified with the modified enveloping algebra of the Lie algebra of $K$.
Hence we obtain an isomorphism
\begin{equation}
\label{eq:U1K}
U_1\cong{\mathbb{C}}[K]
\end{equation}
of coalgebras by Proposition \ref{prop:sigma-inv}.
Since $U_1$ and ${\mathbb{C}}[K]$ are commutative, we see easily that \eqref{eq:U1K} is an isomorphism of Hopf algebras (see \cite{DP}, \cite{Gav}, \cite{TM}).
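For example, for $G=\mathrm{SL}_2({\mathbb{C}})$, with $B^+$ (resp.\ $B^-$) the subgroup of upper (resp.\ lower) triangular matrices and $H$ the diagonal torus, a direct computation from the definition gives
\[
K=\left\{\left(
\begin{pmatrix}t&u\\0&t^{-1}\end{pmatrix},
\begin{pmatrix}t^{-1}&0\\v&t\end{pmatrix}
\right)\;\middle|\;t\in{\mathbb{C}}^\times,\;u,v\in{\mathbb{C}}\right\},
\]
so that $K$ consists of pairs of opposite triangular matrices with mutually inverse diagonal parts.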
\section{Harish-Chandra center}
\label{sec:Har}
\subsection{}
For a ring $R$ we denote its center by $Z(R)$.
Consider the composite of
\[
Z(U)\hookrightarrow U\cong U^-\otimes U^0\otimes U^+\xrightarrow{\varepsilon\otimes1\otimes\varepsilon} U^0
\cong{\mathbb{F}}[P],
\]
where ${\mathbb{F}}[P]=\bigoplus_{\lambda\in P}{\mathbb{F}} e(\lambda)$ is the group algebra of $P$, and the isomorphism
$U^0\cong{\mathbb{F}}[P]$ is given by $k_\lambda\leftrightarrow e(\lambda)$.
By \cite{DK}, \cite{JL}, \cite{T0} this linear map $Z(U)\to{\mathbb{F}}[P]$ is an injective algebra homomorphism whose image coincides with
\[
{\mathbb{F}}[2P]^{W\circ}=\{x\in{\mathbb{F}}[2P]\mid w\circ x=x\;(w\in W)\},
\]
where the action of $W$ on ${\mathbb{F}}[2P]$ is given by
\[
w\circ e(2\lambda)=q^{(w\lambda-\lambda,2\tilde{\rho})} e(2w\lambda)\qquad
(w\in W,\;\lambda\in P).
\]
Hence we have an isomorphism
\begin{equation}
\iota:Z(U)\to{\mathbb{F}}[2P]^{W\circ}.
\end{equation}
We recall here a description of $Z(U)$ in terms of the characters of finite-dimensional ${U}$-modules.
For $M\in\mathop{\rm Mod}\nolimits(\dot{U})$
we define $\tilde{t}_M\in\dot{U}^*$ by
\[
\langle \tilde{t}_M,x\rangle
={\rm{Tr}}(xk_{2\rho},M)\qquad(x\in\dot{U}).
\]
Then there exists a unique element $t_M\in {}^eU$ satisfying
\[
\omega(t_M,x)=\langle\tilde{t}_M,x\rangle\qquad
(x\in \dot{U}).
\]
More explicitly, we have
\begin{align*}
t_M
=
\sum_{\substack{\lambda\in2P,\;m_j,m'_j\in{\mathbb{Z}}_{\geqq0}\\ \sum_{j=1}^N(m_j-m'_j)\beta_j=0}}
c_{\lambda,\{m_j\}_{j=1}^N,\{m'_j\}_{j=1}^N}
a_{\beta_1}^{m_1}\cdots a_{\beta_N}^{m_N}
k_\lambda
S(b_{\beta_N}^{m'_N}\cdots b_{\beta_1}^{m'_1}),
\end{align*}
where
\begin{align*}
&c_{\lambda,\{m_j\}_{j=1}^N,\{m'_j\}_{j=1}^N}\\
=&
\prod_{j=1}^N
(-1)^{m_j+m'_j}q_{\beta_j}^{-m_j(m_j-1)/2-m'_j(m'_j-1)/2}\\
&
\times
{\rm{Tr}}\left(\pi\left\{
f_{\beta_1}^{(m_1)}\cdots f_{\beta_N}^{(m_N)}
1_{-\frac{\lambda}2}
S(e_{\beta_N}^{(m'_N)}\cdots e_{\beta_1}^{(m'_1)})k_{2\rho}
\right\},1_{-\frac{\lambda}2-\sum_jm'_j\beta_j}M
\right).
\end{align*}
We can show $t_M\in Z(U)$ using $k_{2\rho}^{-1}uk_{2\rho}=S^2u\;(u\in U)$,
$Z(U)=\{v\in U\mid{\mathop{\rm ad}\nolimits}(u)(v)=\varepsilon(u)v\;(u\in U)\}$, and
Proposition \ref{prop:omega-inv} (see \cite{T0}).
We have
\[
\iota(t_M)=
\sum_{\lambda\in P}
(\dim 1_\lambda M)
q^{(\lambda,2\tilde{\rho})}e({-2\lambda}).
\]
\begin{proposition}
\label{prop:ZU}
\begin{itemize}
\item[(i)]
$Z(U)\subset U^{\mathcal G}$.
\item[(ii)]
We have
\[
Z({}^\theta U)=Z(\widetilde{U})=Z(U)
\]
as subalgebras of $\widetilde{U}$.
Moreover, the composite of
\[
{\mathbb{F}}[2P]^{W\circ}\cong Z(U)=
Z({}^\theta U)\cong {}^\theta Z(U)\cong
{}^\theta ({\mathbb{F}}[2P]^{W\circ})
\]
is induced by the ${\mathbb{F}}$-linear isomorphism
\[
{\mathbb{F}}[2P]\ni e(2\lambda)\mapsto {}^\theta e(2\lambda)\in {}^\theta {\mathbb{F}}[2P].
\]
\end{itemize}
\end{proposition}
\begin{proof}
(i) Let $\delta\in{\mathcal G}$.
Since $\delta$ acts on $U$ as an algebra automorphism, we have $\delta(Z(U))=Z(U)$.
It is easily seen from the definition of $\delta$ that
$\iota(\delta(z))=\iota(z)$ for any $z\in Z(U)$.
Hence $\delta$ acts as the identity on $Z(U)$.

(ii) By (i) we have $Z(U)\subset Z(\widetilde{U})$.
Let us show $Z(U)\supset Z(\widetilde{U})$.
Let $z=\sum_{\delta\in{\mathcal G}}u_\delta\delta\in Z(\widetilde{U})$, where $u_\delta\in U$.
By $uz=zu$ for $u\in U$ we have $uu_\delta=u_\delta\delta(u)$.
By considering the corresponding identity in the associated graded algebra ${\mathop{\rm{Gr}}\nolimits}\, U$ introduced in \cite{DP} we see easily that
$u_\delta=0$ for $\delta\ne1$.
Hence $z\in Z(U)$.
The proof of $Z({}^\theta U)=Z(\widetilde{U})$ is similar.
The remaining statement is a consequence of ${}^\theta k_{2\lambda}=k_{2\lambda}$ for $\lambda\in P$.
\end{proof}
\subsection{}
By $Z(U_{\mathbb{A}})=U_{\mathbb{A}}\cap Z(U)$ the map $\iota$ induces an injective algebra homomorphism
\[
\iota_{\mathbb{A}}:Z(U_{\mathbb{A}})\to{\mathbb{A}}[2P]^{W\circ}.
\]
\begin{proposition}
\label{prop:HCA}
$\iota_{\mathbb{A}}$ is an isomorphism of ${\mathbb{A}}$-algebras.
\end{proposition}
\begin{proof}
For $\lambda\in P^+$ we have $t_{L(\lambda)}\in U_{\mathbb{A}}$, and
${\mathbb{A}}[2P]^{W\circ}$ is spanned over ${\mathbb{A}}$ by $\iota(t_{L(\lambda)})$ for $\lambda\in P^+$.
\end{proof}
\subsection{}
Let $z\in{\mathbb{C}}^\times$.
We denote by $Z_{{\mathop{\rm Har}\nolimits}}(U_z)$ the image of $Z(U_{\mathbb{A}})\to Z(U_z)$, and call it the Harish-Chandra center of $U_z$.
We can similarly consider the composite of
\[
Z_{{\mathop{\rm Har}\nolimits}}(U_z)\hookrightarrow U_z\cong U_z^-\otimes U_z^0\otimes U_z^+\xrightarrow{\varepsilon\otimes1\otimes\varepsilon} U_z^0
\cong{\mathbb{C}}[P].
\]
We define an action $\circ_z$ of $W$ on ${\mathbb{C}}[2P]$ by
\[
w\circ_z e(2\lambda)=z^{(w\lambda-\lambda,2\tilde{\rho})} e(2w\lambda)\qquad
(w\in W,\;\lambda\in P).
\]
\begin{proposition}
\label{prop:HCzeta}
The above linear map $Z_{{\mathop{\rm Har}\nolimits}}(U_z)\to{\mathbb{C}}[P]$ induces an isomorphism
\[
\iota_z:
Z_{{\mathop{\rm Har}\nolimits}}(U_z)\to{\mathbb{C}}[2P]^{W\circ_z}
\]
of ${\mathbb{C}}$-algebras.
\end{proposition}
\begin{proof}
By $Z(U_{\mathbb{A}})=U_{\mathbb{A}}\cap Z(U)$
the canonical map
${\mathbb{C}}\otimes_{\mathbb{A}} Z(U_{\mathbb{A}})\to U_z$ is injective.
Hence $Z_{{\mathop{\rm Har}\nolimits}}(U_z)\cong {\mathbb{C}}\otimes_{\mathbb{A}} Z(U_{\mathbb{A}})\cong{\mathbb{C}}[2P]^{W\circ_z}$.
\end{proof}
For $M\in\mathop{\rm Mod}\nolimits(\dot{U}_z)$
we can similarly define $t_M\in {}^eU_z$ by
\[
\omega_z(t_M,x)
={\rm{Tr}}(xk_{2\rho},M)\qquad(x\in\dot{U}_z).
\]
By our construction $\{t_{L_z(\lambda)}\mid\lambda\in P^+\}$ is a basis of $Z_{{\mathop{\rm Har}\nolimits}}(U_z)$.
Indeed for $M\in\mathop{\rm Mod}\nolimits(\dot{U}_z)$
we can write
\[
[M]=\sum_{\lambda\in P^+}m_\lambda[L_z(\lambda)]
\qquad(m_\lambda\in{\mathbb{Z}})
\]
in an appropriate Grothendieck group, and in this case we have
\[
t_M=\sum_{\lambda\in P^+}m_\lambda t_{L_z(\lambda)}\in Z_{{\mathop{\rm Har}\nolimits}}(U_z).
\]
Note that for $z\in{\mathbb{C}}^\times$ the two actions $\circ_z$ and $\circ_{-z}$ of $W$ on ${\mathbb{C}}[2P]$ are the same.
By Proposition \ref{prop:ZU} we have the following.
\begin{proposition}
For $z\in{\mathbb{C}}^\times$ we have
$U_z^{{\mathcal G}}
\supset
Z_{{\mathop{\rm Har}\nolimits}}(U_z),
$
and the isomorphism $\Xi_z:U_{-z}^{\mathcal G}\to U_{z}^{\mathcal G}$ induces the isomorphism
$Z_{{\mathop{\rm Har}\nolimits}}(U_{-z})
\cong
Z_{{\mathop{\rm Har}\nolimits}}(U_z)$ given by
\[
Z_{{\mathop{\rm Har}\nolimits}}(U_{-z})
\xrightarrow{\iota_{-z}}
{\mathbb{C}}[2P]^{W\circ_{-z}}
=
{\mathbb{C}}[2P]^{W\circ_{z}}
\xleftarrow{\iota_{z}}
Z_{{\mathop{\rm Har}\nolimits}}(U_z)
\]
\end{proposition}
\subsection{}
We consider the case where $z=1$.
Since the action $\circ_1$ of $W$ on ${\mathbb{C}}[2P]$ is nothing but the ordinary one, we have
\[
Z_{{\mathop{\rm Har}\nolimits}}(U_1)\cong
{\mathbb{C}}[2P]^{W}
\cong {\mathbb{C}}[P]^W\cong
{\mathbb{C}}[H]^W
\cong
{\mathbb{C}}[H/W].
\]
Here the second isomorphism is induced by ${\mathbb{C}}[2P]\ni e(2\lambda)\leftrightarrow e(\lambda)\in{\mathbb{C}}[P]$.
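For example, for $\Delta$ of type $A_1$ we have $H\cong{\mathbb{C}}^\times$ with $W={\mathbb{Z}}/2$ acting by $t\mapsto t^{-1}$, so
\[
Z_{{\mathop{\rm Har}\nolimits}}(U_1)\cong
{\mathbb{C}}[t,t^{-1}]^W={\mathbb{C}}[t+t^{-1}],\qquad
H/W\cong{\mathbb{C}}.
\]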
Recall also that we have an isomorphism
\[
U_1\cong {\mathbb{C}}[K].
\]
Hence the inclusion $Z_{{\mathop{\rm Har}\nolimits}}(U_1)\to U_1$ induces a morphism $f:K\to H/W$ of algebraic varieties.
Let us give an explicit description of this morphism.
Define a morphism
$
\kappa:K\to G
$
of algebraic varieties by
$\kappa((g_1,g_2))=g_1g_2^{-1}$.
We also define $\upsilon:G\to H/W$ as follows.
Let $g\in G$.
Let $g_s$ be the semisimple part of $g$ with respect to the Jordan decomposition.
Then ${\mathop{\rm Ad}\nolimits}(G)(g_s)\cap H$ consists of a single $W$-orbit.
We define $\upsilon(g)$ to be this $W$-orbit.
\begin{proposition}[\cite{DP}]
\label{prop:DP-Frob}
The morphism $f:K\to H/W$ is the composite of $\kappa:K\to G$ and $\upsilon:G\to H/W$.
\end{proposition}
\begin{proof}
For the reader's convenience we give a sketch of the proof using the bilinear forms $\omega_1$ and $\sigma_1$.
First note that
\[
Z_{{\mathop{\rm Har}\nolimits}}(U_1)\subset{}^fU_1\subset{}^eU_1\subset U_1.
\]
Via $\omega_1:{}^eU_1\times\dot{U}_1\to{\mathbb{C}}$ we obtain embeddings ${}^fU_1\subset{}^eU_1\subset(\dot{U}_1)^*$.
Identifying $\dot{U}_1$ with the modified enveloping algebra of $\mathop{\rm Lie}\nolimits(G)$ we have ${}^fU_1\cong {\mathbb{C}}[G]$ (see \cite{C}).
On the other hand we see from $\dot{U}_1\cong \bigoplus_{\lambda\in P}U_1^{L,-}1_\lambda U_1^{L,+}$ that
${}^eU_1$ is identified with ${\mathbb{C}}[N^-\times H\times N^+]$.
Consequently we obtain a sequence
\[
{\mathbb{C}}[H/W]\to{\mathbb{C}}[G]\to{\mathbb{C}}[N^-\times H\times N^+]\to{\mathbb{C}}[K]
\]
of algebra embeddings.
We can easily check that the corresponding morphisms of algebraic varieties are given by
\begin{align*}
&K\ni(g_+g_0,g_-g_0^{-1})\mapsto (g_-,g_0^{-2},g_+^{-1})\in N^-\times H\times N^+
\quad(g_\pm\in N^\pm, g_0\in H),\\
&N^-\times H\times N^+\ni(x_-,x_0,x_+)\mapsto x_-x_0x_+\in G,\\
&G\ni g\mapsto\upsilon(g)^{-1}\in H/W.
\end{align*}
\end{proof}
\section{Frobenius center}
\subsection{}
Fix a positive integer $\ell$, and take $\zeta\in{\mathbb{C}}$ to be a primitive $\ell$-th root of $1$.
If $\ell$ is odd (resp.\ even), then we set $r=\ell$ (resp.\ $r=\ell/2$), so that
$r$ is the order of $\zeta^2$.
We assume
\begin{equation}
r>d
\end{equation}
in the following.
Define
$\zeta_\beta\;(\beta\in\Delta)$,
$\zeta_i\;(i\in I)$
as in \eqref{eq:kappa-beta} for $z=\zeta$.
For $\beta\in\Delta$
we denote the orders of
$\zeta_\beta, \zeta_\beta^2$ by $\ell_\beta, r_\beta$ respectively.
For $i\in I$ we set
$\ell_i=\ell_{\alpha_i}$, $r_i=r_{\alpha_i}$.
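Assuming the normalization $\zeta_\beta=\zeta^{d_\beta}$ with $d_\beta=(\beta,\beta)/2$ in \eqref{eq:kappa-beta}, the orders $\ell_\beta$, $r_\beta$ are computed by elementary arithmetic. A minimal sketch:

```python
from math import gcd

def order_of_power(l, k):
    # order of zeta^k, for zeta a primitive l-th root of unity
    return l // gcd(l, k)

def l_beta(l, d_beta):
    # order of zeta_beta = zeta^{d_beta}
    return order_of_power(l, d_beta)

def r_beta(l, d_beta):
    # order of zeta_beta^2 = zeta^{2 d_beta}
    return order_of_power(l, 2 * d_beta)
```

For instance, for $\ell=6$ and a long root with $d_\beta=2$ this gives $\ell_\beta=3$ and $r_\beta=3$; in general $r_\beta=\ell_\beta$ if $\ell_\beta$ is odd and $r_\beta=\ell_\beta/2$ otherwise.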
\subsection{}
For $\alpha\in\Delta$ set $\alpha'=r_\alpha\alpha\in{\mathfrak{h}}_{\mathbb{Q}}^*$.
Then ${\Delta}'=\{r_\alpha\alpha\mid\alpha\in\Delta\}$ is a root system with $\{\alpha_i'\mid i\in I\}$ a set of simple roots.
Note that as an abstract root system (disregarding the inner product) we have $\Delta'\cong\Delta$ or $\Delta'\cong\Delta^\vee$.
Set
\begin{align*}
{P}'=
\{\mu\in {\mathfrak{h}}_{\mathbb{Q}}^*\mid (\mu,\alpha^\vee)\in r_\alpha{\mathbb{Z}}\quad(\forall\alpha\in\Delta)\}.\end{align*}
Then $P'$ is the weight lattice for $\Delta'$, and we have
$P'\subset P$.
Set
\begin{equation}
\varepsilon=
\zeta_\alpha^{r_\alpha^2}\qquad
(\alpha\in\Delta,\;\alpha'\in(\Delta')_{{\mathop{\rm short}\nolimits}}).
\end{equation}
Then we have $\varepsilon=\pm1$.
Furthermore, $\varepsilon=-1$ if and only if we have either
\begin{itemize}
\item[(a)]
$r$ is odd and $\ell=2r$,
\end{itemize}
or
\begin{itemize}
\item[(b)]
$d=2$, $r$ is even with $r/2$ odd.
\end{itemize}
Set
\[
\varepsilon_{\alpha'}=\varepsilon^{(\alpha',\alpha')/(\beta',\beta')}\qquad
(\alpha'\in\Delta',\quad\beta'\in(\Delta')_{{\mathop{\rm short}\nolimits}}).
\]
Then we have
\begin{equation}
\varepsilon_{\alpha'}=\zeta_\alpha^{r_\alpha^2}\qquad(\alpha\in \Delta).
\end{equation}
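In the simply laced case ($d=1$, so $\zeta_\alpha=\zeta$ and $r_\alpha=r$) the sign $\varepsilon=\zeta^{r^2}$ can be verified numerically. The sketch below is a toy check, not used elsewhere; it confirms that $\varepsilon=-1$ exactly in case (a) above.

```python
import cmath
from math import gcd

def epsilon_simply_laced(l):
    # simply laced: zeta_alpha = zeta and r_alpha = r = order of zeta^2
    zeta = cmath.exp(2j * cmath.pi / l)
    r = l // gcd(l, 2)
    return zeta ** (r * r)
```

For instance $\ell=6$ (so $r=3$ is odd and $\ell=2r$: case (a)) gives $-1$, while $\ell=5$ and $\ell=8$ give $1$.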
\subsection{}
Similarly to the Frobenius homomorphism
\begin{equation}
\label{eq:Fr}
{\mathop{\rm Fr}\nolimits}:\dot{U}_\zeta(\Delta)\to\dot{U}_{\varepsilon}(\Delta')
\end{equation}
given in \cite[Theorem 35.1.9]{Lbook} we can define an algebra homomorphism
\begin{equation}
\xi:\dot{V}_\zeta(\Delta)\to\dot{V}_{\varepsilon}(\Delta')
\end{equation}
such that
\begin{itemize}
\item
for $\lambda\notin P'$ we have
$\xi(x_i^{(n)}1_\lambda)=\xi(y_i^{(n)}1_\lambda)=0$\quad($i\in I, n\in{\mathbb{Z}}_{\geqq0}$),
\item
for $\lambda\in P'$ we have
\begin{align*}
\xi(x_i^{(n)}1_\lambda)&=
\begin{cases}
x_i^{(n/r_i)}1_\lambda\qquad&(r_i| n)\\
0\qquad&(\text{otherwise}),\\
\end{cases}
\\
\xi(y_i^{(n)}1_\lambda)&=
\begin{cases}
y_i^{(n/r_i)}1_\lambda\qquad&(r_i| n)\\
0\qquad&(\text{otherwise}).
\end{cases}
\end{align*}
\end{itemize}
The fact that $\xi$ is well-defined follows easily from the corresponding fact for
$
{\mathop{\rm Fr}\nolimits}
$.
Moreover, for
$\lambda\in P'$ and $\beta\in\Delta^+$
\begin{align*}
\xi(x_\beta^{(n)}1_\lambda)&=
\begin{cases}
x_{\beta'}^{(n/r_\beta)}1_\lambda\qquad&(r_\beta| n)\\
0\qquad&(\text{otherwise}),\\
\end{cases}
\\
\xi(y_\beta^{(n)}1_\lambda)&=
\begin{cases}
y_{\beta'}^{(n/r_\beta)}1_\lambda\qquad&(r_\beta| n)\\
0\qquad&(\text{otherwise})
\end{cases}
\end{align*}
by
\cite[41.1.9]{Lbook}.
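The behavior of ${\mathop{\rm Fr}\nolimits}$ and $\xi$ on divided powers ultimately rests on the classical $q$-Lucas theorem for Gaussian binomial coefficients at a root of unity. The following sketch is a numerical illustration only; it computes the Gaussian binomial via the $q$-Pascal rule.

```python
import cmath

def qbinom(m, k, q):
    # Gaussian binomial coefficient via the q-Pascal rule:
    # [m, k]_q = q^k [m-1, k]_q + [m-1, k-1]_q
    if k < 0 or k > m:
        return 0
    if k == 0 or k == m:
        return 1
    return (q ** k) * qbinom(m - 1, k, q) + qbinom(m - 1, k - 1, q)
```

At a primitive $3$rd root of unity $\zeta_3$, $q$-Lucas predicts that the Gaussian binomial $\left[\begin{smallmatrix}6\\3\end{smallmatrix}\right]$ evaluates to the ordinary $\binom{2}{1}=2$ and $\left[\begin{smallmatrix}3\\1\end{smallmatrix}\right]$ to $0$, which the code confirms; at $q=1$ one recovers the ordinary binomial coefficients.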
\begin{proposition}
There exists a unique injective homomorphism
\[
{}^t\xi:U_\varepsilon(\Delta')\to U_\zeta(\Delta)
\]
of coalgebras satisfying
\begin{equation}
\label{eq:txi1}
\sigma_\zeta({}^t\xi(u),v)=
\sigma_\varepsilon(u,\xi(v))
\qquad
(u\in U_\varepsilon(\Delta'), v\in\dot{V}_\zeta(\Delta)).
\end{equation}
Moreover, we have
\begin{align}
\label{eq:txi2}
&{}^t\xi(a_{\beta'_N}^{n_N}\cdots a_{\beta'_1}^{n_1}
k_\mu
S(b_{\beta'_N}^{n'_N}\cdots b_{\beta'_1}^{n'_1}))\\
\nonumber
=&
c_{\beta_1}^{n_1+n'_1}\cdots
c_{\beta_N}^{n_N+n'_N}
a_{\beta_N}^{r_{\beta_N}n_N}\cdots a_{\beta_1}^{r_{\beta_1}n_1}
k_{\mu}
S(b_{\beta_N}^{r_{\beta_N}n'_N}\cdots b_{\beta_1}^{r_{\beta_1}n'_1})\\
\nonumber
&\hspace{3cm}
(\mu\in P', n_1,\dots, n_N, n'_1,\dots, n'_N\in{\mathbb{Z}}_{\geqq0}),
\end{align}
where
\begin{align*}
c_\beta
&=
(-1)^{r_\beta+1}
\zeta_{\beta}^{-r_\beta(r_\beta-1)/2}
\qquad(\beta\in\Delta^+).
\end{align*}
\end{proposition}
\begin{proof}
It is easily seen from \eqref{eq:Drinfeld-value} that there exists a unique linear map
$
{}^t\xi:U_\varepsilon(\Delta')\to U_\zeta(\Delta)
$
satisfying \eqref{eq:txi1}, and it is given by \eqref{eq:txi2}.
Then we conclude from Proposition \ref{prop:sigma-inv} that
${}^t\xi$ is a homomorphism of coalgebras.
\end{proof}
Similarly we have the following.
\begin{proposition}
\label{prop:omagaA}
We have
${}^t\xi({}^eU_\varepsilon(\Delta'))\subset{}^eU_\zeta(\Delta)$, and
\[
\omega_\zeta({}^t\xi(u),x)=
\omega_\varepsilon(u,{\mathop{\rm Fr}\nolimits}(x))
\qquad
(u\in {}^eU_\varepsilon(\Delta'), x\in\dot{U}_\zeta(\Delta)).
\]
\end{proposition}
\subsection{}
For
$\beta\in\Delta$ we set
$\eta_\beta=\zeta_\beta^{r_\beta}$.
We have $\eta_\beta=\pm1$, and $\eta_\beta=-1$ if and only if $\ell_\beta$ is even.
\begin{proposition}[De Concini-Kac \cite{DK}]
\label{prop:rel-zeta}
For
$\alpha, \beta\in\Delta^+$, $\lambda\in P$, $\mu\in P'$
we have
\begin{align*}
&a_{\alpha}^{r_\alpha}a_{\beta}=
\eta_\alpha^{(\alpha^\vee,\beta)}
a_{\beta}a_{\alpha}^{r_\alpha},\qquad
(Sb_{\alpha}^{r_\alpha})(Sb_{\beta})=
\eta_\alpha^{(\alpha^\vee,\beta)}
(Sb_{\beta})(Sb_{\alpha}^{r_\alpha}),\\
&a_{\alpha}^{r_\alpha}(Sb_{\beta})=
\eta_\alpha^{(\alpha^\vee,\beta)}
(Sb_{\beta})a_{\alpha}^{r_\alpha},\qquad
(Sb_{\alpha}^{r_\alpha})a_{\beta}=
\eta_\alpha^{(\alpha^\vee,\beta)}
a_{\beta}(Sb_{\alpha}^{r_\alpha}),\\
&k_\lambda a_\alpha^{r_\alpha}
=\eta_\alpha^{(\lambda,\alpha^\vee)}
a_\alpha^{r_\alpha}k_\lambda,\qquad
k_\lambda (Sb_\alpha^{r_\alpha})
=\eta_\alpha^{(\lambda,\alpha^\vee)}
(Sb_\alpha^{r_\alpha})k_\lambda,\\
&k_{\mu}a_\alpha
=\eta_\alpha^{(\mu,\alpha^\vee)/r_\alpha}
a_\alpha k_{\mu}
,\qquad
k_{\mu} (Sb_\alpha)
=\eta_\alpha^{(\mu,\alpha^\vee)/r_\alpha}
(Sb_\alpha)k_{\mu}
\end{align*}
in $U_\zeta(\Delta)$.
\end{proposition}
\begin{proposition}
\label{prop:rel-epsilon}
For
$\alpha', \beta'\in(\Delta')^+$, $\mu\in P'$
we have
\begin{align*}
&a_{\alpha'}a_{\beta'}=
\varepsilon_{\alpha'}^{((\alpha')^\vee,\beta')}
a_{\beta'}a_{\alpha'},\qquad
(Sb_{\alpha'})(Sb_{\beta'})=
\varepsilon_{\alpha'}^{((\alpha')^\vee,\beta')}
(Sb_{\beta'})(Sb_{\alpha'}),\\
&a_{\alpha'}(Sb_{\beta'})=
\varepsilon_{\alpha'}^{((\alpha')^\vee,\beta')}
(Sb_{\beta'})a_{\alpha'},\qquad
(Sb_{\alpha'})a_{\beta'}=
\varepsilon_{\alpha'}^{((\alpha')^\vee,\beta')}
a_{\beta'}(Sb_{\alpha'}),\\
&k_{\mu}a_{\alpha'}
=\varepsilon_{\alpha'}^{(\mu,(\alpha')^\vee)}
a_{\alpha'} k_{\mu}
,\qquad
k_{\mu} (Sb_{\alpha'})
=\varepsilon_{\alpha'}^{(\mu,(\alpha')^\vee)}
(Sb_{\alpha'})k_{\mu}
\end{align*}
in $U_\varepsilon(\Delta')$.
\end{proposition}
\begin{proof}
Let
\[
(\,|\,)':{\mathfrak{h}}_{\mathbb{Q}}\times{\mathfrak{h}}_{\mathbb{Q}}\to{\mathbb{Q}}
\]
be the $W$-invariant non-degenerate symmetric bilinear form such that $(\alpha'|\alpha')'=2$ for $\alpha'\in(\Delta')_{{\mathop{\rm short}\nolimits}}$.
Then we have $\varepsilon_{\alpha'}^{((\alpha')^\vee,\beta')}=\varepsilon^{(\alpha'|\beta')'}$ for $\alpha', \beta'\in(\Delta')^+$.
In order to show the first formula
$a_{\alpha'}a_{\beta'}=
\varepsilon^{(\alpha'|\beta')'}
a_{\beta'}a_{\alpha'}$ for
$\alpha', \beta'\in(\Delta')^+$,
it is sufficient to show
\[
\tau^{\emptyset,L}_\varepsilon(a_{\alpha'}a_{\beta'},y)=
\varepsilon^{(\alpha'|\beta')'}
\tau^{\emptyset,L}_\varepsilon(a_{\beta'}a_{\alpha'},y)
\]
for any $y\in U_\varepsilon^{L,-}=U_\varepsilon^{L,-}(\Delta')$,
where $\tau^{\emptyset,L}_\varepsilon$ is defined for $\Delta'$.
Write
\[
\Delta(y)=\sum_{\gamma,\delta\in(Q')^+}u^y_{\gamma,\delta}(k_\delta^{-1}\otimes1)
\qquad(u^y_{\gamma,\delta}\in U^{L,-}_{\varepsilon,-\gamma}\otimes U^{L,-}_{\varepsilon,-\delta}),
\]
where for $\gamma=\sum_{i\in I}m_{i}\alpha_i'\in(Q')^+$
we set
\[
U^{L,-}_{\varepsilon,-\gamma}
=
\sum_{\sum_{j:k_j=i}n_{j}=m_i\;(i\in I)}{\mathbb{C}} f_{k_1}^{(n_{1})}\cdots f_{k_s}^{(n_{s})}
\subset
U^{L,-}_{\varepsilon}.
\]
Then we have
\begin{align*}
&\tau^{\emptyset,L}_\varepsilon(a_{\alpha'}a_{\beta'},y)
=
(\tau^{\emptyset,L}_\varepsilon\otimes\tau^{\emptyset,L}_\varepsilon)(a_{\alpha'}\otimes a_{\beta'},\Delta(y))\\
=&(\tau^{\emptyset,L}_\varepsilon\otimes\tau^{\emptyset,L}_\varepsilon)(a_{\alpha'}\otimes a_{\beta'},{\mathcal P}(u_{\beta',\alpha'}^y)),
\end{align*}
where ${\mathcal P}(y_1\otimes y_2)=y_2\otimes y_1$.
Similarly, we have
\[
\tau^{\emptyset,L}_\varepsilon(a_{\beta'}a_{\alpha'},y)
=(\tau^{\emptyset,L}_\varepsilon\otimes\tau^{\emptyset,L}_\varepsilon)(a_{\alpha'}\otimes a_{\beta'},u_{\alpha',\beta'}^y).
\]
Hence it is sufficient to show
\begin{equation}
\label{eq:com}
{\mathcal P}(u_{\gamma,\delta}^y)
=
\varepsilon^{(\gamma|\delta)'}u_{\delta,\gamma}^y
\qquad(y\in U^{L,-}_\varepsilon,\;\gamma, \delta\in (Q')^+).
\end{equation}
We can easily check that if \eqref{eq:com} holds for $y=y_1, y_2$, then it also holds for $y=y_1y_2$.
Hence the assertion follows from \eqref{eq:com} for $y=f_i^{(n)}$, which is easily checked.
The second formula is equivalent to
$b_{\alpha'}b_{\beta'}=
\varepsilon^{(\alpha'|\beta')'}
b_{\beta'}b_{\alpha'}$
for $\alpha', \beta'\in(\Delta')^+$, and is proved similarly to the first formula.
Let us show the third and the fourth formula.
They are equivalent to
$a_{\alpha'}b_{\beta'}=
b_{\beta'}a_{\alpha'}$ for
$\alpha', \beta'\in(\Delta')^+$.
Take $1\leqq j,k\leqq N$ such that
$\alpha'=\beta_j', \beta'=\beta_k'$.
If $j=k$, then the assertion is a consequence of
$a_{\alpha_i'}b_{\alpha_i'}=
b_{\alpha_i'}a_{\alpha_i'}$ in $U_\varepsilon(\Delta')$ for
$i\in I$.
Assume $j>k$.
Setting
\[
w=s_{i_1}\cdots s_{i_{k-1}},\qquad
y=s_{i_k}\cdots s_{i_{j-1}},\qquad
i_j=m,\qquad
i_k=n
\]
we have
\[
a_{\alpha'}=T_wT_y(a_{\alpha'_m}),\qquad
b_{\beta'}=T_w(b_{\alpha'_n}),
\]
and hence it is sufficient to show
\[
b_{\alpha'_n}T_y(a_{\alpha'_m})=T_y(a_{\alpha'_m})b_{\alpha'_n}.
\]
By $s_ny<y$ this is equivalent to
\[
T_n^{-1}(b_{\alpha'_n})T_{s_ny}(a_{\alpha'_m})=T_{s_ny}(a_{\alpha'_m})T_n^{-1}(b_{\alpha'_n}).
\]
By
$T_n^{-1}(b_{\alpha'_n})=-a_{\alpha'_n}k_n$ this is again equivalent to
\[
a_{\alpha'_n}T_{s_ny}(a_{\alpha'_m})=
\varepsilon^{(\alpha_n'|s_ny(\alpha'_m))'}
T_{s_ny}(a_{\alpha'_m})a_{\alpha'_n}.
\]
By $s_ny<s_nys_m$ we have $s_ny(\alpha'_m)\in(\Delta')^+$ and
$T_{s_ny}(a_{\alpha'_m})$ is a linear combination of the elements of the form
$
a_{\beta'_N}^{m_N}\cdots a_{\beta'_1}^{m_1}
$
with $\sum_jm_j\beta'_j=s_ny(\alpha'_m)$.
Hence the assertion follows from the first formula.
The case $j<k$ can be handled in a similar way.
The remaining formulas are obvious.
\end{proof}
From Proposition \ref{prop:rel-zeta} and Proposition \ref{prop:rel-epsilon} we easily obtain the following.
\begin{proposition}
${}^t\xi$ is a homomorphism of Hopf algebras.
\end{proposition}
\subsection{}
We call $Z_{\mathop{\rm Fr}\nolimits}(U_\zeta)=\mathop{\rm Im}\nolimits({}^t\xi)\cap Z(U_\zeta)$ the Frobenius center of $U_\zeta$.
Note
\[
\mathop{\rm Im}\nolimits({}^t\xi)\cong
\left(\bigotimes_{\alpha\in\Delta^+}{\mathbb{C}}[a_\alpha^{r_\alpha},Sb_\alpha^{r_\alpha}]\right)
\otimes
{\mathbb{C}}[P'].
\]
Namely, the image of ${}^t\xi$ consists of the linear combinations of the monomials of the form
\begin{equation}
\label{eq:monomial}
z=
a_{\beta_1}^{r_{\beta_1}m_{\beta_1}}\cdots
a_{\beta_N}^{r_{\beta_N}m_{\beta_N}}
k_{\mu}
(Sb_{\beta_1}^{r_{\beta_1}m'_{\beta_1}})\cdots
(Sb_{\beta_N}^{r_{\beta_N}m'_{\beta_N}})
\qquad(\mu\in P').
\end{equation}
If $\ell$ is odd, then we have $\eta_\alpha=1$ for any $\alpha\in\Delta^+$, and hence
$Z_{\mathop{\rm Fr}\nolimits}(U_\zeta)=\mathop{\rm Im}\nolimits({}^t\xi)$ by Proposition \ref{prop:rel-zeta}.
Assume $\ell$ is even.
By Proposition \ref{prop:rel-zeta} we see easily that
$Z_{\mathop{\rm Fr}\nolimits}(U_\zeta)$ consists of the linear combinations of the monomials of the form \eqref{eq:monomial} satisfying
\begin{align}
\label{eq:monomial1}
&
\sum_{\alpha\in\Delta^+_1}(m_\alpha+m_\alpha')\alpha^\vee\in 2Q^\vee,
\\
\label{eq:monomial2}
&
(\mu,\gamma^\vee)/r_\gamma
\in 2{\mathbb{Z}}\qquad(\forall\gamma\in\Delta^+_1),
\end{align}
where
\[
\Delta^+_1=\{\alpha\in\Delta^+\mid
\eta_\alpha=-1\}
=
\begin{cases}
\Delta_{\mathop{\rm short}\nolimits}\cap\Delta^+\;&(r\not\in2{\mathbb{Z}}, \ell=2r, d=2)\\
\Delta^+\;&(\text{otherwise}).
\end{cases}
\]
Note that \eqref{eq:monomial2} is equivalent to $\mu\in P''$, where
\begin{equation}
\label{eq:monomial2A}
P''=
\begin{cases}
2P'_0\;&(r\not\in2{\mathbb{Z}}, \ell=2r, d=2)\\
2P'\;&(\text{otherwise}).
\end{cases}
\end{equation}
Here
\[
P'_0=\{\lambda\in{\mathfrak{h}}_{\mathbb{Q}}^*\mid
d_\alpha(\lambda,\alpha^\vee)\in r{\mathbb{Z}}\quad(\alpha\in\Delta)\}.
\]
Define subgroups $\Gamma_1$ and $\Gamma_2$ of
${\mathcal G}(\Delta')$ and ${\mathcal H}(\Delta')$ respectively
by
\begin{align*}
\Gamma_1&=
\begin{cases}
\{1\}&(\ell\not\in2{\mathbb{Z}})\\
{\mathcal G}(\Delta')
\quad&(\ell\in2{\mathbb{Z}}),
\end{cases}
\\
\Gamma_2&=
\begin{cases}
\{1\}&(\ell\not\in2{\mathbb{Z}})\\
(Q')^\vee_{\mathop{\rm short}\nolimits}/2(Q')^\vee_{\mathop{\rm short}\nolimits}
\quad&(\ell\in2{\mathbb{Z}}, r\not\in2{\mathbb{Z}}, d=2)\\
{\mathcal H}(\Delta')
\qquad&(\text{otherwise}),
\end{cases}
\end{align*}
where
\[
(Q')^\vee_{\mathop{\rm short}\nolimits}
=
\sum_{\alpha'\in(\Delta')_{\mathop{\rm short}\nolimits}}{\mathbb{Z}}(\alpha')^\vee.
\]
Set
\[
\Gamma=\Gamma_1\times \Gamma_2.
\]
By the above argument we have the following.
\begin{proposition}
\label{prop:Frob}
Under the identification $\mathop{\rm Im}\nolimits({}^t\xi)\cong U_\varepsilon(\Delta')$ we have
\begin{align*}
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta(\Delta))\cong &
(U^+_\varepsilon(\Delta')\otimes SU^-_\varepsilon(\Delta'))^{\Gamma_1}
\otimes{\mathbb{C}}[P'']
\\
=&
(U^+_\varepsilon(\Delta')\otimes SU^-_\varepsilon(\Delta'))^{\Gamma_1}
\otimes U^0_\varepsilon(\Delta')^{\Gamma_2}\\
=&
U_\varepsilon(\Delta')^{{\Gamma}}.
\end{align*}
\end{proposition}
\begin{proposition}
\label{prop:Frob2}
We have an isomorphism
\begin{equation}
\label{eq:Z0}
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta(\Delta))\cong{\mathbb{C}}[K(\Delta')]^{{\Gamma}}\;(={\mathbb{C}}[K(\Delta')/{\Gamma}])
\end{equation}
of algebras.
\end{proposition}
\begin{proof}
Assume $\varepsilon=1$.
Then the action of the group ${\Gamma}$ on the algebra $U_1(\Delta')$ induces the action of ${\Gamma}$ on the algebraic variety $K(\Delta')$
via the algebra isomorphism \eqref{eq:U1K} for $\Delta'$, and hence we have
$U_1(\Delta')^\Gamma\cong{\mathbb{C}}[K(\Delta')]^\Gamma\cong{\mathbb{C}}[K(\Delta')/\Gamma]$.
Assume $\varepsilon=-1$.
In this case we have
$U_{-1}(\Delta')^\Gamma\cong
U_1(\Delta')^\Gamma$ by Proposition \ref{prop:Xi}.
Hence we have also
$U_{-1}(\Delta')^\Gamma\cong{\mathbb{C}}[K(\Delta')]^\Gamma\cong{\mathbb{C}}[K(\Delta')/\Gamma]$.
\end{proof}
By Proposition \ref{prop:Frob} and \cite{HE}
we obtain the following.
\begin{corollary}
\label{cor:CM}
$Z_{\mathop{\rm Fr}\nolimits}(U_\zeta)$ is Cohen-Macaulay.
\end{corollary}
\section{Main result}
Since the action $\circ_{\varepsilon}$ of $W$ on ${\mathbb{C}}[2P']$ is the ordinary one, we have
\begin{equation}
\label{eq:Z1}
Z_{\mathop{\rm Har}\nolimits}(U_\varepsilon(\Delta'))\cong
{\mathbb{C}}[2P']^W\cong{\mathbb{C}}[P']^W\cong
{\mathbb{C}}[H(\Delta')/W],
\end{equation}
where the second isomorphism is induced by ${\mathbb{C}}[P']\cong{\mathbb{C}}[2P']\;(e(\lambda)\leftrightarrow e(2\lambda))$.
Similarly, we have
\begin{equation}
\label{eq:Z2}
Z_{\mathop{\rm Har}\nolimits}(U_\zeta(\Delta))\cong{\mathbb{C}}[H(\Delta)/W].
\end{equation}
Note that the action of $W$ on $H(\Delta')$ in \eqref{eq:Z1} is the ordinary one, while that on $H(\Delta)$ in \eqref{eq:Z2} is the twisted one given by
\[
w:h\mapsto w(h_1h)h_1^{-1}
\qquad(w\in W, h\in H(\Delta)),
\]
where $h_1\in H(\Delta)$ is given by $\lambda(h_1)=\zeta^{2(\lambda,\tilde{\rho})}\;(\lambda\in P=\mathop{\rm Hom}\nolimits(H(\Delta),{\mathbb{C}}^\times))$.
\begin{proposition}
We have
\[
Z_{\mathop{\rm Fr}\nolimits}(U_\zeta(\Delta))\cap Z_{\mathop{\rm Har}\nolimits}(U_\zeta(\Delta))
={}^t\xi(Z_{\mathop{\rm Har}\nolimits}(U_\varepsilon(\Delta'))),
\]
and hence
\begin{equation}
\label{eq:Z3}
Z_{\mathop{\rm Fr}\nolimits}(U_\zeta(\Delta))\cap Z_{\mathop{\rm Har}\nolimits}(U_\zeta(\Delta))\cong{\mathbb{C}}[H(\Delta')/W].
\end{equation}
\end{proposition}
\begin{proof}
Note that $Z_{\mathop{\rm Har}\nolimits}(U_\varepsilon(\Delta'))$ is spanned by $\{t_{L_\varepsilon(\lambda)}\}_{\lambda\in (P')^+}$.
By Proposition \ref{prop:omagaA} we have ${}^t\xi(t_{L_\varepsilon(\lambda)})=t_{{\mathop{\rm Fr}\nolimits}^*L_\varepsilon(\lambda)}$, where ${\mathop{\rm Fr}\nolimits}^*L_\varepsilon(\lambda)$ is the $\dot{U}_\zeta(\Delta)$-module induced via ${\mathop{\rm Fr}\nolimits}:\dot{U}_\zeta(\Delta)\to\dot{U}_\varepsilon(\Delta')$.
Hence we have
\[
Z_{\mathop{\rm Fr}\nolimits}(U_\zeta(\Delta))\cap Z_{\mathop{\rm Har}\nolimits}(U_\zeta(\Delta))
\supset{}^t\xi(Z_{\mathop{\rm Har}\nolimits}(U_\varepsilon(\Delta'))),
\]
and
$\iota_\zeta({}^t\xi(Z_{\mathop{\rm Har}\nolimits}(U_\varepsilon(\Delta'))))={\mathbb{C}}[2P']^W$.
On the other hand by Proposition \ref{prop:Frob} we have
$\iota_\zeta(Z_{\mathop{\rm Fr}\nolimits}(U_\zeta(\Delta))\cap Z_{\mathop{\rm Har}\nolimits}(U_\zeta(\Delta)))\subset
{\mathbb{C}}[2P]^{W\circ_\zeta}\cap{\mathbb{C}}[P'']=
{\mathbb{C}}[2P']^W$.
\end{proof}
By the definition of the Harish-Chandra isomorphism we have the following.
\begin{proposition}
The morphism $H(\Delta)/W\to H(\Delta')/W$, which is associated to the inclusion
$Z_{\mathop{\rm Fr}\nolimits}(U_\zeta(\Delta))\cap Z_{\mathop{\rm Har}\nolimits}(U_\zeta(\Delta))\subset
Z_{\mathop{\rm Har}\nolimits}(U_\zeta(\Delta))$ together with the isomorphisms \eqref{eq:Z2} and \eqref{eq:Z3},
is the natural one induced from the canonical morphism $H(\Delta)\to H(\Delta')$ associated to the embedding
$P'\subset P$.
\end{proposition}
Note that we have the following commutative diagram
\[
\begin{CD}
Z_{\mathop{\rm Fr}\nolimits}(U_\zeta(\Delta))\cap Z_{\mathop{\rm Har}\nolimits}(U_\zeta(\Delta))
@>>>
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta(\Delta))
\\
@AAA@AAA
\\
Z_{{\mathop{\rm Har}\nolimits}}(U_\varepsilon(\Delta'))
@>>>
U_\varepsilon(\Delta')^\Gamma
\\
@AAA@AAA
\\
Z_{{\mathop{\rm Har}\nolimits}}(U_1(\Delta'))
@>>>
U_1(\Delta')^\Gamma
@>>>
U_1(\Delta')
\\
@AAA@AAA@AAA
\\
{\mathbb{C}}[H(\Delta')/W]
@>>>
{\mathbb{C}}[K(\Delta')/\Gamma]
@>>>
{\mathbb{C}}[K(\Delta')]
\end{CD}
\]
where horizontal arrows are inclusions, and vertical arrows are isomorphisms.
Note also that the inclusion
${\mathbb{C}}[H(\Delta')/W]
\to
{\mathbb{C}}[K(\Delta')]$
is induced by $\upsilon\circ\kappa$, where
$
\kappa:K(\Delta')\to G(\Delta')
$ and
$\upsilon:G(\Delta')\to H(\Delta')/W$
are morphisms of algebraic varieties we have already defined.
Hence we have the following.
\begin{proposition}
The morphism $K(\Delta')/{\Gamma}\to H(\Delta')/W$, which is associated to the inclusion
$Z_{\mathop{\rm Fr}\nolimits}(U_\zeta(\Delta))\cap Z_{\mathop{\rm Har}\nolimits}(U_\zeta(\Delta))\subset
Z_{\mathop{\rm Fr}\nolimits}(U_\zeta(\Delta))$ together with the isomorphisms \eqref{eq:Z0} and \eqref{eq:Z3},
is induced by
$\upsilon\circ\kappa:K(\Delta')\to H(\Delta')/W$.
\end{proposition}
The main result of this paper is the following.
\begin{theorem}
\label{thm:main}
The natural homomorphism
\[
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)\cap Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)}Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)
\to
Z(U_\zeta)
\]
is an isomorphism.
In particular, we have
\[
Z(U_\zeta)
\cong
{\mathbb{C}}[(K(\Delta')/{\Gamma})\times_{H(\Delta')/W}(H(\Delta)/W)].
\]
\end{theorem}
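For orientation, suppose $\Delta$ is of type $A_1$ and $\ell$ is odd, so that $r=\ell$, $\varepsilon=1$, ${\Gamma}=\{1\}$, $\Delta'\cong\Delta$ and $P'=\ell P$. Then $H(\Delta)\cong H(\Delta')\cong{\mathbb{C}}^\times$, the morphism $H(\Delta)\to H(\Delta')$ dual to the embedding $P'\subset P$ is $t\mapsto t^\ell$, and the theorem states
\[
Z(U_\zeta)\cong{\mathbb{C}}[K(\Delta')\times_{H(\Delta')/W}(H(\Delta)/W)]
\]
with $H(\Delta')/W\cong{\mathbb{C}}$ via $t\mapsto t+t^{-1}$ (cf.\ \cite{DK}, \cite{DP}).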
The rest of the paper is devoted to the proof of Theorem \ref{thm:main}.
The arguments below mostly follow those in De Concini-Procesi \cite{DP}.
We set for simplicity
\begin{align*}
Z&=Z(U_\zeta),\\
Z_{{\mathop{\rm Fr}\nolimits}}&=Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)\cong{\mathbb{C}}[K(\Delta')/{\Gamma}],\\
Z_{{\mathop{\rm Har}\nolimits}}&=Z_{{\mathop{\rm Har}\nolimits}}(U_\zeta)
\cong{\mathbb{C}}[H(\Delta)/W],
\end{align*}
so that
\[
Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}\cong{\mathbb{C}}[H(\Delta')/W].
\]
We are going to show that the canonical homomorphism
\[
j:Z_{{\mathop{\rm Fr}\nolimits}}\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}}Z_{{\mathop{\rm Har}\nolimits}}
\to
Z
\]
is an isomorphism.
\begin{proposition}
\label{prop:normal}
$Z_{{\mathop{\rm Fr}\nolimits}}\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}}Z_{{\mathop{\rm Har}\nolimits}}$ is a normal domain.
\end{proposition}
\begin{proof}
By Serre's criterion it is sufficient to show that the scheme
$(K(\Delta')/{\Gamma})\times_{H(\Delta')/W}(H(\Delta)/W)$ is smooth in codimension one and Cohen-Macaulay.
We first show that $(K(\Delta')/{\Gamma})\times_{H(\Delta')/W}(H(\Delta)/W)$ is smooth in codimension one.
Since $H(\Delta)/W$ is smooth and $H(\Delta)/W\to H(\Delta')/W$ is a finite morphism, it is sufficient to show that there exists a subvariety $X$ of $K(\Delta')/{\Gamma}$ with codimension greater than one such that
$(K(\Delta')/{\Gamma})\setminus X\to H(\Delta')/W$ is smooth.
Consider first $K(\Delta')\to H(\Delta')/W$.
Then there exists a subvariety $X_1$ of $K(\Delta')$ with codimension greater than one such that
$K(\Delta')\setminus X_1\to H(\Delta')/W$ is smooth, since a similar result is known to hold for $G(\Delta')\to H(\Delta')/W$ and since $K(\Delta')\to G(\Delta')$ is smooth.
Hence it is sufficient to show that there exists a subvariety $X_2$ of $K(\Delta')$ with codimension greater than one such that $K(\Delta')\setminus X_2\to K(\Delta')/{\Gamma}$ is smooth, since $K(\Delta')\to K(\Delta')/{\Gamma}$ is a finite morphism.
We may assume ${\Gamma}\ne\{1\}$.
In this case we have
\[
K(\Delta')=Y\times {\rm{Spec}}\;{\mathbb{C}}[P'],\quad
K(\Delta')/{\Gamma}
=Y/{P'}\times {\rm{Spec}}\;{\mathbb{C}}[P''],
\]
where
$
Y=\prod_{\alpha\in\Delta^+}{\mathbb{C}}^2
$
and the action of $P'$ on $Y$ is given by
\[
\lambda:(x_\alpha)_{\alpha\in\Delta^+}\mapsto
((-1)^{d_{\alpha'}(\lambda,(\alpha')^\vee)}x_\alpha)_{\alpha\in\Delta^+}
\qquad
(\lambda\in P',\;x_\alpha\in{\mathbb{C}}^2).
\]
Since ${\rm{Spec}}\;{\mathbb{C}}[P']\to{\rm{Spec}}\;{\mathbb{C}}[P'']$ is smooth, it is sufficient to show that there exists a subvariety $X_3$ of $Y$ with codimension greater than one such that $Y\setminus X_3\to Y/{P'}$ is smooth.
Note that the obvious action of $\prod_{\alpha\in\Delta^+}GL_2({\mathbb{C}})$ on $Y$ commutes with the action of $P'$.
Hence $Y\to Y/{P'}$ is smooth on the open orbit
$
Y_0=\prod_{\alpha\in\Delta^+}({\mathbb{C}}^2\setminus\{0\})
$.
Our claim is a consequence of
$\dim(Y\setminus Y_0)\leqq \dim Y-2$, which holds since $Y\setminus Y_0$ is a finite union of products in which at least one factor ${\mathbb{C}}^2$ is replaced by $\{0\}$.
Let us show that $Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)\otimes_{{\mathbb{C}}[2P']^W}{{\mathbb{C}}[2P]^{W\circ_\zeta}}$ is Cohen-Macaulay.
By \cite{St} ${\mathbb{C}}[2P']^W$ and ${\mathbb{C}}[2P]^{W\circ_\zeta}$ are both isomorphic to the polynomial ring in $|I|$-variables.
Hence we have
\[
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)\otimes_{{\mathbb{C}}[2P']^W}{{\mathbb{C}}[2P]^{W\circ_\zeta}}
\cong
Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)[X_1,\dots, X_{|I|}]/(f_1,\dots, f_{|I|})
\]
for some $f_1,\dots, f_{|I|}\in Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)[X_1,\dots, X_{|I|}]$.
Moreover, we have obviously $\dim Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)\otimes_{{\mathbb{C}}[2P']^W}{{\mathbb{C}}[2P]^{W\circ_\zeta}}=\dim Z_{{\mathop{\rm Fr}\nolimits}}(U_\zeta)$.
Hence our claim is a consequence of Corollary \ref{cor:CM} and well-known results on Cohen-Macaulay rings.
\end{proof}
\begin{lemma}
\label{lem:rank}
$Z_{{\mathop{\rm Fr}\nolimits}}\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}}Z_{{\mathop{\rm Har}\nolimits}}$ is a free $Z_{{\mathop{\rm Fr}\nolimits}}$-module of rank $|P/P'|$.
\end{lemma}
\begin{proof}
It is sufficient to show that
$Z_{{\mathop{\rm Har}\nolimits}}$
is a free $Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}$-module of rank $|P/P'|$.
Namely, we have only to show that
${\mathbb{C}}[2P]^{W\circ_\zeta}$
is a free ${\mathbb{C}}[2P']^{W}$-module of rank $|P/P'|$.
We may replace ${\mathbb{C}}[2P]^{W\circ_\zeta}$ with ${\mathbb{C}}[2P]^{W}$ by applying an automorphism of ${\mathbb{C}}[P]$ which sends ${\mathbb{C}}[2P]^{W\circ_\zeta}$ and ${\mathbb{C}}[2P']^{W}$ to
${\mathbb{C}}[2P]^{W}$ and ${\mathbb{C}}[2P']^{W}$ respectively.
By Steinberg \cite{St}
${\mathbb{C}}[2P]$ (resp.\ ${\mathbb{C}}[2P']$) is a free ${\mathbb{C}}[2P]^{W}$-module
(resp.\ ${\mathbb{C}}[2P']^{W}$-module) of rank $|W|$.
Since ${\mathbb{C}}[2P]$ is a free ${\mathbb{C}}[2P']$-module of rank $|P/P'|$,
${\mathbb{C}}[2P]$ is a free ${\mathbb{C}}[2P']^W$-module of rank $|W|\times|P/P'|$.
Note that
${\mathbb{C}}[2P]^W$ is a direct summand of the free ${\mathbb{C}}[2P]^W$-module ${\mathbb{C}}[2P]$ by \cite{St}.
Hence ${\mathbb{C}}[2P]^W$ is also a direct summand of the free ${\mathbb{C}}[2P']^W$-module ${\mathbb{C}}[2P]$.
It follows that ${\mathbb{C}}[2P]^W$ is a projective ${\mathbb{C}}[2P']^W$-module of rank $|P/P'|$.
Since ${\mathbb{C}}[2P']^W$ is isomorphic to a polynomial ring by \cite{St}, we conclude
from the Quillen-Suslin theorem (the Serre conjecture) that
${\mathbb{C}}[2P]^{W}$
is a free ${\mathbb{C}}[2P']^{W}$-module of rank $|P/P'|$.
\end{proof}
Set
\begin{equation}
\label{eq:m}
m=
\begin{cases}
1\qquad&(\ell\not\in 2{\mathbb{Z}})\\
2^{|\Delta_{{\mathop{\rm short}\nolimits}}\cap \Pi|}
&(\ell\in2{\mathbb{Z}}, r\not\in 2{\mathbb{Z}}, d=2)\\
2^{|\Pi|}
&(\text{otherwise}).
\end{cases}
\end{equation}
For a commutative domain $S$ we denote by $Q(S)$ its quotient field.
\begin{lemma}
\label{lem:rank2}
$U_\zeta$ is a finitely generated $Z_{\mathop{\rm Fr}\nolimits}$-module, and we have
\[
\dim_{Q(Z_{{\mathop{\rm Fr}\nolimits}})}Q(Z_{{\mathop{\rm Fr}\nolimits}})\otimes_{Z_{\mathop{\rm Fr}\nolimits}}U_\zeta=\left(m\prod_{\alpha\in\Delta^+}r_\alpha\right)^2\times |P/P'|.
\]
\end{lemma}
\begin{proof}
Denote by $C$ the image of ${}^t\xi:U_\varepsilon(\Delta')\to U_\zeta(\Delta)$.
Then we have
\[
Z_{{\mathop{\rm Fr}\nolimits}}\subset C\subset U_\zeta.
\]
Since $U_\zeta$ is a free $C$-module of rank $\left(\prod_{\alpha\in\Delta^+}r_\alpha\right)^2\times |P/P'|$, it is sufficient to show that $C$ is a finitely generated $Z_{\mathop{\rm Fr}\nolimits}$-module and
\[
\dim_{Q(Z_{{\mathop{\rm Fr}\nolimits}})}Q(Z_{{\mathop{\rm Fr}\nolimits}})\otimes_{Z_{\mathop{\rm Fr}\nolimits}}C=m^2.
\]
If $\ell$ is odd, we have $C=Z_{\mathop{\rm Fr}\nolimits}$, and hence we may assume that $\ell$ is even.
By the explicit description of $Z_{\mathop{\rm Fr}\nolimits}$ given by \eqref{eq:monomial1}, \eqref{eq:monomial2}
we have
\[
C\cong
{\mathbb{C}}[P']\otimes{\mathbb{C}}[({\mathbb{Z}}_{\geqq0}^2)^{\Delta^+}],\qquad
Z_{\mathop{\rm Fr}\nolimits}\cong
{\mathbb{C}}[P'']\otimes{\mathbb{C}}[L],
\]
where
\[
L=\{(m_\beta,m'_\beta)_{\beta\in\Delta^+}
\in({\mathbb{Z}}_{\geqq0}^2)^{\Delta^+}
\mid
\sum_{\beta\in\Delta_1^+}(m_\beta+m'_\beta)\beta^\vee\in 2Q^\vee\},
\]
and ${\mathbb{C}}[({\mathbb{Z}}_{\geqq0}^2)^{\Delta^+}]$ and ${\mathbb{C}}[L]$ are the semigroup algebras of the semigroups $({\mathbb{Z}}_{\geqq0}^2)^{\Delta^+}$ and $L$ respectively.
Note that ${\mathbb{C}}[P']$ is a free ${\mathbb{C}}[P'']$-module of rank $|P'/P''|$.
Since ${\mathbb{C}}[({\mathbb{Z}}_{\geqq0}^2)^{\Delta^+}]$ is a finitely generated ${\mathbb{C}}[2({\mathbb{Z}}_{\geqq0}^2)^{\Delta^+}]$-module, it is also a finitely generated ${\mathbb{C}}[L]$-module by $2({\mathbb{Z}}_{\geqq0}^2)^{\Delta^+}\subset L$.
Hence $C$ is a finitely generated $Z_{\mathop{\rm Fr}\nolimits}$-module.
Set
\[
\tilde{L}=\{(m_\beta,m'_\beta)_{\beta\in\Delta^+}
\in({\mathbb{Z}}^2)^{\Delta^+}
\mid
\sum_{\beta\in\Delta_1^+}(m_\beta+m'_\beta)\beta^\vee\in 2Q^\vee\}.
\]
Then we have $({\mathbb{Z}}^2)^{\Delta^+}/\tilde{L}\cong Q_1^\vee/(Q_1^\vee\cap 2Q^\vee)$, where $Q_1^\vee=\sum_{\beta\in\Delta_1^+}{\mathbb{Z}}\beta^\vee$.
Hence ${\mathbb{C}}[({\mathbb{Z}}^2)^{\Delta^+}]$ is a free ${\mathbb{C}}[\tilde{L}]$-module of rank $|Q_1^\vee/(Q_1^\vee\cap 2Q^\vee)|$.
Since ${\mathbb{C}}[({\mathbb{Z}}^2)^{\Delta^+}]$ and ${\mathbb{C}}[\tilde{L}]$ are localizations of ${\mathbb{C}}[({\mathbb{Z}}_{\geqq0}^2)^{\Delta^+}]$ and ${\mathbb{C}}[L]$ respectively with respect to the multiplicative set $S=2({\mathbb{Z}}_{\geqq0}^2)^{\Delta^+}$ of ${\mathbb{C}}[L]$, we obtain that $S^{-1}C$ is a free $S^{-1}Z_{\mathop{\rm Fr}\nolimits}$-module of rank $|P'/P''|\times |Q_1^\vee/(Q_1^\vee\cap 2Q^\vee)|$.
Therefore, $Q(Z_{\mathop{\rm Fr}\nolimits})\otimes_{Z_{\mathop{\rm Fr}\nolimits}}C$ is a free $Q(Z_{\mathop{\rm Fr}\nolimits})$-module of rank $|P'/P''|\times |Q_1^\vee/(Q_1^\vee\cap 2Q^\vee)|$.
It remains to show $m^2=|P'/P''|\times |Q_1^\vee/(Q_1^\vee\cap 2Q^\vee)|$.
In the case $\Delta^+_1=\Delta^+$ we have $P''=2P'$, $Q_1^\vee=Q^\vee$, and hence the assertion is obvious.
In the case $\Delta^+_1=\Delta^+\cap\Delta_{\mathop{\rm short}\nolimits}$ we have $P'=rP$, $P''=r(P\cap 2P_1)$, where
\[
P_1=\{\mu\in {\mathfrak{h}}_{\mathbb{Q}}^*\mid(\mu,\alpha^\vee)\in{\mathbb{Z}}\;(\alpha\in\Delta_{\mathop{\rm short}\nolimits})\},
\]
and hence $P'/P''\cong P/(P\cap 2P_1)$.
On the other hand we have
\[
Q_1^\vee/(Q_1^\vee\cap 2Q^\vee)
\cong
(Q_1^\vee+2Q^\vee)/2Q^\vee
\cong
(\frac12Q_1^\vee+Q^\vee)/Q^\vee.
\]
Since $P$ and $P\cap2P_1$ are lattices in ${\mathfrak{h}}_{\mathbb{Q}}^*$ dual to $Q^\vee$ and $\frac12Q_1^\vee+Q^\vee$ respectively,
we obtain
$|P'/P''|=|Q_1^\vee/(Q_1^\vee\cap 2Q^\vee)|$.
It remains to check $m=|(Q_1^\vee+2Q^\vee)/2Q^\vee|$.
For that it is sufficient to show
\[
Q_1^\vee+2Q^\vee
=\sum_{\alpha\in\Delta_{\mathop{\rm short}\nolimits}\cap\Pi}{\mathbb{Z}}\alpha^\vee+2Q^\vee.
\]
In order to prove this we have only to show that the right-hand side is stable under the action of the Weyl group.
Hence it is sufficient to show $s_j(\alpha^\vee_i)\in\sum_{\alpha\in\Delta_{\mathop{\rm short}\nolimits}\cap\Pi}{\mathbb{Z}}\alpha^\vee+2Q^\vee$ for any $i, j\in I$ satisfying $\alpha_i\in\Delta_{\mathop{\rm short}\nolimits}$.
This is obvious if $\alpha_j\in\Delta_{\mathop{\rm short}\nolimits}$.
In the case where $\alpha_j\in\Delta_{\mathop{\rm long}\nolimits}$ we have $s_j(\alpha_i^\vee)=
\alpha_i^\vee-(\alpha_i^\vee,\alpha_j)\alpha_j^\vee$ with $(\alpha_i^\vee,\alpha_j)\in\{0,-2\}$.
We are done.
\end{proof}
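As a concrete, illustrative cross-check of the index computation above (this example is not part of the original proof), consider type $B_2$ with the short roots normalised to squared length one; the coordinates below are a hypothetical concrete model of $Q^\vee$ and $Q_1^\vee$ under this normalisation. The index of a finite-index sublattice is the ratio of the absolute determinants of basis matrices.

```python
# Type B2 sketch: short roots e1, e2 with (e,e) = 1; long roots e1 +/- e2.
# Coroots alpha^vee = 2*alpha/(alpha,alpha): short roots give coroots 2e1, 2e2,
# and the long simple root e1 - e2 gives the coroot e1 - e2.

def det2(m):
    # determinant of a 2x2 integer matrix (rows = basis vectors)
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def lattice_index(sub, sup):
    # index [sup : sub] of full-rank lattices = |det(sub)| / |det(sup)|
    return abs(det2(sub)) // abs(det2(sup))

two_Qv = [[2, -2], [0, 4]]  # basis of 2Q^vee: 2(e1 - e2) and 2*(2e2)
L1     = [[2, 0], [0, 2]]   # Q_1^vee + 2Q^vee = 2Z^2, since Q_1^vee = <2e1, 2e2> contains 2Q^vee

m = lattice_index(two_Qv, L1)  # |(Q_1^vee + 2Q^vee)/2Q^vee|
print(m)  # 2 = 2^{|Delta_short ∩ Pi|}, matching the case d = 2 of \eqref{eq:m}
```

Here $B_2$ has a single short simple root, so the value $m=2$ agrees with the middle case of \eqref{eq:m}.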
In general let $R$ be a ${\mathbb{C}}$-algebra.
Assume that $R$ is prime (i.e. $x, y\in R,\;xRy=\{0\}$ implies $x=0$ or $y=0$), and is finitely generated as a
$Z(R)$-module.
Then
$Q(Z(R))\otimes _{Z(R)}R$ is a finite-dimensional central simple algebra over the field $Q(Z(R))$.
Hence $\overline{Q(Z(R))}\otimes _{Z(R)}R$ is isomorphic to the matrix algebra $M_n(\overline{Q(Z(R))})$ for some $n$, where $\overline{Q(Z(R))}$ denotes the algebraic closure of $Q(Z(R))$.
Then this $n$ is called the degree of $R$.
Namely, the degree $n$ of $R$ is given by
\[
\dim_{Q(Z(R))}Q(Z(R))\otimes _{Z(R)}R=n^2.
\]
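The following classical example, added here only for orientation, illustrates the notion of degree.

```latex
\begin{remark}
{\rm
For example, let $A={\mathbb{C}}\langle x,y\rangle/(xy-\zeta yx)$ with $\zeta$ a
primitive $\ell$-th root of unity, $\ell$ odd. Then
$Z(A)={\mathbb{C}}[x^\ell,y^\ell]$, and $A$ is a free $Z(A)$-module with basis
$\{x^iy^j\mid 0\leqq i,j<\ell\}$, so that
\[
\dim_{Q(Z(A))}Q(Z(A))\otimes_{Z(A)}A=\ell^2,
\]
and hence the degree of $A$ is $\ell$.
}
\end{remark}
```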
Note that $U_\zeta$ is a finitely generated $Z(U_\zeta)$-module by Lemma \ref{lem:rank2}.
In \cite{DK} De Concini-Kac have shown that $U_\zeta$ has no zero divisors using a certain degeneration ${\mathop{\rm{Gr}}\nolimits}\, U_\zeta$ of $U_\zeta$.
In particular, it is a prime algebra.
Hence we have the notion of the degree of $U_\zeta$.
In \cite{DP} De Concini-Procesi proved that the degree of $U_\zeta$ is less than or equal to that of ${\mathop{\rm{Gr}}\nolimits}\, U_\zeta$.
They have also shown that the degree of ${\mathop{\rm{Gr}}\nolimits}\, U_\zeta$ can be computed from the elementary divisors of a certain matrix with integral coefficients.
The actual computation of the elementary divisors was done in \cite{DP} when $\ell$ is odd, and in Beck \cite{Beck} in the remaining cases.
From these results we have the following.
\begin{proposition}
\label{prop:rank3}
We have
\[
\dim_{Q(Z)}
Q(Z)\otimes _{Z}U_\zeta\leqq
\left(m\prod_{\alpha\in\Delta^+}r_\alpha\right)^2.
\]
\end{proposition}
Let us show that $j$ is injective.
By Proposition \ref{prop:normal}
$Z_{{\mathop{\rm Fr}\nolimits}}\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}}Z_{{\mathop{\rm Har}\nolimits}}$ is a domain.
Note also that $Z$ is a domain since $U_\zeta$ has no zero divisors.
Hence we have only to show that
\[
j^*:{\rm{Spec}} \;Z\to{\rm{Spec}}\;Z_{{\mathop{\rm Fr}\nolimits}}\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}}Z_{{\mathop{\rm Har}\nolimits}}
\]
has a dense image.
Consider the embedding $j':Z_{{\mathop{\rm Fr}\nolimits}}\to Z$.
Since $j'$ is injective, $(j')^*:{\rm{Spec}} \;Z\to{\rm{Spec}}\;Z_{{\mathop{\rm Fr}\nolimits}}$ has a dense image.
Note that $(j')^*$ is the composite of $j^*$ with the natural morphism
\[
\varphi:
{\rm{Spec}}\;Z_{{\mathop{\rm Fr}\nolimits}}\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}}Z_{{\mathop{\rm Har}\nolimits}}
\to
{\rm{Spec}}\;Z_{{\mathop{\rm Fr}\nolimits}}.
\]
Since
${\rm{Spec}}\;Z_{{\mathop{\rm Fr}\nolimits}}\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}}Z_{{\mathop{\rm Har}\nolimits}}$ is irreducible and $\varphi$ is a finite morphism by Lemma \ref{lem:rank}, we conclude that $j^*$ must have a dense image.
The injectivity of $j$ is verified.
Set for simplicity
\[
Z'=Z_{{\mathop{\rm Fr}\nolimits}}\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}\cap Z_{{\mathop{\rm Har}\nolimits}}}Z_{{\mathop{\rm Har}\nolimits}}.
\]
Then we have
\[
Z_{{\mathop{\rm Fr}\nolimits}}\subset Z'\subset Z\subset U_\zeta.
\]
We need to show $Z'=Z$.
Assume that
\begin{equation}
\label{eq:q5}
Q(Z)=Q(Z')
\end{equation}
holds.
Since
$U_\zeta$ is a finitely generated $Z_{{\mathop{\rm Fr}\nolimits}}$-module,
$Z$ is a finitely generated $Z'$-module.
It follows that $Z=Z'$ by Proposition \ref{prop:normal}.
Hence it is sufficient to show \eqref{eq:q5}.
Since
$Z'$ is a free $Z_{{\mathop{\rm Fr}\nolimits}}$-module of rank $|P/P'|$, we have
$
[Q(Z'):Q(Z_{{\mathop{\rm Fr}\nolimits}})]\geqq |P/P'|$.
Hence it is sufficient to show
\begin{equation}
\label{eq:q4}
[Q(Z):Q(Z_{{\mathop{\rm Fr}\nolimits}})]\leqq|P/P'|.
\end{equation}
Note that we have
$
Q(Z_{{\mathop{\rm Fr}\nolimits}})\otimes_{Z_{{\mathop{\rm Fr}\nolimits}}}Z\cong Q(Z)
$
since $Z$ is a finitely generated $Z_{{\mathop{\rm Fr}\nolimits}}$-module.
Hence
\[
Q(Z_{{\mathop{\rm Fr}\nolimits}})\otimes _{Z_{{\mathop{\rm Fr}\nolimits}}}U_\zeta
\cong
Q(Z_{{\mathop{\rm Fr}\nolimits}})\otimes _{Z_{{\mathop{\rm Fr}\nolimits}}}Z\otimes_Z U_\zeta
\cong
Q(Z)\otimes _{Z}U_\zeta.
\]
Hence we obtain \eqref{eq:q4} by Lemma \ref{lem:rank2} and Proposition \ref{prop:rank3}.
The proof of Theorem \ref{thm:main} is complete.
\begin{corollary}
\label{cor:degree}
The degree of $U_\zeta$ is equal to
$m\prod_{\alpha\in\Delta^+}r_\alpha$, where $m$ is as in \eqref{eq:m}.
\end{corollary}
\begin{remark}
{\rm
Corollary \ref{cor:degree} was proved by De Concini-Procesi \cite{DP} in the case where $\ell$ is odd, and by Beck \cite{Beck} in the case where $\ell$ is divisible by $4d$.
}
\end{remark}
\bibliographystyle{unsrt}
\section{Introduction}
Low-mass stars have main-sequence masses of $M_{\star}\simeq0.08-2$~M$_{\sun}$,
and are classified with spectral types of M7--A5 (e.g. \citealp{stahler2005}).
The formation process of these types of stars begins when the parent molecular
cloud core undergoes gravitational collapse (e.g. \citealp{shu1987};
\citealp{mckee2007}). In the course of time, the collapsing core centre
heats up due to compression, and ultimately becomes a protostar.
The youngest low-mass protostars, characterised by accretion from
the much more massive envelope ($M_{\rm env}\gg M_{\star}$),
are known as the Class~0 objects (\citealp{andre1993}, 2000).
A curious example of a Class~0 protostellar object is SMM3 in the Orion~B9
star-forming filament. This object was first uncovered by
Miettinen et al. (2009; hereafter Paper~I), when they mapped Orion B9 using
the Large APEX BOlometer CAmera (LABOCA) at 870~$\mu$m. In Paper~I, we constructed and analysed
a simple mid-infrared--submillimetre spectral energy distribution (SED) of SMM3, and classified it as a Class~0 object. The physical and chemical properties of SMM3 (e.g. the gas temperature and the level of N$_2$H$^+$ deuteration) were further characterised by Miettinen et al. (2010, 2012; hereafter referred to as Papers~II and III, respectively) through molecular line observations. In Paper~III, we also presented the results of our Submillimetre APEX BOlometer
CAmera (SABOCA) 350~$\mu$m imaging of Orion~B9. With the flux density of
$S_{350\,{\rm \mu m}}\simeq 5.4$~Jy, SMM3 turned out to be the strongest
350~$\mu$m emitter in the region. Perhaps more interestingly, the
350~$\mu$m image revealed that SMM3 hosts two subfragments (dubbed SMM3b and
3c) on the eastern side of the protostar, where an extension could already
be seen in the LABOCA map at 870~$\mu$m. The projected distances of the subfragments from the
protostar's position, 0.07--0.10~pc\footnote{In the present work, we have adopted a distance of $d=420$~pc to the source to be consistent with the most recent studies of SMM3 (\citealp{stutz2013}; \citealp{tobin2015}; \citealp{furlan2016}). We note that in Papers~I--III, we assumed a distance
of $450$~pc, which is a factor of 1.07 larger than used here.},
were found to be comparable to the local thermal Jeans length.
This led us to suggest that the parent core might have fragmented
into smaller units via Jeans gravitational instability.
The Orion~B or L1630 molecular cloud, including Orion~B9, was mapped with
\textit{Herschel} as part of the \textit{Herschel} Gould Belt Survey (HGBS; Andr\'e et al. 2010)\footnote{The HGBS is a \textit{Herschel} key programme jointly carried out by SPIRE Specialist Astronomy Group 3 (SAG 3), scientists of se\-veral institutes in the PACS Consortium (CEA Saclay, INAF-IFSI Rome and INAF-Arcetri, KU Leuven, MPIA Heidelberg), and scientists of the \textit{Herschel} Science Center (HSC). For more details, see {\tt http://gouldbelt-herschel.cea.fr}}. The \textit{Herschel} images revealed that Orion~B9 is actually a
filamentary-shaped cloud in which SMM3 is embedded
(see Fig.~2 in \citealp{miettinen2013b}). Miettinen (2012b)
found that there is a sharp velocity gradient in the parent filament
(across its short axis), and suggested that it might represent a shock front resulting from the feedback from the nearby expanding H{\scriptsize II} region/OB cluster NGC~2024 ($\sim3.7$~pc to the southwest of Orion~B9). Because SMM3 appears to lie on the border of the velocity gradient, it might have a physical connection to it, and it is possible that the formation of SMM3 (and the other dense cores in Orion~B9) was triggered by external, positive feedback (\citealp{miettinen2012b}). Actually, the OB associations to the west of the whole Orion~B cloud have likely affected much of the cloud area through their strong feedback in the form of ionising radiation and stellar winds (e.g. \citealp{cowie1979}). The column density probability distribution function of Orion~B, studied by Schneider et al. (2013), was indeed found to be broadened as a result of external compression.
The Class~0 object SMM3 was included in the Orion protostellar core
survey by Stutz et al. (2013, hereafter S13; their source 090003). Using data from
\textit{Spitzer}, \textit{Herschel}, SABOCA, and LABOCA, S13
constructed an improved SED of SMM3 compared to what was presented in Paper~I.
The bolometric temperature and luminosity -- as based on the Myers \& Ladd (1993) method -- were found to be $T_{\rm bol}=36.0\pm0.8$~K and $L_{\rm bol}=2.71\pm0.24$~L$_{\sun}$. They also performed a modified blackbody (MBB) fit to the SED ($\lambda \geq70$~$\mu$m) of SMM3, and obtained a dust temperature of $T_{\rm dust}=21.4\pm0.4$~K, luminosity of $L=2.06\pm0.15$~L$_{\sun}$,
and envelope mass of $M_{\rm env}=0.33\pm0.06$~M$_{\sun}$ (see their Fig.~9). The derived SED properties led S13 to the conclusion that SMM3 is likely a Class~0 object, which supports our earlier suggestion (Papers~I and III).
Tobin et al. (2015) included SMM3 in their Combined Array for Research in Millimeter-wave Astronomy (CARMA) 2.9~mm continuum imaging survey of Class~0 objects in Orion. This was the first high angular resolution study of SMM3. With a 2.9~mm flux density of $S_{\rm 2.9\, mm}=115.4\pm3.9$~mJy (at an angular resolution of $2\farcs74 \times 2\farcs56$), SMM3 was found to be the second brightest source among the 14 target sources. The total (gas$+$dust) mass
derived for SMM3 by Tobin et al. (2015), $M=7.0\pm0.7$~M$_{\sun}$, is much higher than that derived earlier by S13 using a MBB fitting technique, which underpredicted the 870~$\mu$m flux density of the source (see \citealp{tobin2015} and Sect.~4.1 herein for further discussion and different assumptions used). Tobin et al. (2015) did not detect 2.9~mm emission from the subfragments SMM3b or 3c, which led the authors to conclude that they are starless.
Kang et al. (2015) carried out a survey of H$_2$CO and HDCO emission towards Class~0 objects in Orion, and SMM3 was part of their source sample (source HOPS403 therein). The authors derived a HDCO/H$_2$CO ratio of $0.31\pm0.06$ for SMM3, which improves our knowledge of the chemical characteristics of this source, and strongly points towards its early evolutionary stage from a chemical point of view.
Finally, we note that SMM3 was part of the recent large Orion protostellar core survey by Furlan et al. (2016; source HOPS400 therein), where the authors presented the sources' panchromatic (1.2--870~$\mu$m) SEDs and radiative transfer model fits. They derived a bolometric luminosity of $L_{\rm bol}=2.94$~L$_{\sun}$ (a trapezoidal summation over all the available flux density data points), total (stellar$+$accretion) luminosity of $L_{\rm tot}=5.2$~L$_{\sun}$, bolometric temperature of $T_{\rm bol}=35$~K (following \citealp{myers1993} as in S13), and an envelope mass of $M_{\rm env}=0.30$~M$_{\sun}$, which are in fairly good agreement with the earlier S13 results. We note that the total luminosity derived by Furlan et al. (2016) from their best-fit model is corrected for inclination effects, and hence is higher than $L_{\rm bol}$. Moreover, the aforementioned value of $M_{\rm env}$ refers to a radius of 2\,500~AU ($=0.012$~pc), which corresponds to an angular radius of about $6\arcsec$ at the distance of SMM3, while a similar envelope mass value derived by S13 refers to a larger angular scale as a result of coarser resolution of the observational data used (e.g. $19\arcsec$ resolution in their LABOCA data).
In the present study, we attempt to further add to our understanding of the physical and chemical
properties of SMM3 by means of our new molecular line observations. We also
re-analyse our previous spectral line data of SMM3 in a uniform manner to
make them better comparable with each other. This paper is outlined as
follows. The observations and the observational data are described
in Sect.~2. The immediate observational results are presented in Sect.~3.
The analysis of the observations is described in Sect.~4. The results are
discussed in Sect.~5, and the concluding remarks are given in Sect.~6.
\section{Observations, data, and data reduction}
\subsection{New spectral line observations with APEX}
\subsubsection{Single-pointing observations}
A single-pointing position at $\alpha_{2000.0}=05^{\rm h}42^{\rm m}45\fs24$, and
$\delta_{2000.0}=-01\degr16\arcmin14\farcs0$ (i.e. the \textit{Spitzer} 24~$\mu$m peak of SMM3) was observed with the 12-metre APEX
telescope\footnote{{\tt http://www.apex-telescope.org/}} (\citealp{gusten2006})
in the frequency range $\sim218.2-222.2$ GHz. The observations were carried
out on 20 August 2013, when the amount of precipitable water vapour (PWV) was
measured to be 1.3~mm, which corresponds to a zenith atmospheric transmission
of about 93\%.
As a front end we used the APEX-1 receiver of the Swedish Heterodyne
Facility Instrument (SHeFI; \citealp{belitsky2007}; \citealp{vassilev2008a},b).
The APEX-1 receiver operates in a single-sideband (SSB) mode using
sideband separation mixers, and it has a sideband rejection
ratio better than 10~dB. The backend was the RPG eXtended bandwidth
Fast Fourier Transform Spectrometer (XFFTS; see \citealp{klein2012}) with an
instantaneous bandwidth of 2.5~GHz and 32\,768 spectral channels. The
spectrometer consists of two units, which have a fixed overlap region of
1.0~GHz. The resulting channel spacing, 76.3~kHz, corresponds to
104~m~s$^{-1}$ at the central observed frequency of 220\,196.65~MHz.
The beam size (Half-Power Beam Width or HPBW) at the observed frequency range
is $\sim28\farcs1-28\farcs6$.
The observations were performed in the wobbler-switching
mode with a $100\arcsec$ azimuthal throw between two positions on sky
(symmetric offsets), and a chopping rate of $R=0.5$ Hz. The total on-source
integration time was 34 min. The telescope focus and pointing were optimised
and checked at regular intervals on the planet Jupiter and the variable star
R Leporis (Hind's Crimson Star). The pointing was found to be accurate to
$\sim3\arcsec$. The typical SSB system temperatures during the observations
were in the range $T_{\rm sys}\sim130-140$ K. Calibration was made by means of
the chopper-wheel technique, and the output intensity scale given by the system
is the antenna temperature corrected for the atmospheric
attenuation ($T_{\rm A}^{\star}$). The observed intensities were converted
to the main-beam brightness temperature scale by
$T_{\rm MB}=T_{\rm A}^{\star}/\eta_{\rm MB}$, where $\eta_{\rm MB}=0.75$ is the
main-beam efficiency at the observed frequency range. The absolute calibration
uncertainty is estimated to be about 10\%.
The spectra were reduced using the Continuum and Line Analysis
Single-dish Software 90 ({\tt CLASS90}) program of the GILDAS software
package\footnote{Grenoble Image and Line Data Analysis
Software (GILDAS) is provided and actively developed by Institut de
Radioastronomie Millim\'etrique (IRAM), and is available
at {\tt http://www.iram.fr/IRAMFR/GILDAS}}. The individual spectra were
averaged, and the resulting spectra were Hanning-smoothed to a
velocity resolution of 208~m~s$^{-1}$ to improve the
signal-to-noise (S/N) ratio. Linear (first-order) baselines
were determined from the velocity ranges free of spectral line features,
and then subtracted from the spectra. The resulting $1\sigma$ rms
noise levels at the smoothed velocity resolution were $\sim6.3-19$~mK on
a $T_{\rm A}^{\star}$ scale, or $\sim8.4-25.3$~mK on a $T_{\rm MB}$ scale.
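The quoted channel widths and intensity conversions follow from elementary arithmetic, namely $\Delta v=c\,\Delta\nu/\nu$ and $T_{\rm MB}=T_{\rm A}^{\star}/\eta_{\rm MB}$. The short sketch below is an illustration only, not part of the actual reduction pipeline.

```python
C_LIGHT = 299_792_458.0  # speed of light [m/s]

def channel_velocity_width(delta_nu_hz, nu_hz):
    """Velocity width of a frequency channel: dv = c * dnu / nu."""
    return C_LIGHT * delta_nu_hz / nu_hz

def ta_to_tmb(t_a_star, eta_mb=0.75):
    """Antenna temperature T_A* -> main-beam brightness temperature T_MB."""
    return t_a_star / eta_mb

# 76.3 kHz channels at the central frequency 220196.65 MHz:
print(round(channel_velocity_width(76.3e3, 220_196.65e6), 1))  # 103.9 m/s, quoted as 104
# rms noise conversion, in mK:
print(round(ta_to_tmb(6.3), 1), round(ta_to_tmb(19.0), 1))     # 8.4 25.3
```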
The line identification from the observed frequency range was done by using
{\tt Weeds}, which is an extension of {\tt CLASS} (\citealp{maret2011}),
and the JPL\footnote{Jet Propulsion Laboratory (JPL) spectroscopic database
(\citealp{pickett1998}); see {\tt http://spec.jpl.nasa.gov/}}
and CDMS\footnote{Cologne Database for Molecular Spectroscopy
(CDMS; \citealp{muller2005}); see {\tt http://www.astro.uni-koeln.de/cdms}}
spectroscopic databases. The following spectral line transitions were
detected: $^{13}$CO$(2-1)$, C$^{18}$O$(2-1)$, SO$(5_6-4_5)$,
\textit{para}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$,
\textit{para}-H$_2$CO$(3_{2,\,2}-2_{2,\,1})$,
\textit{para}-H$_2$CO$(3_{2,\,1}-2_{2,\,0})$, and E$_1$-type CH$_3$OH$(4_2-3_1)$.
Selected spectroscopic para\-meters of the detected species and transitions are
given in Table~\ref{table:lines}. We note that the original purpose of these
observations was to search for glycolaldehyde (HCOCH$_2$OH) line emission near
220.2 GHz (see \citealp{jorgensen2012}; \citealp{coutens2015}). However, no positive detection of HCOCH$_2$OH lines was made.
\subsubsection{Mapping observations}
The APEX telescope was also used to map SMM3 and its surroundings in the
frequency range $\sim215.1-219.1$~GHz. The observations were done on
15 November 2013, with the total telescope time of 2.9~hr. The target
field, mapped using the total power on-the-fly mode, was
$5\arcmin \times 3\farcm25$ ($0.61\times0.40$~pc$^2$) in size, and centred on
the coordinates $\alpha_{2000.0}=05^{\rm h}42^{\rm m}47\fs071$, and
$\delta_{2000.0}=-01\degr16\arcmin33\farcs70$. At the observed frequency range, the telescope HPBW is $\sim28\farcs5-29\arcsec$. The target area was scanned alternately in
right ascension and declination, i.e. in zigzags to ensure minimal striping
artefacts in the final data cubes. Both the angular separation between two
successive dumps and the step size between the subscans were $9\farcs5$, i.e.
about one-third the HPBW. We note that to avoid beam smearing,
the readout spacing should not exceed the value HPBW/3. The dump time was set
to one second. The front end/backend system was composed of the APEX-1
receiver, and the 2.5~GHz XFFTS with 32\,768 channels. The channel spacing,
76.3~kHz, corresponds to 105~m~s$^{-1}$ at the central observed frequency
of 217\,104.98~MHz.
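As a quick numerical cross-check (again an illustration only, not from the original reduction), the quoted readout spacing and velocity channel width follow from the HPBW/3 sampling rule and $\Delta v=c\,\Delta\nu/\nu$:

```python
C_LIGHT = 299_792_458.0  # speed of light [m/s]

# Channel spacing in velocity at the central observed frequency 217104.98 MHz:
dv = C_LIGHT * 76.3e3 / 217_104.98e6
print(round(dv, 1))  # 105.4 m/s, quoted as 105

# Readout/subscan spacing of about one-third of the beam (HPBW ~ 28.5 arcsec):
print(round(28.5 / 3.0, 1))  # 9.5 arcsec
```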
The focus and pointing measurements were carried out by making
CO$(2-1)$ cross maps of the planet Jupiter and the M-type red supergiant
$\alpha$ Orionis (Betelgeuse). The pointing was found to be consistent within
$\sim3\arcsec$. The amount of PWV was $\sim0.6$~mm, which translates into
a zenith transmission of about 96\%. The data were calibrated using the
standard chopper-wheel method, and the typical SSB system temperatures
during the observations were in the range $T_{\rm sys}\sim120-130$~K on a
$T_{\rm A}^{\star}$ scale. The main-beam efficiency needed in the conversion
to the main-beam brightness temperature scale is $\eta_{\rm MB}=0.75$.
The absolute calibration uncertainty is about 10\%.
The {\tt CLASS90} program was used to reduce the spectra.
The individual spectra were Hanning-smoothed to a velocity resolution of
210~m~s$^{-1}$ to improve the S/N ratio of the data, and a third-order
polynomial was applied to correct the baseline in the spectra.
The resulting $1\sigma$ rms noise level of the averaged, smoothed spectra was
about 90~mK on a $T_{\rm A}^{\star}$ scale. The visible spectral lines,
identified by using {\tt Weeds}, were assigned to DCO$^+(3-2)$ and
\textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ (see Table~\ref{table:lines} for
details). The latter line showed an additional velocity component at
$v_{\rm LSR}\simeq1.5$~km~s$^{-1}$, while the systemic velocity of SMM3
is about 8.5~km~s$^{-1}$. The main purpose of these mapping observations was
to search for SiO$(5-4)$ emission at 217\,104.98~MHz, but no signatures of
this shock tracer were detected.
The spectral-line maps were produced using the Grenoble Graphic ({\tt GreG})
program of the GILDAS software package. The data were convolved with a
Gaussian of 1/3 times the HPBW, and hence the effective angular
resolutions of the final DCO$^+(3-2)$ and
\textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ data cubes are $30\farcs7$ and
$30\farcs4$, respectively. The average $1\sigma$ rms noise level of the
completed maps was $\sigma(T_{\rm MB})\sim100$~mK per 0.21~km~s$^{-1}$ channel.
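The quoted effective resolutions can be recovered under the assumption of a diffraction-limited beam, $\theta\simeq1.22\,\lambda/D$ with $D=12$~m, convolved with a Gaussian of FWHM $\theta/3$ (FWHMs adding in quadrature). The sketch below is a sanity check under these assumptions, not a description of the map-making software.

```python
from math import sqrt

C_LIGHT = 299_792_458.0  # speed of light [m/s]
D = 12.0                 # APEX dish diameter [m]
RAD2ARCSEC = 206_264.8

def diffraction_hpbw(nu_hz):
    """Approximate diffraction-limited beam FWHM in arcsec (assumed theta = 1.22 lambda/D)."""
    return 1.22 * (C_LIGHT / nu_hz) / D * RAD2ARCSEC

def convolved_hpbw(theta):
    """Effective FWHM after convolution with a Gaussian of FWHM theta/3 (quadrature sum)."""
    return sqrt(theta**2 + (theta / 3.0)**2)

for name, nu in [("DCO+(3-2)", 216_112.5766e6), ("p-H2CO(3_03-2_02)", 218_222.192e6)]:
    print(name, round(convolved_hpbw(diffraction_hpbw(nu)), 1))
# DCO+(3-2) 30.7 ; p-H2CO(3_03-2_02) 30.4
```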
\subsection{Previous spectral line observations}
In the present work, we also employ the \textit{para}-NH$_3(1,\,1)$ and
$(2,\,2)$ inversion line data obtained with the Effelsberg 100~m
telescope\footnote{The 100~m telescope at Effelsberg/Germany is operated by
the Max-Planck-Institut f\"ur Radioastronomie on behalf of the
Max-Planck-Gesellschaft (MPG).} as described in Paper~II. The angular
resolution (full-width at half maximum or FWHM) of these observations was
$40\arcsec$. The original channel separation was 77~m~s$^{-1}$, but the spectra
were smoothed to the velocity resolution of 154~m~s$^{-1}$.
We note that the observed target position towards SMM3 was $\alpha_{2000.0}=05^{\rm h}42^{\rm m}44\fs4$, and $\delta_{2000.0}=-01\degr16\arcmin03\farcs0$, i.e. $\sim16\farcs7$ northwest of the new target position (Sect.~2.1.1).
In Paper~III, we presented the C$^{17}$O$(2-1)$, DCO$^+$(4-3), N$_2$H$^+(3-2)$,
and N$_2$D$^+(3-2)$ observations carried out with APEX towards the aforementioned NH$_3$ target position. Here, we will employ these data as well.
The HPBW of APEX at the frequencies of the above transitions is in the range
$21\farcs7-27\farcs8$, and the smoothed velocity resolution is 260~m~s$^{-1}$
for N$_2$H$^+$ and DCO$^+$, and 320~m~s$^{-1}$ for C$^{17}$O and N$_2$D$^+$. For
further details, we refer to Paper~III. Spectroscopic parameters of the
species and transitions described in this subsection are also tabulated in
Table~\ref{table:lines}.
\begin{table*}
\scriptsize
\caption{The observed molecular spectral lines and selected spectroscopic parameters.}
\label{table:lines}
\begin{tabular}{c c c c c c}
\hline\hline
Transition & $\nu$ & $E_{\rm u}/k_{\rm B}$ & $\mu$ & $n_{\rm crit}$ & Rotational constants and \\
& [MHz] & [K] & [D] & [cm$^{-3}$] & Ray's parameter ($\kappa$) \\
\hline
\textit{p}-NH$_3(J,\,K=1,\,1)$ & 23\,694.4955 & 23.26 & 1.4719 ($=\mu_C$) & $3.9\times10^3$\tablenotemark{a} & $A=B=298\,192.92$ MHz, \\
& & & & & $C=186\,695.86$ MHz; \\
& & & & & $\kappa=+1\Rightarrow$ oblate symmetric top\\
\textit{p}-NH$_3(J,\,K=2,\,2)$ & 23\,722.6333 & 64.45 & 1.4719 ($=\mu_C$) & $3.08\times10^3$\tablenotemark{a} & \ldots \\
DCO$^+(J=3-2)$ & 216\,112.5766\tablenotemark{b} & 20.74 & 3.888 ($=\mu_A$) & $2.0\times10^6$\tablenotemark{c} & $B=36\,019.76$ MHz; linear molecule\\
\textit{p}-H$_2$CO$(J_{K_a,\,K_c}=3_{0,\,3}-2_{0,\,2})$ & 218\,222.192 & 20.96 & 2.331 ($=\mu_A$) & $2.8\times10^6$\tablenotemark{c} & $A=281\,970.5$ MHz, $B=38\,833.98$ MHz, \\
& & & & & $C=34\,004.24$ MHz; \\
& & & & & $\kappa=-0.961\Rightarrow$ prolate asymmetric top \\
CH$_3$OH-E$_1(J_{K_a,\,K_c}=4_{2,\,2}-3_{1,\,2})$ & 218\,440.050 & 45.46 & 0.899 ($=\mu_A$) & $4.7\times10^6$\tablenotemark{c} & $A=127\,523.4$ MHz, $B=24\,690.2$ MHz, \\
& & & $-1.44$ ($=\mu_B$) & & $C=23\,759.7$ MHz; \\
& & & & & $\kappa=-0.982\Rightarrow$ prolate asymmetric top\\
\textit{p}-H$_2$CO$(J_{K_a,\,K_c}=3_{2,\,2}-2_{2,\,1})$ & 218\,475.632 & 68.09 & 2.331 ($=\mu_A$) & $1.2\times10^6$\tablenotemark{c} & \ldots \\
\textit{p}-H$_2$CO$(J_{K_a,\,K_c}=3_{2,\,1}-2_{2,\,0})$ & 218\,760.066 & 68.11 & 2.331 ($=\mu_A$) & $2.6\times10^6$\tablenotemark{c} & \ldots \\
C$^{18}$O$(J=2-1)$ & 219\,560.3568 & 15.81 & 0.11079 ($=\mu_A$) & $2.0\times10^4$\tablenotemark{c} & $B=54\,891.42$ MHz; linear molecule\\
SO$(N_J=5_6-4_5)$ & 219\,949.442 & 34.98 & 1.55 ($=\mu_A$) & $2.4\times10^6$\tablenotemark{d} & $B=21\,523.02$ MHz; linear molecule\\
$^{13}$CO$(J=2-1)$ & 220\,398.7006\tablenotemark{e} & 15.87 & 0.11046 ($=\mu_A$) & $2.0\times10^4$\tablenotemark{c} & $B=55\,101.01$ MHz; linear molecule \\
C$^{17}$O$(J=2-1)$ & 224\,714.199\tablenotemark{f} & 16.18 & 0.11034 ($=\mu_A$) &$2.1\times10^4$\tablenotemark{c} & $B=56\,179.99$ MHz; linear molecule \\
N$_2$D$^+(J=3-2)$ & 231\,321.912\tablenotemark{g} & 22.20 & 3.40 ($=\mu_A$) & $1.9\times10^6$\tablenotemark{h} & $B=38\,554.71$ MHz; linear molecule \\
N$_2$H$^+(J=3-2)$ & 279\,511.832\tablenotemark{g} & 26.83 & 3.40 ($=\mu_A$) & $3.3\times10^6$\tablenotemark{h} & $B=46\,586.86$ MHz; linear molecule \\
DCO$^+(J=4-3)$ & 288\,143.855\tablenotemark{i} & 34.57 & 3.888 ($=\mu_A$) & $1.9\times10^7$\tablenotemark{c} & $B=36\,019.76$ MHz; linear molecule \\
\hline
\end{tabular}
\tablecomments{The spectroscopic data were compiled from the JPL database except in the cases of CH$_3$OH and $^{13}$CO where the data were taken from the CDMS. In columns (2)--(5) we list the rest frequency, upper-state energy divided by the Boltzmann constant, permanent electric dipole moment, and critical density at 10~K unless otherwise stated. In the last column, we give the rotational constants ($A,\,B,\,C$) and the Ray's asymmetry parameter, which is defined by $\kappa=(2B-A-C)/(A-C)$.}\tablenotetext{a}{From \citealp{maret2009}.}\tablenotetext{b}{Frequency of the strongest hyperfine component $F=4-3$ (JPL).}\tablenotetext{c}{To calculate $n_{\rm crit}$, we used the Einstein $A$ coefficients and collision rates ($C_{\rm ul}$) adopted from the Leiden Atomic and Molecular Database (LAMDA; \citealp{schoier2005}); {\tt http://home.strw.leidenuniv.nl/$\sim$moldata/}.}\tablenotetext{d}{A value of $C_{\rm ul}$ at 60~K from LAMDA was used (i.e. at the lowest temperature value reported in the database).}\tablenotetext{e}{Frequency of the strongest hyperfine component $F=5/2-3/2$ (CDMS).}\tablenotetext{f}{Frequency of the strongest hyperfine component $F=9/2-7/2$ (\citealp{ladd1998}).}\tablenotetext{g}{Frequency of the strongest hyperfine component $F_1,\,F=4,\,5-3,\,4$ (\citealp{pagani2009}; their Tables~4 and 10).}\tablenotetext{h}{To calculate $n_{\rm crit}$, we used the Einstein $A$ coefficients from Pagani et al. (2009), and the N$_2$H$^+$--H$_2$ collision rate from LAMDA.}\tablenotetext{i}{Frequency of the strongest hyperfine component $F=5-4$ (JPL).}
\end{table*}
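The Ray's asymmetry parameter quoted in the last column of Table~\ref{table:lines} follows directly from the tabulated rotational constants via $\kappa=(2B-A-C)/(A-C)$. As a quick consistency check (a sketch, not part of the analysis pipeline):

```python
def rays_kappa(A, B, C):
    """Ray's asymmetry parameter: -1 for a prolate, +1 for an oblate top."""
    return (2.0 * B - A - C) / (A - C)

# Rotational constants in MHz, taken from Table 1
kappa_h2co = rays_kappa(281970.5, 38833.98, 34004.24)   # p-H2CO
kappa_ch3oh = rays_kappa(127523.4, 24690.2, 23759.7)    # CH3OH
```

These evaluate to about $-0.961$ and $-0.982$, reproducing the values listed in the table.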
\subsection{Submillimetre dust continuum data}
In the present study, we use our LABOCA 870~$\mu$m data
first published in Paper~I. However, we have re-reduced
the data using the Comprehensive Reduction Utility for SHARC-2 (Submillimetre
High Angular Resolution Camera II) or CRUSH-2 (version 2.12-2) software package\footnote{{\tt http://www.submm.caltech.edu/$\sim$sharc/crush/index.htm}} (\citealp{kovacs2008}), as explained in more detail in the paper by Miettinen \&
Offner (2013a). The resulting angular resolution was $19\farcs86$ (FWHM),
and the $1\sigma$ rms noise level in the final map was 30~mJy~beam$^{-1}$.
Measuring the flux density of SMM3 inside an aperture of radius equal to the
effective beam FWHM, we obtained a value of $S_{\rm 870\,\mu m}=1.58\pm0.29$~Jy,
where the uncertainty includes both the calibration uncertainty
($\sim10\%$) and the map rms noise around the source (added in quadrature).
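The quoted uncertainty combines the fractional calibration error and the local map rms in quadrature. A minimal sketch of this combination is given below; the 0.24~Jy rms term is an illustrative placeholder, not the value measured from our map:

```python
import math

def flux_uncertainty(flux, calib_frac, sigma_rms):
    """Total flux density error: fractional calibration term and
    local map rms noise added in quadrature."""
    return math.hypot(calib_frac * flux, sigma_rms)

# LABOCA 870 um flux density of SMM3 with a ~10% calibration uncertainty;
# the rms term below is only a placeholder value.
err_870 = flux_uncertainty(1.58, 0.10, 0.24)  # ~0.29 Jy
```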
The SABOCA 350~$\mu$m data published in Paper~III are also used in this study.
Those data were also reduced with CRUSH-2 (version 2.03-2). The obtained
angular resolution was $10\farcs6$ (FWHM), and the $1\sigma$ rms noise was
$\sim60$~mJy~beam$^{-1}$. Again, if the flux density is calculated using
an aperture of radius $10\farcs6$, we obtain
$S_{\rm 350\,\mu m}=4.23\pm1.30$~Jy, where the quoted error includes both
the calibration uncertainty ($\sim30\%$) and the local rms noise.
This value is about 1.3 times lower than the one reported in
Paper~III ($5.4\pm1.6$~Jy, which was based on a clumpfind analysis above
a $3\sigma$ emission threshold). The APEX dust continuum flux densities of SMM3 are tabulated in Table~\ref{table:photometry}.
\subsection{Far-infrared and millimetre data from the literature}
For the purpose of the present study, we use the far-infrared (FIR)
flux densities from S13, and the 2.9~mm flux density of $S_{\rm 2.9\, mm}=115.4\pm3.9$~mJy from Tobin et al. (2015). Stutz et al. (2013) employed the \textit{Herschel}/Photodetector Array Camera \& Spectrometer (PACS; \citealp{pilbratt2010}; \citealp{poglitsch2010}) observations of SMM3 at
70 and 160~$\mu$m. Moreover, they used the \textit{Herschel}/PACS 100~$\mu$m
data from the HGBS. The aperture radii used for the photometry at the
aforementioned three wavelengths were $9\farcs6$, $12\farcs8$, and $9\farcs6$,
respectively, and the flux densities were found to be
$S_{\rm 70\,\mu m}=3.29\pm0.16$~Jy, $S_{\rm 100\,\mu m}=10.91\pm2.79$~Jy, and
$S_{\rm 160\,\mu m}=16.94\pm2.54$~Jy (see Table~4 in S13).
We note that the \textit{Spitzer}/MIPS (the Multiband Imaging Photometer for
\textit{Spitzer}; \citealp{rieke2004}) 70~$\mu$m flux density we determined
in Paper~I, $3.6\pm0.4$~Jy, is consistent with the aforementioned
\textit{Herschel}-based measurement (see Table~\ref{table:photometry}
for the flux density comparison).
\begin{table*}
\caption{Mid-infrared to millimetre photometry of SMM3.}
\label{table:photometry}
\begin{tabular}{c c c c c c c c}
\hline\hline
Reference & $S_{\rm 24\,\mu m}$ & $S_{\rm 70\,\mu m}$\tablenotemark{a} & $S_{\rm 100\,\mu m}$ & $S_{\rm 160\,\mu m}$ & $S_{\rm 350\,\mu m}$ & $S_{\rm 870\,\mu m}$ & $S_{\rm 2.9\,mm}$\\
& [mJy] & [Jy] & [Jy] & [Jy] & [Jy] & [Jy] & [mJy] \\
\hline
This work & \ldots & \ldots & \ldots & \ldots & $4.23\pm1.30$ & $1.58\pm0.29$ & \ldots\\
Paper~I & $5.0\pm0.2$ & $3.6\pm0.4$ & \ldots & \ldots & \ldots & $2.5\pm0.4$ & \ldots\\
Paper~III & \ldots & \ldots & \ldots & \ldots & $5.4\pm1.6$ & \ldots & \ldots\\
\citealp{stutz2013} & $4.74\pm0.3$ & $3.29\pm0.16$ & $10.91\pm2.79$ & $16.94\pm2.54$ & 3.63\tablenotemark{b} & 2.2/1.9\tablenotemark{c}& \ldots \\
\citealp{tobin2015} & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & $115.4\pm3.9$ \\
\hline
\end{tabular}
\tablecomments{See the reference studies and text herein for details on how the tabulated flux densities were measured.}\tablenotetext{a}{The 70~$\mu$m flux density from Paper~I was measured using the \textit{Spitzer}/MIPS data, while S13 used the \textit{Herschel}/PACS data.}\tablenotetext{b}{The authors adopted the SABOCA 350~$\mu$m peak surface brightness from Paper~III.}\tablenotetext{c}{The first value refers to a flux density measured in an aperture with radius equal to the beam FWHM ($19\arcsec$), while the latter one is otherwise the same but represents a background-subtracted value.}
\end{table*}
\section{Observational results}
\subsection{Images of continuum emission}
In Fig.~\ref{figure:images}, we show the SABOCA and LABOCA submm images of SMM3,
and \textit{Spitzer} 4.5~$\mu$m and 24~$\mu$m images of the same region.
We note that the latter two were retrieved from a set of Enhanced Imaging
Products (SEIP) from the \textit{Spitzer} Heritage Archive (SHA)\footnote{{\tt http://sha.ipac.caltech.edu/applications/Spitzer/SHA/}}, which include both the Infrared Array Camera (IRAC; \citealp{fazio2004}) and MIPS Super Mosaics.
The LABOCA 870~$\mu$m dust continuum emission is slightly extended to the
east of the centrally concentrated part of the core. Within this eastern part,
the SABOCA 350~$\mu$m image reveals the presence of two subcondensations,
designated SMM3b and 3c (Paper~III). The \textit{Spitzer} 24~$\mu$m image
clearly shows that the core harbours a central protostar, while the 4.5~$\mu$m
feature slightly east of the 24~$\mu$m peak is probably related to shock
emission. In particular, the 4.5~$\mu$m band is sensitive to shock-excited
H$_2$ and CO spectral line features (e.g. \citealp{smith2005}; \citealp{ybarra2009}; \citealp{debuizer2010}). As indicated by the plus signs in
Fig.~\ref{figure:images}, our previous line observations probed the outer edge
of SMM3, i.e. the envelope region. In contrast, the present single pointing
line observations were made towards the 24~$\mu$m peak position. This
positional difference has to be taken into account when comparing the chemical
properties derived from our spectral line data.
\begin{figure*
\begin{center}
\includegraphics[width=\textwidth]{multi.eps}
\caption{Multiwavelength views of the SMM3 core. From top left to bottom right the panels show the LABOCA 870~$\mu$m, SABOCA 350~$\mu$m, \textit{Spitzer}/MIPS 24~$\mu$m, and \textit{Spitzer}/IRAC 4.5~$\mu$m images. The images are shown with linear scaling, and the colour bars indicate the surface-brightness scale in Jy~beam$^{-1}$ (APEX bolometers) or MJy~sr$^{-1}$ (\textit{Spitzer}). The overlaid LABOCA contours in the top left panel start at $3\sigma$, and increase in steps of $3\sigma$, where $3\sigma=90$~mJy~beam$^{-1}$. The SABOCA contours also start at $3\sigma$, but increase in steps of $5\sigma$ ($1\sigma=60$~mJy~beam$^{-1}$). Both the \textit{Spitzer} images are overlaid with the $3\sigma$ SABOCA contours. The positions of our molecular line observations are marked by plus signs. The 350 $\mu$m condensations, SMM3b and 3c, are also indicated. In the bottom left corner of each panel, a scale bar indicating the 0.05~pc projected length is shown. In the bottom right corner of each panel, the circle shows the beam size (HPBW).}
\label{figure:images}
\end{center}
\end{figure*}
\subsection{Spectral line maps}
In Fig.~\ref{figure:linemaps}, we show the zeroth moment maps or integrated
intensity maps of DCO$^+(3-2)$ and \textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$
plotted as contours on the SABOCA 350~$\mu$m image. The DCO$^+(3-2)$ map
was constructed by integrating the line emission over the local standard of
rest (LSR) velocity range of [7.4, 11.8]~km~s$^{-1}$.
The \textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ line showed two velocity
components. The line emission associated with SMM3 was integrated over
[7.5, 11]~km~s$^{-1}$, while that of the lower-velocity component
($v_{\rm LSR}\simeq1.5$~km~s$^{-1}$) was integrated over $[-0.27, 2.49]$~km~s$^{-1}$. The aforementioned velocity intervals were determined from the average
spectra. The final $1\sigma$ noise levels in the zeroth moment maps were in
the range 0.08--0.16~K~km~s$^{-1}$ (on a $T_{\rm MB}$ scale).
With an offset of only
$\Delta \alpha=-2\farcs6,\, \Delta \delta=3\farcs3$, the DCO$^+(3-2)$ emission
maximum is well coincident with the 350~$\mu$m peak position of the
core. The corresponding offset from our new line observation target position is
$\Delta \alpha=-1\farcs7,\, \Delta \delta=5\farcs3$.
Moreover, the emission is extended to the east (and slightly to the west), which
resembles the dust emission morphology traced by LABOCA.
The \textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ emission, shown by black contours
in Fig.~\ref{figure:linemaps}, is even more elongated in the east-west
direction than that of DCO$^+$. The emission peak is located inside the $7\sigma$ contour of the DCO$^+(3-2)$ emission. We note that the 350~$\mu$m subcondensations SMM3b and 3c lie within the $3\sigma$ contours of both line emissions.
The low-velocity component of \textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$, with a
radial velocity of about 1.5~km~s$^{-1}$, is concentrated in the eastern and
northeastern parts of the mapped region. This is exactly where the
$^{13}$CO$(2-1)$ and C$^{18}$O$(2-1)$ line emissions at $\sim1.3$~km~s$^{-1}$
were found to be concentrated (\citealp{miettinen2012b}). As discussed by
Miettinen (2012b), several other high-density tracer lines at a radial
velocity of 1.3--1.9~km~s$^{-1}$ have been detected towards other cores in
Orion B9 (Papers I--III). Hence, the detection of
\textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ emission at this low velocity comes as
no surprise.
\begin{figure}[H]
\centering
\resizebox{\hsize}{!}{\includegraphics{Miettinen_fig5.eps}}
\caption{Spectral line emission maps overlaid on the SABOCA 350
$\mu$m image. The DCO$^+(3-2)$ emission is shown with white contours plotted
at $3\sigma$, $6\sigma$, and $7\sigma$ ($1\sigma=0.16$~K~km~s$^{-1}$). The
black contours show the \textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ emission,
plotted at $3\sigma$ and $5\sigma$ ($1\sigma=0.1$~K~km~s$^{-1}$). The yellow
contours, which show the low-velocity component of
\textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$, are drawn at $3\sigma$ and $6\sigma$
($1\sigma=0.08$~K~km~s$^{-1}$). The white plus signs mark our new and previous
single-pointing line observation positions. The red crosses indicate the 350~$\mu$m peak positions of SMM3b and 3c (Paper~III). The nested circles in
the bottom left corner indicate the HPBW values of $28\farcs1$ and $30\farcs7$,
i.e. the finest and coarsest angular resolutions of our new line observations
(see Sect.~2). A scale bar indicating the 0.05~pc projected length is shown
in the bottom right corner.}
\label{figure:linemaps}
\end{figure}
\subsection{Spectra and spectral line parameters}
The previously observed spectra are shown in Fig.~\ref{figure:spectra1}. The
target position of these measurements is shown by the northwestern plus sign
in Figs.~\ref{figure:images} and \ref{figure:linemaps}.
The new spectra, observed towards the 24~$\mu$m peak of SMM3, are presented in
Fig.~\ref{figure:spectra2}. The DCO$^+(3-2)$ spectrum shown in the top panel
of Fig.~\ref{figure:spectra2} was extracted from the line emission peak, and,
as mentioned above, that position is well coincident with the 24~$\mu$m and
350~$\mu$m peaks (Fig.~\ref{figure:linemaps}).
The \textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ line shown in
Fig.~\ref{figure:spectra2} can be decomposed into two components,
namely a narrow line at the systemic velocity, and a much broader one with
non-Gaussian line-wing emission. The narrow line probably originates in
the quiescent envelope around the protostar, while the broad component
probably traces the dense ambient gas swept up by an outflow (e.g.
\citealp{yildiz2013}). However, the mapped \textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ data did not show evidence of line wings (and hence we could not separately image the blue and redshifted parts of the line emission). The other two formaldehyde lines ($3_{2,\,1}-2_{2,\,0}$ and $3_{2,\,2}-2_{2,\,1}$) and the CH$_3$OH line shown in Fig.~\ref{figure:spectra2} are also broad, and hence likely originate in the swept-up outflow gas. A hint of an outflow wing emission is also visible in the SO spectrum.
Two velocity components are also seen in the $^{13}$CO spectrum, one at the systemic velocity, and the other at $\sim1.5$~km~s$^{-1}$, the velocity at which \textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ emission was seen in the line maps. The C$^{18}$O and $^{13}$CO spectra exhibit absorption features next to the emission lines. These are caused by emission in the OFF beam positions when chopping between two positions on sky (wobbling secondary). This problem has been recognised in our previous papers on Orion B9, and is difficult to avoid when observing the abundant CO isotopologues. In fact, the detected $^{13}$CO line at the systemic velocity suffers so badly from the subtraction of the off-signal that the line shape and intensity are deformed. For example, the intensity of the C$^{18}$O line appears to be higher than that of the more abundant $^{13}$CO isotopologue. Hence, the $^{13}$CO data are not used in the present study.
The hyperfine structure of the ammonia lines was fitted using
{\tt CLASS90}'s methods NH3$(1,\,1)$ and NH3$(2,\,2)$. The former method
could be used to derive the optical thickness of the main hyperfine group
($\tau_{\rm m}$; see Sect.~4.2.1). The remaining lines
shown in Fig.~\ref{figure:spectra1} are also split into hyperfine components,
and hence were fitted using {\tt CLASS90}'s hyperfine structure method.
Of the newly observed lines, only DCO$^+(3-2)$ (cf.~\citealp{vandertak2009})
and $^{13}$CO$(2-1)$ (Cazzoli et al. 2004) exhibit hyperfine structure.
In Fig.~\ref{figure:spectra2}, the fits to the $^{13}$CO lines are shown, but,
as mentioned above, we do not study the lines further in the present paper.
Single-Gaussian fits to the remaining lines were performed using {\tt CLASS90}.
The obtained line parameters are listed in Table~\ref{table:lineparameters}.
Columns~(2)--(5) in this table give the LSR velocity ($v_{\rm LSR}$),
FWHM linewidth ($\Delta v$), peak intensity ($T_{\rm MB}$), and the
integrated line intensity ($\int T_{\rm MB} {\rm d}v$). Besides the formal
$1\sigma$ fitting errors, the errors in the last two quantities also include
the calibration uncertainty (15\% for the Effelsberg/NH$_3$ data, and 10\% for
our APEX data). We note that rather than using a Gaussian fit, the integrated
intensity of the C$^{17}$O line was computed by integrating over the velocity
range [5.87, 10.14]~km~s$^{-1}$ to take the non-Gaussian shape of the line
into account.
\begin{figure}[H]
\centering
\resizebox{0.7\hsize}{!}{\includegraphics{spec1.eps}}
\caption{Hanning-smoothed spectra originally published in Papers~II and III.
The hyperfine structure fits are shown with green lines. The velocity range
shown in all panels was chosen so that the outer \textit{p}-NH$_3(1,\,1)$
satellite lines can be seen.}
\label{figure:spectra1}
\end{figure}
\begin{figure}[H]
\centering
\resizebox{0.6\hsize}{!}{\includegraphics{spec2.eps}}
\caption{Hanning-smoothed spectra obtained with our new APEX observations.
The DCO$^+(3-2)$ spectrum was extracted from the line emission peak.
The single-Gaussian fits are shown with green lines, while those overlaid on
the DCO$^+$ and $^{13}$CO spectra show the hyperfine structure fits. The
\textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ and $^{13}$CO spectra show two velocity
components. The red vertical line plotted on the former spectrum indicates
the radial velocity of the \textit{p}-NH$_3(1,\,1)$ line. The velocity range
is wider in the fourth panel from top to show the two nearby lines. The
features with negative intensity in the C$^{18}$O and $^{13}$CO spectra are
caused by emission in the observed OFF position.}
\label{figure:spectra2}
\end{figure}
\begin{table*}
\caption{Spectral line parameters.}
\footnotesize
\label{table:lineparameters}
\begin{tabular}{c c c c c c c c}
\hline\hline
Transition & $v_{\rm LSR}$ & $\Delta v$ & $T_{\rm MB}$ & $\int T_{\rm MB} {\rm d}v$ & $\tau$ & $T_{\rm ex}$ & $T_{\rm rot}$ \\
& [km~s$^{-1}$] & [km~s$^{-1}$] & [K] & [K~km~s$^{-1}$] & & [K] & [K]\\
\hline
\textit{p}-NH$_3(1,\,1)$ & $8.40\pm0.01$ & $0.40\pm0.01$ & $2.46\pm0.40$\tablenotemark{a} & $1.91\pm0.30$\tablenotemark{a} & $2.01\pm0.11\,(=\tau_{\rm m})$\tablenotemark{a} & $6.8\pm0.7$ & $10.6\pm0.5$ \\
\textit{p}-NH$_3(2,\,2)$ & $8.42\pm0.02$ & $0.45\pm0.08$ & $0.37\pm0.06$\tablenotemark{a} & $0.23\pm0.04$\tablenotemark{a} & $0.10\pm0.02\,(=\tau_0)$\tablenotemark{b} & $6.8\pm0.7$\tablenotemark{c} & \ldots\\
DCO$^+(3-2)$\tablenotemark{d} & $8.48\pm0.02$ & $0.60\pm0.04$ & $1.51\pm0.19$ & $0.98\pm0.11$ & $0.84\pm0.30\,(=\tau_0)$\tablenotemark{e} & $6.8\pm0.7$\tablenotemark{f} & \ldots \\
\textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$\tablenotemark{g} & $8.46\pm0.01$ & $0.42\pm0.02$ & $0.38\pm0.04$ & $0.17\pm0.02$ & $0.06\pm0.01\,(=\tau_0)$\tablenotemark{h} & $11.2\pm0.5$\tablenotemark{h} & \ldots \\
& $9.26\pm0.08$ & $8.22\pm0.21$ & $0.14\pm0.02$ & $1.21\pm0.12$ & $\ll 1$\tablenotemark{i} & \ldots & $64\pm15$\tablenotemark{i}\\
CH$_3$OH-E$_1(4_{2,\,2}-3_{1,\,2})$ & $9.93\pm0.29$ & $10.98\pm0.80$ & $0.05\pm0.01$ & $0.54\pm0.06$ & $0.0009\pm0.0002\,(=\tau_0)$\tablenotemark{j} & $64\pm15$\tablenotemark{j} & \ldots \\
\textit{p}-H$_2$CO$(3_{2,\,2}-2_{2,\,1})$ & $8.99\pm0.47$ & $10.07\pm1.23$ & $0.03\pm0.01$ & $0.34\pm0.05$ & $\ll 1$\tablenotemark{i} & \ldots & $64\pm15$\tablenotemark{i}\\
\textit{p}-H$_2$CO$(3_{2,\,1}-2_{2,\,0})$ & $9.45\pm0.35$ & $10.92\pm0.83$ & $0.03\pm0.01$ & $0.31\pm0.04$ & $\ll 1$\tablenotemark{i} & \ldots & $64\pm15$\tablenotemark{i}\\
C$^{18}$O$(2-1)$ & $8.66\pm0.01$ & $0.82\pm0.01$ & $1.33\pm0.16$ & $1.17\pm0.12$ & $0.23\pm0.02\,(=\tau_0)$\tablenotemark{k} & $11.2\pm0.5$\tablenotemark{k} & \ldots\\
SO$(5_6-4_5)$ & $8.67\pm0.01$ & $0.68\pm0.02$ & $0.41\pm0.05$ & $0.29\pm0.03$ & $0.06\pm0.01\,(=\tau_0)$\tablenotemark{k} & $11.2\pm0.5$\tablenotemark{k} & \ldots\\
$^{13}$CO$(2-1)$\tablenotemark{g} & $8.38\pm0.15$ & $1.51\pm0.35$ & $0.57\pm0.18$ & $0.93\pm0.27$ & \ldots & \ldots & \ldots\\
& $1.53\pm0.03$ & $0.43\pm0.03$ & $1.32\pm0.26$ & $0.89\pm0.15$ & \ldots & \ldots & \ldots \\
C$^{17}$O$(2-1)$ & $8.68\pm0.06$ & $0.59\pm0.11$ & $0.34\pm0.05$ & $0.54\pm0.07$ & $0.05\pm0.01\,(=\tau_0)$\tablenotemark{k} & $11.2\pm0.5$\tablenotemark{k} & \ldots\\
N$_2$D$^+(3-2)$ & $8.39\pm0.04$ & $0.53\pm0.11$ & $0.21\pm0.03$ & $0.17\pm0.03$ & $0.09\pm0.02\,(=\tau_0)$\tablenotemark{l} & $6.8\pm0.7$\tablenotemark{f} & \ldots\\
N$_2$H$^+(3-2)$ & $8.57\pm0.03$ & $0.85\pm0.09$ & $0.62\pm0.10$ & $0.67\pm0.08$ & $0.36\pm0.11\,(=\tau_0)$\tablenotemark{l} & $6.8\pm0.7$\tablenotemark{f} & \ldots\\
DCO$^+(4-3)$ & $8.54\pm0.03$ & $0.42\pm0.18$ & $0.20\pm0.02$ & $0.09\pm0.02$ & $0.11\pm0.03\,(=\tau_0)$\tablenotemark{e} & $6.8\pm0.7$\tablenotemark{f} & \ldots\\
\hline
\end{tabular}
\tablecomments{The parameters given in columns~(2)--(5) are described in Sect.~3.3, while those in the last three columns are the line optical thickness, excitation temperature, and rotational temperature (Sect.~4.2.1).}\tablenotetext{a}{These values refer to the main group of hyperfine components ($F_1=1-1$ and $F_1=2-2$ for the $(J,\,K)=(1,\,1)$ transition; $F_1=1-1$, $F_1=2-2$, and $F_1=3-3$ for the $(J,\,K)=(2,\,2)$ transition). The total NH$_3(1,\,1)$ line optical thickness is twice the main group value, i.e. $\tau_{\rm tot}=2\tau_{\rm m}$.}\tablenotetext{b}{Peak optical thickness of the strongest hyperfine component ($F=7/2-7/2$, $F_1=3-3$; weight $8/35$) calculated using $T_{\rm ex}[{\rm NH_3}(1,\,1)]$.}\tablenotetext{c}{Assumed to be that of the NH$_3(1,\,1)$ transition.}\tablenotetext{d}{The analysed beam-averaged spectrum was extracted from the line emission peak.}\tablenotetext{e}{The value of $\tau_0$ refers to the strongest hyperfine component, which is $F=4-3$ for $J=3-2$ (relative intensity ${\rm R.I.}=3/7$), and $F=5-4$ for $J=4-3$ (${\rm R.I.}=11/27$).}\tablenotetext{f}{The value of $T_{\rm ex}$ was assumed to be that derived for NH$_3(1,\,1)$, and the value of $\tau_0$ was calculated based on this assumption.}\tablenotetext{g}{Two velocity components were detected. The $^{13}$CO lines are not analysed further in the present work (Sect.~3.3).}\tablenotetext{h}{The narrow line component at the systemic velocity was assumed to be thermalised at the kinetic temperature ($T_{\rm kin}$) derived from NH$_3$. 
The peak optical thickness was then calculated under the assumption that $T_{\rm ex}=T_{\rm kin}$.}\tablenotetext{i}{A rotational diagram method was used to derive $T_{\rm rot}$, under the assumption of optically thin emission.}\tablenotetext{j}{The value of $T_{\rm ex}$ was assumed to be equal to $T_{\rm rot}(p-{\rm H_2CO})$, and $\tau_0$ was estimated accordingly.}\tablenotetext{k}{The line was assumed to be thermalised at $T_{\rm kin}({\rm NH_3})$, and $\tau_0$ was calculated under this assumption. For C$^{17}$O, the relative intensity of the strongest hyperfine component $F=9/2-7/2$ is ${\rm R.I.}=1/3$.}\tablenotetext{l}{The value of $\tau_0$ refers to the strongest hyperfine component, i.e. $J_{F_1F}=3_{45}-2_{34}$ for both N$_2$H$^+$ and N$_2$D$^+$ (${\rm R.I.}=11/63$).}
\end{table*}
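The peak optical thicknesses quoted in Table~\ref{table:lineparameters} are tied to the assumed excitation temperatures. A standard way to invert the detection equation, $T_{\rm MB}=[J(T_{\rm ex})-J(T_{\rm bg})](1-{\rm e}^{-\tau})$, can be sketched as follows; a beam filling factor of unity is assumed here, and the exact treatment used in the paper is described in Sect.~4.2.1:

```python
import math

H_OVER_K = 4.799243e-11  # h/k_B in units of K/Hz

def J(nu, T):
    """Planck-corrected radiation temperature at frequency nu [Hz]."""
    x = H_OVER_K * nu
    return x / math.expm1(x / T)

def tau0(T_mb, T_ex, nu, T_bg=2.725):
    """Peak optical thickness from T_MB = [J(T_ex)-J(T_bg)](1 - e^-tau),
    assuming a beam filling factor of unity."""
    return -math.log(1.0 - T_mb / (J(nu, T_ex) - J(nu, T_bg)))

# DCO+(3-2) towards the line emission peak: T_MB = 1.51 K, T_ex = 6.8 K
tau_dcop = tau0(1.51, 6.8, 216.1125766e9)  # ~0.84
```

For DCO$^+(3-2)$ this yields $\tau_0\simeq0.84$, consistent with the tabulated value of $0.84\pm0.30$.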
\section{Analysis and results}
\subsection{Spectral energy distribution of SMM3 -- modified blackbody fitting}
The SED of SMM3, constructed using the \textit{Herschel}/PACS 70, 100,
and 160~$\mu$m, SABOCA 350~$\mu$m, LABOCA 870~$\mu$m, and CARMA 2.9~mm
flux densities (see Sects.~2.3 and 2.4, and Table~\ref{table:photometry}),
is shown in Fig.~\ref{figure:SED}. The \textit{Spitzer} 24~$\mu$m data point,
which represents a flux density of
$4.74\pm0.3$~mJy from S13, is also shown in the figure,
but it was excluded from the fit (see below). We note that the S13 24~$\mu$m
flux density is close to a value of $5.0\pm0.2$~mJy we determined in Paper~I ($13\arcsec$ aperture; see Table~\ref{table:photometry}). The 24~$\mu$m emission
originates in a warmer dust component closer to the accreting central
protostar, while the longer wavelength data ($\lambda \geq70$~$\mu$m)
are presumably tracing the colder envelope.
The solid line in Fig.~\ref{figure:SED} represents a single-temperature MBB
function fitted to the aforementioned data points.
The fit was accomplished through $\chi^2$ minimisation by simulated annealing (Kirkpatrick et al. 1983). Although more time-consuming, this approach can be better at finding the best-fit solution than the commonly used standard (non-linear) least-squares method, which can be sensitive to the chosen initial values (see also \citealp{bertsimas1993}; \citealp{ireland2007}). The original version of the fitting algorithm was written by J.~Steinacker (M.~Hennemann, priv.~comm.).
It was assumed that the thermal dust emission is optically thin ($\tau \ll 1$).
We note that this assumption is probably good for the wavelengths longward
of 70~$\mu$m, but it gets worse at shorter wavelengths. This, together with the fact that 24~$\mu$m emission originates in a warmer dust component closer to
the accreting central protostar than the longer wavelength emission ($\lambda \geq70$ $\mu$m) arising from the colder envelope, is the reason
why we excluded the 24~$\mu$m flux density from the fit
(e.g. \citealp{ragan2012}). The model fit takes into account the
wavelength-dependence of the dust opacity ($\kappa_{\lambda}$). As the dust
model, we employed the widely used Ossenkopf \& Henning (1994, hereafter OH94) model
describing graphite-silicate dust grains that have coagulated and accreted
thin ice mantles over a period of $10^5$~yr at a gas density of $10^5$~cm$^{-3}$. For the total dust-to-gas mass ratio we adopted a value of
$\delta_{\rm dg}\equiv M_{\rm dust}/M_{\rm gas}=1/141$. This mass ratio is based on the assumption that
the core's chemical composition is similar to the solar mixture, i.e. the mass fractions for hydrogen, helium,
and heavier elements were assumed to be $X=0.71$, $Y=0.27$, and $Z=0.02$,
respectively\footnote{In this case, the ratio between the total mass
(H+He+metals) to hydrogen mass is $1/X\simeq1.41$.}.
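The fitted model is therefore of the optically thin MBB form $S_{\nu}=B_{\nu}(T_{\rm dust})\,\kappa_{\nu}\,M_{\rm dust}/d^2$. A minimal sketch of such a model function is given below; the power-law opacity only approximates the tabulated OH94 values, and the normalisation ($\kappa_0$, $\nu_0$, $\beta$) is illustrative:

```python
import math

H, K_B, C_LIGHT = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI units

def planck_nu(nu, T):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C_LIGHT**2 / math.expm1(H * nu / (K_B * T))

def mbb_flux_jy(nu, T_dust, M_dust_kg, d_m, kappa0=0.1, nu0=1e12, beta=1.8):
    """Optically thin modified blackbody: S_nu = B_nu(T) kappa_nu M_dust / d^2.

    kappa0 [m^2 per kg of dust] at nu0, with kappa_nu ~ nu^beta, is an
    illustrative stand-in for the tabulated OH94 opacities."""
    kappa = kappa0 * (nu / nu0) ** beta
    return planck_nu(nu, T_dust) * kappa * M_dust_kg / d_m**2 * 1e26  # -> Jy
```

A fit would then vary $T_{\rm dust}$ and $M_{\rm dust}$ (with $M_{\rm env}=M_{\rm dust}/\delta_{\rm dg}$) to minimise $\chi^2$ over the $\lambda\geq70$~$\mu$m data points.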
As can be seen in Fig.~\ref{figure:SED}, the PACS data are reasonably well fitted although the
160~$\mu$m flux density is slightly overestimated. The SABOCA data point is
not well fitted, which could be partly caused by spatial filtering owing
to the sky-noise removal. Hence, a ground-based bolometer flux density can
appear lower than what would be expected from the \textit{Herschel} data.
On the other hand, our LABOCA data point is well matched with the MBB fit. Finally, we note that the CARMA 2.9~mm flux density, which is based on the highest angular resolution data used here, is underestimated by the MBB curve. Radio continuum observations would be needed to quantify the amount of free-free contribution at 2.9~mm (cf.~\citealp{wardthompson2011}).
The dust temperature, envelope mass, and luminosity obtained from the SED fit
are $T_{\rm dust}=15.1\pm0.1$~K, $M_{\rm env}=3.1\pm0.6$~M$_{\sun}$, and
$L=3.8\pm0.6$~L$_{\sun}$. However, we emphasise that these values should be taken with some caution because the fit shown in Fig.~\ref{figure:SED} is clearly not perfect. While the 24~$\mu$m emission is expected to trace a warmer dust component than those probed by $\lambda_{\rm obs} \geq70$~$\mu$m observations (e.g. \citealp{ragan2012}), it is possible that our poor single-$T_{\rm dust}$ fit reflects the presence of more than one cold dust component in the protostar's envelope, which would hence call for a multi-$T_{\rm dust}$ fit. However, following S13, and to allow an easier comparison with their results, we opt to use a simplified single-$T_{\rm dust}$ MBB in the present study.
We note that $M_{\rm env}\propto (\kappa_{\lambda}\delta_{\rm dg})^{-1}$, and hence the choice of the dust model (effectively $\kappa_{\lambda}$) and $\delta_{\rm dg}$ mostly affect the envelope mass among the SED parameters derived here (by a factor of two or more; OH94). The adopted dust model can also (slightly) influence the derived values of $T_{\rm dust}$ and $L$ because of the varying dust emissivity index ($\beta$) among the different OH94 models ($\kappa_{\lambda}\propto \lambda^{-\beta}$). The submm luminosity, $L_{\rm submm}$, computed by numerically integrating the fitted SED curve longward of 350~$\mu$m, is about 0.23~L$_{\sun}$, i.e. about $6\%$ of the total luminosity. For Class 0 protostellar cores, the $L_{\rm submm}/L$ ratio is defined to be
$>5\times10^{-3}$, which reflects the condition that the envelope mass exceeds
that of the central protostar, i.e. $M_{\rm env}\gg M_{\star}$ (\citealp{andre1993}, 2000). With an $L_{\rm submm}/L$ ratio about one order of magnitude higher than the definition limit, SMM3 is clearly in the Class 0 regime.
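The classification above reduces to simple arithmetic on the SED-derived quantities:

```python
L_submm, L_bol = 0.23, 3.8   # in L_sun, from the SED fit
ratio = L_submm / L_bol      # ~0.06, i.e. about 6% of the total luminosity
is_class0 = ratio > 5e-3     # Andre et al. (1993) Class 0 criterion
```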
Our $T_{\rm dust}$ value is a factor of 1.4 lower than that obtained by S13 through their MBB analysis, while the values of $M_{\rm env}$ and $L$ we derived are higher by factors of about 9.4 and 1.8,
respectively (see Sect.~1). We note that similarly to the present work,
S13 fitted the data at $\lambda \geq 70$ $\mu$m, but they adopted a slightly different OH94 dust model (coagulation at a density of
$10^6$~cm$^{-3}$ rather than at $10^5$~cm$^{-3}$ as here),
and a slightly higher gas-to-dust ratio than ours ($1.36\times110=149.6$, which
is $6\%$ higher than our value of 141). Hence, we attribute the aforementioned discrepancies to the different SABOCA and LABOCA flux density values used in the analysis (e.g. S13 used the peak surface brightness from our SABOCA map, and their fit underestimated the LABOCA flux density), and to the fact that we have here used the new CARMA 2.9~mm data from Tobin et al. (2015) as well.
Given that Class~0 objects have, by definition, $M_{\rm env}\gg M_{\star}$, an envelope mass of $\sim3$~M$_{\sun}$ derived here might be closer to the true value than a value of $\sim0.3$~M$_{\sun}$ derived by S13. Also, as was already mentioned in Sect.~1, SMM3 was found to be a very bright 2.9~mm-emitter by Tobin et al. (2015), and hence they derived a high mass of $7.0\pm0.7$~M$_{\sun}$ under the assumption that $T_{\rm dust}=20$~K and $\delta_{\rm dg}=1/100$ (their mass is $2.3\pm0.5$ times higher than the present estimate, but a direct comparison with a single-flux density analysis is not feasible). In the context of stellar evolution, if the core star formation efficiency is $\sim30$\% (e.g. \citealp{alves2007}), and the central SMM3 protostar has $M_{\star} \ll M_{\rm env}$, this source could evolve into a near solar-mass star if $M_{\rm env}\sim3$~M$_{\sun}$ as estimated here, while an envelope mass of $\sim0.3$~M$_{\sun}$ would only be sufficient to form a very low-mass single star (near the substellar--stellar limit of $\sim0.1$~M$_{\sun}$). Moreover, the dust temperature we have derived here is closer to the gas kinetic temperature in SMM3 (the ratio between the two is $1.35\pm0.06$; see Sect.~4.2.1) than the value $T_{\rm dust}=21.4\pm0.4$~K from S13. In a high-density protostellar
envelope, the gas temperature is indeed expected to be similar to $T_{\rm dust}$ (e.g. the dust--gas coupling occurs at $\sim10^5$~cm$^{-3}$ in the
\citealp{hollenbach1989} prescription). Finally, the physical implication of the higher luminosity we have derived here -- $1.8\pm0.3$ times the S13 value -- is that
the mass accretion rate of the SMM3 protostar is higher by a similar factor.
\begin{figure}[H]
\centering
\resizebox{\hsize}{!}{\includegraphics{Miettinen_fig19.eps}}
\caption{Spectral energy distribution of SMM3. The square symbols with vertical error bars represent the measured flux densities (\textit{Herschel}/PACS, SABOCA, LABOCA, and CARMA). A modified blackbody fit to the data points is shown
by a solid black line. The \textit{Spitzer} 24~$\mu$m data point from S13 is also indicated (MIPS1), but not used in the fit.}
\label{figure:SED}
\end{figure}
\begin{figure}[H]
\centering
\resizebox{\hsize}{!}{\includegraphics{Miettinen_fig20.eps}}
\caption{Rotational diagram for \textit{p}-H$_2$CO. The left-hand side of
Eq.~(\ref{eq:rot}) is plotted as a function of the energy of the upper level.
The red solid line shows a least-squares fit to the observed data. The resulting values of
$T_{\rm rot}$ and $N$ are indicated.}
\label{figure:rot}
\end{figure}
\subsection{Analysis of the spectral line data}
\subsubsection{Line optical thicknesses, and the excitation, rotational, and
kinetic temperatures}
The optical thickness of the main \textit{p}-NH$_3(1,\,1)$ hyperfine group, $\tau_{\rm m}$, could be derived by fitting the hyperfine structure of the line. The main hyperfine group ($\Delta F=0$) has a relative strength of half the total value, and hence the total optical thickness of
\textit{p}-NH$_3(1,\,1)$ is given by $\tau_{\rm tot}=2\tau_{\rm m}$
($=2\times(2.01\pm0.11)$; see \citealp{mangum1992}; Appendix~A1
therein). The strongest hyperfine component has a relative strength of
$7/30$, which corresponds to a peak optical thickness of $\tau_0\simeq0.94$.
The excitation temperature of the line, $T_{\rm ex}$, was calculated from
the antenna equation ($T_{\rm MB}\propto (1-e^{-\tau})$; see e.g. Eq.~(1)
in Paper~I), assuming that the background tempe\-rature is equal to that of the cosmic microwave
background radiation, i.e. $T_{\rm bg}\equiv T_{\rm CMB}=2.725$~K (\citealp{fixsen2009}).
The obtained value, $T_{\rm ex}=6.8\pm0.7$~K\footnote{We note that
in Paper~II we determined a value of $T_{\rm ex}({\rm NH_3})=6.1\pm0.5$~K from
an unsmoothed \textit{p}-NH$_3(1,\,1)$ spectrum, while the present value
was derived from a smoothed spectrum.}, was also adopted for the \textit{p}-NH$_3(2,\,2)$ line
because its hyperfine satellites were not detected. Using this assumption and
the antenna equation, the peak \textit{p}-NH$_3(2,\,2)$ optical thickness was
determined to be $0.10\pm0.02$. To calculate $\tau_{\rm tot}$, this value should
be scaled by the relative strength of the strongest hyperfine component which
is $8/35$. The value $T_{\rm ex}=6.8\pm0.7$~K was also adopted for the
N$_2$H$^+$, N$_2$D$^+$, and DCO$^+$ lines, although we note that they might
originate in a denser gas than the observed ammonia lines.
Another caveat is that the $J=3-2$ line of DCO$^+$ was extracted from a
position different from the ammonia target position, but, within the errors,
the aforementioned $T_{\rm ex}$ value is expected to be a reasonable choice (e.g.
\citealp{anderson1999}). The values of $\tau_0$ were then derived as in the case
of the $(2,\,2)$ transition of ammonia (see Col.~(6) in
Table~\ref{table:lineparameters}).
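The $T_{\rm ex}$ determination described above amounts to inverting the antenna equation. A minimal sketch is given below; the Planck-equivalent radiation-temperature form of the equation and the bisection bracket are our assumptions, and the round-trip test uses the $\tau_{\rm m}$ and $T_{\rm ex}$ values quoted in the text:

```python
import math

H_OVER_K = 4.7992430e-11   # h/k_B [K per Hz]
T_CMB = 2.725              # cosmic microwave background temperature [K]

def j_nu(temp, nu):
    """Planck-equivalent radiation temperature J_nu(T) [K]."""
    t0 = H_OVER_K * nu
    return t0 / math.expm1(t0 / temp)

def t_ex_from_antenna(t_mb, tau, nu, t_bg=T_CMB):
    """Solve T_MB = [J(T_ex) - J(T_bg)](1 - e^-tau) for T_ex by bisection.
    The bracket assumes 0.1 K < T_ex < 1e4 K."""
    target = t_mb / (1.0 - math.exp(-tau)) + j_nu(t_bg, nu)
    lo, hi = 0.1, 1.0e4
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if j_nu(mid, nu) < target:   # J_nu increases monotonically with T
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```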
Using the $\tau_{\rm m}[p-{\rm NH_3(1,\,1)}]$ value and the intensity ratio
between the $(2,\,2)$ and $(1,\,1)$ lines of \textit{p}-NH$_3$, we derived the
rotational temperature of ammonia ($T_{\rm rot}$; see Eq.~(4) in
\citealp{ho1979}). This calculation assumed that the $T_{\rm ex}$ values, and also the linewidths,
are equal between the two inversion lines. The latter assumption is justified by the
observed FWHM linewidths. The derived value of $T_{\rm rot}$, $10.6\pm0.5$~K,
was converted into an estimate of the gas kinetic temperature using the
$T_{\rm kin}-T_{\rm rot}$ relationship from Tafalla et al. (2004; their
Appendix~B), which is valid in the low-temperature regime of
$T_{\rm kin}\in [5,\,20]$~K. The value we derived, $T_{\rm kin}=11.2\pm0.5$~K\footnote{The quoted value of $T_{\rm kin}$ differs slightly from the one derived
in Paper II ($11.3\pm0.8$~K) because of the smoothed ammonia spectra employed
in the analysis in the present work.}, was adopted as $T_{\rm ex}$ for the observed
CO isotopologue transitions, SO, and the narrow \textit{p}-H$_2$CO line.
The choice of $T_{\rm ex}=T_{\rm kin}$ means that the level populations
are assumed to be thermalised, and this is often done in the case of
C$^{18}$O (e.g. \citealp{hacar2011}), while in the cases of SO and H$_2$CO it
should be taken as a rough estimate only.
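The $T_{\rm rot}\rightarrow T_{\rm kin}$ conversion above can be sketched as follows, assuming the commonly quoted form of the Tafalla et al. (2004, Appendix~B) relation:

```python
import math

def t_kin_from_t_rot(t_rot):
    """NH3 (1,1)/(2,2) rotational-to-kinetic temperature conversion
    (Tafalla et al. 2004, Appendix B form), valid for T_kin in [5, 20] K."""
    return t_rot / (1.0 - (t_rot / 42.0)
                    * math.log(1.0 + 1.1 * math.exp(-16.0 / t_rot)))
```

With $T_{\rm rot}=10.6$~K this returns $T_{\rm kin}\simeq11.2$~K, as quoted above.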
The three broad \textit{p}-H$_2$CO lines we detected allowed us to construct
a rotational diagram for \textit{p}-H$_2$CO. The rotational diagram technique
is well established, and details of the method can be found in a number of
papers (e.g. \citealp{linke1979}; \citealp{turner1991};
\citealp{goldsmith1999}; Anderson et al. 1999; Green et al. 2013). When the line emission is assumed to
be optically thin, the integrated intensity of the line is related to
$T_{\rm rot}$ and the total column density of the species, $N$, according to the
equation
\begin{equation}
\label{eq:rot}
\ln \left[\frac{\int T_{\rm MB}{\rm d}v}{\nu Sg_Kg_I}\right]=\ln \left(\frac{2\pi^2\mu^2}{3k_{\rm B}\epsilon_0}\frac{N}{Z_{\rm rot}}\right)-\frac{1}{T_{\rm rot}}\frac{E_{\rm u}}{k_{\rm B}}\,,
\end{equation}
where $S$ is the line strength, $g_K$ is the $K$-level
degeneracy, $g_I$ is the reduced nuclear spin degeneracy, $\epsilon_0$ is the
vacuum permitti\-vity, and $Z_{\rm rot}$ is the rotational partition function.
The values of $S$ were adopted from the Splatalogue
database\footnote{{\tt http://www.cv.nrao.edu/php/splat/}}. Because H$_2$CO is
an asymmetric top molecule, there is no $K$-level degeneracy, and hence
$g_K=1$. For the \textit{para} form of H$_2$CO ($K_a$ is even), the value
of $g_I$ is $1/4$ (\citealp{turner1991}). The H$_2$CO molecule belongs to
a $C_{2v}$ symmetry group (two vertical mirror planes), and its partition function
at the high-temperature limit ($hA/k_{\rm B}T_{\rm ex} \ll 1$, where $h$ is the Planck
constant) can be approximated as (\citealp{turner1991})
\begin{equation}
\label{eq:part}
Z_{\rm rot}(T_{\rm rot})\simeq\frac{1}{2}\sqrt{\frac{\pi(k_{\rm B}T_{\rm rot})^3}{h^3ABC}}\,.
\end{equation}
The derived rotational diagram, i.e. the left-hand side of Eq.~(\ref{eq:rot})
plotted as a function of $E_{\rm u}/k_{\rm B}$, is shown
in Fig.~\ref{figure:rot}. The red solid line represents a least-squares fit
to the three data points. The fit provides a value of $T_{\rm rot}$ as the
reciprocal of the slope of the line, and $N$ can be calculated from the
$y$-intercept. We note that two of the detected \textit{p}-H$_2$CO transitions have almost the same upper-state energy, i.e. they lie very close to each other in the direction of the $x$-axis in Fig.~\ref{figure:rot}, which makes the fitting results rather poorly constrained. We also note that the
\textit{ortho}-H$_2$CO$(2_{1,\,1}-1_{1,\,1})$ line detected by Kang et al.
(2015) refers to the narrow-line component ($\Delta v=0.45$~km~s$^{-1}$), and hence cannot be employed in our rotational diagram for the broad-line component. The value of $T_{\rm rot}$ we derived is $64\pm15$~K, which in the case of local thermodynamic equilibrium (LTE) is equal to $T_{\rm kin}$. Owing to the common formation route for formaldehyde and methanol (Sect.~5.2.3), the aforementioned $T_{\rm rot}$ value was adopted as $T_{\rm ex}$ for the detected CH$_3$OH line (which then appears to be optically thin). The molecular column density calculations are described in the next subsection.
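The straight-line fit underlying Fig.~\ref{figure:rot} is a simple least-squares problem: the slope gives $-1/T_{\rm rot}$ and the intercept gives $\ln[N/Z_{\rm rot}]$ (plus constants). A minimal sketch, using hypothetical upper-level energies that mimic the \textit{p}-H$_2$CO case (two of the three points nearly coincide in $E_{\rm u}/k_{\rm B}$):

```python
def rot_diagram_fit(e_upper, y):
    """Least-squares line y = a + b * E_u through rotational-diagram points
    (y is the left-hand side of Eq. (rot)); returns (T_rot, a), where
    T_rot = -1/b and a encodes ln(N/Z_rot) plus constants."""
    n = len(e_upper)
    mx = sum(e_upper) / n
    my = sum(y) / n
    sxx = sum((x - mx)**2 for x in e_upper)
    sxy = sum((x - mx) * (v - my) for x, v in zip(e_upper, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return -1.0 / slope, intercept
```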
\subsubsection{Molecular column densities and fractional abundances}
As described above, the beam-averaged column density of \textit{p}-H$_2$CO
for the broad component was derived using the rotational diagram method.
The column densities of the species other than NH$_3$ (see below) were
calculated by using the standard LTE formulation
\begin{equation}
\label{eq:N}
N=\frac{3h\epsilon_0}{2\pi^2}\frac{1}{\mu^2S}\frac{Z_{\rm rot}(T_{\rm ex})}{g_Kg_I}e^{E_u/k_{\rm B}T_{\rm ex}}F(T_{\rm ex})\int \tau(v){\rm d}v \, ,
\end{equation}
where $F(T_{\rm ex})\equiv \left(e^{h\nu/k_{\rm B}T_{\rm ex}}-1\right)^{-1}$.
Here, the electric dipole moment matrix element is defined as
$\left|\mu_{\rm ul} \right|\equiv \mu^2S/g_{\rm u}$, where $g_{\rm u}\equiv g_J=2J+1$
is the rotational degeneracy of the upper state (\citealp{townes1975}).
The values of the product $\mu^2S$ were taken from the Splatalogue database, but
we note that for linear molecules $S$ is simply equal to the rotational quantum
number of the upper state, i.e. $S=J$ (the SO molecule, which possesses a
$^3\Sigma$ (electronic spin is 1) electronic ground state, is an exception;
\citealp{tiemann1974}). For linear molecules, $g_K=g_I=1$ for all levels, while
for the E-type CH$_3$OH, $g_K=2$ and $g_I=1$ (\citealp{turner1991}).
The partition function of the linear molecules was approximated as
\begin{equation}
\label{eq:Z1}
Z_{\rm rot}(T_{\rm ex}) \simeq \frac{k_{\rm B}T_{\rm ex}}{hB}+\frac{1}{3}\,.
\end{equation}
Equation~(\ref{eq:Z1}) is appropriate for heteropolar molecules at a
high-temperature limit of $hB/k_{\rm B}T_{\rm ex} \ll 1$. For SO, however, the
rotational levels with $N\geq1$ are split into three sublevels (triplet of
$N=J-1$, $N=J$, and $N=J+1$). To calculate the partition function of SO,
we used the approximation formulae from Kontinen et al. (2000; Appendix~A
therein). For CH$_3$OH, which has an internal rotor, the partition function
is otherwise similar to that in Eq.~(\ref{eq:part}) but with a numerical
factor of 2 instead of 1/2 (\citealp{turner1991}).
When the spectral line has a Gaussian profile, the last integral term in
Eq.~(\ref{eq:N}) can be expressed as a function of the FWHM linewidth and peak
optical thickness of the line as
\begin{equation}
\label{eq:tau}
\int \tau(v){\rm d}v=\frac{\sqrt{\pi}}{2\sqrt{\ln 2}}\Delta v \tau_0 \simeq1.064\Delta v \tau_0 \,.
\end{equation}
We note that for the lines with hyperfine structure the total optical
thickness is the sum of peak optical thicknesses of the different components.
Moreover, if the line emission is optically thin ($\tau \ll 1$),
$T_{\rm MB}\propto \tau$, and $N$ can be computed from the integrated
line intensity. The values of $\tau$ listed in Col.~(6) in
Table~\ref{table:lineparameters} were used to decide whether the assumption
of optically thin emission is valid (in which case the column density was
calculated from the integrated intensity).
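Equations~(\ref{eq:N}), (\ref{eq:Z1}), and (\ref{eq:tau}) can be combined into a short column-density calculator for a linear molecule. In the sketch below, the C$^{18}$O spectroscopic constants and the line parameters ($\tau_0$, $\Delta v$) used in the consistency checks are illustrative assumptions:

```python
import math

H = 6.62607015e-34       # Planck constant [J s]
K_B = 1.380649e-23       # Boltzmann constant [J/K]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]
DEBYE = 3.33564e-30      # 1 Debye [C m]

def z_linear(t_ex, b_rot):
    """Eq. (Z1): partition function of a linear molecule; b_rot in Hz."""
    return K_B * t_ex / (H * b_rot) + 1.0 / 3.0

def tau_integral(dv_kms, tau0):
    """Eq. (tau): integral of tau over velocity [m/s] for a Gaussian line."""
    return math.sqrt(math.pi) / (2.0 * math.sqrt(math.log(2.0))) * dv_kms * 1e3 * tau0

def column_density(nu, mu2s_debye2, e_u, t_ex, int_tau, z, g_k=1.0, g_i=1.0):
    """Eq. (N): beam-averaged LTE column density [cm^-2].
    nu [Hz], mu^2*S [Debye^2], E_u/k_B [K], int_tau [m/s]."""
    mu2s = mu2s_debye2 * DEBYE**2
    f = 1.0 / math.expm1(H * nu / (K_B * t_ex))      # F(T_ex)
    n_si = (3.0 * H * EPS0 / (2.0 * math.pi**2) / mu2s
            * z / (g_k * g_i) * math.exp(e_u / t_ex) * f * int_tau)
    return n_si * 1.0e-4                              # m^-2 -> cm^-2
```

As a check, the $\simeq1.064\,\Delta v\,\tau_0$ prefactor of Eq.~(\ref{eq:tau}) is recovered, and $N$ scales linearly with the opacity integral, as it must.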
To derive the total column density of NH$_3$, we first calculated that in the
$(1,\,1)$ state, which, by taking into account both parity states of the level,
is given by (e.g. \citealp{harju1993})
\begin{equation}
\label{eq:ammonia1}
N({\rm NH_3})_{(1,\,1)}=N_++N_-=N_+(1+e^{h\nu_{(1,\,1)}/k_{\rm B}T_{\rm ex}})\,.
\end{equation}
The latter equality follows from the Boltzmann population distribution, and
the fact that the two levels have the same statistical weights ($J$ and $K$ do
not change in the inversion transition). Because $N_+$ represents the column
density in the upper state, its value was calculated from a formula that can
be derived by substituting Eq.~(\ref{eq:tau}) into Eq.~(\ref{eq:N}), and
dividing by the term $Z_{\rm rot}/(g_Kg_I)e^{E_u/k_{\rm B}T_{\rm ex}}$.
The value of $S$ for a $(J,\,K)\rightarrow(J,\,K)$ transition is
$S=K^2/[J(J+1)]$. Finally, making
the assumption that at the low temperature of SMM3 only the four lowest
metastable ($J=K$) levels are populated, the value of $N({\rm NH_3})_{(1,\,1)}$
was scaled by the partition function ratio $Z_{\rm rot}/Z_{\rm rot}(1,\,1)$ to
derive the total (\textit{ortho}+\textit{para}) NH$_3$ column density as
\begin{eqnarray}
N({\rm NH_3}) &=& N({\rm NH_3})_{(0,\,0)}+N({\rm NH_3})_{(1,\,1)}\nonumber \\
& & +N({\rm NH_3})_{(2,\,2)}+ \,N({\rm NH_3})_{(3,\,3)}\nonumber \\
&=& N({\rm NH_3})_{(1,\,1)}\times \nonumber \\
& & \left(\frac{1}{3}e^{\frac{23.4}{T_{\rm rot}}}+1+\frac{5}{3}e^{-\frac{41.5}{T_{\rm rot}}}+\frac{14}{3}e^{-\frac{101.2}{T_{\rm rot}}} \right)\,.
\end{eqnarray}
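The two-step NH$_3$ calculation above (Eq.~(\ref{eq:ammonia1}) followed by the metastable-level scaling) can be sketched as:

```python
import math

H_OVER_K = 4.7992430e-11  # h/k_B [K per Hz]
NU_11 = 23.6944955e9      # p-NH3 (1,1) inversion frequency [Hz]

def n11_both_parities(n_upper, t_ex):
    """Eq. (ammonia1): N(1,1) summed over both parity states."""
    return n_upper * (1.0 + math.exp(H_OVER_K * NU_11 / t_ex))

def metastable_scaling(t_rot):
    """Partition-ratio factor converting N(1,1) into the total
    (ortho+para) NH3 column density, assuming only the four lowest
    metastable (J = K) levels are populated."""
    return (1.0 / 3.0 * math.exp(23.4 / t_rot) + 1.0
            + 5.0 / 3.0 * math.exp(-41.5 / t_rot)
            + 14.0 / 3.0 * math.exp(-101.2 / t_rot))
```

For $T_{\rm ex}=6.8$~K the parity factor is $\simeq2.18$, and for $T_{\rm rot}=10.6$~K the metastable scaling factor is $\simeq4.1$.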
The column density analysis presented here assumes that
the line emission fills the telescope beam,
i.e. that the beam filling factor is unity. As can be seen in
Fig.~\ref{figure:linemaps}, the DCO$^+(3-2)$ and
\textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ emissions are somewhat extended with
respect to the 350~$\mu$m-emitting core whose size is comparable
to the beam size of most of our line observations. Moreover,
the detected N-bearing species are often found
to show spatial distributions comparable to the dust emission of dense cores
(e.g. \citealp{caselli2002a}; \citealp{lai2003}; \citealp{daniel2013}). It is still
possible, however, that the assumption of unity filling factor is not correct.
The gas within the beam area can be structured in a clumpy fashion, in which
case the true filling factor is $<1$. The derived beam-averaged column density
is then only a lower limit to the source-averaged value.
The fractional abundances of the molecules were calculated by dividing the
molecular column density by the H$_2$ column density, $x=N/N({\rm H_2})$.
To be directly comparable to the molecular line data, the $N({\rm H_2})$ values
were derived from the LABOCA data smoothed to the resolution of
the line observations (cf.~Eq.~(3) in Paper~I). For this calculation, we
adopted the dust temperature derived from the SED fit ($T_{\rm dust}=15.1 \pm 0.1$~K), except for the broad component of \textit{p}-H$_2$CO and CH$_3$OH for which $T_{\rm dust}$ was assumed to be $64\pm15$~K ($=T_{\rm rot}(p-{\rm H_2CO})$). The mean molecular weight per H$_2$ molecule we used was $\mu_{\rm H_2}=2.82$, and the
dust opacity per unit dust mass at 870~$\mu$m was set to
$\kappa_{\rm 870\,\mu m}=1.38$~cm$^2$~g$^{-1}$ to be consistent with the OH94 dust
model described earlier. The beam-averaged column densities and abundances with
respect to H$_2$ are listed in Table~\ref{table:chemistry}.
\subsubsection{Deuterium fractionation and CO depletion}
The degree of deuterium fractionation in N$_2$H$^+$ was calculated by dividing
the column density of N$_2$D$^+$ by that of N$_2$H$^+$. The obtained value,
$14\%\pm6\%$, is about $40\%$ of the value derived in Paper~III (i.e.
$0.338\pm0.09$ based on a non-LTE analysis).
To estimate the amount by which the CO molecules are depleted in SMM3,
we calculated the CO depletion factors following the analysis presented in
Paper~III with the following modifications. Recently, Ripple et al. (2013)
analysed the CO abundance variation across the Orion giant molecular clouds.
In particular, they derived the $^{13}$CO fractional abundances, and found that
in the self-shielded interiors ($3<A_{\rm V}<10$ mag) of Orion B, the value
of $x({\rm ^{13}CO})$ is $\simeq3.4\times10^{-6}$. On the other hand, towards
NGC~2024 in Orion~B the average [$^{12}$C]/[$^{13}$C] ratio is measured to be
about 68 (\citealp{savage2002}; \citealp{milam2005}). These two values translate
into a canonical (or undepleted) CO abundance of $\simeq2.3\times10^{-4}$.
We note that this is 2.3 times higher than the classic value $10^{-4}$, but
fully consistent with the best-fitting CO abundance of $2.7_{-1.2}^{+6.4}\times10^{-4}$
found by Lacy et al. (1994) towards NGC 2024. Because we derived the C$^{18}$O
and C$^{17}$O abundances towards the core centre and the envelope, respectively,
the canonical abundances of these two species had to be estimated. We assumed
that the [$^{16}$O]/[$^{18}$O] ratio is equal to the average local interstellar
medium value of 557 (\citealp{wilson1999}), and that the [$^{18}$O]/[$^{17}$O]
ratio is that derived by Wouterloot et al. (2008) for the Galactic disk
(Galactocentric distance range of 4--11~kpc), namely 4.16. Based on the aforementioned ratios, the canonical C$^{18}$O and C$^{17}$O abundances were set to $4.1\times10^{-7}$ and $9.9\times10^{-8}$, respectively. With respect to the observed abundances,
the CO depletion factors were derived to be $f_{\rm D}=27.3\pm1.8$ towards the
core centre (C$^{18}$O data), and $f_{\rm D}=8.3\pm0.7$ in the envelope
(C$^{17}$O data). The deuteration level and the CO depletion factors are given
in the last two rows in Table~\ref{table:chemistry}. We note that the non-LTE
analysis presented in Paper~III yielded a value of $f_{\rm D}=10.8\pm2.2$ towards
the core edge, i.e. a factor of $1.3\pm0.3$ times higher than the present value.
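The canonical-abundance bookkeeping and the depletion factors can be reproduced with a few lines; the default isotope ratios are those adopted in the text, and the observed abundances in the checks are taken from Table~\ref{table:chemistry}:

```python
def canonical_abundances(x_13co=3.4e-6, r_12c_13c=68.0,
                         r_16o_18o=557.0, r_18o_17o=4.16):
    """Propagate the adopted isotope ratios into canonical (undepleted)
    CO, C18O, and C17O abundances relative to H2."""
    x_co = r_12c_13c * x_13co
    x_c18o = x_co / r_16o_18o
    x_c17o = x_c18o / r_18o_17o
    return x_co, x_c18o, x_c17o

def depletion_factor(x_canonical, x_observed):
    """CO depletion factor f_D = x_canonical / x_observed."""
    return x_canonical / x_observed
```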
Assuming that the core mass we derived through SED fitting,
$3.1\pm0.6$~M$_{\sun}$, is the mass within an effective radius, which corresponds to the size of the largest photometric
aperture used, i.e. $R_{\rm eff}=19\farcs86$ or $\simeq0.04$~pc, the
volume-averaged H$_2$ number density is estimated to be
$\langle n({\rm H_2})\rangle=1.7\pm0.3\times10^5$~cm$^{-3}$ (see Eq.~(1) in
Paper~III). Following the analysis presented in Miettinen (2012a,
Sect.~5.5 therein), the CO depletion timescale at the aforementioned density (and adopting a $\delta_{\rm dg}$ ratio of 1/141) is estimated to be
$\tau_{\rm dep}\sim3.4\pm0.6\times10^4$~yr. This can be
interpreted as a lower limit to the age of SMM3.
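The volume-averaged density quoted above follows from a uniform-sphere assumption:

```python
import math

M_SUN = 1.989e33   # solar mass [g]
PC = 3.0857e18     # parsec [cm]
M_H = 1.6726e-24   # hydrogen atom mass [g]
MU_H2 = 2.82       # mean molecular weight per H2 molecule

def mean_h2_density(mass_msun, radius_pc):
    """Volume-averaged H2 number density [cm^-3] of a uniform sphere."""
    volume = 4.0 / 3.0 * math.pi * (radius_pc * PC)**3
    return mass_msun * M_SUN / (volume * MU_H2 * M_H)
```

With $M=3.1$~M$_{\sun}$ and $R_{\rm eff}\simeq0.04$~pc, this reproduces $\langle n({\rm H_2})\rangle\simeq1.7\times10^5$~cm$^{-3}$.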
\begin{table}
\caption{Molecular column densities, fractional abundances with respect to
H$_2$, and the degrees of deuteration and CO depletion.}
\small
\label{table:chemistry}
\begin{tabular}{c c c}
\hline\hline
Species & $N$ [cm$^{-2}$] & $x$\\
\hline
NH$_3$ & $1.5\pm0.2\times10^{15}$ & $6.6\pm0.9\times10^{-8}$\\[1ex]
\textit{p}-H$_2$CO\tablenotemark{a} & $1.0\pm0.3\times10^{12}$ & $2.0\pm0.6\times10^{-11}$\\[1ex]
\textit{p}-H$_2$CO\tablenotemark{b} & $2.2\pm0.4\times10^{13}$ & $3.1\pm1.0\times10^{-9}$\\[1ex]
CH$_3$OH & $6.8\times10^{14}$\tablenotemark{c} & $9.4\pm2.5\times10^{-8}$\\[1ex]
C$^{18}$O & $7.1\pm0.8\times10^{14}$ & $1.5\pm0.1\times10^{-8}$\\[1ex]
SO & $8.1\pm1.2\times10^{12}$ & $1.6\pm0.2\times10^{-10}$\\[1ex]
C$^{17}$O & $3.2\pm0.4\times10^{14}$ & $1.2\pm0.1\times10^{-8}$\\[1ex]
N$_2$D$^+$ & $1.7\pm0.5\times10^{12}$ & $4.8\pm1.4\times10^{-11}$\\[1ex]
N$_2$H$^+$ & $1.2\pm0.4\times10^{13}$ & $2.9\pm0.9\times10^{-10}$\\[1ex]
DCO$^+$\tablenotemark{d} & $1.3\pm0.5\times10^{13}$ & $2.6\pm1.0\times10^{-10}$\\[1ex]
DCO$^+$\tablenotemark{e} & $6.2\pm2.9\times10^{11}$ & $2.0\pm0.9\times10^{-11}$\\[1ex]
\hline
& Core centre & Envelope \\[1ex]
[N$_2$D$^+$]/[N$_2$H$^+$] & \ldots & $0.14\pm0.06$ \\[1ex]
$f_{\rm D}({\rm CO})$ & $27.3\pm1.8$ & $8.3\pm0.7$ \\[1ex]
\hline
\end{tabular}
\tablenotetext{a}{The narrow-line component, which likely originates in \\ the quiescent
envelope.}\tablenotetext{b}{The broad-line/warm component, which likely originates in \\
the outflow gas.}\tablenotetext{c}{The estimated error is unrealistically large
(much larger \\ than the nominal value), and is therefore not reported.}\tablenotetext{d}{From the $J=3-2$ line observation towards the core centre.}\tablenotetext{e}{From the previous $J=4-3$ line observation towards the core \\ envelope.}
\end{table}
\section{Discussion}
\subsection{Fragmentation and protostellar activity in SMM3}
Owing to the revised fundamental physical properties of SMM3, we are in a position to re-investigate its fragmentation characteristics. At a gas temperature of
$T_{\rm kin}=11.2\pm0.5$~K, the isothermal sound speed is
$c_{\rm s}=197.5\pm4.4$~m~s$^{-1}$, where the mean molecular weight per free
particle was set to $\mu_{\rm p}=2.37$. The aforementioned values can be used to calculate the thermal Jeans length
\begin{equation}
\lambda_{\rm J}=\sqrt{\frac{\pi c_{\rm s}^2}{G \langle \rho \rangle}}\, ,
\end{equation}
where $G$ is the gravitational constant, the mean mass density is
$\langle \rho \rangle=\mu_{\rm H_2}m_{\rm H}\langle n({\rm H_2})\rangle$, and
$m_{\rm H}$ is the mass of the hydrogen atom. The resulting Jeans length,
$\lambda_{\rm J}\simeq0.05$~pc, is a factor of 1.4 shorter than our previous
estimate (0.07~pc; Paper~III), where the difference can be mainly attributed to the higher gas density derived here. We note that the uncertainty propagated from those of $T_{\rm kin}$ and $\langle n({\rm H_2})\rangle$ is only 1~mpc.
If we use the observed \textit{p}-NH$_3(1,\,1)$ linewidth as
a measure of the non-thermal velocity
dispersion, $\sigma_{\rm NT}$ ($=169.9\pm4.2$~m~s$^{-1}$), the effective sound
speed becomes $c_{\rm eff}=(c_{\rm s}^2+\sigma_{\rm NT}^2)^{1/2}=260.5\pm1.6$~m~s$^{-1}$. The corresponding effective Jeans length is
$\lambda_{\rm J}^{\rm eff}\simeq0.06$~pc. Although not much different from
the purely thermal value, $\lambda_{\rm J}^{\rm eff}$ is in better agreement
with the observed projected distances of SMM3b and 3c from the protostar
position (0.07--0.10~pc). Hence, the parent core might have fragmented as a result of Jeans-type instabi\-lity with density perturbations in a self-gravitating fluid having both the thermal
and non-thermal motions (we note that in Paper~III we suggested a purely thermal Jeans fragmentation scenario owing to the longer $\lambda_{\rm J}$ value derived there). Because information in the core is transported at the sound speed (be it the thermal or the effective value), the fragmentation timescale is
expected to be comparable to the crossing time, $\tau_{\rm cross}=R/c_{\rm eff}$,
where $R=0.07-0.10$~pc. This is equal to
$\tau_{\rm cross}\sim2.6-3.8\times10^5$~yr, which is up to an order of magnitude longer than the estimated nominal CO depletion timescale (Sect.~4.2.3).
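The sound speed, Jeans lengths, and crossing time quoted in this subsection follow from a few standard formulae; a minimal sketch (constants in SI units):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]
M_H = 1.6726e-27    # hydrogen atom mass [kg]
G = 6.674e-11       # gravitational constant [m^3 kg^-1 s^-2]
PC = 3.0857e16      # parsec [m]
YR = 3.156e7        # year [s]

def sound_speed(t_kin, mu_p=2.37):
    """Isothermal sound speed [m/s]."""
    return math.sqrt(K_B * t_kin / (mu_p * M_H))

def jeans_length(c_s, n_h2_cm3, mu_h2=2.82):
    """Thermal Jeans length [pc] for a mean H2 number density in cm^-3."""
    rho = mu_h2 * M_H * n_h2_cm3 * 1.0e6   # kg m^-3
    return math.sqrt(math.pi * c_s**2 / (G * rho)) / PC

def crossing_time(r_pc, c_eff):
    """Crossing time R / c_eff [yr]."""
    return r_pc * PC / c_eff / YR
```

With $T_{\rm kin}=11.2$~K and $\langle n({\rm H_2})\rangle=1.7\times10^5$~cm$^{-3}$ this reproduces $c_{\rm s}\simeq197.5$~m~s$^{-1}$, $\lambda_{\rm J}\simeq0.05$~pc, $c_{\rm eff}\simeq260.5$~m~s$^{-1}$, and $\tau_{\rm cross}\sim2.6\times10^5$~yr at $R=0.07$~pc.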
The present SED analysis and the previous studies (see Sect.~1) suggest
that SMM3 is in the Class~0 phase of stellar evolution. Observational estimates
of the Class~0 lifetime are about $\sim1\times10^5$~yr (\citealp{enoch2009};
\citealp{evans2009}; Maury et al. 2011). In agreement with observations,
Offner \& Arce (2014) performed radiation-hydrodynamic simulations of
protostellar evolution including outflows, and obtained Stage~0 lifetimes of
$1.4-2.3\times10^5$~yr, where the Stage~0 represents a theoretical counterpart
of the observational Class~0 classification.
These observational and theoretical lifetime estimates are comparable to the
fragmentation timescale of SMM3, which supports a scenario of the age of SMM3 being a few times $10^5$~yr.
In the present paper, we have presented the first
signatures of outflow activity in SMM3. These are \textit{i)} the broad
lines of \textit{p}-H$_2$CO and CH$_3$OH; \textit{ii)} the warm gas
($64\pm15$ K) associated with the broad-line component; and \textit{iii)} the
protrusion-like feature seen at 4.5~$\mu$m (Fig.~\ref{figure:images}, bottom
right panel), which is likely related to the shock emission near the accreting
protostar. Outflow activity reinforces the Class 0 evolutionary stage of
SMM3 (e.g. \citealp{bontemps1996}).
The 350~$\mu$m flux densities of the subcondensations SMM3b and 3c are
$250\pm60$~mJy and $240\pm60$~mJy, respectively (Paper~III). Assuming that
the dust temperature is that resulting from the SED of SMM3 ($15.1\pm0.1$~K),
and adopting the same dust model as in Sect.~4.1, in which case the dust
opacity per unit dust mass at 350~$\mu$m is
$\kappa_{\rm 350\, \mu m}=7.84$~cm$^2$~g$^{-1}$, the condensation masses are only $\sim0.06\pm0.01$~M$_{\sun}$. If we instead use as $T_{\rm dust}$ the gas temperature derived from ammonia, the mass estimates increase to about $0.16\pm0.05$~M$_{\sun}$, i.e. a factor of $2.7\pm0.9$ higher. As discussed in the case of the prestellar core Orion B9--SMM6 by Miettinen \& Offner (2013b), these types of very low-mass condensations are likely not able to collapse to form stars without any additional mass accretion. Instead, they could represent the precursors of
substellar-mass objects or brown dwarfs (e.g. \citealp{lee2013}). Alternatively,
if the condensations are gravitationally unbound structures, they could
disperse away in the course of time, an issue that could be solved by
high-resolution molecular line observations. Finally,
mechanical feedback from the protostellar outflow could affect the future
evolution of the condensations (cf.~the proto- and prestellar core
system IRAS~05399-0121/SMM1 in Orion~B9; \citealp{miettinen2013a}).
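The condensation masses above follow from the standard optically thin dust-emission formula, $M=S_\nu d^2 R_{\rm gd}/(\kappa_\nu B_\nu(T_{\rm dust}))$. In the sketch below, the distance of 450~pc to Orion~B9 is an assumption adopted for illustration:

```python
import math

H = 6.62607015e-34   # Planck constant [J s]
K_B = 1.380649e-23   # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]
PC = 3.0857e16       # parsec [m]
M_SUN = 1.989e30     # solar mass [kg]
JY = 1.0e-26         # jansky [W m^-2 Hz^-1]

def planck_nu(nu, temp):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (K_B * temp))

def dust_mass(s_jy, d_pc, t_dust, kappa_cm2_g, r_gd=141.0, lam=350e-6):
    """Gas+dust mass [M_sun] from an optically thin dust continuum
    flux density: M = S_nu d^2 R_gd / (kappa_nu B_nu(T_dust))."""
    nu = C / lam
    kappa = kappa_cm2_g * 0.1    # cm^2 g^-1 -> m^2 kg^-1
    return (s_jy * JY * (d_pc * PC)**2 * r_gd
            / (kappa * planck_nu(nu, t_dust)) / M_SUN)
```

For $S_{\rm 350\,\mu m}=250$~mJy, $T_{\rm dust}=15.1$~K, and $\kappa_{\rm 350\,\mu m}=7.84$~cm$^2$~g$^{-1}$ this yields $\sim0.06$--0.07~M$_{\sun}$, consistent with the condensation masses quoted above; a lower $T_{\rm dust}$ raises the mass, as noted in the text.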
\subsection{Chemical properties of SMM3}
\subsubsection{NH$_3$ and N$_2$H$^+$ abundances}
The fractional abundances of the N-bearing species NH$_3$ and N$_2$H$^+$ we
derived are $6.6\pm0.9\times10^{-8}$ and $2.9\pm0.9\times10^{-10}$.
The value of $x({\rm NH_3})$ in low-mass dense cores is typically found to be
a few times $10^{-8}$ (e.g. \citealp{friesen2009}; \citealp{busquet2009}).
Morgan et al. (2010) derived a mean $x({\rm NH_3})$ value of
$2.6\times10^{-8}$ towards the protostars embedded in bright-rimmed clouds.
Their sources might represent
the sites of triggered star formation, and could therefore resemble the case
of SMM3 -- a core that might have initially formed as a result of external
feedback. More recently, Marka et al. (2012) found that the average NH$_3$
abundance in their sample of globules hosting Class 0 protostars is
$3\times10^{-8}$ with respect to H$_2$.\footnote{The authors reported the
abundances with respect to the total hydrogen column density, which is here assumed to be $N_{\rm H}=2N({\rm H_2})$.} Compared to the aforementioned reference studies,
the ammonia abundance in SMM3 appears to be elevated by a factor of about two or more, although differences in the assumed dust properties should be borne in mind. The chemical modelling of Class 0 sources performed by Marka et al. (2012), which included reactions taking place on dust grain surfaces, predicted that the NH$_3$ abundance exceeds $\sim10^{-8}$ after $10^5$~yr of evolution (see
also \citealp{hilyblant2010} for a comparable result). This compares well with
the fragmentation timescale in SMM3 estimated above. For their sample of
low-mass protostellar cores, Caselli et al. (2002b) found a mean N$_2$H$^+$
abundance of $3\pm2\times10^{-10}$, which is very similar to the one we have
derived for SMM3.
The [NH$_3$]/[N$_2$H$^+$] ratio in SMM3, derived from the corresponding column
densities, is $125\pm45$. The abundance ratio between these two species is
known to show different values in starless and star-forming objects.
For example, Hotzel et al. (2004), who studied the dense cores
B217 and L1262, both associated with Class~I protostars, found that
the above ratio is $\sim140-190$ in the starless parts of the cores, but only
about $\sim60-90$ towards the protostars. Our value, measured towards the
outer edge of SMM3, lies in between these two ranges, and hence is consistent with the observed trend. A similar behaviour is seen in IRAS~20293+3952, a site of clustered star formation (\citealp{palau2007}), and
the clustered low-mass star-forming core Ophiuchus~B (\citealp{friesen2010}).
In contrast, for their sample of dense cores in Perseus, Johnstone et al.
(2010) found that the \textit{p}-NH$_3$/N$_2$H$^+$ column density ratio is
fairly similar in protostellar cores ($20\pm7$) and in prestellar cores
($25\pm12$). Their ratios also appear to be lower than found in other sources
(we note that the statistical equilibrium value of the NH$_3$
\textit{ortho}/\textit{para} ratio is unity; e.g. \citealp{umemoto1999}).
The chemical reactions controlling the [NH$_3$]/[N$_2$H$^+$] ratio were
summarised by Fontani et al. (2012; Appendix~A therein). In starless
cores, the physical conditions are such that both the
CO and N$_2$ molecules can be heavily depleted. If this is the
case, N$_2$H$^+$ cannot be efficiently formed by the reaction between H$_3^+$
and N$_2$. On the other hand, this is counterbalanced by the fact that
N$_2$H$^+$ cannot be destroyed by the gas-phase CO, although it would serve
as a channel for the N$_2$ production (${\rm CO}+{\rm N_2H^+}\rightarrow {\rm HCO^+}+{\rm N_2}$).
Instead, in a gas with strong CO depletion, N$_2$H$^+$ is destroyed by the
dissociative electron recombination. The absence of N$_2$ also diminishes the
production of N$^+$, the cation from which NH$_3$ is ultimately formed via the recombination reaction ${\rm NH_4^+}+{\rm e^-}$. However, the other
routes to N$^+$, namely ${\rm CN}+{\rm He}^+$ and ${\rm NH_2}+{\rm He}^+$, can
still operate. We also note that H$_3^+$, which also cannot be destroyed by CO
in the case of strong CO depletion, is a potential destruction agent of NH$_3$.
However, the end product of the reaction ${\rm NH_3}+{\rm H_3^+}$ is
NH$_4^+$, the precursor of NH$_3$. For these reasons, the NH$_3$ abundance can be sustained at a level where the [NH$_3$]/[N$_2$H$^+$] ratio is higher in starless cores (strong depletion) than in protostellar cores (weaker
depletion). It should be noted that the study of the
high-mass star-forming region AFGL 5142 by Busquet et al. (2011) showed that
the [NH$_3$]/[N$_2$H$^+$] ratio behaves opposite to that in low-mass
star-forming regions. The authors concluded that the higher ratio
seen towards the hot core position is the result of a higher dust temperature,
leading to the desorption of CO molecules from the grain mantles. As a result, the gas-phase CO can destroy the N$_2$H$^+$ molecules, which results in a higher
[NH$_3$]/[N$_2$H$^+$] ratio. Because SMM3 shows evidence for quite a strong CO
depletion of $f_{\rm D}=27.3\pm1.8$ towards the core centre, the chemical scheme described above is probably responsible for the
much higher abundance of ammonia compared to N$_2$H$^+$.
\subsubsection{Depletion and deuteration}
As mentioned above, the CO molecules in SMM3 appear to be quite heavily
depleted towards the protostar position, while the depletion factor is lower by a factor of $3.3\pm0.4$ towards the outer core edge. A caveat here is that the
two depletion factors were derived from two different isotopologues, namely C$^{17}$O for the envelope zone, and C$^{18}$O towards the core centre. This brings into question the direct comparison of the two depletion factors. Indeed, although the critical densities of the detected CO
isotopologue transitions are very similar, the C$^{18}$O linewidth is
$1.4\pm0.3$ times greater than that of C$^{17}$O. Although this is not a significant discrepancy, the observed C$^{18}$O emission could originate in more turbulent parts of the core.
For comparison, for their sample of 20 Class 0 protostellar cores,
Emprechtinger et al. (2009) derived CO depletion factors of $0.3\pm0.09-4.4\pm1.0$. These are significantly lower than what we have derived for SMM3.
The depletion factor we found at the outer edge of SMM3 is more reminiscent
of those seen in low-mass starless cores (e.g. \citealp{bacmann2002};
\citealp{crapsi2005}), but the value towards the core's 24~$\mu$m peak position
stands out as exceptionally high.
The deuterium fractionation of N$_2$H$^+$, or the N$_2$D$^+$/N$_2$H$^+$ column
density ratio, is found to be $0.14\pm0.06$ towards the core edge. This lies
within the range of values found by Roberts \& Millar (2007) for their sample
of Class 0 protostars ($0.06\pm0.01-0.31\pm0.05$). Emprechtinger et al. (2009)
found N$_2$D$^+$/N$_2$H$^+$ ratios in the range $<0.029-0.271\pm0.024$ with an
average value of 0.097. Among their source sample, most objects had a
deuteration level of $<0.1$, while 20\% of the sources showed values of $>0.15$.
With respect to these results, the deuterium fractionation in SMM3 appears to
be at a rather typical level among Class 0 objects. For comparison, in
low-mass starless cores the N$_2$D$^+$/N$_2$H$^+$ ratio can be several tens of
percent (\citealp{crapsi2005}), while intermediate-mass Class 0-type protostars
show values that are more than ten times lower than in SMM3
(\citealp{alonso2010}). A visual inspection of Fig.~3 in Emprechtinger et al.
(2009) suggests that for the N$_2$D$^+$/N$_2$H$^+$ ratio we have derived for
SMM3, the dust temperature is expected to be $\lesssim25$~K.
This is qualitatively consistent with the value of $15.1\pm0.1$~K
we obtained from the MBB SED fit. On the other hand, the correlation in
the middle panel of Fig.~4 in Emprechtinger et al. (2009; see also their
Fig.~10) suggests that the CO depletion factor would be $\sim3$ at the
deuteration level seen in SMM3, while our observed value in the envelope is
$2.8\pm0.2$ times higher. The fact that CO molecules appear to be
more heavily depleted towards the new line observation target position
suggests that the degree of deuterium fractionation there is also higher.
A possible manifestation of this is that the estimated DCO$^+$ abundance is
higher by a factor of $13.0\pm7.7$ towards the core centre than towards the
core edge, but this discrepancy could be partly caused by the different
transitions used in the analysis ($J=3-2$ and $J=4-3$, respectively).
Recently, Kang et al. (2015) derived a deuterium fractionation of formaldehyde
in SMM3 (towards the core centre), and they found a HDCO/H$_2$CO ratio of $0.31\pm0.06$, which is the highest value among their sample of 15 Class~0 objects. This high deuteration level led the authors to conclude that SMM3 is in a very early stage of protostellar evolution.
\subsubsection{H$_2$CO, CH$_3$OH, and SO -- outflow chemistry in SMM3}
Besides the narrow ($\Delta v=0.42$~km~s$^{-1}$) component of the
\textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ line detected towards SMM3, this line
also exhibits a much wider ($\Delta v=8.22$~km~s$^{-1}$) component with
blue- and redshifted wing emission. The other two transitions of
\textit{p}-H$_2$CO we detected, $(3_{2,\,1}-2_{2,\,0})$ and
$(3_{2,\,2}-2_{2,\,1})$, are also broad, more than 10~km~s$^{-1}$ in FWHM,
and exhibit wing emission. The methanol line we detected,
with a FWHM of 10.98~km~s$^{-1}$, is also
significantly broader than most of the lines we have detected.
The similarity between the FWHMs of the methanol and formaldehyde lines suggests that they
originate in a common gas component. The rotational temperature derived from
the \textit{p}-H$_2$CO lines, $64\pm15$~K, is considerably higher than the
dust temperature in the envelope and the gas temperature derived from ammonia.
The large linewidths and the relatively warm gas temperature can be understood
if a protostellar outflow has swept up and shock-heated the surrounding
medium.
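The rotational temperature quoted above comes from a rotational (Boltzmann) diagram: $\ln(N_{\rm u}/g_{\rm u})$ is plotted against the upper-state energy $E_{\rm u}/k$ and fitted with a straight line whose slope equals $-1/T_{\rm rot}$. The sketch below illustrates only this fitting step; the $E_{\rm u}/k$ values are approximate literature values for the three detected \textit{p}-H$_2$CO transitions, and the level populations are synthetic, generated from an assumed temperature rather than taken from our data.

```python
# Approximate upper-state energies E_u/k [K] of the three detected
# p-H2CO transitions (3_03-2_02, 3_21-2_20, 3_22-2_21):
E_u = [21.0, 68.1, 68.1]

# Synthetic level populations drawn from an assumed Boltzmann
# distribution at T = 64 K (the intercept, which is related to the
# total column density and the partition function, is arbitrary here):
T_assumed = 64.0
ln_Nu_gu = [25.0 - e / T_assumed for e in E_u]

# Least-squares straight line ln(N_u/g_u) = a + b * E_u, then T_rot = -1/b
n = len(E_u)
mean_x = sum(E_u) / n
mean_y = sum(ln_Nu_gu) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(E_u, ln_Nu_gu))
     / sum((x - mean_x) ** 2 for x in E_u))
T_rot = -1.0 / b
print(f"T_rot = {T_rot:.1f} K")
```

With real data the scatter of the points about the fitted line propagates into the quoted uncertainty of $\pm15$~K.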
The H$_2$CO and CH$_3$OH molecules are organic species, and they can form
on dust grain surfaces through a common CO hydrogenation reaction sequence
(${\rm CO}\rightarrow {\rm HCO}\rightarrow {\rm H_2CO}\rightarrow {\rm CH_3O}$
or H$_3$CO or CH$_2$OH or H$_2$COH $\rightarrow {\rm CH_3OH}$; e.g.
Watanabe \& Kouchi 2002; \citealp{hiraoka2002}; \citealp{fuchs2009}, and
references therein). The intermediate compound, solid formaldehyde, and the
end product, solid methanol, have both been detected in absorption towards
low-mass young stellar objects (YSOs; Pontoppidan et al. 2003; \citealp{boogert2008}). A more recent study of solid-phase CH$_3$OH in low-mass YSOs by Bottinelli et al. (2010) suggests that much of the CH$_3$OH is in a CO-rich ice layer,
which conforms to the aforementioned formation path. We note that H$_2$CO can also be formed in the gas phase (e.g. \citealp{kahane1984}; \citealp{federman1991}), and the narrow \textit{p}-H$_2$CO line we detected is likely tracing a
quiescent gas not enriched by the chemical compounds formed on dust grains. The
estimated \textit{p}-H$_2$CO abundance for this component is very low, only
$(2.0\pm0.6)\times10^{-11}$. We note that the total H$_2$CO column density derived by Kang et al. (2015), $N({\rm H_2CO})=(3.3\pm0.4)\times10^{12}$~cm$^{-2}$ at $T_{\rm ex}=10$~K, is in good agreement with our \textit{p}-H$_2$CO column density if the \textit{ortho}/\textit{para} ratio is 3:1 as assumed by the authors.
In contrast, the fractional \textit{p}-H$_2$CO abundance is found to be
$155\pm70$ times higher for the broad component than for the narrow one.
The origin of the H$_2$CO abundance
enhancement in low-mass protostars can be understood in terms of the
liberation of the ice mantles (\citealp{schoier2004}). We note that there are
also gas-phase formation routes for CH$_3$OH, which start from the reaction
between CH$_3^+$ and H$_2$O or between H$_3$CO$^+$ and H$_2$CO. The resulting
protonated methanol, CH$_3$OH$_2^+$, can recombine with an electron and
dissociate to produce CH$_3$OH (\citealp{garrod2006}; \citealp{geppert2006}).
However, the gas-phase syntheses are not able to produce high fractional
abundances such as that observed here towards SMM3 ($(9.4\pm2.5)\times10^{-8}$).
The thermal desorption of CH$_3$OH requires a dust temperature of
at least $\sim80$~K (\citealp{brown2007}; \citealp{green2009}). Although highly
uncertain, the H$_2$CO rotational temperature we derived does not suggest the
dust temperature to be sufficiently high for CH$_3$OH molecules to
sublimate. Hence, it seems possible that an outflow driven by SMM3
has sputtered the icy grain mantles (in impacts with gas-phase H$_2$ and He)
so that H$_2$CO and CH$_3$OH were released into the gas phase. On the other
hand, the high CO depletion factors we derived suggest that the grain ices are
rich in CO, and if CH$_3$OH molecules are embedded in CO-rich ice layers, their
thermal evaporation temperature can be considerably lower ($\sim30$~K; see
\citealp{maret2005}).
The \textit{p}-H$_2$CO/CH$_3$OH column density ratio for the broad line
component is found to be $0.03\pm0.005$. This value represents a lower limit
to the total H$_2$CO/CH$_3$OH ratio, which depends on the
\textit{o}/\textit{p} ratio. Based on the observed abundances of both the \textit{ortho} and \textit{para} forms of H$_2$CO in low-mass dense cores, J{\o}rgensen et al. (2005) derived an \textit{o}/\textit{p} ratio of $1.6\pm0.3$. The authors interpreted this to be consistent with thermalisation at 10--15~K on
dust grains. If we assume that the \textit{o}/\textit{p} ratio is $\simeq1.6$,
we obtain a total H$_2$CO/CH$_3$OH column density ratio of $\simeq0.08\pm0.01$, while for an \textit{o}/\textit{p} ratio equal to the
statistical weight ratio of 3:1, the total H$_2$CO/CH$_3$OH ratio becomes
$0.13\pm0.02$. The H$_2$CO/CH$_3$OH ice abundance ratio
in low-mass YSOs is found to be in the range $\sim0.2-6$
(\citealp{boogert2008}), which is higher than the gas-phase abundance ratio towards SMM3. Hence, it is possible that the ices are not completely sublimated into the gas phase. Interestingly, if the total H$_2$CO/CH$_3$OH ratio for SMM3 is
$\sim0.1$, and $x({\rm CH_3OH})\sim10^{-7}$, then these properties would resemble those derived for the Galactic centre clouds
where shocks (caused by expanding bubbles, cloud-cloud collisions, etc.) are
believed to have ejected the species from the grain mantles
(\citealp{requena2006}). In contrast, for the hot interiors of Class 0 sources,
i.e. hot corinos, the H$_2$CO/CH$_3$OH ratio is found to be higher,
in the range $>0.3-4.3$ (\citealp{maret2005}; their Table~3),
which is comparable to the aforementioned ice abundance ratios.
In hot corinos the dust temperature exceeds 100 K, and the evaporation of ice mantles is the result of
radiative heating by the central protostar. Moreover, in the Horsehead photodissociation region (PDR) in Orion~B, the H$_2$CO/CH$_3$OH ratio is found to be $2.3\pm0.4$ (\citealp{guzman2013}). Guzm{\'a}n et al. (2013) concluded that in the
UV-illuminated PDR both H$_2$CO and CH$_3$OH are released from the grain
mantles through photodesorption.
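The conversion between the \textit{para}-only and total H$_2$CO/CH$_3$OH ratios used above is a simple scaling: the total formaldehyde column is the \textit{para} column multiplied by $(1+o/p)$. A short sketch of this arithmetic follows; the small offset from the quoted $0.13\pm0.02$ in the 3:1 case comes from rounding of the $0.03$ input.

```python
def total_ratio(p_ratio, p_err, op):
    """Scale the para-only H2CO/CH3OH ratio to the total (ortho+para) one,
    assuming the o/p ratio itself carries no uncertainty."""
    factor = 1.0 + op
    return p_ratio * factor, p_err * factor

# para-H2CO/CH3OH column density ratio of the broad line component:
p_ratio, p_err = 0.03, 0.005

for op in (1.6, 3.0):
    r, e = total_ratio(p_ratio, p_err, op)
    print(f"o/p = {op}: total H2CO/CH3OH = {r:.2f} +/- {e:.2f}")
```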
The SO line we detected is narrow ($\Delta v=0.68$~km~s$^{-1}$), but
low-intensity wing emission can be seen on both sides of it (see
\citealp{codella2002} for similar spectra towards the CB34 globule, which harbours
a cluster of Class 0 objects). The derived fractional abundance of SO,
$(1.6\pm0.2)\times10^{-10}$, is very low; for example, the average abundance
derived by Buckle \& Fuller (2003) for their sample of Class~0 objects is
$(3.1\pm0.9)\times10^{-9}$, and in the starless TMC-1 cloud the abundance is
found to be $\sim10^{-8}$ (\citealp{lique2006}).
While our narrow SO line is
probably originating in the quiescent envelope, where SO is formed through the
reactions ${\rm S}+{\rm OH}$ and ${\rm S}+{\rm O_2}$ (e.g. \citealp{turner1995}),
the weak line wings provide a hint of outflowing SO gas. The SO emission is
indeed known to be a tracer of protostellar outflows
(e.g. \citealp{chernin1994}; \citealp{lee2010}; \citealp{tafalla2013}).
Outflow shocks can first release H$_2$S molecules from dust grains,
and subsequent hydrogenation reactions produce HS
molecules and S atoms (${\rm H_2S}+{\rm H}\rightarrow {\rm HS}+{\rm H_2}$;
${\rm HS}+{\rm H}\rightarrow {\rm S}+{\rm H_2}$; \citealp{mitchell1984};
\citealp{charnley1997}). The oxidation reactions ${\rm HS}+{\rm O}$ and
${\rm S}+{\rm O_2}$ can then lead to the formation of SO (see
\citealp{bachiller2001}). For example, Lee et al. (2010) derived an SO abundance
of $\sim2\times10^{-6}$ towards the HH211 jet driven by a Class 0 protostar,
which shows that a significant abundance enhancement can take place in
low-mass outflows. Some of the evolutionary models of the sulphur chemistry
by Buckle \& Fuller (2003) suggest that, after $\sim10^5$ yr,
the abundance of H$_2$S starts to drop, which leads to a rapid decrease in
the SO abundance. This could explain the very weak SO wing emission seen
towards SMM3, and agrees with the observational estimates of the
Class 0 lifetime of about $\sim1\times10^5$ yr (e.g. \citealp{evans2009};
see also \citealp{offner2014} for a comparable result from simulations).
Interestingly, some of the Buckle \& Fuller (2003) models, for example the one
with a gas temperature of 10~K, H$_2$ density of $10^5$~cm$^{-3}$, and a
cosmic-ray ionisation rate of $\zeta_{\rm H}=1.3\times10^{-16}$ s$^{-1}$, which is
ten times the standard $\zeta_{\rm H}$ (their Fig.~7, bottom left), predict SO
abundances comparable to that observed in SMM3 (a few times $10^{-10}$) after
$10^5$ yr, so perhaps the narrow-line component could also be (partly) tracing
a gas component that was affected by outflows in the past.
\section{Summary and conclusions}
We used the APEX telescope to carry out follow-up molecular line observations
towards the protostellar core SMM3, which is embedded in the filamentary
Orion B9 star-forming region. The new data were used in conjunction with
our earlier APEX data (including SABOCA and LABOCA continuum data), and NH$_3$
observations from the Effelsberg 100~m telescope. The main results are
summarised as follows.
\begin{enumerate}
\item From the observed frequency range $\sim218.2-222.2$~GHz, the following
chemical compounds were identified: $^{13}$CO, C$^{18}$O, SO,
\textit{p}-H$_2$CO, and CH$_3$OH-E$_1$. The last two species play a key
role in the synthesis of more complex organic molecules and prebiotic
chemistry, which makes them particularly interesting compounds in the
gas reservoir of a solar-type protostar like SMM3.
Our new mapping observations of SMM3 were performed in the frequency range
$\sim215.1-219.1$~GHz, from which DCO$^+(3-2)$ and
\textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ lines were identified.
\item Our revised SED analysis of SMM3 supports its Class 0 classification.
The dust temperature, envelope mass, and luminosity
were derived to be $15.1\pm0.1$~K, $3.1\pm0.6$~M$_{\sun}$, and $3.8\pm0.6$~L$_{\sun}$.
The NH$_3$-based gas kinetic temperature was derived to be $T_{\rm kin}=11.2\pm0.5$~K.
The revised analysis of the subfragments seen in our SABOCA 350~$\mu$m map suggests that SMM3 went
through a Jeans-type fragmentation phase, where the initial density perturbations might have had contributions from both thermal and non-thermal motions.
\item The CO depletion factor derived from the new C$^{18}$O data towards the
core centre is very high, $27.3\pm1.8$, while that
re-computed from our previous C$^{17}$O data towards the core edge is clearly
lower, $8.3\pm0.7$. We also recalculated the degree of deuterium fractionation
in the latter position, in terms of the N$_2$D$^+$/N$_2$H$^+$ ratio, and found a
value of $0.14\pm0.06$. Even higher deuteration is to be expected towards the
new line observation target position because of the stronger CO freeze
out.
\item The new spectral-line mapping observations revealed that SMM3 is
associated with extended DCO$^+$ and \textit{p}-H$_2$CO emission (as compared with the 350~$\mu$m-emitting region), and both the line emissions appear to be
elongated in the east-west direction. Besides the systemic velocity of
$\sim8.5$~km~s$^{-1}$, emission from \textit{p}-H$_2$CO$(3_{0,\,3}-2_{0,\,2})$ was also detected at a radial velocity of 1.5~km~s$^{-1}$, which
concentrates to the east and northeast of SMM3, similarly to the spatial distributions of $^{13}$CO$(2-1)$ and C$^{18}$O$(2-1)$ seen earlier by Miettinen (2012b).
\item The single-pointing observations showed that the $3_{0,\,3}-2_{0,\,2}$
line of \textit{p}-H$_2$CO exhibits two components, a narrow one and a broad
one. The other two \textit{p}-H$_2$CO lines we detected, $3_{2,\,1}-2_{2,\,0}$
and $3_{2,\,2}-2_{2,\,1}$, are also broad. Hence, a rotational diagram was
constructed for the broad component of \textit{p}-H$_2$CO, which yielded a
rotational temperature of $64\pm15$~K. The detected methanol line has a width
comparable to those of the broad formaldehyde lines, and is hence likely
tracing the same warm gas component.
\item We interpret the broad \textit{p}-H$_2$CO and CH$_3$OH lines, and the
e\-levated gas temperature, to be the first clear evidence of shock processing
and outflow activity in SMM3. The abundance of \textit{p}-H$_2$CO in the
broad component is enhanced by two orders of magnitude with respect to the
quiescent gas component. Additionally, the protrusion-like emission feature
seen in the \textit{Spitzer} 4.5~$\mu$m image is likely related to shock emission.
\item The detected SO line shows a narrow component at the systemic velocity,
and weak wings on both sides of it. The wing emission points towards a weak SO
outflow, while the narrow component is probably tracing the quiescent envelope.
\item The estimated fragmentation timescale of SMM3, and the observed chemical
characteristics all suggest that the age of SMM3 is a few times $10^5$~yr,
in agreement with its inferred Class~0 evolutionary stage. A dedicated chemical modelling would be useful in setting tighter constraints on the source age.
\end{enumerate}
Putting the results from the previous studies and the present
one together, we are in a position to place SMM3 in the wider context of Class~0 objects. Stutz et al.
(2013) classified SMM3 as a so-called PACS Bright Red source, or PBRs. This source population is composed of extreme, red Class~0 objects with presumably high-density envelopes and high mass infall rates, and the median values of their MBB-based dust temperature, envelope mass, luminosity, and $L_{\rm submm}/L$ ratio are 19.6~K, 0.6~M$_{\sun}$, 1.8~L$_{\sun}$, and 2.7\% (see Table~8 in S13).\footnote{We note that these median values were calculated by including the SMM3 values derived by S13, but if they are omitted, the median values are essentially the same. The median envelope mass reported here was scaled to the presently assumed dust-to-gas ratio.} Although the physical properties of SMM3 we have derived in the present work are more extreme than the typical PBRs' properties (it is colder, more massive, and more luminous), it can still be classified as a PBRs in agreement with S13 because this population was also found to contain sources with properties comparable to those we have derived. We note that the Orion~B cloud appears to contain a relatively high fraction of PBRs-type objects (17\% of the known protostars in Orion~B) compared to that in Orion~A (1\%; S13).
We can also conclude that SMM3 exhibits a rich chemistry.
It is possible that this Class~0 protostellar core hosts a so-called hot corino where the gas-phase chemistry can be as rich as in the hot molecular cores associated with high-mass star formation. This can be tested through high resolution
interferometric multi-line observations. Such observations would also be useful
to examine whether SMM3 drives a chemically rich/active molecular outflow, as our detection of the broad formaldehyde and methanol lines already suggests. In a more general context of low-mass star formation, SMM3 has the potential to become a useful target source for studies of chemical evolution in a triggered star-forming region (feedback from NGC~2024, which could be ultimately linked to the nearby Ori OB1 association within the Ori-Eri superbubble).
By comparing its properties with those of Class~0 objects in more isolated, quiescent regions,
it could be possible to investigate whether its (chemical) evolution could have been accelerated
as a result of a more dynamic environment. The observed fragmentation of the SMM3 core indeed suggests
that it has had a dynamical history, and is a fairly atypical object compared to the general Class~0 population in the Galaxy.
\acknowledgments
I would like to thank the referee for providing helpful, constructive
comments and suggestions that improved the content of this paper.
This publication is based on data acquired with the Atacama Pathfinder
EXperiment (APEX) under programmes {\tt 079.F-9313(A)}, {\tt 084.F-9304(A)},
{\tt 084.F-9312(A)}, {\tt 092.F-9313(A)}, and {\tt 092.F-9314(A)}.
APEX is a collaboration between the Max-Planck-Institut f\"{u}r
Radioastronomie, the European Southern Observatory, and the Onsala Space
Observatory. I would like to thank the staff at the APEX telescope for
performing the service-mode heterodyne and bolometer observations presented
in this paper. The research for this paper was financially supported by
the Academy of Finland, grant no.~132291. This research has made use of
NASA's Astrophysics Data System, and the NASA/IPAC Infrared Science Archive,
which is operated by the JPL, California Institute of Technology,
under contract with the NASA. This study also made use of APLpy,
an open-source plotting package for Python hosted at
{\tt http://aplpy.github.com}.
\bibliographystyle{plainnat}
\section{Introduction}
In the hunt for $CP$-violating effects of new physics beyond the Standard Model (SM), three-body $B$ meson decays have attracted much attention both theoretically and experimentally. Thanks to the exceptional progress of the experiments, many non-leptonic three-body $B$ meson decays have been measured by the Belle, BaBar, and LHCb collaborations, among others \cite{pdg2018}.
On the other hand, to investigate the three-body non-leptonic $B$ meson decays theoretically, several theoretical frameworks have been proposed,
such as the $SU(3)$ flavor symmetry framework \cite{Savage:1989ub,Lipkin:1991st,Grossman:2003qp,Gronau:2006qn,Bhattacharya:2014eca,Deshpande:2002be,Xu:2013dta,He:2014xha}, the heavy quark effective theory combined with the chiral perturbation theory and the final state interactions \cite{Deshpande:1995nu,Fajfer:1998yc,Deandrea:2000tf,Deandrea:2000ce,Gardner:2001gc,Cheng:2002qu,Fajfer:2004cx,Bediaga:2008zz,Cheng:2013dua,Li:2014oca,Daub:2015xja,Boito:2017jav}, QCD sum rules under the factorization approach \cite{Leitner:2002xh,Chua:2001vh,Chua:2004mi,Zhang:2013oqa,Mohammadi:2014rpa,Mohammadi:2014eia}, the perturbative QCD approach \cite{Chen:2002th,Chen:2004az,Wang:2014qya,Wang:2015uea,Li:2015tja,Wang:2016rlo,Ma:2016csn,Morales:2016pcq,Li:2016tpn}, the final state interaction formalism \cite{Liang:2014tia,Liang:2015qva,Bayar:2014qha} based on the chiral unitary approach (ChUA) \cite{Oller:1997ti,Oset:1997it,Oller:2000ma,Oller:2000fj,Hyodo:2008xr,Oset:2008qh}, and so on.
Aiming at the search for physics beyond the SM, the LHCb collaboration has reported the first observation of the rare three-body decays $B^{0}_{s} \rightarrow \phi\pi^{+}\pi^{-}$ and $B^{0} \rightarrow \phi\pi^{+}\pi^{-}$ \cite{Aaij:2016qnm}\footnote{Note that these decay processes are sometimes referred to as decays of the $\bar{B}^{0}_{s}$ and $\bar{B}^{0}$ mesons, since the two are not distinguished in the experimental measurements due to particle pair production and charge symmetry.}, where the $B^{0}_{s} \rightarrow \phi\pi^{+}\pi^{-}$ decay was investigated with the $\pi^{+}\pi^{-}$ invariant mass required to lie in the range $400 \textsl{ MeV} < m(\pi\pi) < 1600 \textsl{ MeV}$, and resonant contributions from the $\rho(770)$, $f_{0}(980)$, $f_{2}(1270)$, and $f_{0}(1500)$ states were found in the $m(\pi\pi)$ invariant mass spectrum. The decays $B^{0}_{s} \rightarrow \phi\pi^{+}\pi^{-}$ and $B^{0} \rightarrow \phi\pi^{+}\pi^{-}$ are interesting because they are induced by the flavor changing neutral current \cite{Buras:1996wn} processes $b\rightarrow s\bar{s}s$ and $b\rightarrow d\bar{s}s$ at the elementary particle level, which are forbidden at tree level by the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing mechanism of the SM \cite{Wang:2018xux}. However, these decays are sensitive to new physics beyond the SM because their amplitudes are described by loop (penguin) diagrams \cite{Raidal:2002ph}. After the experimental findings, the rare decay $B^{0}_{s} \rightarrow \phi\pi^{+}\pi^{-}$ was studied using the perturbative QCD approach in Ref. \cite{Wang:2018xux}, where the nonperturbative contributions from the $f_0(980)$ resonance are introduced in the distribution amplitudes through the time-like scalar form factor parameterized with the Flatt\'e model. Applying the QCD factorization framework, the three-body decays $B^0_{(s)} \to \phi \pi^+\pi^-$ are also investigated in Ref.
\cite{Estabar:2018ecl}, where the resonant contributions are taken into account in the three-body matrix element in terms of the Breit-Wigner formalism. Besides, also applying the perturbative QCD approach, the work of \cite{Li:2019xwh} studies the direct $CP$ violation in the decay $B^{0}_{s} \to \rho (\omega) \phi \to \phi\pi^{+}\pi^{-}$ via the $\rho$--$\omega$ mixing mechanism.
In the present work, aiming to examine the resonant contributions and to understand the production of the $f_{0}(500)$ and $f_{0}(980)$ states in the final state interactions, we also study the decays $B^{0}_{s} \rightarrow \phi\pi^{+}\pi^{-}$ and $B^{0} \rightarrow \phi\pi^{+}\pi^{-}$ with the final state interaction approach under the ChUA, as done in Refs. \cite{Liang:2014tia,Liang:2015qva,Bayar:2014qha}, where the decays of the $\bar{B}^{0}_{s}$ and $\bar{B}^{0}$ mesons to $J/\psi$ with $\pi^+ \pi^-$ and other final states are studied, in particular $J/\psi$ with a vector meson in Ref. \cite{Bayar:2014qha}. As found in Refs. \cite{Liang:2014tia,Bayar:2014qha}, the $f_{0}(980)$ production is the dominant one in the $\bar{B}^{0}_{s}$ decay, where there is indeed no evident signal for the $f_{0}(500)$ state, in line with the experimental findings \cite{LHCb:2012ae,Aaij:2014emv}, whereas the production of the $f_{0}(500)$ resonance is the dominant one in the $\bar{B}^{0}$ decay. As is well known, the $f_{0}(500)$ (also called the $\sigma$ state), $f_{0}(980)$ and $a_{0}(980)$ states are dynamically reproduced in the coupled channel interactions via the potentials derived from the lowest order chiral Lagrangian \cite{Gasser:1983yg,Bernard:1995dp} in the work of \cite{Oller:1997ti}, following the chiral dynamics of Ref. \cite{Kaiser:1995cy}. Recently, also starting from the ChUA (more applications of the ChUA can be found in the recent reviews \cite{Oller:2019opk,MartinezTorres:2020hus,Oller:2020guq,Guo:2020hli}), the work of \cite{Ahmed:2020kmp} investigated the properties of the $f_{0}(500)$, $f_{0}(980)$ and $a_{0}(980)$ states in detail, where the couplings, the compositeness, the wave functions and the radii are calculated to reveal their nature.
Indeed, the mixing components of the $f_{0}(500)$ and $f_{0}(980)$ states, which both decay mainly into the $\pi\pi$ channel, are studied in Ref. \cite{Agaev:2017cfz} with QCD sum rules, where more experimental data are required to clarify the strange component of the $f_{0}(500)$ resonance, as suggested in Ref. \cite{Agaev:2018sco}. Thus, in the present work, we investigate the properties of the $f_{0}(500)$ and $f_{0}(980)$ states produced via the final state interactions in the decay processes $B^{0}_{s} \rightarrow \phi\pi^{+}\pi^{-}$ and $B^{0} \rightarrow \phi\pi^{+}\pi^{-}$. In the next section, we briefly introduce the formalism of the final state interactions with the ChUA for these two decay processes of the $B^{0}_{s}$ and $B^{0}$ mesons. In the following section, we discuss the vector meson production in the decay processes. Then, we show the results of the $\pi^+\pi^-$ invariant mass distributions and the branching fractions of some decay channels in the following section. At the end, we give a short conclusion.
\section{The model for scalar meson production}
\begin{figure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{pics/feyB0.pdf}
\caption{\footnotesize $B^{0}$ decay with $d \bar{d}$ production.}
\label{fig:fig1a}
\end{subfigure}
\begin{subfigure}{0.48\textwidth}
\centering
\includegraphics[width=1\linewidth]{pics/feyB0s.pdf}
\caption{\footnotesize $B^{0}_{s}$ decay with $s\bar{s}$ production.}
\label{fig:figb}
\end{subfigure}%
\caption{Feynman diagrams for the decays of $B^{0}$ and $B^{0}_{s}$ into $\phi$ and a primary $q \bar{q}$ pair. }
\label{fig:fig1}
\end{figure}
Following the work of Refs. \cite{Liang:2014tia,Liang:2015qva}, where the decays $B^0_{(s)} \to J/\psi\pi^{+}\pi^{-}$ are studied, we investigate the analogous ones, $B^0_{(s)} \to \phi\pi^{+}\pi^{-}$, with the vector meson $J/\psi$ replaced by a $\phi$ at the hadron level. However, looking at the dominant weak decay mechanism, in their cases the $B^0_{(s)}$ decay into $J/\psi$ and a $q \bar{q}$ pair can easily proceed through the tree-level $b \to c$ transition. In our cases, the $B^0_{(s)}$ decay into $\phi$ and a $q \bar{q}$ pair must proceed via a gluonic loop (penguin) transition, see Fig. \ref{fig:fig1}, and thus these decay processes are suppressed, being forbidden at tree level by the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing mechanism; we discuss the suppression effect later. Since the weak decay part of the mechanism is isolated in a dynamical factor (see our formalism later), in our cases we also apply the final state interaction framework to study the decays $B^0_{(s)} \to \phi\pi^{+}\pi^{-}$, focusing on the procedure in which the $q \bar{q}$ pair hadronizes into the final states that then interact with each other, where this part of the interaction can be treated with the coupled channel approach of the ChUA. We show our formalism in detail below. The dominant weak decay mechanisms for the $B^{0}$ and $B^{0}_{s}$ decays, as depicted in Fig. \ref{fig:fig1}, proceed as
\begin{equation}
\begin{aligned}
B^{0} (\bar{b} d) &\Rightarrow (V_{ub} \bar{u} + V_{cb} \bar{c}) \mathit{W^{+}} d \Rightarrow (V_{ub} \bar{u} g + V_{cb} \bar{c} g) \mathit{W^{+}} d \\
&\Rightarrow (V_{ub} V_{ud} + V_{cb} V_{cd} ) (s\bar{s}) (d\bar{d}) \, ,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
B^{0}_{s}(\bar{b} s) &\Rightarrow (V_{ub} \bar{u} + V_{cb} \bar{c}) \mathit{W^{+}} s \Rightarrow (V_{ub} \bar{u} g + V_{cb} \bar{c} g) \mathit{W^{+}} s \\
&\Rightarrow (V_{ub} V_{us} + V_{cb} V_{cs} ) (s\bar{s}) (s\bar{s}) \, ,
\end{aligned}
\end{equation}
where $V_{q_1 q_2}$ is the element of the CKM matrix for the quark transition $q_1 \to q_2$ (see appendix \ref{ckm} for the details of the CKM matrix), and a $\phi$ ($s\bar{s}$) together with a primary $q \bar{q}$ pair is produced at the end. To produce the $\pi^{+}\pi^{-}$ mesons alongside the $\phi$ meson in the final states, the primary $q\bar{q}$ pair must undergo hadronization. To achieve this, an additional $q\bar{q}$ pair, written as $u\bar{u}+d\bar{d}+s\bar{s}$, is generated from the vacuum to accompany the primary $q\bar{q}$, as shown in Fig. \ref{fig:fig2}, where this procedure is formulated as
\begin{equation}
\begin{aligned}
B^{0} &\Rightarrow (V_{ub} V_{ud} + V_{cb} V_{cd}) (s\bar{s}\to \phi) [d\bar{d} \to d\bar{d} \cdot (u\bar{u}+d\bar{d}+s\bar{s}) ] \\
&\Rightarrow (V_{ub} V_{ud} + V_{cb} V_{cd}) (s\bar{s}\to \phi) [ M_{22} \to (M \cdot M)_{22} ] \, ,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
B^{0}_{s} &\Rightarrow (V_{ub} V_{us} + V_{cb} V_{cs} ) (s\bar{s}\to \phi) [ s\bar{s} \to s\bar{s} \cdot (u\bar{u}+d\bar{d}+s\bar{s}) ] \\
& \Rightarrow (V_{ub} V_{us} + V_{cb} V_{cs} ) (s\bar{s}\to \phi) [ M_{33} \to (M \cdot M)_{33} ] \, ,
\end{aligned}
\end{equation}
with the $q\bar{q}$ matrix element $M$ defined as
\begin{equation}
M=\left(\begin{array}{lll}{u \bar{u}} & {u \bar{d}} & {u \bar{s}} \\ {d \bar{u}} & {d \bar{d}} & {d \bar{s}} \\ {s \bar{u}} & {s \bar{d}} & {s \bar{s}}\end{array}\right) .
\label{eq:matrM}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{pics/quark.pdf}
\caption{Procedure for the hadronization $q\bar{q} \rightarrow q\bar{q}(u\bar{u}+d\bar{d}+s\bar{s})$.}
\label{fig:fig2}
\end{figure}
\begin{figure}
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[width=1\linewidth]{pics/feyB0decay.pdf}
\caption{\footnotesize $\pi^{+}\pi^{-}$ produced via the direct plus rescattering mechanisms in the $B^{0}$ decay.}
\end{subfigure}%
\begin{subfigure}{1\textwidth}
\centering
\includegraphics[width=0.7\linewidth]{pics/feyB0sdecay.pdf}
\caption{\footnotesize $\pi^{+}\pi^{-}$ produced via the rescattering mechanism in the $B^{0}_{s}$ decay.}
\end{subfigure}%
\caption{Diagrammatic representation of the $\pi^{+}\pi^{-}$ production in the final state interactions of the $B^{0}$ (a) and $B^{0}_{s}$ (b) decays.}
\label{fig:fig3}
\end{figure}
Furthermore, we can write the matrix elements of $M$ in terms of the physical mesons, which corresponds to
\begin{equation}
\Phi=\left(\begin{array}{ccc}{\frac{1}{\sqrt{2}} \pi^{0}+\frac{1}{\sqrt{6}} \eta} & {\pi^{+}} & {K^{+}} \\ {\pi^{-}} & {-\frac{1}{\sqrt{2}} \pi^{0}+\frac{1}{\sqrt{6}} \eta} & {K^{0}} \\ {K^{-}} & {\bar{K}^{0}} & {-\frac{2}{\sqrt{6}} \eta}\end{array}\right),
\label{eq:Mphi}
\end{equation}
where we take $\eta \equiv \eta_{8}$. With the correspondence between the matrices $M$ and $\Phi$, the hadronization process can be expressed at the hadron level in terms of two pseudoscalar mesons
\begin{equation}
\begin{aligned}
d \bar{d} \cdot (u \bar{u}+d \bar{d}+s \bar{s}) & \to (\Phi \cdot \Phi)_{22} =\pi^{+} \pi^{-}+\frac{1}{2} \pi^{0} \pi^{0}-\frac{1}{\sqrt{3}} \pi^{0} \eta+K^{0} \bar{K}^{0}+\frac{1}{6} \eta \eta ,\\
s \bar{s} \cdot (u \bar{u}+d \bar{d}+s \bar{s}) & \to (\Phi \cdot \Phi)_{33}=K^{-} K^{+}+K^{0} \bar{K}^{0}+\frac{4}{6} \eta \eta \, ,
\end{aligned}
\label{eq4}
\end{equation}
where one can see that only $K\bar{K}$ and $\eta\eta$ pairs are produced in the $B^{0}_{s}$ decay, in contrast to the $B^{0}$ decay, which also yields other channels ($\pi\pi$, for example). As known from the ChUA \cite{Oller:1997ti,Ahmed:2020kmp}, the $f_0(980)$ state is bound mainly by the $K\bar{K}$ component, whereas the $f_0(500)$ resonance is dominated by the $\pi\pi$ channel. Thus, one can expect these two states to contribute differently to the $B^{0}$ and $B^{0}_{s}$ decays; see our results later. Once the final states are hadronized after the weak decay, they can also undergo further interactions, as depicted in Fig. \ref{fig:fig3}. In fact, three processes are taken into account: the $\phi$ emission in the weak decay, the meson pair creation in the $q\bar{q}$ hadronization, and the final state interaction of the hadronic pair, as discussed in detail in Ref. \cite{Miyahara:2015cja}. Then, the amplitudes for these final state production and interaction procedures can be written as
\begin{equation}
\begin{aligned}
t\left(B^{0} \rightarrow \phi \pi^{+} \pi^{-}\right) =& V_{P} (V_{ub} V_{ud} + V_{cb} V_{cd} )\left(1+G_{\pi^{+} \pi^{-}} T_{\pi^{+} \pi^{-} \rightarrow \pi^{+} \pi^{-}}+ \frac{1}{2} G_{\pi^{0} \pi^{0}} T_{\pi^{0} \pi^{0} \rightarrow \pi^{+} \pi^{-}}\right.\\
&\left.+G_{K^{0} \bar{K}^{0}} T_{K^{0} \bar{K}^{0} \rightarrow \pi^{+} \pi^{-}}+ \frac{1}{6} G_{\eta \eta} T_{\eta \eta \rightarrow \pi^{+} \pi^{-}}\right) \, ,
\end{aligned}
\label{eq5}
\end{equation}
\begin{equation}
\begin{aligned}
t\left(B^{0}_{s} \rightarrow \phi \pi^{+} \pi^{-}\right)=& V_{P} (V_{ub} V_{us} +V_{cb} V_{cs} ) \left( G_{K^{+}K^{-}}T_{K^{+}K^{-} \rightarrow \pi^{+} \pi^{-}}+G_{K^{0} \bar{K}^{0}} T_{K^{0} \bar{K}^{0} \rightarrow \pi^{+} \pi^{-}}+ \right.\\
& \left. \frac{4}{6} G_{\eta \eta} T_{\eta \eta \rightarrow \pi^{+} \pi^{-}}\right),
\end{aligned}
\label{eq51}
\end{equation}
where $V_{P}$\footnote{Note that we only use the flavor structure of these processes; the remaining dynamical factors are included in $V_{P}$, which is then taken as a constant, independent of $M_{\text{inv}}$ \cite{Li:2012sw}.} is the production vertex factor, which contains all the dynamical factors and is assumed to be universal for these two reactions because of their similar production dynamics, with the differences specified by the CKM matrix elements $V_{q_1 q_2}$.
It is worth mentioning that, in Eqs. \eqref{eq5} and \eqref{eq51}, the terms involving identical particles (such as $\pi^{0}\pi^{0}$ and $\eta\eta$) carry a factor of $2$ because of the two possibilities in the operators of Eq. \eqref{eq4} to create them; this factor cancels the factor $\frac{1}{2}$ in the corresponding propagators within our normalization scheme, see more discussions in Ref. \cite{Liang:2015qva}.
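As a cross-check of the flavor algebra, the matrix elements of Eq. \eqref{eq4} can be reproduced symbolically by treating the meson fields as commuting symbols. The following sketch is our own illustration using \texttt{sympy} (field names are ours); it expands $(\Phi\cdot\Phi)_{22}$ and $(\Phi\cdot\Phi)_{33}$ from the matrix $\Phi$ of Eq. \eqref{eq:Mphi}:

```python
import sympy as sp

# Meson fields as commuting symbols (sufficient to check the flavor algebra)
pi0, pip, pim, eta = sp.symbols('pi0 pip pim eta')
Kp, Km, K0, K0bar = sp.symbols('Kp Km K0 K0bar')

# The Phi matrix of Eq. (eq:Mphi), with eta = eta_8
Phi = sp.Matrix([
    [pi0/sp.sqrt(2) + eta/sp.sqrt(6), pip,                              Kp],
    [pim,                             -pi0/sp.sqrt(2) + eta/sp.sqrt(6), K0],
    [Km,                              K0bar,                            -2*eta/sp.sqrt(6)],
])

PhiPhi = Phi * Phi
print(sp.expand(PhiPhi[1, 1]))  # (Phi.Phi)_22, the B^0 hadronization
print(sp.expand(PhiPhi[2, 2]))  # (Phi.Phi)_33, the B^0_s hadronization
```

The output reproduces $\pi^+\pi^- + \frac{1}{2}\pi^0\pi^0 - \frac{1}{\sqrt{3}}\pi^0\eta + K^0\bar K^0 + \frac{1}{6}\eta\eta$ and $K^+K^- + K^0\bar K^0 + \frac{4}{6}\eta\eta$, respectively.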
Besides, the scattering amplitude $T_{ij}$ for the transition from channel $i$ to channel $j$ is evaluated with the Bethe-Salpeter equation in the on-shell approximation for the coupled channel interactions,
\begin{equation}
T = [1-VG]^{-1}V , \label{eq:BS}
\end{equation}
where the elements of the diagonal matrix $G$ are the loop functions of the two-meson propagators, given by
\begin{equation}
G_{ii}(s) = i \int \frac{d^4 q}{(2\pi)^4} \frac{1}{q^2 - m_1^2 + i\varepsilon} \, \frac{1}{\left(p_1 + p_2 - q\right)^2 - m_2^2 + i\varepsilon} \, ,
\label{eq:eq6}
\end{equation}
with $p_{1}$ and $p_{2}$ the four-momenta of the two mesons in the given channel, $s=(p_1 + p_2)^2$, and $m_{1}$, $m_{2}$ the corresponding masses. Since Eq. \eqref{eq:eq6} is logarithmically divergent, a regularization scheme must be employed for this singular integral, either the three-momentum cutoff approach \cite{Oller:1997ti}, for which an analytic expression is given in Refs. \cite{Oller:1998hw,Guo:2006fu}, or the dimensional regularization method \cite{Oller:2000fj}. In the present work, we take the cutoff method \cite{Oller:1997ti} for Eq. \eqref{eq:eq6},
\begin{equation}
G_{ii}(s) = \int_0^{q_{\max}} \frac{q^2 \, dq}{(2\pi)^2} \frac{\omega_1 + \omega_2}{\omega_1 \omega_2 \left[ s - \left( \omega_1 + \omega_2 \right)^2 + i\varepsilon \right]} \, ,
\end{equation}
with $q=|\vec{q}\,|$ and $\omega_{i} = (\vec{q}^{\;2} + m_{i}^{2})^{1/2}$, where the free cutoff parameter $q_{\max}$ is chosen as $600$ MeV when the $\eta\eta$ channel is included \cite{Liang:2014tia} and $931$ MeV when it is excluded \cite{Xiao:2019lrj}. Furthermore, the matrix $V$ is constructed from the scattering potentials of the coupled channels, where the elements for the $\pi\pi$ and $K\bar{K}$ channels are taken from Ref. \cite{Oller:1997ti} and those for the $\eta\eta$ channel from Ref. \cite{Gamermann:2006nm}. Thus, after projecting the potentials onto $S$-wave, the elements $V_{ij}$ of the $V$ matrix are given by
\begin{equation}
\begin{aligned}
&V_{11}=-\frac{1}{2 f^{2}} s, \quad V_{12}=-\frac{1}{\sqrt{2} f^{2}}\left(s-m_{\pi}^{2}\right), \quad V_{13}=-\frac{1}{4 f^{2}} s ,\\
&V_{14}=-\frac{1}{4 f^{2}} s, \quad V_{15}=-\frac{1}{3 \sqrt{2} f^{2}} m_{\pi}^{2}, \quad V_{22}=-\frac{1}{2 f^{2}} m_{\pi}^{2} ,\\
&V_{23}=-\frac{1}{4 \sqrt{2} f^{2}} s, \quad V_{24}=-\frac{1}{4 \sqrt{2} f^{2}} s, \quad V_{25}=-\frac{1}{6 f^{2}} m_{\pi}^{2} ,\\
&V_{33}=-\frac{1}{2 f^{2}} s, \quad V_{34}=-\frac{1}{4 f^{2}} s ,\\
&V_{35}=-\frac{1}{12 \sqrt{2} f^{2}}\left(9 s-6 m_{\eta}^{2}-2 m_{\pi}^{2}\right), \quad V_{44}=-\frac{1}{2 f^{2}} s ,\\
&V_{45}=-\frac{1}{12 \sqrt{2} f^{2}}\left(9 s-6 m_{\eta}^{2}-2 m_{\pi}^{2}\right) ,\\
&V_{55}=-\frac{1}{18 f^{2}}\left(16 m_{K}^{2}-7 m_{\pi}^{2}\right),
\end{aligned}
\end{equation}
where the indices 1 to 5 denote the five coupled channels $\pi^{+}\pi^{-}$, $\pi^{0}\pi^{0}$, $K^{+}K^{-}$, $K^{0}\bar{K}^{0}$, and $\eta\eta$, respectively, and $f$ is the pion decay constant, taken as $93$ MeV \cite{Oller:1997ti}. Note that a normalization factor $\frac{1}{\sqrt{2}}$ has already been included in the channels with identical particles, $\pi^0\pi^0$ and $\eta\eta$, and thus no such factor appears in the corresponding loop functions of the $G$ matrix; see Eq. \eqref{eq:BS}.
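To make the construction concrete, the following is a minimal numerical sketch of Eq. \eqref{eq:BS} with the cutoff loop function, for a toy two-channel ($\pi^{+}\pi^{-}$, $K^{+}K^{-}$) problem using the potentials $V_{11}$, $V_{13}=V_{31}$ and $V_{33}$ above. The meson masses and the finite smearing $\varepsilon$ are our assumed inputs for illustration; this is not the full five-channel calculation:

```python
import numpy as np

f = 93.0                      # pion decay constant (MeV), as in the text
m_pi, m_K = 139.57, 493.68    # assumed charged-meson masses (MeV)
q_max = 931.0                 # cutoff (MeV); the value quoted without the eta-eta channel

def G_loop(s, m1, m2, n=4000, eps=1.0e4):
    """Cutoff-regularized two-meson loop function G_ii(s).
    The finite i*eps (MeV^2) smears the unitarity cut so that a plain grid sum
    stays stable near the pole; illustrative, not production-accuracy numerics."""
    q = np.linspace(1e-3, q_max, n)
    w1 = np.sqrt(q**2 + m1**2)
    w2 = np.sqrt(q**2 + m2**2)
    integrand = q**2 / (2*np.pi)**2 * (w1 + w2) / (w1*w2*(s - (w1 + w2)**2 + 1j*eps))
    return np.sum(integrand) * (q[1] - q[0])

def T_matrix(s):
    """T = [1 - V G]^{-1} V for the toy (pi+pi-, K+K-) channels,
    with the S-wave potentials V_11, V_13 = V_31 and V_33 listed above."""
    V = np.array([[-s/(2*f**2), -s/(4*f**2)],
                  [-s/(4*f**2), -s/(2*f**2)]])
    G = np.diag([G_loop(s, m_pi, m_pi), G_loop(s, m_K, m_K)])
    return np.linalg.inv(np.eye(2) - V @ G) @ V

T = T_matrix(980.0**2)   # sqrt(s) near the K Kbar threshold
print(abs(T[0, 0]))      # |T(pi+pi- -> pi+pi-)|
```

Below the $\pi\pi$ threshold the loop function is real and negative, and the resummed $T$ stays symmetric, as the algebraic form $T = V + VGV + VGVGV + \ldots$ requires for a symmetric $V$ and diagonal $G$.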
In order to analyze the $\pi\pi$ invariant mass distributions given in Ref. \cite{Aaij:2016qnm}, we need to evaluate the differential decay width $\frac{d\Gamma}{dM_{\text{inv}}}$ in terms of the $\pi^{+}\pi^{-}$ invariant mass $M_{\text{inv}}$. Before doing that, one needs to know the partial waves of the final states. If the hadronized $q\bar{q}\,(\to\pi\pi)$ pair is in $S$-wave (the $P$-wave case is discussed in the next section), its quantum numbers are $J^P = L^{(-1)^L} = 0^+$, so the primary decay is a $0^{-} \rightarrow 1^{-} + 0^{+}$ transition. Angular momentum conservation then requires a $P$-wave ($L'=1$) for the outgoing vector meson $\phi$, which contributes a term $p_\phi \cos\theta$ to the decay amplitude. Thus, we finally have
\begin{equation}
\frac{d \Gamma}{d M_{\text{inv}}}=\frac{1}{(2 \pi)^{3}} \frac{1}{8 M_{B^0_{(s)}}^{2}} \frac{2}{3} p_{\phi}^{3} \tilde{p}_{\pi} \bar{\sum} \sum \left| t_{B_{(s)}^{0} \rightarrow \phi \pi^{+} \pi^{-}}\right|^{2},
\label{eq10}
\end{equation}
where the factor $\frac{2}{3}$ comes from the integration over $\cos^{2}\theta$. Note that, when we fit the $\pi\pi$ invariant mass distributions, we take $\frac{d \Gamma}{d M_{\text{inv}}} \to C \frac{d \Gamma}{d M_{\text{inv}}}$ with an arbitrary constant $C$ to match the event counts of the experimental data; see our results later. Besides, $p_{\phi}$ is the $\phi$ momentum in the rest frame of the decaying $B^0_{(s)}$ meson, and $\tilde{p}_{\pi}$ the pion momentum in the rest frame of the $\pi^{+}\pi^{-}$ system, given by
\begin{equation}
\begin{aligned}
&p_{\phi}=\frac{\lambda^{1 / 2}\left(M_{B^0_{(s)}}^{2}, M_{\phi}^{2}, M_{\text{inv}}^{2}\right)}{2 M_{B_{(s)}}},\\
&\tilde{p}_{\pi}=\frac{\lambda^{1 / 2}\left(M_{\text{inv}}^{2}, m_{\pi}^{2}, m_{\pi}^{2}\right)}{2 M_{\text {inv }}},
\end{aligned}
\end{equation}
with the usual K\"allen triangle function $\lambda(a, b, c) = a^{2} + b^{2} + c^{2} - 2(ab + ac + bc)$.
\section{The model for vector meson production}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{pics/feyrho.pdf}
\caption{Diagram of $B^{0}_{s}$ decay into $\phi $ and $\rho^{0}$ mesons. }
\label{fig:fig4}
\end{figure}
As discussed above, the hadronized $q\bar{q}\,(\to\pi\pi)$ pair can also be in $P$-wave, with quantum numbers $J^P = L^{(-1)^L} = 1^-$, which corresponds to vector meson production. For the transition $0^- \rightarrow 1^- + 1^-$, the two vector mesons can be in the relative partial waves $L=0,2$ to conserve angular momentum. As done in Ref. \cite{Bayar:2014qha}, we take $L=0$ for simplicity, so no $p_{\phi} \cos\theta$ term appears in the amplitudes. Taking the production of the vector meson $\rho^0$ (besides the $\phi$) as an example (the other cases are discussed later), see Fig. \ref{fig:fig4} for the $B^{0}_{s} \to \phi \rho^{0}$ decay, the amplitude for this decay is given by
\begin{equation}
t_{B^{0}_{s} \rightarrow \phi\rho^{0}} = \frac{1}{\sqrt{2}} \tilde{V}_{P} V_{ub} V_{us}^{*} \, ,
\end{equation}
where the prefactor $\frac{1}{\sqrt{2}}$ is the weight of the $u\bar{u}$ component in the $\rho^{0}$, and $\tilde{V}_{P}$ is the production vertex factor, which contains all the dynamical factors, analogous to $V_P$ above but in general different. The width for the decay $B^0_{(s)} \to \phi V$ into a vector meson $V$ is given by
\begin{equation}
\Gamma_{B^{0}_{(s)} \rightarrow \phi V}=\frac{1}{8 \pi} \frac{1}{m_{B_{(s)}^{0}}^{2}}\left|t_{B^{0}_{(s)} \rightarrow \phi V}\right|^{2} p_{\phi}.
\label{eq13}
\end{equation}
Next, since the produced $\rho^0$ readily decays into $\pi^+\pi^-$, its contribution to the $\pi^+\pi^-$ invariant mass distribution in the $B^{0}_{s}$ decay can be obtained by means of the spectral function, as done in Refs. \cite{Bayar:2014qha,Liang:2014ama,Wang:2020pem},
\begin{equation}
\frac{d \Gamma_{B^{0}_{s} \rightarrow \phi \rho^{0}}}{d M_{\text{inv}}\left(\pi^{+} \pi^{-}\right)}=- \frac{1}{\pi} 2 m_{\rho} \operatorname{Im} \frac{1}{M_{\text{inv}}^{2}-m_{\rho}^{2}+i m_{\rho} \tilde{\Gamma}_{\rho}\left(M_{\text{inv}}\right)} \Gamma_{B^{0}_{s} \rightarrow \phi \rho^{0}},
\label{eq14}
\end{equation}
where $\tilde{\Gamma}_{\rho}\left(M_{\text{inv}}\right)$ is the energy-dependent width of the $\rho^{0}$ decaying into two pions, given by the parameterization,
\begin{equation}
\begin{aligned}
&\tilde{\Gamma}_{\rho}\left(M_{\text{inv}}\right)=\Gamma_{\rho} \left(\frac{p_{\pi}^{\text {off }}}{p_{\pi}^{\text {on }}}\right)^{3},\\
&p_{\pi}^{\mathrm{off}}=\frac{\lambda^{1 / 2}\left(M_{\text{inv}}^{2}, m_{\pi}^{2}, m_{\pi}^{2}\right)}{2 M_{\text{inv}}} \theta\left(M_{\text{inv}}-2 m_{\pi}\right),\\
&p_{\pi}^{\mathrm{on}}=\frac{\lambda^{1 / 2}\left(m_{\rho}^{2}, m_{\pi}^{2}, m_{\pi}^{2}\right)}{2 m_{\rho}},
\end{aligned}
\end{equation}
with $p_{\pi}^{\mathrm{on}}$ ($p_{\pi}^{\mathrm{off}}$) the pion on-shell (off-shell) three-momentum in the rest frame of the decaying $\rho^0$, $\Gamma_{\rho}$ the total $\rho^0$ decay width, taken as $\Gamma_{\rho} = 149.1$ MeV \cite{pdg2018}, and $\theta\left(M_{\text{inv}}-2 m_{\pi}\right)$ the step function.
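A small sketch of the energy-dependent width and the spectral weight of Eq. \eqref{eq14} is given below; the $\rho$ mass is our assumed input, while $\Gamma_\rho$ is the value quoted above:

```python
import math

m_rho, Gamma_rho = 775.26, 149.1   # MeV; mass assumed, width as quoted in the text
m_pi = 139.57

def kallen(a, b, c):
    return a**2 + b**2 + c**2 - 2*(a*b + a*c + b*c)

def gamma_rho_tilde(M_inv):
    """Energy-dependent rho -> pi pi width with the P-wave (p_off/p_on)^3 factor."""
    if M_inv <= 2*m_pi:                          # theta(M_inv - 2 m_pi)
        return 0.0
    p_off = math.sqrt(kallen(M_inv**2, m_pi**2, m_pi**2)) / (2*M_inv)
    p_on  = math.sqrt(kallen(m_rho**2, m_pi**2, m_pi**2)) / (2*m_rho)
    return Gamma_rho * (p_off / p_on)**3

def spectral_weight(M_inv):
    """-(1/pi) * 2 m_rho * Im[1/(M_inv^2 - m_rho^2 + i m_rho Gamma_tilde)],
    the factor multiplying Gamma_{Bs -> phi rho^0} in the mass distribution."""
    D = complex(M_inv**2 - m_rho**2, m_rho * gamma_rho_tilde(M_inv))
    return -(1.0/math.pi) * 2*m_rho * (1.0/D).imag

print(gamma_rho_tilde(m_rho), spectral_weight(m_rho))
```

On the $\rho$ peak the energy-dependent width reduces to $\Gamma_\rho$ and the spectral weight to $2/(\pi\Gamma_\rho)$, which serves as a quick normalization check.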
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{pics/feyrhoB0.pdf}
\includegraphics[width=0.45\linewidth]{pics/feyomega.pdf}
\includegraphics[width=0.45\linewidth]{pics/feyphiphi.pdf}
\includegraphics[width=0.45\linewidth]{pics/feykstar.pdf}
\caption{Feynman diagrams for $B_{0}$ and $B^{0}_{s}$ decays into $\phi$ and other vector mesons.}
\label{fig:fig41}
\end{figure}
Moreover, we can extend the investigation to the other vector meson productions, shown in Fig. \ref{fig:fig41}. Unlike the process of Fig. \ref{fig:fig4}, none of these decays is allowed at the tree level, and they are correspondingly suppressed; see the results later. The amplitudes for the decay diagrams in Fig. \ref{fig:fig41} are written as
\begin{equation}
\begin{array}{ll}
t_{B^{0} \rightarrow \phi \rho^{0}}=-\frac{1}{\sqrt{2}} \tilde{V}_{P}^{\prime} (V_{ub} V_{u d} +V_{c b} V_{c d} ), & t_{B^{0} \rightarrow \phi \omega}=\frac{1}{\sqrt{2}} \tilde{V}_{P}^{\prime} (V_{ub} V_{u d} + V_{c b} V_{c d} ), \\
t_{B^{0}_{s} \rightarrow \phi \phi}=2 \tilde{V}_{P}^{\prime} (V_{ub} V_{u s} +V_{c b} V_{c s} ), & t_{B^{0}_{s} \rightarrow \phi \bar{ K}^{* 0}}=2 \tilde{V}_{P}^{\prime} (V_{ub} V_{u d} +V_{c b} V_{c d} ),
\end{array}
\end{equation}
where the factors $-\frac{1}{\sqrt{2}}$ and $\frac{1}{\sqrt{2}}$ are the weights of the $d\bar{d}$ component in the $\rho^{0}$ and the $\omega$, respectively, and $\tilde{V}_{P}^{\prime}$ is another vertex factor for these hadronization procedures. Note the extra factor of two in the two $B^{0}_{s}$ decay modes, $\phi \phi$ and $\phi \bar{ K}^{* 0}$: there are two possibilities to create the $\phi$, one by the internal gluon, as shown in the lower part of Fig. \ref{fig:fig41}, and the other by the external gluon, analogous to the $B^0$ decays in the upper part of Fig. \ref{fig:fig41}. This is different from the cases of $\Lambda_c$ decay with internal or external $W$ boson exchange discussed in Refs. \cite{Wang:2020pem,Li:2020fqp,Xie:2016evi}; the internal and external $W$ emission mechanisms are also discussed in the recent works on the reactions $D^+\to \pi^+\pi^0\eta$ \cite{Duan:2020vye} and $D^0\to K^-\pi^+\eta$ \cite{Toledo:2020zxj}. Then, one can calculate the decay widths for these modes with vector meson production using Eq. \eqref{eq13}, and thus the corresponding decay ratios; see our results in the next section.
\section{Results}
As discussed in the introduction, the rare decays $B^{0}_{s} \rightarrow \phi\pi^{+}\pi^{-}$ and $B^{0} \rightarrow \phi\pi^{+}\pi^{-}$ have been reported by the LHCb collaboration \cite{Aaij:2016qnm}, where the $\pi\pi$ invariant mass distributions and some related branching fractions were obtained. To look for the resonant contributions in the energy region below about 1 GeV, we show the results for the $\pi\pi$ invariant mass distribution of the $B^{0}_{s}\rightarrow \phi \pi^{+} \pi^{-}$ decay in Fig. \ref{fig:fig5}, plotted only up to 1.1 GeV, within the effective energy range of the ChUA for the meson-meson interactions \cite{Oller:1997ti}. From Fig. \ref{fig:fig5}, one can see that the dominant contribution is the $f_0(980)$ resonance around 1 GeV, consistent with the Flatt\'e-model fit in the experimental analysis of Ref. \cite{Aaij:2016qnm}. As discussed in the formalism above, there are some theoretical uncertainties in the coupled channel interactions with \cite{Gamermann:2006nm} or without \cite{Oller:1997ti} the $\eta\eta$ channel; see the dash (red, with $q_{\max} = 600$ MeV) and dash-dot (black, with $q_{\max} = 931$ MeV) lines in Fig. \ref{fig:fig5}, respectively, where one can see that the line shape of the $f_0(980)$ state is narrower when the contribution of the $\eta\eta$ channel is taken into account. Since the threshold of the $\eta\eta$ channel is not far above the $f_0(980)$, it has nontrivial effects, which introduce some uncertainties into the branching ratios; see our results later. Indeed, when the $\eta\eta$ channel is considered with the same cutoff, the pole of the $f_0(980)$ state moves to lower energies; see the dash-dot (blue) and dash (green) lines of Fig. \ref{fig:fig51}. But, as found in Ref. \cite{Ahmed:2020kmp}, for the bound state of the $K\bar{K}$ channel one should decrease the cutoff to move the pole of the $f_0(980)$ state back to higher energy when the $\eta\eta$ channel is included, which leads the width of the pole to decrease; see the solid (red) line of Fig. \ref{fig:fig51} and more discussions in Ref. \cite{Ahmed:2020kmp}. This is why the peak of the $f_0(980)$ state becomes narrower when we add the $\eta\eta$ coupled channel, which is different from the interference effects in the case of the narrow $\sigma$ state in the $J/\psi \to p \bar{p} \pi^+\pi^-$ decay \cite{Li:2003zi} and the $J/\psi \to \omega\pi\pi$ decay \cite{Roca:2004uc}. Furthermore, the $\eta\eta$ channel was also taken into account in the investigation of the $D^+\to K^-K^+K^+$ decay in a recent work \cite{Roca:2020lyi}, where the $N/D$ method was used for the scattering amplitudes, as done in Ref. \cite{Oller:1998zr}, to extend the applicability range above 1 GeV, and the dominant contribution was found to be that of the $a_0(980)$ near the $K\bar{K}$ threshold.
In the last section, we also took into account the contributions from vector meson production, with the $\pi^+\pi^-$ final state in $P$-wave; see the dot (magenta) line in Fig. \ref{fig:fig5}, which shows the $\rho$ meson contribution, as commented on in the experimental analysis \cite{Aaij:2016qnm}. As shown by the solid (cyan) line in Fig. \ref{fig:fig5}, the sum of the contributions of the two resonances, $f_0(980)$ and $\rho$, describes the experimental data well up to 1 GeV. Moreover, as found in the experiment \cite{Aaij:2016qnm}, there is no signal of the $f_{0}(500)$ resonance. Indeed, in our formalism there is no such contribution, see Eq. \eqref{eq51}, since the $f_{0}(500)$ state appears in the amplitude $T_{\pi^{+} \pi^{-} \rightarrow \pi^{+} \pi^{-}}$; this is analogous to the case of the $B^{0}_{s} \rightarrow J/\psi \pi^{+} \pi ^{-}$ decay \cite{Liang:2014tia,Liang:2015qva,Bayar:2014qha}, as shown in our reproduced results in the appendices. Conversely, this is not the case for the $B^{0}\rightarrow \phi\pi^{+}\pi^{-}$ decay; see our predicted $S$-wave $\pi^{+}\pi^{-}$ mass distribution for this decay in Fig. \ref{fig:fig6}, where the contribution of the broad peak of the $f_{0}(500)$ state above the $\pi\pi$ threshold is more significant than that of the small, narrow peak near the $K\bar{K}$ threshold corresponding to the $f_{0}(980)$ resonance. There are also some uncertainties from the effects of the $\eta\eta$ channel, as shown in Fig. \ref{fig:fig6}: the contribution of the $f_{0}(500)$ state is stronger when the $\eta\eta$ channel is not considered, see the dash (magenta) line of Fig. \ref{fig:fig6}. Note that there are also some uncertainties from the values of the CKM matrix elements, depending on whether one takes the Wolfenstein parameterization or the absolute values, see Eqs. \eqref{eq:ckm1} and \eqref{eq:ckm2}, especially for the $B^{0}\rightarrow \phi\pi^{+}\pi^{-}$ decay, see Eq. \eqref{eq5}. Our main results use the Wolfenstein parameterization.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{pics/dgammaBs-f0+rho.pdf}
\caption{$\pi^{+}\pi^{-}$ invariant mass distributions of the $B^{0}_{s}\rightarrow \phi\pi^{+}\pi^{-}$ decay, where we plot $\frac{C \times 10^{-9}}{\Gamma_{B_{s}}} \frac{d \Gamma}{d M_{\text{inv}}}$.
The dash (red) line corresponds to the $f_{0}(980)$ contribution with the $\eta\eta$ coupled channel (normalization constant $C=1.22$), the dash-dot (black) line to the one without the $\eta\eta$ channel ($C=3.70$), the dot (magenta) line to the $\rho$ meson contribution ($C=10.0$), and the solid (cyan) line represents the sum of the $f_{0}(980)$ and $\rho$ contributions. Data are taken from Ref. \cite{Aaij:2016qnm}.}
\label{fig:fig5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{pics/tphiBs,onlyKK.pdf}
\caption{Modulus squared of the amplitude $t_{B^{0}_{s}\rightarrow \phi \pi^{+}\pi^{-}}$, see Eq. \eqref{eq51}, where we plot only the bracketed part, without the preceding factors of the vertex $V_P$ and the CKM elements, for the cases of including the $\eta\eta$ channel with cutoff $q_{\max}= 600$ MeV (solid, red line) or $931$ MeV (dash-dot, blue line), and excluding the $\eta\eta$ channel with $q_{\max}= 931$ MeV (dash, green line). }
\label{fig:fig51}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{pics/dgamma_B.pdf}
\caption{$\pi^{+}\pi^{-}$ invariant mass distributions of the $B^{0}\rightarrow \phi\pi^{+}\pi^{-}$ decay for the cases with (blue) and without (magenta) the $\eta\eta$ channel.}
\label{fig:fig6}
\end{figure}
Since the production vertex factor $V_{p}$ is unknown in our formalism, see the discussion after Eq. \eqref{eq51}, for the predictions of the $B^{0}\rightarrow \phi\pi^{+}\pi^{-}$ decay we need to determine it from the $B^{0}_{s}\rightarrow \phi\pi^{+}\pi^{-}$ decay. For the case including the $\eta\eta$ channel, using Eq. \eqref{eq10} we have
\begin{equation}
\text{Br}(B^{0}_{s} \rightarrow \phi f_{0}(980))= \frac{\Gamma_{B^{0}_{s} \rightarrow \phi f_{0}(980)}}{\Gamma_{B_{s}}}= \frac{\int_{2m_{\pi}}^{1200}\frac{d\Gamma_{B^{0}_{s} \rightarrow \phi f_{0}(980)}}{dM_{\text{inv}}}dM_{\text{inv}}}{\Gamma_{B_{s}}} = \frac{V_{p}^{2}}{\Gamma_{B_{s}}} \times 32.95 \, ,
\label{eq:32}
\end{equation}
whereas, for the case without the $\eta\eta$ channel,
\begin{equation}
\text{Br}(B^{0}_{s} \rightarrow \phi f_{0}(980))= \frac{\Gamma_{B^{0}_{s} \rightarrow \phi f_{0}(980)}}{\Gamma_{B_{s}}}= \frac{\int_{2m_{\pi}}^{1200}\frac{d\Gamma_{B^{0}_{s} \rightarrow \phi f_{0}(980)}}{dM_{\text{inv}}}dM_{\text{inv}}}{\Gamma_{B_{s}}} = \frac{V_{p}^{2}}{\Gamma_{B_{s}}} \times 51.18 \, .
\label{eq:33}
\end{equation}
Then, using the measured branching fraction $\text{Br}(B^{0}_{s} \rightarrow \phi f_{0}(980))= (1.12 \pm 0.21) \times 10^{-6}$ \cite{pdg2018}, we obtain $\frac{V_{p}^{2}}{\Gamma_{B_{s}}}= (3.40 \pm 0.64) \times 10^{-8}$ and $\frac{V_{p}^{2}}{\Gamma_{B_{s}}}= (2.19 \pm 0.41) \times 10^{-8}$ for the two cases, respectively. The uncertainties are estimated from the error of the experimental branching ratio.
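The extraction is simple plug-in arithmetic; as a check, with the numbers quoted above:

```python
Br, dBr = 1.12e-6, 0.21e-6   # Br(Bs -> phi f0(980)) and its error, as quoted

# Coefficients from Eqs. (eq:32) and (eq:33): Br = (V_p^2 / Gamma_Bs) * coeff
for label, coeff in [("with eta-eta", 32.95), ("without eta-eta", 51.18)]:
    vp2 = Br / coeff          # V_p^2 / Gamma_Bs
    dvp2 = dBr / coeff        # error propagated from the branching ratio alone
    print(f"{label}: V_p^2/Gamma_Bs = {vp2:.2e} +/- {dvp2:.2e}")
```

This reproduces $(3.40 \pm 0.64)\times 10^{-8}$ and $(2.19 \pm 0.41)\times 10^{-8}$ for the two cases.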
Thus, we can predict the branching ratios of the decays $B^{0}\rightarrow \phi f_{0}(980)\rightarrow \phi\pi^{+}\pi^{-}$ and $B^{0}\rightarrow \phi f_{0}(500)\rightarrow \phi\pi^{+}\pi^{-}$ using the values of $\frac{V_{p}^{2}}{\Gamma_{B_{s}}}$ determined from Eqs. \eqref{eq:32} and \eqref{eq:33}, with
\begin{equation}
\text{Br}(B^{0} \rightarrow \phi f_{0}(980))= \frac{\Gamma_{B^{0} \rightarrow \phi f_{0}(980)}}{\Gamma_{B}}= \frac{\int_{900}^{1200}\frac{d\Gamma_{B^{0} \rightarrow \phi f_{0}(980)}}{dM_{\text{inv}}}dM_{\text{inv}}}{\Gamma_{B}} ,
\label{eq:34}
\end{equation}
\begin{equation}
\text{Br}(B^{0} \rightarrow \phi f_{0}(500))= \frac{\Gamma_{B^{0} \rightarrow \phi f_{0}(500)}}{\Gamma_{B}}= \frac{\int_{2m_{\pi}}^{900}\frac{d\Gamma_{B^{0} \rightarrow \phi f_{0}(500)}}{dM_{\text{inv}}}dM_{\text{inv}}}{\Gamma_{B}} ,
\label{eq:35}
\end{equation}
where the predicted results are shown in Table \ref{tab:tab1}. Note that, for the results in Table \ref{tab:tab1}, we have considered two uncertainties. The first is estimated from the experimental error of the branching ratio used to determine the vertex factor, and the second comes from the integration limits in Eqs. \eqref{eq:34} and \eqref{eq:35}, since there is some ambiguity in the overlap region between the $f_{0}(500)$ and $f_{0}(980)$ contributions, as shown in Fig. \ref{fig:fig6}. For the central value, we have chosen $900$ MeV as the separation point between the $f_{0}(500)$ and $f_{0}(980)$ contributions, see Eqs. \eqref{eq:34} and \eqref{eq:35}, and to estimate the uncertainty we vary this point by $\pm 50$ MeV.
\begin{table}
\renewcommand{\arraystretch}{1.7}
\setlength{\tabcolsep}{0.2cm}
\center
\caption{Predicted branching ratios of $B^{0}\rightarrow \phi f_{0}(980)$ and $B^{0}\rightarrow \phi f_{0}(500)$.}
\resizebox{0.8\textwidth}{!}{\begin{tabular}{|c|c|c|c|}
\hline
Branching ratios & Without $\eta\eta$ channel & With $\eta\eta$ channel & Exp. \\ \hline
$\text{Br}(B^{0}\rightarrow \phi f_{0}(980))$ & $(4.69 \pm 0.88 \, _{-1.55}^{+3.96}) \times 10^{-9} $ & $ (7.37 \pm 1.38 \, _{-2.11}^{+4.61}) \times 10^{-9}$ & $< 3.8 \times 10^{-7} $ \\ \hline
$\text{Br}(B^{0}\rightarrow \phi f_{0}(500))$ & $(6.20 \pm 1.16 _{-0.21}^{+0.24})\times 10^{-8}$ & $(7.17 \pm 1.35_{-0.27}^{+0.31} )\times 10^{-8}$ & - \\ \hline
\end{tabular}}
\label{tab:tab1}
\end{table}
In addition, we can analogously make predictions for the ratios between different final states of the $B^{0}_{s}$ and $B^{0}$ decays. In the present work, we study the suppressed decays $B^{0}_{s}\rightarrow \phi \pi^{+} \pi^{-}$ and $B^{0}\rightarrow \phi \pi^{+} \pi^{-}$ in comparison with the Cabibbo-favored ones, $B^{0}_{s} \rightarrow J/\psi \pi^{+} \pi^{-}$ and $B^{0}\rightarrow J/\psi \pi^{+} \pi^{-}$, see Refs. \cite{Liang:2014tia,Liang:2015qva}, which are reproduced in detail \cite{Liang:2014tia} in Appendix \ref{section:app1}. Thus, we can predict the ratios between all the other channels relevant to these decays, using the experimental result for the ratio to fix the relation between the vertex factors,
\begin{equation}
\frac{\text{Br}(B^{0}_{s} \rightarrow \phi f_{0}(980))}{\text{Br}(B^{0}_{s}\rightarrow J/\psi f_{0}(980))}=(8.75 \pm 2.87) \times 10^{-3} \, ,
\end{equation}
which shows that the decay $B^{0}_{s}\rightarrow \phi \pi^{+} \pi^{-}$ is indeed much more suppressed than $B^{0}_{s}\rightarrow J/\psi \pi^{+} \pi^{-}$. Within our theoretical model, we have
\begin{equation}
\frac{\text{Br}(B^{0}_{s} \rightarrow \phi f_{0}(980))}{\text{Br}(B^{0}_{s}\rightarrow J/\psi f_{0}(980))}=\left(\frac{V_{p}}{V_{p}^{\prime}}\right)^{2} \times 3.78 \, .
\end{equation}
Therefore, we obtain $\left(\frac{V_{p}}{V_{p}^{\prime}}\right)^{2} = (2.31 \pm 0.76) \times 10^{-3}$. Moreover, this value is similar for both cases, with or without the contribution of the $\eta\eta$ channel. The predicted ratios using this value of $\frac{V_{p}}{V_{p}^{\prime}}$ are presented in Table \ref{tab:tab3}, where, again, the first uncertainty comes from the experimental input and the second from the integration limits.
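The same plug-in arithmetic gives the vertex-factor ratio, with the inputs as quoted above:

```python
R_exp, dR_exp = 8.75e-3, 2.87e-3   # Br(Bs->phi f0)/Br(Bs->J/psi f0), experimental
model_coeff = 3.78                 # the same ratio in the model, in units of (V_p/V_p')^2

r2 = R_exp / model_coeff           # (V_p/V_p')^2
dr2 = dR_exp / model_coeff
print(f"(V_p/V_p')^2 = {r2:.2e} +/- {dr2:.2e}")
```

This reproduces $(2.31 \pm 0.76)\times 10^{-3}$.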
Based on these results, using the experimental branching ratio $\text{Br}(B^{0}\rightarrow J/\psi f_{0}(500))= (8 ^{+1.1}_{-0.9}) \times 10^{-6}$, we can obtain the branching fraction of $B^{0}\rightarrow \phi f_{0}(500)$: $\text{Br}(B^{0}\rightarrow \phi f_{0}(500))= (5.67^{+2.64}_{-2.50}$ $^{+0.03}_{-0.02} )\times 10^{-8}$ (with the $\eta\eta$ channel) and $\text{Br}(B^{0}\rightarrow \phi f_{0}(500))= (5.67^{+2.64}_{-2.50}$ $^{+0.02}_{-0.02} )\times 10^{-8}$ (without the $\eta\eta$ channel), which are consistent with the results in Table \ref{tab:tab1} within the uncertainties.
\begin{table}
\renewcommand{\arraystretch}{1.7}
\setlength{\tabcolsep}{0.2cm}
\center
\caption{Predictions for the branching ratios.}
\resizebox{0.75\textwidth}{!}{\begin{tabular}{|c|c|c|}
\hline
Ratios & Without $\eta\eta$ channel & With $\eta\eta$ channel \\ \hline
$\frac{\text{Br}(B^{0}\rightarrow \phi f_{0}(980))}{\text{Br}(B^{0}\rightarrow J/\psi f_{0}(980))}$ & $(8.35 \pm 2.74 _{-0.16}^{+0.24}) \times 10^{-3}$ & $(8.33 \pm 2.73_{-0.13}^{+0.16}) \times 10^{-3}$ \\ \hline
$\frac{\text{Br}(B^{0}\rightarrow \phi f_{0}(500))}{\text{Br}(B^{0}\rightarrow J/\psi f_{0}(500))}$ & $(7.08 \pm 2.32 _{-0.03}^{+0.03}) \times 10^{-3}$ & $(7.09 \pm 2.32 _{-0.03}^{+0.03}) \times 10^{-3}$ \\ \hline
\end{tabular}}
\label{tab:tab3}
\end{table}
Furthermore, for the $B^{0}_{s} \rightarrow \phi\rho^{0}$ decay, we can similarly determine the vertex factor $\tilde{V}_{P}$ from the experimental branching fraction $\text{Br}(B^{0}_{s} \rightarrow \phi \rho^{0})=(2.7\pm 0.8) \times 10^{-7}$ \cite{pdg2018}. Using Eq. \eqref{eq13}, we have
\begin{equation}
\text{Br}(B^{0}_{s} \rightarrow \phi\rho^{0})= \frac{\Gamma_{B^{0}_{s} \rightarrow \phi\rho^{0}}}{\Gamma_{B_{s}}}= \frac{\tilde{V}_{P}^{2}}{\Gamma_{B_{s}}} \times 1.14 \times 10^{-12}.
\end{equation}
Thus, using the experimental result as input, we obtain $\frac{\tilde{V}_{P}^{2}}{\Gamma_{B_{s}}}= (2.36 \pm 0.70) \times 10^{5}$.
On the other hand, also with Eq. \eqref{eq13}, one can determine the ratios for the other $\phi V$ vector decay channels. For example, for the $\phi\phi$ channel, based on the measured branching fraction $\text{Br}(B^{0}_{s} \rightarrow \phi \phi)= (1.87 \pm 0.15) \times 10^{-5}$, we obtain the vertex factor $\frac{(\tilde{V}_{P}^{\prime})^{2}}{\Gamma_{B_{s}}}= (8.05 \pm 0.65) \times 10^{2}$,
\footnote{Note that the vertex factor $\tilde{V}_{P}$ for the decay $ B^{0}_{s} \rightarrow \phi\rho^{0}$ is different from $\tilde{V}^{\prime}_{P}$ in the $B^{0}_{s} \rightarrow \phi \phi$ decay, because the former involves only the weak interaction in the intermediate processes, whereas the latter involves both the strong and the weak interactions.}
where the uncertainty comes from the experimental value of the branching ratio. As in the other cases, the results depend on the CKM matrix elements of the intermediate processes, and one easily obtains the ratios
\begin{equation}\begin{array}{l}
R_{1}^{th}=\frac{\Gamma_{B^{0} \rightarrow \phi \rho^{0}}}{\Gamma_{B^{0}_{s} \rightarrow \phi \phi}}=\frac{1}{4} \frac{1}{2}\left|\frac{V_{ub} V_{u d} +V_{c b} V_{c d} }{V_{ub} V_{u s} +V_{c b} V_{c s} }\right|^{2} \frac{m_{B^{0}_{s}}^{2}}{m_{B^{0}}^{2}} \frac{p_{\rho^{0}}}{p_{\phi}}= 6.03 \times 10^{-3}, \\
R_{2}^{th}=\frac{\Gamma_{B^{0} \rightarrow \phi \omega}}{\Gamma_{B^{0}_{s} \rightarrow \phi \phi}}=\frac{1}{4} \frac{1}{2}\left|\frac{V_{ub} V_{u d} +V_{c b} V_{c d} }{V_{ub} V_{u s} +V_{c b} V_{c s} }\right|^{2} \frac{m_{B^{0}_{s}}^{2}}{m_{B^{0}}^{2}} \frac{p_{\omega}}{p_{\phi}}= 6.03 \times 10^{-3}, \\
R_{3}^{th}=\frac{\Gamma_{B^{0}_{s} \rightarrow \phi \bar{ K}^{* 0}}}{\Gamma_{B^{0}_{s} \rightarrow \phi \phi}}= \left|\frac{V_{ub} V_{u d} +V_{c b} V_{c d} }{V_{ub} V_{u s} +V_{c b} V_{c s} }\right|^{2} \frac{p_{\bar{ K}^{* 0}}}{p_{\phi}}= 4.72 \times 10^{-2}.
\label{eq:ratio1}
\end{array}\end{equation}
The only available experimental ratio \cite{pdg2018} is
\begin{equation}
R_{3}^{exp}= \frac{\text{Br}(B^{0}_{s} \rightarrow \phi \bar{ K}^{* 0})}{\text{Br}(B^{0}_{s} \rightarrow \phi\phi)} = \frac{(1.14 \pm 0.30) \times 10^{-6} }{(1.87 \pm 0.15) \times 10^{-5}}= (6.09 \pm 2.09) \times 10^{-2} \, ,
\end{equation}
where one can see that our predicted $R_{3}^{th}$ is consistent with the experimental result within the uncertainties.
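As a check of the quoted experimental ratio, note that combining the two relative errors linearly reproduces the $\pm 2.09\times 10^{-2}$ above, while a quadrature combination gives a smaller error; which combination scheme underlies the quoted value is our inference:

```python
import math

Br_num, dBr_num = 1.14e-6, 0.30e-6     # Br(Bs -> phi Kbar*0), as quoted
Br_den, dBr_den = 1.87e-5, 0.15e-5     # Br(Bs -> phi phi), as quoted

R3 = Br_num / Br_den
dR3_lin  = R3 * (dBr_num/Br_num + dBr_den/Br_den)            # linear combination
dR3_quad = R3 * math.hypot(dBr_num/Br_num, dBr_den/Br_den)   # quadrature
print(f"R3 = {R3:.3e} +/- {dR3_lin:.2e} (linear) / {dR3_quad:.2e} (quadrature)")
```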
Besides, using the vertex factors determined above, we can also obtain the other three branching ratios,
\begin{equation}\begin{array}{l}
\text{Br}(B^{0} \rightarrow \phi \rho^{0}) = \frac{\Gamma_{B^{0} \rightarrow \phi\rho^{0}}}{\Gamma_{B}}= (1.13 \pm 0.09) \times 10^{-7},\\
\text{Br}(B^{0} \rightarrow \phi \omega)=\frac{\Gamma_{B^{0} \rightarrow \phi \omega}}{\Gamma_{B}}= (1.13 \pm 0.09) \times 10^{-7}, \\
\text{Br}(B^{0}_{s} \rightarrow \phi \bar{ K}^{* 0})=\frac{\Gamma_{B^{0}_{s} \rightarrow \phi \bar{ K}^{* 0}}}{\Gamma_{B_{s}}}= (8.83 \pm 0.71) \times 10^{-7},
\label{eq:ratio2}
\end{array}\end{equation}
which are consistent with the experimental results \cite{pdg2018}, given the upper limits,
\begin{equation}\begin{array}{l}
\text{Br}(B^{0} \rightarrow \phi \rho^{0})< 3.3 \times 10^{-7},\\
\text{Br}(B^{0} \rightarrow \phi \omega)< 7 \times 10^{-7}, \\
\text{Br}(B^{0}_{s} \rightarrow \phi \bar{ K}^{* 0})= (1.14 \pm 0.30) \times 10^{-6}.
\end{array}\end{equation}
As one can see, for the case of $B^{0}_{s} \rightarrow \phi \bar{ K}^{* 0}$, the predicted value of the branching ratio is in agreement with the experiment within the uncertainties.
\section{Conclusions}
The rare non-leptonic three-body decays $B^{0}_{s} \rightarrow \phi\pi^{+}\pi^{-}$ and $B^{0} \rightarrow \phi\pi^{+}\pi^{-}$, which are induced by the flavor changing neutral current transitions $b\rightarrow s\bar{s}s$ and $b\rightarrow d\bar{s}s$, respectively, are studied with the final state interaction approach based on the chiral unitary approach, where the contributions from the scalar resonances ($f_{0}(500)$ and $f_{0}(980)$) and the vector mesons ($\rho$, $\omega$, $\phi$, and $\bar{ K}^{* 0}$) are taken into account in the final state interactions. Our results for the $\pi^+\pi^-$ invariant mass distributions of the $B^{0}_{s}\rightarrow \phi\pi^{+}\pi^{-}$ decay describe the experimental data up to 1 GeV well when the contributions of the two resonances $f_{0}(980)$ and $\rho$ are considered, whereas there is no clear contribution of the $f_{0}(500)$ state in our formalism, as indicated by the experiments. Based on these results, we make a prediction for the mass spectrum of the $B^{0}$ decay, where we find that the contribution from the $f_{0}(500)$ state is larger than that of the $f_{0}(980)$ in the decay $B^{0} \rightarrow \phi\pi^{+}\pi^{-}$: a broad resonance structure can easily be seen in the $\pi^+\pi^-$ invariant mass distributions, and a small narrow peak corresponding to the $f_{0}(980)$ can also be found. From these results, one can conclude that the $f_{0}(500)$ resonance is dominated by its $\pi\pi$ components, while the $f_{0}(980)$ state mainly comes from the $K\bar{K}$ components. Furthermore, we also investigate the branching ratios for the different decay processes with scalar and vector meson production in the final states, where some of our results are in agreement with the experiments. Besides, we study the ratios between the $B^0_{(s)}$ decays into $\phi$ plus other states and into $J/\psi$ plus the same states. All the predicted results can be found in Tables \ref{tab:tab1}, \ref{tab:tab3} and Eqs. \eqref{eq:ratio1}, \eqref{eq:ratio2}. Finally, we hope that our predicted $\pi^+\pi^-$ invariant mass distribution for the decay $B^{0} \rightarrow \phi\pi^{+}\pi^{-}$ and the other branching ratios can be measured in future experiments.
Note added: while this work was being finalized, we found that Ref. \cite{Zou:2020dpg} also investigates the decays $B^{0}_{(s)} \rightarrow \phi\pi^{+}\pi^{-}$, within the perturbative QCD approach, focusing on the branching fractions, the CP asymmetries, and so on.
\section*{Acknowledgments}
We thank E. Oset, J. J. Xie and E. Wang for useful discussions and valuable comments. Z. F. is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11705069, and partly supported by NSFC under Grant No. 11965016.
\begin{appendices}
\section{CKM matrix}
\label{ckm}
The CKM matrix elements are fundamental parameters of the SM. They have been determined from experiments and can be expressed in terms of the parameters $A$, $\rho$, $\lambda$, and $\eta$ of the Wolfenstein parameterization \cite{pdg2018,Wolfenstein:1964ks},
\begin{equation}
V_{\mathrm{CKM}}=\left(\begin{array}{ccc}1-\lambda^{2} / 2 & \lambda & A \lambda^{3}(\rho-i \eta) \\ -\lambda & 1-\lambda^{2} / 2 & A \lambda^{2} \\ A \lambda^{3}(1-\rho-i \eta) & -A \lambda^{2} & 1\end{array}\right)+\mathcal{O}\left(\lambda^{4}\right),
\label{eq:ckm1}
\end{equation}
where the values of these parameters are given by \cite{pdg2018}
\begin{equation}
\begin{array}{l}\lambda=0.22453 \pm 0.00044, \quad A=0.836 \pm 0.015, \quad \bar{\rho}=0.122_{-0.017}^{+0.018}, \quad \bar{\eta}=0.355_{-0.011}^{+0.012}\end{array},
\end{equation}
having
\begin{equation}
\bar{\rho}=\rho\left(1-\frac{\lambda^{2}}{2}\right), \quad \bar{\eta}=\eta\left(1-\frac{\lambda^{2}}{2}\right).
\end{equation}
Besides, the absolute values of the CKM matrix elements, including uncertainties, are given by
\begin{equation}
|V_{\mathrm{CKM}}|=\left(\begin{array}{ccc}0.97446 \pm 0.00010 & 0.22452 \pm 0.00044 & 0.00365 \pm 0.00012 \\ 0.22438 \pm 0.00044 & 0.97359 \pm 0.00010 & 0.04214 \pm 0.00076 \\ 0.00896_{-0.00023}^{+0.00024} & 0.04133 \pm 0.00074 & 0.999105 \pm 0.000032 \\ \end{array}\right).
\label{eq:ckm2}
\end{equation}
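As a quick sanity check, the magnitudes obtained from the Wolfenstein form \eqref{eq:ckm1} with the central values quoted above reproduce the entries of \eqref{eq:ckm2} up to the neglected $\mathcal{O}(\lambda^4) \approx 2.5\times 10^{-3}$ corrections. A minimal Python sketch:

```python
# Compare |V| from the Wolfenstein form against the quoted central values.
lam, A = 0.22453, 0.836
rho = 0.122 / (1 - lam**2 / 2)   # rho from rho-bar
eta = 0.355 / (1 - lam**2 / 2)   # eta from eta-bar

V = [
    [1 - lam**2 / 2,                      lam,            A * lam**3 * complex(rho, -eta)],
    [-lam,                                1 - lam**2 / 2, A * lam**2],
    [A * lam**3 * complex(1 - rho, -eta), -A * lam**2,    1.0],
]
quoted = [
    [0.97446, 0.22452, 0.00365],
    [0.22438, 0.97359, 0.04214],
    [0.00896, 0.04133, 0.999105],
]
for i in range(3):
    for j in range(3):
        # agreement up to the neglected O(lambda^4) ~ 2.5e-3 corrections
        assert abs(abs(V[i][j]) - quoted[i][j]) < 2.6e-3
print("Wolfenstein matrix reproduces |V_CKM| to O(lambda^4)")
```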
\section{Formalism of the B meson to $J/\psi \pi^{+}\pi^{-}$}
\label{section:app1}
Following the work of Ref. \cite{Liang:2014tia}, the details of the study of $B\to J/\psi \pi^{+}\pi^{-}$ are summarized as follows:
\begin{equation}
\begin{aligned}
B^{0}(\bar{b}d) &\Rightarrow [V_{cb}] \bar{c} \mathit{W^{+}} d \Rightarrow [V_{cb}][V_{cd}^{*} ]\, (c\bar{c})\, (d\bar{d}) \\
&\Rightarrow [V_{cb}][V_{cd}^{*} ]\, (c\bar{c}\to J/\psi) \, [d\bar{d}\to d\bar{d}\cdot (u\bar{u}+d\bar{d}+s\bar{s})] \\
& \Rightarrow [V_{cb}][V_{cd}^{*} ]\, (c\bar{c}\to J/\psi) \, [M_{22}\to (M\cdot M)_{22}] \, ,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
B^{0}_{s} (\bar{b}s) &\Rightarrow [V_{cb}] \bar{c} \mathit{W^{+}} s\Rightarrow [V_{cb}][V_{cs}^{*} ]\, (c\bar{c})\, (s\bar{s}) \\
&\Rightarrow [V_{cb}][V_{cs}^{*} ]\, (c\bar{c}\to J/\psi) \, [s\bar{s} \to s\bar{s}\cdot (u\bar{u}+d\bar{d}+s\bar{s})] \\
&\Rightarrow [V_{cb}][V_{cs}^{*} ]\, (c\bar{c}\to J/\psi) \, [M_{33}\to (M\cdot M)_{33}] \, ,
\end{aligned}
\end{equation}
where the matrix $M$ is defined in Eq. \eqref{eq:matrM}. Thus, for the hadronization procedures we have
\begin{equation}
\begin{aligned}
d \bar{d} \cdot (u \bar{u}+d \bar{d}+s \bar{s}) & \equiv(\Phi \cdot \Phi)_{22} =\pi^{+} \pi^{-}+\frac{1}{2} \pi^{0} \pi^{0}-\frac{1}{\sqrt{3}} \pi^{0} \eta+K^{0} \bar{K}^{0}+\frac{1}{6} \eta \eta ,\\
s \bar{s} \cdot (u \bar{u}+d \bar{d}+s \bar{s}) & \equiv(\Phi \cdot \Phi)_{33}=K^{-} K^{+}+K^{0} \bar{K}^{0}+\frac{4}{6} \eta \eta,
\end{aligned}
\end{equation}
with the matrix $\Phi$ given in Eq. \eqref{eq:Mphi}.
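The two hadronization identities can be verified symbolically. The sketch below assumes the standard $q\bar{q}$ meson matrix with the $\eta'$ component dropped; this is our assumption for the convention of the matrix $\Phi$ of Eq. \eqref{eq:Mphi}, which is defined elsewhere in the text.

```python
import sympy as sp

pi0, pip, pim, K0, K0b, Kp, Km, eta = sp.symbols('pi0 pip pim K0 K0b Kp Km eta')

# q qbar meson matrix with the eta' component dropped (assumed convention)
Phi = sp.Matrix([
    [pi0/sp.sqrt(2) + eta/sp.sqrt(6), pip,                              Kp],
    [pim,                             -pi0/sp.sqrt(2) + eta/sp.sqrt(6), K0],
    [Km,                              K0b,                              -2*eta/sp.sqrt(6)],
])
PhiPhi = (Phi * Phi).applyfunc(sp.expand)

# (Phi.Phi)_22 : d dbar hadronization
target22 = pip*pim + pi0**2/2 - pi0*eta/sp.sqrt(3) + K0*K0b + eta**2/6
assert sp.simplify(PhiPhi[1, 1] - target22) == 0

# (Phi.Phi)_33 : s sbar hadronization
target33 = Km*Kp + K0*K0b + sp.Rational(4, 6)*eta**2
assert sp.simplify(PhiPhi[2, 2] - target33) == 0
print("hadronization coefficients reproduced")
```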
The amplitudes for $\pi^{+}\pi^{-}$ productions are given by
\begin{equation}
\begin{aligned}
t\left(B^{0} \rightarrow J/\psi \pi^{+} \pi^{-}\right) =& V^{\prime}_{P} (V_{cb} V_{cd}^{*} )\left(1+G_{\pi^{+} \pi^{-}} t_{\pi^{+} \pi^{-} \rightarrow \pi^{+} \pi^{-}} \right. +2 \frac{1}{2} \frac{1}{2} G_{\pi^{0} \pi^{0}} t_{\pi^{0} \pi^{0} \rightarrow \pi^{+} \pi^{-}} \\
&+G_{K^{0} \bar{K}^{0}} t_{K^{0} \bar{K}^{0} \rightarrow \pi^{+} \pi^{-}}\left.+2\frac{1}{6} \frac{1}{2} G_{\eta \eta} t_{\eta \eta \rightarrow \pi^{+} \pi^{-}}\right), \\
t\left(B^{0}_{s} \rightarrow J/\psi \pi^{+} \pi^{-}\right) =& V^{\prime}_{P} (V_{cb} V_{cs}^{*} )\left( G_{K^{+}K^{-}}t_{K^{+}K^{-} \rightarrow \pi^{+} \pi^{-}}\right. +G_{K^{0} \bar{K}^{0}} t_{K^{0} \bar{K}^{0} \rightarrow \pi^{+} \pi^{-}} \\
& \left.+2\frac{4}{6} \frac{1}{2} G_{\eta \eta} t_{\eta \eta \rightarrow \pi^{+} \pi^{-}}\right).
\end{aligned}
\end{equation}
Finally, the partial decay widths can be written as
\begin{equation}
\frac{d \Gamma}{d M_{\text{inv}}}=\frac{1}{(2 \pi)^{3}} \frac{1}{8 M_{B_{(s)}}^{2}} \frac{2}{3} p_{J/\psi}^{2} p_{J/\psi} \tilde{p}_{\pi} \bar{\sum} \sum \left|t_{B_{(s)}^{0} \rightarrow J/\psi \pi^{+} \pi^{-}}\right|^{2}.
\end{equation}
Thus, when the $\eta \eta$ channel is considered in the coupled channel interactions, we have
\begin{equation}
\text{Br}(B^{0}_{s} \rightarrow J/\psi f_{0}(980))= \frac{\Gamma_{B^{0}_{s} \rightarrow J/\psi f_{0}(980)}}{\Gamma_{B_{s}}}= \frac{\int_{2m_{\pi}}^{1200}\frac{d\Gamma_{B^{0}_{s} \rightarrow J/\psi f_{0}(980)}}{dM_{inv}}dM_{inv}}{\Gamma_{B_{s}}} \\ = \frac{V_{P}^{\prime 2}}{\Gamma_{B_{s}}} \times 8.66,
\label{eq:49}
\end{equation}
and, when the $\eta \eta$ channel is ignored in the two-body interactions,
\begin{equation}
\text{Br}(B^{0}_{s} \rightarrow J/\psi f_{0}(980))= \frac{\Gamma_{B^{0}_{s} \rightarrow J/\psi f_{0}(980)}}{\Gamma_{B_{s}}}= \frac{\int_{2m_{\pi}}^{1200}\frac{d\Gamma_{B^{0}_{s} \rightarrow J/\psi f_{0}(980)}}{dM_{inv}}dM_{inv}}{\Gamma_{B_{s}}} \\ = \frac{V_{P}^{\prime 2}}{\Gamma_{B_{s}}} \times 13.54.
\label{eq:50}
\end{equation}
Using the measured branching fraction of the $\text{Br}(B^{0}_{s} \rightarrow J/\psi f_{0}(980))= (1.28 \pm 0.18) \times 10^{-4}$ \cite{pdg2018}, we can obtain $ \frac{V_{P}^{\prime 2}}{\Gamma_{B_{s}}}= (1.48 \pm 0.21) \times 10^{-5}$ and $\frac{V_{P}^{\prime 2}}{\Gamma_{B_{s}}}= (9.46 \pm 1.33) \times 10^{-6}$ for with and without the $\eta \eta$ channel in the two-body interactions, respectively.
The $\pi^{+}\pi^{-}$ invariant mass distributions of $B^{0}_{s}\rightarrow J/\psi \pi^{+} \pi^{-}$ and $B^{0}\rightarrow J/\psi \pi^{+} \pi^{-}$ are shown in Fig. \ref{fig10} and Fig. \ref{fig11}, respectively, which are consistent with the ones of Ref. \cite{Liang:2014tia}.
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{pics/dgammabs-jpsi.pdf}
\caption{$\pi^{+}\pi^{-}$ invariant mass distributions of the $B^{0}_{s}\rightarrow J/\psi\pi^{+}\pi^{-}$ decay, where we plot $\frac{C^\prime \times 10^{-8}}{\Gamma_{B_{s}}} \frac{d \Gamma}{d M_{\text{inv}}}$ and the data are taken from Ref. \cite{Aaij:2014emv}. The dashed (red) and solid (green) lines correspond to the results with (normalization constant $C^\prime=5.36$) and without ($C^\prime=19.71$) the $\eta \eta$ channel in the two-body interactions, respectively.}
\label{fig10}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{pics/dgammab-jpsi.pdf}
\caption{$\pi^{+}\pi^{-}$ invariant mass distributions for the $B^{0}\rightarrow J/\psi\pi^{+}\pi^{-}$ decay, where the solid (magenta) and dashed (black) lines represent the results with and without the coupled channel of $\eta \eta$, respectively. }
\label{fig11}
\end{figure}
\end{appendices}
\newpage
\section{Introduction}
Let $G=(V,E)$ be a simple graph, i.e., a graph with no self-loops or
multiple edges, and we call it an
$(n,q)$-graph if $|V|=n$ and $|E|=q$.
We denote the number of connected $(n,q)$-graphs by
$f(n,q)$.
Connected $(n,n-1)$-graphs are \textit{spanning trees} in
the complete graph $K_n$ over $n$ vertices
and it is known as Cayley's formula \cite{C89} that $f(n,n-1)=n^{n-2}$.
Connected $(n,n)$-graphs are called \textit{unicycles} and the formula for $f(n,n)$
was found by R\'enyi \cite{R59}, which is given by
\begin{equation}
f(n,n) = \frac{1}{2} \left( \frac{h(n)}{n} -
n^{n-2}(n-1)\right) \sim \sqrt{\frac{\pi}{8}} n^{n-1/2}
\quad (n \to \infty),
\label{eq:fnn}
\end{equation}
where
\[
h(n) = \sum_{s=1}^{n-1} {n \choose s} s^{s} (n-s)^{n-s}.
\]
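R\'enyi's formula can be checked directly for small $n$; the following Python sketch (ours, for illustration only) compares the formula with a brute-force enumeration of all connected $(n,n)$-graphs:

```python
from itertools import combinations
from math import comb

def f_unicycle(n):
    """f(n,n) from Renyi's formula; the divisions are exact."""
    h = sum(comb(n, s) * s**s * (n - s)**(n - s) for s in range(1, n))
    return (h // n - n**(n - 2) * (n - 1)) // 2

def count_connected(n, q):
    """Brute-force count of connected labeled (n,q)-graphs."""
    count = 0
    for E in combinations(list(combinations(range(n), 2)), q):
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in E:
            parent[find(u)] = find(v)
        if len({find(v) for v in range(n)}) == 1:  # one component
            count += 1
    return count

assert [f_unicycle(n) for n in (3, 4, 5, 6)] == [1, 15, 222, 3660]
assert all(count_connected(n, n) == f_unicycle(n) for n in (3, 4, 5))
print("Renyi's formula confirmed for small n")
```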
The asymptotic behavior of $f(n,n+k)$ for general $k$ as $n \to \infty$ was
also discussed in \cite{W77},
where the proofs are based
on recurrence equations that the $f(n,n+k)$ satisfy,
on the algebraic structures of generating functions and their
derivatives, and on combinatorial aspects, as will be
seen in Theorem~\ref{thm:expressionofF_k} below.
We consider a bipartite simple graph $G = (V,E)$ with
bipartition $V = V_1 \sqcup V_2$ and call it a bipartite $(r,s,q)$-graph if
$|V_1|=r$, $|V_2|=s$ and $|E|=q$; such a graph can also be considered
as a spanning subgraph with $q$ edges of the complete bipartite graph $K_{r,s}$.
We denote by $f(r,s,q)$ the number of connected bipartite
$(r,s,q)$-graphs.
Similarly as before, connected bipartite
$(r,s,r+s-1)$-graphs are spanning trees in $K_{r,s}$ and it is well known \cite{Sc62}
that
\begin{equation}
f(r,s,r+s-1)=r^{s-1} s^{r-1},
\label{eq:spanning_trees}
\end{equation}
which is the bipartite version of Cayley's formula.
When $rs=0$, we understand $f(r,s,r+s-1) = 1$ if $(r,s)=(1,0),
(0,1)$; $=0$ otherwise, i.e., the one-vertex simple graph is regarded as
a spanning tree.
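The bipartite version of Cayley's formula \eqref{eq:spanning_trees} is likewise easy to verify for small $(r,s)$ by enumerating all $(r+s-1)$-edge subsets of $K_{r,s}$; a short Python sketch:

```python
from itertools import combinations, product

def spanning_trees(r, s):
    """Count spanning trees of K_{r,s} by enumeration (small r, s only)."""
    n = r + s
    edges = [(i, r + j) for i, j in product(range(r), range(s))]
    count = 0
    for E in combinations(edges, n - 1):
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        acyclic = True
        for u, v in E:
            ru, rv = find(u), find(v)
            if ru == rv:          # this edge would close a cycle
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:               # n-1 acyclic edges form a spanning tree
            count += 1
    return count

# f(r,s,r+s-1) = r^(s-1) s^(r-1)
assert spanning_trees(2, 3) == 2**2 * 3**1 == 12
assert spanning_trees(3, 3) == 3**2 * 3**2 == 81
assert spanning_trees(2, 4) == 2**3 * 4**1 == 32
```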
Connected bipartite $(r,s,r+s)$-graphs are unicycles in
$K_{r,s}$ and were discussed in the context of cuckoo
hashing in \cite{Ku06}. In the present paper, we discuss
$f(r,s,r+s-1+k)$ for $k=0,1,\dots$ and the asymptotic
behavior of the sum of
$f(r,s,r+s-1+k)$ over $r+s=n$. Since we are dealing
with connected graphs, we note that $k$ corresponds to the Betti number, the
rank of the first homology group, of each $(r,s,r+s-1+k)$-graph.
Note that $k-1$ is often called \textit{excess} since such
a connected graph has $k-1$ more edges than vertices.
We consider the exponential generating function of
$f(r,s,r+s-1+k)$ defined as
follows: for $k=0,1,\dots$,
\begin{equation}
F_k(x,y)
:= \sum_{r,s=0}^{\infty} \frac{f(r,s,r+s-1+k)}{r!s!}x^r y^s.
\label{eq:exp_generating_fun}
\end{equation}
For simplicity, we write the exponential generating function
for spanning trees in \eqref{eq:spanning_trees} by
\begin{equation}
T(x,y) := F_0(x,y)
= x+y+\sum_{r,s=1}^{\infty} \frac{r^{s-1}s^{r-1}}{r!s!}x^r y^s.
\label{eq:Tintro}
\end{equation}
We introduce the following functions of $x$ and $y$:
\begin{equation}
T_x = D_x T, \quad T_y = D_y T,
\quad Z = T_x + T_y, \quad W = T_x T_y,
\label{eq:TxTyZW}
\end{equation}
where $D_x = x \partial_x$ and $D_y = y \partial_y$ are
the Euler differential operators.
Then we have the following.
\begin{prop}\label{prop:Uintro}
The function $F_1(x,y)$ is expressed as $F_1 = f_1(W)$ with
$f_1(w) = -\frac{1}{2}\big( \log(1-w) + w \big)$, i.e.,
\[
F_1(x,y) = -\frac{1}{2}\Big( \log(1-T_xT_y) + T_x T_y \Big).
\]
\end{prop}
This result was discussed in \cite[Lemma 4.4]{Ku06} and \cite{DK12}.
However, the term $w$ seems to be missing in $f_1(w)$ there: $F_1(x,y)$
was given as $-\frac{1}{2} \log(1-T_xT_y)$, which does not give integer coefficients.
We will give how to compute $F_k(x,y)$ for general $k$ later and,
in principle, we are able to compute them. Here, we just
give the expression $F_2(x,y)$ (see Remark~\ref{rem:f3f4} for $F_3(x,y)$ and $F_4(x,y)$).
\begin{thm}\label{thm:W2intro}
The function $F_2(x,y)$ is expressed as $F_2 = f_2(Z,W)$ with
\begin{equation}
f_2(z,w)
= \frac{w^2}{24(1-w)^3} \big\{ (2+3w)z + 2w(6-w) \big\}.
\label{eq:v}
\end{equation}
\end{thm}
From Proposition~\ref{prop:Uintro} and Theorem~\ref{thm:W2intro},
we can compute the coefficients of the diagonals $F_1(x,x)$ and $F_2(x,x)$,
which correspond to the numbers of connected bipartite graphs with
Betti number $k=1$ and $k=2$, respectively.
Let $\langle x^n \rangle A(x)$ denote the operation
of extracting the coefficient $a_n$ of $x^n/n!$ in an
exponential formal power series
$A(x) = \sum_{n=0}^{\infty} a_n \frac{x^n}{n!}$, i.e.
\begin{equation}
\langle x^n \rangle A(x)
= a_n.
\label{eq:braket}
\end{equation}
The coefficient $\langle x^n \rangle F_k(x,x)$ counts the
number of connected bipartite graphs with Betti number $k$
over $n$ vertices, or equivalently, the total number of
connected bipartite $(r,s,n-1+k)$-graphs with $r+s=n$.
When $k=0$, we have
\begin{align*}
F_0(x,x)
= 2 \Big(x + \sum_{n=2}^{\infty} n^{n-2} \frac{x^n}{n!}
\Big),
\end{align*}
which means that $\langle x^n \rangle F_0(x,x) = 2 n^{n-2}$, twice Cayley's formula. That is, as we will see in Section~\ref{sec:asymptotics},
the spanning trees in $K_{r,s}$ for some $(r,s)$ with $r+s=n$
are in two-to-one correspondence with those in $K_n$.
When $k \ge 1$, the situation is different since there may exist
odd cycles in $K_n$ while cycles must be even in $K_{r,s}$.
From Proposition~\ref{prop:Uintro} and
Theorem~\ref{thm:W2intro}, we obtain the asymptotic behavior
of the coefficients of $F_1(x,x)$ and $F_2(x,x)$.
\begin{thm}\label{thm:asympofun}
For $n=4,5,\dots$,
\begin{align*}
\langle x^n \rangle F_1(x,x)
=n^{n-1} \sum_{2 \le k \le n/2}\dfrac{n !}{(n-2k)! n^{2k}}
\sim \sqrt{\dfrac{\pi}{8}} n^{n-1/2}
\quad (n \to \infty).
\end{align*}
\end{thm}
\begin{figure}[htbp]
{\footnotesize
\begin{center}
\begin{tabular}{|c|ccccccccc|} \hline
$n$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\\hline
$f(n,n)$ & 1 & 15 & 222 & 3660 & 68295 & 1436568 & 33779340 & 880107840& 25201854045\\
$u_n$ & 0 & 6 & 120& 2280 & 46200 & 1026480 & 25102224& 673706880& 19745850960\\
\hline
\end{tabular}
\end{center}
}
\caption{$f(n,n)$ and $u_n=\langle x^n \rangle F_1(x,x)$ for $n=3,4,\dots,11$}
\end{figure}
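The closed form in Theorem~\ref{thm:asympofun} can be evaluated exactly with rational arithmetic; the following Python sketch (ours, for illustration) reproduces, e.g., $u_4=6$, $u_5=120$ and $u_7=46200$:

```python
from fractions import Fraction
from math import factorial

def u(n):
    """<x^n> F_1(x,x) via the closed form of Theorem 1.4."""
    total = sum(Fraction(factorial(n), factorial(n - 2 * k) * n**(2 * k))
                for k in range(2, n // 2 + 1))
    return int(n**(n - 1) * total)   # the sum times n^(n-1) is an integer

print({n: u(n) for n in range(3, 12)})
```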
From \eqref{eq:fnn}, this shows that the main term of the asymptotic behavior of the number of
bipartite unicycles over $n$ vertices is the same
as that of the number of unicycles.
\begin{thm}\label{thm:asympofF_2}
As $n\to\infty$,
\begin{equation}
\langle x^n \rangle F_2(x,x) \sim \frac{5}{48}n^{n+1}.
\label{eq:f2xx}
\end{equation}
\end{thm}
It is known \cite{W77} that
in the case of $K_n$, the main term of the
asymptotics of the number of ``bicycles'' is $\frac{5}{24}n^{n+1}$, which is
twice \eqref{eq:f2xx}.
\begin{figure}[htbp]
{\footnotesize
\begin{center}
\begin{tabular}{|c|cccccccc|} \hline
$n$ & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\\hline
$f(n,n+1)$ & 6 & 205 & 5700 & 156555 & 4483360 & 136368414 &
4432075200 & 154060613850\\
$b_n$ & 0 & 20 & 960 & 33600 & 1111040 & 37202760 & 1295884800 &
47478243120 \\ \hline
\end{tabular}
\end{center}
}
\caption{$f(n,n+1)$ and $b_n=\langle x^n \rangle F_2(x,x)$ for $n=4,5,\dots,11$}
\end{figure}
As will be seen in Remark \ref{rem:f3f4}, in principle, we can
compute $F_k(x,y)$ inductively.
For general $k$, we propose the following conjecture.
\begin{conj}\label{conj:generalfk}
For $k \ge 2$, the function $F_k(x,y)$ is expressed as $F_k = f_k(Z,W)$ with
\begin{equation}
f_k(z,w)
= \frac{w^2}{(1-w)^{3(k-1)}} \sum_{j=0}^{k-1} q_{k,j}(w)z^j,
\label{eq:generalfk}
\end{equation}
where $q_{k,j}(w)$ is a polynomial in $w$. Moreover,
\begin{equation}
\langle x^n \rangle F_k(x,x) \sim \frac{1}{2^{k-1}}
\rho_{k-1} n^{n+(3(k-1)-1)/2},
\end{equation}
where
\[
f(n,n+k) \sim \rho_k n^{n+(3k-1)/2} \quad (n \to \infty).
\]
The explicit value of $\rho_k$ is given by the recurrence
equation in \cite{W77}.
\end{conj}
Towards this conjecture, we have the following result, which would be interesting in its own right and gives more detailed information.
\begin{thm}\label{thm:expressionofF_k}
For $k \ge 2$, $F_k(x,y)$ is decomposed into the sum of rational functions of $T_x$ and $T_y$ over basic graphs as
\begin{align}\label{eq:expressionofF_k}
F_k(x,y)=\sum_{\mathcal{B} \in BG_k} J_{\mathcal{B}}(x,y),
\end{align}
where
\begin{equation}
J_{\mathcal{B}}(x,y)=\dfrac{T_x^{r_{{\rm sp}}+a_1+2a_2+b_2+c_2}T_y^{s_{{\rm sp}}
+2a_1+a_2+b_1+c_2}}{g_{\mathcal{B}}(1-T_xT_y)^{a_1+a_2+b_1+b_2+c_1+c_2}},
\label{eq:J_B}
\end{equation}
Here $BG_k$ is the set of basic graphs and $g_{\mathcal{B}},r_{{\rm
sp}},s_{{\rm sp}},a_i,b_j,c_l \ (i,j,l=1,2)$ are constants
depending only on the basic graph $\mathcal{B}$.
\end{thm}
The definitions of basic graphs and several constants are given in the proof of Theorem~\ref{thm:expressionofF_k}.
From this theorem, we conclude at least that $F_k(x,y)$ for $k \ge 2$ is a rational function of $T_x$ and $T_y$.
Moreover, as a corollary of Theorem~\ref{thm:expressionofF_k}, we have the following.
\begin{cor}\label{cor:intro}
For $k \ge 2$, there exists a polynomial
$\tilde{q}_k(x,y)$ which does not have the factor $1-xy$ such that
\[
F_k(x,y) = \frac{1}{(1-T_xT_y)^{3(k-1)}} \tilde{q}_k(T_x,T_y).
\]
\end{cor}
The paper is organized as follows. In Section 2, we give
recurrence equations for $f(r,s,q)$
and derive recurrence linear partial differential equations that the
generating functions $F_k(x,y)$ of $f(r,s,r+s-1+k)$
satisfy. In Section 3, we solve these equations by reducing
them to a system of ordinary differential equations and
obtain the explicit expressions of $F_1(x,y)$ and
$F_2(x,y)$. In Section 4, we obtain the asymptotic
behavior of the coefficients of $F_k(x,x)$ for $k=1,2$.
In Section 5, we will give proofs of Theorem~\ref{thm:expressionofF_k} and Corollary~\ref{cor:intro} and another proof of Proposition \ref{prop:Uintro} by a combinatorial argument.
\section{Recurrence equations}
\label{sec:recurrence}
Let $f(r,s,q)$ be the number of connected bipartite
$(r,s,q)$-graphs as defined in the introduction.
Since a connected graph on $r+s$ vertices has at least $r+s-1$ edges and
a simple bipartite $(r,s)$-graph has at most $rs$ edges,
it is clear that
\begin{equation}
f(r,s,q) =0 \quad \text{if $q< r+s-1$ or $q > rs$}.
\label{eq:fequalto0}
\end{equation}
As mentioned in \eqref{eq:spanning_trees}, $f(r,s,r+s-1) = r^{s-1} s^{r-1}$.
Here we understand $0^a = \delta_{0,a}$ as Kronecker's
delta. For example, $f(1,0,0)=f(0,1,0)=1$ and $f(0,0,-1)=0$.
\begin{lem}\label{lem:recurrence1}
For $(r,s) \not= (0,0)$ and $q=-1,0,1,\dots$, we have the following recurrence
equations:
\begin{equation}
(q+1) f(r,s,q+1) = (rs -q) f(r,s,q) + Q(r,s,q),
\label{eq:recurrence1}
\end{equation}
where
\begin{align}
Q(r,s,q)
&=
\frac{1}{2} \sum_{r_1=0}^r \sum_{s_1=0}^s \sum_{t=0}^q
{r \choose r_1} {s \choose s_1}
\{(r-r_1)s_1 + r_1(s-s_1)\} \nonumber \\
&\quad \times
f(r_1,s_1,t) f(r-r_1,s-s_1,q-t)
\label{eq:recurrence1-2}
\end{align}
and $Q(r,s,-1)=0$.
\end{lem}
\begin{proof}
Here we give a sketch of the proof.
Let $G= (V_1,V_2,E)$ be an $(r,s)$-bipartite graph with $q$ edges,
to which we add an edge so as to obtain a connected $(r,s)$-bipartite
graph with $q+1$ edges.
There are two cases: (i) $G$ itself is connected and (ii) $G$
consists of two connected bipartite components.
In the case (i), we may add any one of the $rs-q$ unused edges joining
$V_1$ and $V_2$, which yields the first term on the right-hand side of \eqref{eq:recurrence1}.
In the case (ii), writing $V_j = V_{j,1} \sqcup V_{j,2} \ (j=1,2)$
for the bipartitions induced by the two components, an added edge
connects the two components exactly when it joins
$V_{1,1}$ and $V_{2,2}$, or
$V_{1,2}$ and $V_{2,1}$.
This yields the factor $r_1(s-s_1)+(r-r_1)s_1$ in
\eqref{eq:recurrence1-2}, and the factor $1/2$ there compensates for the
double count of the ordered decompositions.
\end{proof}
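The recurrence of Lemma~\ref{lem:recurrence1} can also be tested numerically against a brute-force enumeration of connected bipartite $(r,s,q)$-graphs; the following Python sketch verifies it for all admissible $q$ when $r,s \le 3$:

```python
from functools import lru_cache
from itertools import combinations, product
from math import comb

@lru_cache(maxsize=None)
def f(r, s, q):
    """f(r,s,q): connected bipartite (r,s,q)-graphs, by enumeration."""
    if r + s == 1:
        return 1 if q == 0 else 0   # the one-vertex graph
    if q < r + s - 1 or q > r * s:
        return 0
    n = r + s
    edges = [(i, r + j) for i, j in product(range(r), range(s))]
    count = 0
    for E in combinations(edges, q):
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in E:
            parent[find(u)] = find(v)
        if len({find(v) for v in range(n)}) == 1:
            count += 1
    return count

def Q(r, s, q):
    # the convolution term of the lemma; the sum is even, so // is exact
    return sum(comb(r, r1) * comb(s, s1)
               * ((r - r1) * s1 + r1 * (s - s1))
               * f(r1, s1, t) * f(r - r1, s - s1, q - t)
               for r1 in range(r + 1)
               for s1 in range(s + 1)
               for t in range(q + 1)) // 2

for r, s in [(2, 2), (2, 3), (3, 3)]:
    for q in range(r + s - 2, r * s):
        assert (q + 1) * f(r, s, q + 1) == (r * s - q) * f(r, s, q) + Q(r, s, q)
print("recurrence verified for small (r,s)")
```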
From Lemma~\ref{lem:recurrence1},
we have the following recurrence linear partial differential
equations for generating functions $\{F_k\}_{k=0,1,\dots}$ defined by \eqref{eq:exp_generating_fun}.
For the sake of convenience, we also consider $F_{-1}$, which
is equal to $0$ from \eqref{eq:fequalto0}.
\begin{prop}\label{prop:recurrence}
For $k=-1,0,1,2,\dots$,
\begin{align}
\lefteqn{(D_x+D_y+k)F_{k+1}} \nonumber \\
&= (D_xD_y- D_x-D_y+1-k)F_k + \sum_{l=0}^{k+1}
D_x F_l \cdot D_y F_{k+1-l},
\label{eq:PDE}
\end{align}
where $D_x = x \partial_x$ and $D_y = y \partial_y$.
\end{prop}
\begin{proof}
From Lemma~\ref{lem:recurrence1}, we have
\begin{align}
\lefteqn{(r+s+k) f(r,s,r+s+k)} \nonumber\\
&= (rs -r-s+1-k) f(r,s,r+s-1+k) + Q(r,s,r+s-1+k),
\label{eq:recurrence1k}
\end{align}
where
\begin{align}
Q(r,s,r+s-1+k)
&=
\frac{1}{2} \sum_{r_1=0}^r
\sum_{s_1=0}^s \sum_{t=r_1+s_1-1}^{r_1+s_1+k}
{r \choose r_1} {s \choose s_1}
\{(r-r_1)s_1 + r_1(s-s_1)\} \nonumber \\
&\quad \times
f(r_1,s_1,t) f(r-r_1,s-s_1,r+s-1+k-t).
\end{align}
Here we used the fact that
$f(r_1,s_1,t)f(r-r_1,s-s_1,r+s-1+k-t)=0$
unless $t \ge r_1+s_1-1$ and $r+s-1+k-t \ge
(r-r_1)+(s-s_1)-1$, i.e., $r_1+s_1-1 \le t \le r_1+s_1+k$.
Multiplying both sides of \eqref{eq:recurrence1k} by $x^r y^s/(r!s!)$ and
summing over $r,s \ge 0$, we see that
\begin{align*}
(D_x+D_y+k)F_{k+1}
&= (D_xD_y- D_x-D_y+1-k)F_k \\
&\quad + \frac{1}{2} \sum_{l=0}^{k+1}
\{D_x F_l \cdot D_y F_{k+1-l}
+ D_y F_l \cdot D_x F_{k+1-l} \} \\
&= (D_xD_y- D_x-D_y+1-k)F_k + \sum_{l=0}^{k+1}
D_x F_l \cdot D_y F_{k+1-l}.
\end{align*}
\end{proof}
In what follows, we write $T:=F_0$ and use the symbols $T_x,
T_y, Z, W$ in \eqref{eq:TxTyZW}.
We think of $T$ as a known function below.
These functions satisfy several useful identities.
First let us consider the case $k=-1$ in \eqref{eq:PDE}. Then we have
\begin{align*}
(D_x+D_y-1)F_0 = D_x F_0 \cdot D_y F_0,
\end{align*}
which is equivalent to
\begin{equation}
T_x T_y = T_x+T_y -T.
\label{eq:T-formula}
\end{equation}
\begin{rem}
As in the above, in Sections 2 and 3, we always use the
subscript $x,y$, etc. for the differentiation by
Euler operators $D_x=x\partial_x, D_y = y\partial_y$,
etc., but not the usual partial derivative $\partial_x, \partial_y$, etc.
\end{rem}
For $k=0,1,2,\dots$, from \eqref{eq:PDE}, we have the following linear PDE
\begin{equation}
\mathcal{L}_k F_{k+1}
= (D_xD_y- D_x-D_y+1-k)F_k + \sum_{l=1}^{k}
D_x F_l \cdot D_y F_{k+1-l},
\label{eq:PDE2}
\end{equation}
where
\begin{equation}
\mathcal{L}_k := (1-T_y)D_x+(1-T_x)D_y+k.
\label{eq:Lk}
\end{equation}
Therefore, in principle, we can solve the
equation \eqref{eq:PDE2} recursively and
obtain $F_k$ for $k=1,2,\dots$ in terms of the known function $T$.
Before solving these equations, we observe several algebraic relations for $T_x$'s.
\begin{lem} The following identities hold.
\begin{align}
T_{xx} &= T_x (T_{xy}+1) \label{eq:31}\\
T_{xy} &= T_{yx} = T_x T_{yy} = T_y T_{xx} \label{eq:32}\\
T_{yy} &= T_y (T_{xy}+1). \label{eq:33}
\end{align}
Furthermore,
\begin{equation}
T_{xy} = \frac{T_xT_y}{1-T_xT_y}.
\label{eq:34}
\end{equation}
\end{lem}
\begin{proof}
It is known that two functions $T_x$ and $T_y$ satisfy the
following functional equations (cf. \cite[Section 3]{Ku06}):
\begin{equation}
T_x = x e^{T_y}, \quad
T_y = y e^{T_x}.
\label{eq:0}
\end{equation}
Differentiating both sides of \eqref{eq:0} yields the identities \eqref{eq:31},
\eqref{eq:32}, and \eqref{eq:33}.
Plugging \eqref{eq:31} into \eqref{eq:32} yields \eqref{eq:34}.
\end{proof}
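The functional equations \eqref{eq:0} determine $T_x$ and $T_y$ as formal power series, and they can be solved by fixed-point iteration on truncated polynomials. The following SymPy sketch (ours, for illustration) recovers the coefficients $r^s s^{r-1}/(r!\,s!)$ of $T_x$:

```python
import sympy as sp

x, y = sp.symbols('x y')
N = 6  # truncation: keep total degree <= N

def trunc(p):
    """Truncate a polynomial in x, y at total degree N."""
    poly = sp.Poly(sp.expand(p), x, y)
    return sum(c * x**i * y**j for (i, j), c in poly.terms() if i + j <= N)

def exp_trunc(p):
    """exp(p) truncated at total degree N (p has no constant term)."""
    term, total = sp.Integer(1), sp.Integer(1)
    for k in range(1, N + 1):
        term = trunc(term * p / k)
        total += term
    return sp.expand(total)

# solve T_x = x e^{T_y}, T_y = y e^{T_x} by fixed-point iteration
tx, ty = x, y
for _ in range(N):
    tx, ty = trunc(x * exp_trunc(ty)), trunc(y * exp_trunc(tx))

# the truncated pair is a fixed point of the functional equations
assert sp.expand(tx - trunc(x * exp_trunc(ty))) == 0
assert sp.expand(ty - trunc(y * exp_trunc(tx))) == 0

# coefficients of T_x agree with r^s s^(r-1) / (r! s!)
for r in range(1, N):
    for s in range(1, N - r + 1):
        expected = sp.Rational(r**s * s**(r - 1),
                               sp.factorial(r) * sp.factorial(s))
        assert tx.coeff(x, r).coeff(y, s) == expected
print("functional equations and coefficients verified")
```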
By using the notations \eqref{eq:TxTyZW},
we can rewrite \eqref{eq:T-formula} and \eqref{eq:34} as
\begin{equation}
T = Z-W
\label{eq:T}
\end{equation}
and
\begin{equation}
T_{xy} = \frac{W}{1-W},
\label{eq:Txyw}
\end{equation}
respectively.
Functions of $Z$ and $W$ are well-behaved under the action of $\mathcal{L}_k$.
\begin{lem}\label{lem:L0}
Suppose $F(x,y)$ and $G(x,y)$ admit differentiable
functions $f(z)$ and $g(w)$ such that
$F(x,y) = f(Z(x,y))$ and $G(x,y) = g(W(x,y))$, respectively.
Then,
\begin{align}
\mathcal{L}_0 F &= (D_z f)(Z), \label{eq:L0f}\\
\mathcal{L}_0 G &= 2 (D_w g)(W), \label{eq:L0g}
\end{align}
where $(D_u f)(u) = u f'(u)$.
Moreover, if $H = h(Z,W)$ for a differentiable function $h(z,w)$, then
\begin{equation}
\mathcal{L}_0 H = (D_z h)(Z,W)
+ 2(D_w h)(Z,W).
\label{eq:L0-2}
\end{equation}
\end{lem}
\begin{proof}
From \eqref{eq:31}, we have
\begin{align*}
D_x Z &= T_{xx} + T_{yx}
= T_x + (T_x + 1) T_{xy}, \\
D_y Z &= T_{xy} + T_{yy} = T_y + (T_y + 1) T_{xy}.
\end{align*}
From \eqref{eq:34}, we see that
\begin{align*}
\mathcal{L}_0 Z
&= (1-T_y)\{T_x + (T_x + 1) T_{xy}\}
+ (1-T_x)\{T_y + (T_y + 1) T_{xy}\} \\
&= Z - 2W +
\{(1-T_y)(T_x + 1) + (1-T_x) (T_y + 1)\} T_{xy} \\
&= Z - 2W + 2(1-W) T_{xy} = Z,
\end{align*}
where we used $(1-W)T_{xy} = W$, i.e., \eqref{eq:34}, in the last step.
In general, since $\mathcal{L}_0$ is a linear operator, we see that
\[
\mathcal{L}_0 f(Z)
= f'(Z) \mathcal{L}_0 Z
= Z f'(Z)
= (D_z f)(Z).
\]
We note that from the definition of $\mathcal{L}_0$,
\[
\mathcal{L}_0 T = (1-T_y) T_x + (1-T_x) T_y = Z - 2W.
\]
Since $W = Z - T$ from \eqref{eq:T}, we have
\[
\mathcal{L}_0 W = \mathcal{L}_0 Z - \mathcal{L}_0 T = 2W.
\]
Therefore,
\[
\mathcal{L}_0 g(W)
= g'(W) \mathcal{L}_0 W
= 2W g'(W)
= 2 (D_w g)(W)
\]
For general $h(Z,W)$, we obtain \eqref{eq:L0-2} similarly.
This completes the proof.
\end{proof}
From this formula, we can reduce the analysis of $F(x,y) =
h(Z(x,y), W(x,y))$ to that of $h(z,w)$ in the two variables $z$
and $w$.
\section{Explicit expressions of generating functions}
In this section, we solve the PDE \eqref{eq:PDE2} to obtain
the explicit expressions of the generating functions $F_1$ and
$F_2$. The algebraic relations of $Z$, $W$ and their derivatives,
which were derived in the previous section, play an essential role in the proofs.
\subsection{For $F_1$: unicycles}\label{sec:F1}
For unicycles, we will solve \eqref{eq:PDE2} with $k=0$, i.e.,
\begin{equation}
\mathcal{L}_0 F_1
= (D_xD_y- D_x-D_y+1) F_0.
\label{eq:PDE2-1}
\end{equation}
By using $T$ and its derivatives,
we can rewrite
\eqref{eq:PDE2-1} as
\begin{equation}
\mathcal{L}_0 F_1
= T_{xy} - T_x - T_y + T.
\label{eq:PDE2-1-2}
\end{equation}
The right-hand side is a function of $W$ and is written
\[
T_{xy} - T_x - T_y + T = \frac{W}{1-W} - W,
\]
from which together with \eqref{eq:L0g} we see that $F_1$ is also a function of
$W$, and we obtain the following.
\begin{proof}[Proof of Proposition~\ref{prop:Uintro}]
Suppose there exists a function $f_1=f_1(w)$ such that $F_1 = f_1(W)$.
By definition, $f_1$ does not have a constant term, i.e., $f_1(0)=0$.
Since $\mathcal{L}_0 F_1 = 2 (Df_1)$ by
\eqref{eq:L0g}, \eqref{eq:PDE2-1-2} can be
expressed as
\[
2(Df_1)(w) = \frac{w}{1-w} - w,
\]
or equivalently,
\[
f_1'(w) = \frac{1}{2} \left(\frac{1}{1-w} - 1 \right).
\]
From this differential equation with $f_1(0)=0$, we obtain
\[
f_1(w) = -\frac{1}{2}\big( \log(1-w) + w \big)
\]
and thus we obtain the assertion.
\end{proof}
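Proposition~\ref{prop:Uintro} can be double-checked by expanding $-\frac{1}{2}\big(\log(1-W)+W\big)$ as a power series: the diagonal coefficients reproduce $u_4=6$, $u_5=120$, $u_6=2280$. A SymPy sketch (ours, for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')
N = 6

def trunc(p):
    poly = sp.Poly(sp.expand(p), x, y)
    return sum(c * x**i * y**j for (i, j), c in poly.terms() if i + j <= N)

def exp_trunc(p):
    term, total = sp.Integer(1), sp.Integer(1)
    for k in range(1, N + 1):
        term = trunc(term * p / k)
        total += term
    return sp.expand(total)

# series of T_x, T_y from T_x = x e^{T_y}, T_y = y e^{T_x}
tx, ty = x, y
for _ in range(N):
    tx, ty = trunc(x * exp_trunc(ty)), trunc(y * exp_trunc(tx))

W = trunc(tx * ty)
Wk, log1mW = sp.Integer(1), sp.Integer(0)
for k in range(1, N + 1):
    Wk = trunc(Wk * W)
    log1mW -= Wk / k            # log(1 - W) = -sum W^k / k

F1 = trunc(-(log1mW + W) / 2)

diag = sp.expand(F1.subs(y, x))  # F_1(x,x)
u = [sp.factorial(n) * diag.coeff(x, n) for n in range(N + 1)]
assert u[4] == 6 and u[5] == 120 and u[6] == 2280
print("F_1 diagonal coefficients reproduced")
```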
\subsection{For $F_2$: bicycles}\label{sec:F2}
We want to solve \eqref{eq:PDE2} with $k=1$, i.e.,
\begin{equation}
\mathcal{L}_1 F_2
= (D_xD_y- D_x-D_y)F_1 + D_x F_1 \cdot D_y F_1
\label{eq:PDE2-2}
\end{equation}
where $\mathcal{L}_1 = \mathcal{L}_0 + 1$.
Here $F_1$ has already been given in
Proposition~\ref{prop:Uintro} and is considered as a known function.
We will solve this equation to prove Theorem~\ref{thm:W2intro}.
Before proceeding to the proof, we prepare some lemmas.
\begin{lem}
\begin{align*}
Z_x = \frac{W+T_x}{1-W}, \quad Z_y = \frac{W+T_y}{1-W},
\quad Z_{xy} = \frac{2+Z}{(1-W)^2} T_{xy}.
\end{align*}
Moreover,
\begin{equation}
Z_x + Z_y = \frac{Z+2W}{1-W}, \quad
Z_x Z_y = \frac{W(Z+W+1)}{(1-W)^2}
\label{eq:zxzy}
\end{equation}
\end{lem}
The proof is a straightforward computation using \eqref{eq:31}--\eqref{eq:34}, and we omit it.
\begin{lem}
\begin{equation}
W_x = (1+T_x)T_{xy}, \quad
W_y = (1+T_y)T_{xy}
\label{eq:wxwy}
\end{equation}
and
\begin{equation}
W_{xy} = T_{xy}^2 + \frac{1+Z+W}{(1-W)^2} T_{xy}
\label{eq:wxy}
\end{equation}
Moreover,
\begin{equation}
W_x + W_y = \frac{W(Z+2)}{1-W}, \quad
W_x W_y = \frac{W^2(Z+W+1)}{(1-W)^2}
\label{eq:wxwy2}
\end{equation}
and
\begin{equation}
Z_x W_y + Z_y W_x = \frac{W(ZW+Z+4W)}{(1-W)^2}.
\label{eq:zxwy}
\end{equation}
\end{lem}
\begin{proof}
First it follows from \eqref{eq:32} that
\[
W_x = (T_x T_y)_{x} = T_{xx} T_y + T_x T_{xy}
= (1+T_x)T_{xy}.
\]
By symmetry, we have the second equation in
\eqref{eq:wxwy}.
Next it follows from \eqref{eq:34} that
\begin{equation}
(T_{xy})_y = \left(\frac{W}{1-W}\right)_y =
\frac{1}{(1-W)^2} W_y
= \frac{1}{(1-W)^2} (1+T_y) T_{xy}
\label{eq:txyy}
\end{equation}
Then,
\[
W_{xy} = ((1+T_x)T_{xy})_y = T_{xy}^2 + (T_x+1) (T_{xy})_y
= T_{xy}^2 + (1+T_x)(1+T_y)
\frac{1}{(1-W)^2} T_{xy}.
\]
The remaining identities \eqref{eq:wxwy2} and \eqref{eq:zxwy} follow from \eqref{eq:zxzy}, \eqref{eq:wxwy} and \eqref{eq:34} by a direct computation.
\end{proof}
We are now in a position to prove Theorem~\ref{thm:W2intro}.
\begin{proof}[Proof of Theorem~\ref{thm:W2intro}]
Let $G=-2F_1$, i.e.,
\[
G(W) = \log (1-W) + W.
\]
First we observe that
\[
G_x = \left(\frac{-1}{1-W}+1\right)W_x
= \frac{-W}{1-W}W_x
= -T_{xy}W_x.
\]
Similarly, $G_y = -T_{xy} W_y$. Hence,
\[
G_x + G_y = -T_{xy} (W_x + W_y) = -(2+Z)T_{xy}^2.
\]
Next it follows from \eqref{eq:wxwy},
\eqref{eq:wxy} and \eqref{eq:txyy} that
\begin{align*}
G_{xy}
&= -(T_{xy}W_x)_y \\
&= -(T_{xy})_y W_x - T_{xy} W_{xy} \\
&= -\frac{1}{(1-W)^2} (1+T_y) T_{xy} \cdot (1+T_x)T_{xy}
- T_{xy} \left(T_{xy}^2 + (1+T_x)(1+T_y)
\frac{1}{(1-W)^2} T_{xy}\right)
\\
&=
-T_{xy}^2 \left\{
\frac{2}{(1-W)^2}(1+T_x)(1+T_y)+ T_{xy}
\right\} \\
&=
-T_{xy}^2 \left\{
\frac{2}{(1-W)^2}(1+Z+W)+ T_{xy}
\right\}.
\end{align*}
Lastly, we have
\begin{align*}
G_x G_y
&= T_{xy}^2 W_x W_y
= T_{xy}^4 (1+T_x)(1+T_y)
\end{align*}
Putting all of the above together in \eqref{eq:PDE2-2}, we have
\begin{align}
4\mathcal{L}_1 F_2
&= 2 (G_x+G_y) + G_x G_y - 2 G_{xy} \nonumber\\
&= -2(Z+2)T_{xy}^2 + T_{xy}^4 (1+T_x)(1+T_y) + 2
T_{xy}^2 \left\{
\frac{2}{(1-W)^2} (1+T_x)(1+T_y)
+ T_{xy}
\right\} \nonumber\\
&= \frac{T_{xy}^2}{(1-W)^2}
\left\{
-2(Z+2)(1-W)^2 + W^2 (1+Z+W) +
4 (1+Z+W) + 2W(1-W)
\right\} \nonumber\\
&= \frac{W^2}{(1-W)^4}
\big\{
(-W^2+4 W+2) Z + (W^2-5 W+14) W
\big\}.
\label{eq:forF2}
\end{align}
Suppose there exist functions $a_0(w)$ and $a_1(w)$
such that $F_2 = f_2(Z,W)$ with $f_2(z,w) := a_1(w)z+a_0(w)$.
Since $\mathcal{L}_1 = \mathcal{L}_0 + 1$, from \eqref{eq:L0-2},
the equation \eqref{eq:forF2} can be
expressed as
\begin{equation}
4(D_z f_2 + 2 D_w f_2 + f_2)
=
\frac{w^2}{(1-w)^4}
\{
(-w^2+4 w+2) z + (w^2-5 w+14) w
\}.
\label{eq:ab1}
\end{equation}
On the other hand, since $f_2(z,w) = a_1(w)z+a_0(w)$, we have
\begin{align}
D_z f_2 + 2 D_w f_2 + f_2
&= a_1(w)z + 2 \{(Da_1)(w) z + (Da_0)(w)\} + a_1(w)z + a_0(w)
\nonumber \\
&= \{2a_1(w)+2 (Da_1)(w)\} z + \{2 (Da_0)(w) + a_0(w)\}.
\label{eq:ab2}
\end{align}
Comparing \eqref{eq:ab1} with \eqref{eq:ab2} yields
\[
a_0(w) + 2(Da_0)(w) =
\frac{w^3}{4(1-w)^4}
(w^2-5 w+14)
\]
and
\[
2a_1(w)+2 (Da_1)(w) =
\frac{w^2}{4(1-w)^4}(-w^2+4 w+2).
\]
On the other hand, by the definition of $F_2(x,y)$, the function $f_2(z,w)$ contains no term of the form $z^i$, $i=0,1,2,\dots$: if such a term appeared in $f_2(z,w)$, then the terms $x^i$ and $y^i$ would appear in $F_2(x,y)$, contradicting the fact that $f(i,0,i+1)=f(0,i,i+1)=0$. This implies that $a_0(0) = a_1(0)=0$.
Then, we can easily solve the above differential equations with initial conditions $a_0(0) = a_1(0)=0$ to obtain
\[
a_0(w) = \frac{w^3(6-w)}{12(1-w)^3}, \quad
a_1(w) = \frac{w^2(2+3w)}{24(1-w)^3}.
\]
Therefore,
\[
f_2(z,w) = \frac{w^2(2+3w)}{24(1-w)^3} z +
\frac{w^3(6-w)}{12(1-w)^3}.
\]
This completes the proof.
\end{proof}
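As a sanity check, both differential equations for $a_0$ and $a_1$ can be verified by exact polynomial arithmetic. The following Python sketch (an illustration, not part of the proof) assumes that $D$ denotes the Euler operator $w\,\frac{d}{dw}$, as the computation above suggests; it clears denominators and compares polynomial coefficients over the rationals.

```python
from fractions import Fraction as Fr

# polynomials in w as coefficient lists, index = degree
def pmul(p, q):
    r = [Fr(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + [Fr(0)] * (n - len(p)), q + [Fr(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pder(p):  # d/dw
    return [Fr(i) * c for i, c in enumerate(p)][1:] or [Fr(0)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def ppow(p, k):
    r = [Fr(1)]
    for _ in range(k):
        r = pmul(r, p)
    return r

omw = [Fr(1), Fr(-1)]    # 1 - w
wvar = [Fr(0), Fr(1)]    # w

def check(P, Q, c, rhs_num, rhs_den):
    # verifies  c*a + 2*(D a) == rhs_num/rhs_den  for a = P/Q and D = w d/dw,
    # by cross-multiplying:  (c*P*Q + 2w(P'Q - PQ')) * rhs_den == rhs_num * Q^2
    lhs_num = padd(pmul([Fr(c)], pmul(P, Q)),
                   pmul([Fr(2)], pmul(wvar, padd(pmul(pder(P), Q),
                                                 pmul([Fr(-1)], pmul(P, pder(Q)))))))
    return trim(pmul(lhs_num, rhs_den)) == trim(pmul(rhs_num, pmul(Q, Q)))

P0 = [Fr(0), Fr(0), Fr(0), Fr(6), Fr(-1)]   # w^3 (6 - w)
P1 = [Fr(0), Fr(0), Fr(2), Fr(3)]           # w^2 (2 + 3w)
Q0 = [12 * c for c in ppow(omw, 3)]         # 12 (1-w)^3
Q1 = [24 * c for c in ppow(omw, 3)]         # 24 (1-w)^3
den = [4 * c for c in ppow(omw, 4)]         # 4 (1-w)^4
rhs0 = pmul([Fr(0)] * 3 + [Fr(1)], [Fr(14), Fr(-5), Fr(1)])  # w^3 (w^2 - 5w + 14)
rhs1 = pmul([Fr(0)] * 2 + [Fr(1)], [Fr(2), Fr(4), Fr(-1)])   # w^2 (2 + 4w - w^2)

print(check(P0, Q0, 1, rhs0, den))  # a_0 + 2 D a_0
print(check(P1, Q1, 2, rhs1, den))  # 2 a_1 + 2 D a_1
```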
\begin{rem}\label{rem:f3f4}
We can continue the above computations for $F_k(x,y) = f_k(z,w)$.
Here we give $f_3(z,w)$ and $f_4(z,w)$ for reference:
\begin{align*}
f_3(z,w)
&= \frac{w^3(5+41w-23w^2+8w^3-w^4)}{24(1-w)^6} + \frac{w^3(32+34w-9w^2+3w^3)}{48(1-w)^6} z \\
&\quad + \frac{w^2(1+8w+6w^2)}{48(1-w)^6} z^2
\end{align*}
and
\begin{align*}
f_4(z,w)
&=\frac{\left(-76 w^7+809 w^6-3746 w^5+9889 w^4-15356
w^3+22820 w^2+7680 w+80\right) w^3}{2880 (1-w)^9} \\
&+\frac{\left(230 w^6-1425 w^5+5568 w^4-6617
w^3+30468 w^2+35988 w+2088\right) w^3}{5760
(1-w)^9} z \\
&+\frac{\left(61 w^4+64 w^3+1186 w^2+1692 w+312\right) w^3
}{576 (1-w)^9} z^2 \\
&+\frac{\left(254 w^4+1919
w^3+2624 w^2+704 w+24\right) w^2}{5760 (1-w)^9} z^3.
\end{align*}
These expressions lead to \eqref{eq:generalfk} in Conjecture~\ref{conj:generalfk}.
\end{rem}
\section{Asymptotic behaviors of the coefficients}
\label{sec:asymptotics}
\subsection{Asymptotic behavior of the coefficients of $F_1(x,x)$}
We use the notation \eqref{eq:braket}.
We recall the convolution of exponential generating
functions
\begin{equation}
\langle x^n \rangle A(x)B(x) = \sum_{k=0}^n {n \choose k} a_k
b_{n-k}
\label{eq:convolution}
\end{equation}
when $\langle x^n \rangle A(x) = a_n$ and $\langle x^n \rangle B(x) =
b_n$.
For an exponential power series $C(x,y) = \sum_{r,s=0}^{\infty} c_{rs}
\frac{x^ry^s}{r!s!}$ of two variables, we use the notation
\[
\langle x^r y^s \rangle C(x,y) = c_{rs},
\]
and we note that the coefficients of the diagonal
$C(x,x)$ are given by
\[
\langle x^n \rangle C(x,x) = \sum_{r+s=n} {n \choose r}c_{rs}.
\]
In Section~\ref{sec:F1}, we derived the generating function
$F_1(x,y)$ for unicycles.
In this section, we focus on the coefficients of the
diagonal $F_1(x,x)$,
\[
u_n := \langle x^n \rangle F_1(x,x) =
\sum_{r+s=n} {n \choose r} f(r,s,r+s),
\]
which is the total number of complete
unicycles on $n$ vertices.
We will determine the asymptotic behavior of $u_n$ as $n \to
\infty$.
From Proposition~\ref{prop:Uintro}, we have
\begin{equation}
F_1(x,x)=\frac{1}{2}\sum_{k=2}^{\infty}\frac{W(x,x)^k}{k}.
\label{eq:coeffofUxx}
\end{equation}
First we consider the coefficients of the diagonal $W(x,x)$.
Since $W=T_x+T_y-T$ from \eqref{eq:T}, it is easy to see that
\[
W(x,y) = \sum_{r,s =1}^{\infty} \frac{w(r,s)}{r!s!}x^r y^s,
\]
where $w(r,s) = r^{s-1}s^{r-1}(r+s-1)$.
Hence, we have
\[
W(x,x) =\sum_{n=2}^{\infty}
\left(\sum_{r+s=n}\frac{w(r,s) n!}{r!s!}\right)\frac{x^n}{n!}
=: \sum_{n =2}^{\infty} w_n \frac{x^n}{n!},
\]
where
\begin{align}\label{w*1}
w_n & =\sum_{r+s=n}\frac{r^{s-1}s^{r-1}(r+s-1) n!}{r!s!}\nonumber\\
&=(n-1)\sum_{r=1}^{n-1}\binom{n}{r}r^{n-r-1}(n-r)^{r-1}.
\end{align}
The sum in \eqref{w*1} can be computed by the following
identity (cf. \cite{KP09}).
\begin{lem}\label{keylem1} For $n=2,3,\dots$,
\begin{align}\label{eq1}
\sum_{r=1}^{n-1}\binom{n}{r}r^{n-r-1}(n-r)^{r-1}=2n^{n-2}.
\end{align}
\end{lem}
\begin{proof}
Here we give a combinatorial proof of the identity.
Let $S_{n}$ be the set of labeled
spanning trees of $K_{n}$, and let $S_{n}^b$ be the set of
labeled spanning trees of complete bipartite graphs on
$n$ vertices in total. Also, let $S_{r,s}^b$ be the
set of labeled spanning trees of the complete bipartite
graph $K_{r,s}$. Then
\begin{align*}
S_{n}^b =
\bigsqcup_{1\le r \le n-1} S_{r,n-r}^b.
\end{align*}
For $(V_1\sqcup V_2, E_{r,n-r}) \in S_{r, n-r}^b$ with
$|V_1|=r$ and $|V_2|=n-r$,
we define a map $\phi: S_{n}^b \to S_{n}$ by
\begin{align*}
\phi((V_1\sqcup V_2, E_{r,n-r})):=(V, E_{r,n-r}),
\end{align*}
i.e., the map of forgetting partitions.
Since every spanning tree on $K_{n}$ is bipartite, $\phi$
is surjective. Moreover, $\phi$ is a two-to-one
mapping. Indeed, for $1\le r \le n-1$ and $(V_1\sqcup
V_2, E_{r,n-r}) \in S_{r, n-r}^b$, there exists a
unique spanning tree $(V_1'\sqcup V_2', E_{n-r,r}) \in
S_{n-r,r}^b$ such that $V_1'=V_2, V_2'=V_1$ and
$E_{n-r,r}=\{(i,j) \in V_1'\times V_2' : (j,i) \in
E_{r,n-r} \}$. Now we derive \eqref{eq1}.
For $1\le r \le n-1$, $|S_{r,n-r}^b|
=\binom{n}{r}f(r,n-r,n-1)=\binom{n}{r}r^{n-r-1}(n-r)^{r-1}$
by the choice of labeled $r$ vertices in $V_1$ and
\eqref{eq:spanning_trees}.
Hence
\begin{align*}
|S_{n}^b| = \sum_{r=1}^{n-1}\binom{n}{r}r^{n-r-1}(n-r)^{r-1}.
\end{align*}
On the other hand, $|S_{n}| = n^{n-2}$ by Cayley's formula.
Therefore, we conclude that \eqref{eq1} holds from the two-to-one correspondence of $\phi$.
\end{proof}
\begin{cor}\label{cor:wn}
For $n=1,2,\dots$,
\begin{equation}
w_n = \langle x^n \rangle W(x,x) = 2(n-1) n^{n-2}.
\label{eq:w1}
\end{equation}
\end{cor}
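Lemma~\ref{keylem1} and Corollary~\ref{cor:wn} are easy to confirm numerically for small $n$; a minimal Python check (illustrative only):

```python
from math import comb

def abel_sum(n):
    # left-hand side of \eqref{eq1}
    return sum(comb(n, r) * r**(n - r - 1) * (n - r)**(r - 1) for r in range(1, n))

for n in range(2, 13):
    assert abel_sum(n) == 2 * n**(n - 2)                        # Lemma keylem1
    assert (n - 1) * abel_sum(n) == 2 * (n - 1) * n**(n - 2)    # w_n in \eqref{eq:w1}
print("ok")
```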
Now we proceed to the powers of $W(x,x)$.
For $k=1,2,\dots$, we write
\[
w_n^{*k} := \langle x^n \rangle W(x,x)^k.
\]
In particular, $w_n^{*1} = w_n$ in Corollary~\ref{cor:wn}.
Note that the smallest degree of the terms in $W(x,x)$ is 2
and hence $w_n^{*k} = 0$ for $n=1,2,\dots,2k-1$.
From \eqref{eq:convolution}, $w_n^{*k}$ is the $k$-fold convolution of
$(w_n)_{n=2,3,\dots}$ and inductively defined by
\begin{equation}
w_n^{*(k+1)}
=\sum_{r=2k}^{n-2}\binom{n}{r}w_r^{*k} w_{n-r}.
\label{eq:repeated-conv}
\end{equation}
From \eqref{eq:coeffofUxx}, the coefficients $u_n$ of
$F_1(x,x)$ are given by
\begin{align}\label{coefu}
u_n
=\frac{1}{2}\sum_{2 \le k \le n/2} \frac{w_n^{*k}}{k}.
\end{align}
\begin{prop}\label{prop:wnconv}
For $k=1,2,\dots, \lfloor n/2 \rfloor$,
\begin{align}
w_n^{*k} = 2k \cdot (2k)! n^{n-2k-1} {n \choose 2k}.
\label{eq:conv}
\end{align}
\end{prop}
\begin{proof}
For fixed $n$, we prove the equation \eqref{eq:conv} by induction on $k$.
For $k=1$, \eqref{eq:conv} reads $w_n^{*1} = 2 \cdot 2!\, n^{n-3}\binom{n}{2} = 2(n-1)n^{n-2} = w_n$, which is \eqref{eq:w1}.
Suppose that \eqref{eq:conv} holds for $k$. Then, by \eqref{eq:w1} and
\eqref{eq:repeated-conv}, we have
\begin{align*}
w_n^{*(k+1)}
&= \sum_{r=2k}^{n-2} {n \choose r} w_r^{*k} w_{n-r} \\
&= \sum_{r=2k}^{n-2} \binom{n}{r} 2k \cdot (2k)!
r^{r-2k-1} {r \choose 2k} \cdot 2(n-r-1)(n-r)^{n-r-2}\\
&=4k\sum_{r=2k}^{n-2} \binom{n}{r} (r-1)\cdots (r-(2k-1))r^{r-1-(2k-1)}(n-r-1)(n-r)^{n-r-2}.
\end{align*}
Now we introduce a class of polynomials which appears in Abel's generalization of the binomial formula \cite[Section 1.5]{Riordan}:
\begin{align*}
A_n(x,y; p,q) :=\sum_{r=0}^n \binom{n}{r}(x+r)^{r+p}(y+n-r)^{n-r+q}.
\end{align*}
In particular, when $p=q=-1$, it is known \cite[p.23]{Riordan} that
\begin{equation}
A_n(x,y; -1,-1)= (x^{-1}+y^{-1})(x+y+n)^{n-1}.
\label{eq:Anxy}
\end{equation}
Multiplying both sides by $xy$ yields
\begin{align}\label{eqS}
(x+y)Q(x,y)
&= xy\sum_{r=0}^n \binom{n}{r}(x+r)^{r-1}(y+n-r)^{n-r-1}\nonumber\\
&= x(x+n)^{n-1}+y(y+n)^{n-1}+xyS(x,y),
\end{align}
where $Q(x,y):=(x+y+n)^{n-1}$ and
\begin{align*}
S(x,y) :=\sum_{r=1}^{n-1} \binom{n}{r}(x+r)^{r-1}(y+n-r)^{n-r-1}.
\end{align*}
By the generalized Leibniz rule,
for $p \in \mathbb{N}$, we have
\begin{align*}
\partial_x^p(xS(x,y))
&= p \partial_x^{p-1} S(x,y)
+ x\partial_x^p S(x,y),\\
\partial_x^p ((x+y)Q(x,y))
&= p\partial_x^{p-1} Q(x,y)+(x+y) \partial_x^pQ(x,y),
\end{align*}
which gives
\begin{align}
\partial_x^p \partial_y^2 (xyS(x,y)) \Big|_{x=y=0}
&= 2pS^{(p-1,1)}(0,0),
\label{eq:Sderivative}
\\
\partial_x^p \partial_y^2 ((x+y)Q(x,y)) \Big|_{x=y=0}
&= (p+2)Q^{(p-1,2)}(0,0),
\label{eq:Qderivative}
\end{align}
where $S^{(p,q)}(x,y):=\partial_x^p \partial_y^q S(x,y)$ and
$Q^{(p,q)}(x,y):= \partial_x^p \partial_y^q Q(x,y)$.
For $k=1,2,\dots, \lfloor n/2 \rfloor$,
differentiating both sides of \eqref{eqS} $2k$ times with respect to $x$ and twice with respect to $y$
and using \eqref{eq:Sderivative} and \eqref{eq:Qderivative} with $p=2k$ yield
\begin{align*}
\partial_x^{2k} \partial_y^2 ({\rm RHS\ of\
\eqref{eqS}})\Big|_{x=y=0}
&= \partial_x^{2k} \partial_y^2 (xyS(x,y))\Big|_{x=y=0}\\
&=4kS^{(2k-1,1)}(0,0)=w_n^{*(k+1)},\\
\partial_x^{2k} \partial_y^2 ({\rm LHS\ of\
\eqref{eqS}})\Big|_{x=y=0} &=(2k+2)Q^{(2k-1,2)}(0,0)\\
&=2(k+1) \cdot (n-1)\cdots (n-(2k+1))n^{n-1-(2k+1)}\\
&=2(k+1)\cdot (2(k+1))! n^{n-2(k+1)-1}\binom{n}{2(k+1)},
\end{align*}
which completes the proof of \eqref{eq:conv}.
\end{proof}
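The closed form \eqref{eq:conv} can also be checked directly against the defining convolution \eqref{eq:repeated-conv} for small $n$; a short exact-arithmetic sketch (Fraction handles the boundary case $n=2k$, where the exponent $n-2k-1$ equals $-1$):

```python
from fractions import Fraction
from math import comb, factorial

N = 14
w = {n: 2 * (n - 1) * n**(n - 2) for n in range(2, N + 1)}  # \eqref{eq:w1}

def conv(a, b):
    # binomial (exponential) convolution, cf. \eqref{eq:convolution}
    return {n: sum(comb(n, r) * a.get(r, 0) * b.get(n - r, 0) for r in range(n + 1))
            for n in range(N + 1)}

wk = dict(w)                                  # w^{*1}
for k in range(1, N // 2 + 1):
    for n in range(2 * k, N + 1):
        closed = 2 * k * factorial(2 * k) * Fraction(n)**(n - 2 * k - 1) * comb(n, 2 * k)
        assert wk.get(n, 0) == closed, (k, n)
    wk = conv(wk, w)                           # w^{*(k+1)}
print("ok")
```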
Now we derive the leading asymptotics of $u_n$ as $n \to \infty$.
\begin{proof}[Proof of Theorem~\ref{thm:asympofun}]
By \eqref{coefu} and \eqref{eq:conv}, we have
\begin{align*}
u_n
&= \sum_{2 \le k \le n/2} (2k)! n^{n-2k-1} {n \choose 2k}\\
&=n^{n-1} \sum_{2 \le k \le n/2}\dfrac{n !}{(n-2k)! n^{2k}}.
\end{align*}
The last summation is similar to the Ramanujan $Q$-function, so we treat it in the same way as in \cite[Section 4]{FS09}. Let $k_0=k_0(n)$ be an integer sequence with $k_0 \to \infty$ and $k_0=o(n^{2/3})$, and split the summation into two parts:
\begin{align*}
\sum_{2 \le k \le n/2}\dfrac{n !}{(n-2k)! n^{2k}}=\sum_{2 \le k \le k_0}\dfrac{n !}{(n-2k)! n^{2k}} + \sum_{k_0 < k \le n/2}\dfrac{n !}{(n-2k)! n^{2k}}.
\end{align*}
For $k=o(n^{2/3})$, by \cite[Theorem 4.4]{FS09} we have
\begin{align*}
\dfrac{n !}{(n-2k)! n^{2k}}=e^{-2k^2/n}\left(1+O\left(\frac{k}{n}\right)+O\left(\frac{k^3}{n^2}\right)\right).
\end{align*}
Because the terms in the summation are decreasing in $k$ and $e^{-2k^2/n}$ is exponentially small for $k>k_0$, the second summation is negligible. Therefore,
\begin{align*}
\sum_{2 \le k \le n/2}\dfrac{n !}{(n-2k)! n^{2k}}&=\sum_{2 \le k \le k_0}e^{-2k^2/n}\left(1+O\left(\frac{k}{n}\right)+O\left(\frac{k^3}{n^2}\right)\right) +o(1)\\
&=\sum_{2 \le k \le k_0}e^{-2k^2/n}+O(1).
\end{align*}
Again, since $e^{-2k^2/n}$ is exponentially small for $k>k_0$, we may extend the summation to $2\le k \le n/2$. Therefore, by the Euler--Maclaurin formula we have
\begin{align*}
\sum_{2 \le k \le n/2}e^{-2k^2/n}=\sqrt{n}\int_0^\infty e^{-2x^2}dx+O(1)=\sqrt{\dfrac{\pi}{8}}\sqrt{n}+O(1),
\end{align*}
which completes the proof.
\end{proof}
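The rate of convergence can be observed numerically by evaluating $S_n := \sum_{2 \le k \le n/2} \frac{n!}{(n-2k)!\,n^{2k}}$, so that $u_n = n^{n-1} S_n$, and comparing with $\sqrt{\pi n/8}$; a short floating-point sketch:

```python
import math

def S(n):
    # S_n = sum_{2 <= k <= n/2} n! / ((n-2k)! n^{2k}); the k-th term equals
    # prod_{j=0}^{2k-1} (1 - j/n), built up incrementally in floating point
    term, total = 1.0, 0.0
    for k in range(1, n // 2 + 1):
        term *= (1 - (2 * k - 2) / n) * (1 - (2 * k - 1) / n)
        if k >= 2:
            total += term
    return total

for n in (10**3, 10**4, 10**5):
    print(n, S(n) / math.sqrt(math.pi * n / 8))
```

For these values of $n$ the ratio increases toward $1$, consistent with the $O(1)$ correction term hidden in the asymptotics.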
\subsection{Asymptotic behavior of the coefficients of $F_2(x,x)$}
\label{subsec:F2}
We deal with the coefficients of the diagonal $F_2(x,x)$.
From \eqref{eq:Tintro},
we have
\begin{align*}
Z(x,y) = T_x + T_y
= \sum_{r,s=0}^{\infty} \frac{(r+s)r^{s-1}s^{r-1}}{r!s!}
x^r y^s.
\end{align*}
In particular, by Lemma \ref{keylem1} we have
\begin{align}\label{eq:diagonalZ}
Z(x,x)
&= \sum_{r,s=0}^{\infty}
\frac{(r+s)r^{s-1}s^{r-1}}{r!s!} x^{r+s}\nonumber\\
&= \sum_{n=1}^{\infty} n
\left(\sum_{r+s=n} \frac{n! r^{s-1}s^{r-1}}{r!s!}\right)
\frac{x^n}{n!}
= \sum_{n=1}^{\infty} 2n^{n-1} \frac{x^n}{n!}.
\end{align}
Let $Y(x)$ be the exponential generating function for the number of labeled rooted spanning trees in $K_n$:
\begin{align}\label{eq:egftree}
Y(x):=\sum_{n=1}^{\infty}n^{n-1}\frac{x^n}{n!}.
\end{align}
First we see the formula for the power of $Y(x)$.
\begin{lem}\label{lem:polyegftree}
For $k=1,2,\dots$,
\begin{align}\label{eq:polyegftree}
Y(x)^k=\sum_{n=1}^{\infty}k(n-1)(n-2)\cdots(n-(k-1))n^{n-k}\frac{x^n}{n!}.
\end{align}
\end{lem}
\begin{proof}
The proof is by induction on $k$; for $k=1$, \eqref{eq:polyegftree} is just the definition \eqref{eq:egftree}. Assume that \eqref{eq:polyegftree} holds for $k$. Then,
\begin{align*}
Y(x)^{k+1}&=\left(\sum_{n=1}^{\infty}k(n-1)(n-2)\cdots(n-(k-1))n^{n-k}\frac{x^n}{n!}\right)\left(\sum_{n=1}^{\infty}n^{n-1}\frac{x^n}{n!}\right)\\
&=kx^{k+1}\left(\sum_{n=0}^{\infty}(n+k)^{n-1}\frac{x^n}{n!}\right)\left(\sum_{n=0}^{\infty}(n+1)^{n-1}\frac{x^n}{n!}\right)\\
&=kx^{k+1}\sum_{n=0}^{\infty}\left(\sum_{r=0}^n\binom{n}{r}(k+r)^{r-1}(1+n-r)^{n-r-1}\right)\frac{x^n}{n!}.
\end{align*}
Note that by \eqref{eq:Anxy},
\begin{align*}
\sum_{r=0}^n\binom{n}{r}(k+r)^{r-1}(1+n-r)^{n-r-1}&=A_{n}(k,1;-1,-1)\\
&=\Bigl(\frac{1}{k}+1\Bigr)(k+1+n)^{n-1},
\end{align*}
so that
\begin{align*}
Y(x)^{k+1}&=x^{k+1}\sum_{n=0}^{\infty}(k+1)(n+k+1)^{n-1}\frac{x^n}{n!}\\
&=(k+1)\sum_{n=0}^{\infty}(n+1)(n+2)\cdots(n+k)(n+k+1)^{n}\frac{x^{n+k+1}}{(n+k+1)!}\\
&=(k+1)\sum_{n=k+1}^{\infty}(n-1)(n-2)\cdots(n-k)n^{n-(k+1)}\frac{x^n}{n!}.
\end{align*}
Hence, \eqref{eq:polyegftree} holds for $k+1$, and by induction this finishes the proof.
\end{proof}
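Formula \eqref{eq:polyegftree} can likewise be confirmed by exact truncated series multiplication; a brief Python sketch with rational coefficients:

```python
from fractions import Fraction as Fr
from math import factorial

N = 12
# ordinary (non-exponential) coefficients n^{n-1}/n! of Y(x)
Y = [Fr(0)] + [Fr(n**(n - 1), factorial(n)) for n in range(1, N + 1)]

def mul(a, b):
    # truncated Cauchy product of two power series
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N + 1)]

Yk = Y[:]                       # Y^k, starting with k = 1
for k in range(1, 6):
    for n in range(N + 1):
        if n < k:
            assert Yk[n] == 0   # Y^k starts at degree k
        else:
            ff = k
            for j in range(1, k):
                ff *= n - j     # k (n-1)(n-2)...(n-(k-1))
            assert Yk[n] == Fr(ff * n**(n - k), factorial(n)), (k, n)
    Yk = mul(Yk, Y)
print("ok")
```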
Lemma \ref{lem:polyegftree} gives for $a_k \in \mathbb{R}, k=1,2,\dots,$
\begin{align}\label{seriesofY}
\sum_{k=1}^{\infty}a_kY(x)^k=\sum_{n=1}^{\infty}n^{n-1}\left(\sum_{k=1}^{\infty}a_kk\frac{(n-1)(n-2)\cdots(n-(k-1))}{n^{k-1}}\right)\frac{x^n}{n!},
\end{align}
where, for each $n$, the inner summation with respect to $k$ is finite since the product $(n-1)(n-2)\cdots(n-(k-1))$ vanishes for $k > n$.
From Corollary \ref{cor:wn}, \eqref{eq:diagonalZ} and \eqref{eq:polyegftree},
\begin{equation}
\begin{aligned}\label{eq:diagonalZW}
&Z(x,x)=\sum_{n=1}^{\infty}2n^{n-1}\frac{x^n}{n!}=2Y(x),\\
&W(x,x)=\sum_{n=1}^{\infty}2(n-1)n^{n-2}\frac{x^n}{n!}=Y(x)^2.
\end{aligned}
\end{equation}
Hence, we can express $F_2(x,x)$ by using only $Y(x)$, instead of $Z(x,x)$ and $W(x,x)$. Substituting \eqref{eq:diagonalZW} in \eqref{eq:v} with the notation $Y=Y(x)$, we have
\begin{align}\label{eq:ExpandF2}
F_2(x,x)&=f_2(2Y,Y^2)
=\frac{Y^5(2+4Y-Y^2)}{12(1-Y)^3(1+Y)^2}\nonumber\\
&=\frac{Y^2-3Y-3}{12}
- \frac{11}{64(1+Y)} + \frac{1}{32(1+Y)^2}
\nonumber\\
&\ \ \ + \frac{143}{192(1-Y)} - \frac{11}{24(1-Y)^2}+\frac{5}{48(1-Y)^3}.
\end{align}
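Both the substitution $F_2(x,x)=f_2(2Y,Y^2)$ and the partial-fraction expansion \eqref{eq:ExpandF2} can be confirmed by truncated power-series arithmetic over the rationals; an illustrative sketch:

```python
from fractions import Fraction as Fr
from math import factorial

N = 16
Y = [Fr(0)] + [Fr(n**(n - 1), factorial(n)) for n in range(1, N + 1)]
one = [Fr(1)] + [Fr(0)] * N

def mul(a, b):
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N + 1)]

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def scal(c, a):
    return [Fr(c) * x for x in a]

def inv(a):
    # reciprocal of a power series with nonzero constant term
    b = [Fr(1) / a[0]] + [Fr(0)] * N
    for n in range(1, N + 1):
        b[n] = -b[0] * sum(a[i] * b[n - i] for i in range(1, n + 1))
    return b

def power(a, k):
    r = one
    for _ in range(k):
        r = mul(r, a)
    return r

om, op = add(one, scal(-1, Y)), add(one, Y)          # 1 - Y, 1 + Y

# closed form: Y^5 (2 + 4Y - Y^2) / (12 (1-Y)^3 (1+Y)^2)
num = mul(power(Y, 5), add(add(scal(2, one), scal(4, Y)), scal(-1, mul(Y, Y))))
lhs = scal(Fr(1, 12), mul(num, inv(mul(power(om, 3), power(op, 2)))))

# substitution f_2(2Y, Y^2)
W, Z = mul(Y, Y), scal(2, Y)
omW3 = inv(power(add(one, scal(-1, W)), 3))
f2sub = add(scal(Fr(1, 24), mul(mul(power(W, 2), add(scal(2, one), scal(3, W))), mul(Z, omW3))),
            scal(Fr(1, 12), mul(mul(power(W, 3), add(scal(6, one), scal(-1, W))), omW3)))

# partial-fraction expansion \eqref{eq:ExpandF2}
pf = scal(Fr(1, 12), add(mul(Y, Y), add(scal(-3, Y), scal(-3, one))))
pf = add(pf, scal(Fr(-11, 64), inv(op)))
pf = add(pf, scal(Fr(1, 32), inv(power(op, 2))))
pf = add(pf, scal(Fr(143, 192), inv(om)))
pf = add(pf, scal(Fr(-11, 24), inv(power(om, 2))))
pf = add(pf, scal(Fr(5, 48), inv(power(om, 3))))

assert lhs == f2sub == pf
print("ok")
```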
In the case of $K_n$, a similar
expression can be found in \cite[(17)]{W77}. As we will see
below, the last term of \eqref{eq:ExpandF2} determines the
asymptotic behavior of coefficients of $F_2(x,x)$ in
Theorem~\ref{thm:asympofF_2}.
To obtain the asymptotics of the coefficients of $F_2(x,x)$,
from \eqref{eq:ExpandF2}, we only need to estimate the coefficients of $\frac{1}{(1-Y)^p}$ and $\frac{1}{(1+Y)^p}$, $p \in \mathbb{N}$. For fixed $p\in \mathbb{N}$, the tree polynomials $\{t_n(p)\}_{n\ge0}$ are defined by
\begin{align}
\frac{1}{(1-Y(x))^p}=\sum_{n=0}^{\infty}t_n(p)\frac{x^n}{n!}.
\end{align}
These polynomials and their asymptotic behavior are well studied in \cite{KP89}.
\begin{lem}[\cite{KP89}]
For fixed $p \in \mathbb{N}$, as $n \to \infty$,
\begin{align}\label{asympoftnp}
t_n(p)=\frac{\sqrt{2\pi}n^{n-1}}{2^{p/2}}\left(\frac{n^{(p+1)/2}}{\Gamma(p/2)}+\frac{\sqrt{2}p}{3}\frac{n^{p/2}}{\Gamma((p-1)/2)}+O(n^{(p-1)/2})+O(1)\right).
\end{align}
\end{lem}
Hence, we have already obtained the asymptotic behavior of the
coefficients of $\frac{1}{(1-Y)^p}$, $p \in \mathbb{N}$.
For $\frac{1}{(1+Y)^p}$, $p \in \mathbb{N}$, a rough
estimate of the coefficients suffices. By the
binomial expansion and \eqref{seriesofY}, we have
\begin{align*}
\frac{1}{(1+Y(x))^p}&=\sum_{k=0}^{\infty}\binom{p+k-1}{k}(-1)^kY(x)^k\\
&=1+\sum_{n=1}^{\infty}\left(\frac{n^{n-1}}{\Gamma(p)}\sum_{k=0}^{\infty}\binom{n-1}{k}(-1)^{k+1}\frac{\Gamma(p+k+1)}{n^k}\right)\frac{x^n}{n!},
\end{align*}
Since $Y(x)$ has non-negative coefficients, the coefficients of $\frac{1}{(1+Y(x))^p}$ are dominated in absolute value by those of $\frac{1}{(1-Y(x))^p}$, so that as $n \to \infty$,
\begin{align}\label{eq:orderof(1+Y)^p}
\left|\langle x^n \rangle \frac{1}{(1+Y(x))^p}\right|&=\frac{n^{n-1}}{\Gamma(p)}\left|\sum_{k=0}^{\infty}\binom{n-1}{k}(-1)^{k+1}\frac{\Gamma(p+k+1)}{n^k}\right|\nonumber\\
&\le t_n(p)=O(n^{n+(p-1)/2}).
\end{align}
Now we are in a position to prove Theorem~\ref{thm:asympofF_2}.
\begin{proof}[Proof of Theorem~\ref{thm:asympofF_2}]
By \eqref{eq:diagonalZW}, \eqref{asympoftnp} and
\eqref{eq:orderof(1+Y)^p}, we obtain the leading asymptotic
behavior of the coefficients of $F_2(x,x)$ appearing in \eqref{eq:ExpandF2} as
\begin{align*}
\langle x^n \rangle F_2(x,x)
&=-\frac{1}{12}n^{n-1}+O(n^n)+O(n^{n+1/2})+ \frac{143}{192}(n^n+O(n^{n-1/2}))\\
&\ \ \ - \frac{11}{24}\Bigl(\sqrt{\frac{\pi}{2}}n^{n+1/2}+O(n^{n})\Bigr)+\frac{5}{48}(n^{n+1}+O(n^{n+1/2}))\\
&=\frac{5}{48}n^{n+1}+O(n^{n+1/2}),
\end{align*}
which completes the proof.
\end{proof}
\section{Another expression for $F_k(x,y)$}
In this section, we give a proof of Theorem~\ref{thm:expressionofF_k}.
Our proof is based on the combinatorial argument developed in
\cite[Section 6]{W77}.
Firstly, we explain how to obtain a basic graph from a connected bipartite graph.
Fix $k \ge 2$ and take a labeled connected bipartite
$(r,s,r+s-1+k)$-graph $G$ whose vertex set is $V=(V_1,V_2)$
with $|V_1|=r$ and $|V_2|=s$. We delete a leaf and its adjacent edge from $G$,
and repeat this procedure until no leaves remain in the resulting graph.
Since we delete only one vertex and one edge in each
step, we obtain a labeled connected bipartite $(t,u,t+u-1+k)$-graph without
leaves for some $t \le r$ and $u \le s$.
Clearly, the resulting graph does not depend on the
order in which the leaves are deleted, and it is denoted by $G'$. Let
$V'=(V_1',V_2')$ be the vertex set of the graph $G'$. For each
vertex $v \in V'$, we call it a \textit{special point}
if $\mathrm{deg}(v)\ge 3$ and a \textit{normal point} if
$\mathrm{deg}(v)=2$.
Let $r_{{\rm sp}}$ and $s_{{\rm sp}}$ be the
number of special points in $V_1'$ and $V_2'$,
respectively.
By applying the handshaking lemma to the graph $G'$,
we see that $\sum_{v \in V'} ({\rm deg}(v) - 2) = 2(k-1)$
and hence
\begin{align}\label{eq:numofsppoints}
r_{{\rm sp}}+s_{{\rm sp}}\le 2(k-1).
\end{align}
In the graph $G'$, a path whose end vertices are
distinct special points is said to be a \textit{special path}
and a cycle which contains exactly one special point is said to be
a \textit{special cycle}.
Since $G'$ is connected and $\mathrm{deg}(v) \ge 2$, it is clear
that it consists of such
special paths and cycles which are disjoint except at special points.
We distinguish these special paths and cycles into seven kinds and contract them to the minimal ones as in
Figure~\ref{fig:specialpaths} to obtain the \textit{basic graph} $\mathcal{B}(G)$.
\begin{itemize}
\item
An $\alpha_i$\textit{-cycle} is a special cycle with exactly one special
point in $V_i'$ ($i=1,2$).
By the structure of bipartite graphs, these special cycles
contain at least three normal points. The minimal $\alpha_i$-cycle has three normal points
as in Figure~\ref{fig:specialpaths}.
\item A $\beta_j$\textit{-path} is a special path whose end vertices
are two distinct special points in $V_j'$ ($j=1,2$).
By the structure of bipartite graphs, these special paths
contain at least one normal point. The minimal $\beta_j$-path has only one normal point as in Figure~\ref{fig:specialpaths}.
\item A special path whose end vertices are special points
in $V_1'$ and $V_2'$ is named according to the following two cases.
For each pair of special points $v_1 \in V_1'$ and $v_2 \in V_2'$, we have two cases.
\begin{itemize}
\item Case(i) there is only one special path connecting $v_1$ and $v_2$:
such a special path is called a $\gamma_1$\textit{-path}.
The length of the minimal $\gamma_1$-path is one.
\item Case(ii) there is more than one special path connecting $v_1$ and $v_2$:
since we are considering a simple graph, there is at most one such special path of length one,
i.e., joined by an edge. A special path is called
a $\gamma_2$\textit{-path} if the length is three or more
and a $\delta$\textit{-path} if the length is one.
The length of the minimal $\gamma_2$-path is three.
\end{itemize}
\end{itemize}
We have decomposed $G'$ into the union of a collection
of $\alpha_i$-cycles, $\beta_j$-paths, $\gamma_k$-paths,
and $\delta$-paths. The \textit{basic graph} $\mathcal{B}(G)$ is obtained from $G'$ by contracting
$\alpha_i$-cycles, $\beta_j$-paths, and $\gamma_k$-paths to
the minimal ones as in Figure~\ref{fig:specialpaths}.
In the procedure of contraction, we forget about
labels of vertices.
We summarize the contraction procedures below.
\begin{itemize}
\item If an $\alpha_i$-cycle ($i=1,2$) contains five or more normal points, we
contract it to the minimal $\alpha_i$-cycle, which has three normal points.
\item If a $\beta_j$-path ($j=1,2$) contains three or more normal points, we
contract it to the minimal $\beta_j$-path, which has only one normal point.
\item If a $\gamma_1$-path contains normal points, we
contract it to the minimal $\gamma_1$-path, which has no normal points.
\item If a $\gamma_2$-path contains four or more normal points, we
contract it to the minimal $\gamma_2$-path, which has two normal points.
\end{itemize}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\hsize]{sppathcycle2.pdf}
\end{center}
\caption{Seven types of minimal special paths and cycles. The circles denote special points.}
\label{fig:specialpaths}
\end{figure}
We have seen how to make the basic graph $\mathcal{B}(G)$
from a given labeled connected bipartite $(r,s,r+s-1+k)$-graph $G$.
Note that the number of cycles in a graph is invariant under the contractions, so that $\mathcal{B}(G)$
has just $k$ cycles.
We will reconstruct labeled bipartite graphs from each basic graph $\mathcal{B}$
and introduce $J_{\mathcal{B}}(x,y)$ to express $F_k(x,y)$ as a sum of $J_{\mathcal{B}}(x,y)$'s.
\begin{proof}[Proof of Theorem~\ref{thm:expressionofF_k}]
For a given labeled connected bipartite $(r,s,r+s-1+k)$-graph $G$,
let $V''=(V_1'',V_2'')$ be the vertex set of $\mathcal{B}(G)$,
and also let
$a_i, b_j, c_k$ and $d$ be the numbers of
$\alpha_i$-cycles, $\beta_j$-paths, $\gamma_k$-paths, and $\delta$-paths in $\mathcal{B}(G)$, respectively.
Then, for the number of vertices in $\mathcal{B}(G)$, we have
\begin{align}\label{eq:numofV_1''}
|V_1''|&=r_{\rm sp}+a_1+2a_2+b_2+c_2\le t \le r,\\
\label{eq:numofV_2''}
|V_2''|&=s_{\rm sp}+2a_1+a_2+b_1+c_2\le u \le s.
\end{align}
For the number of edges in $\mathcal{B}(G)$,
since the same number of vertices and edges are deleted by contraction, we have
\begin{equation}
4a_1+4a_2+2b_1+2b_2+c_1+3c_2+d=|V_1''|+|V_2''|+k-1.
\label{eq:numofE''}
\end{equation}
Combining \eqref{eq:numofV_1''}-\eqref{eq:numofE''}
and the inequality \eqref{eq:numofsppoints}, we have
\begin{align}\label{eq:a_1tod}
a_1+a_2+b_1+b_2+c_1+c_2+d&=r_{\rm sp}+s_{\rm sp}+k-1 \nonumber \\
&\le 3(k-1).
\end{align}
Therefore, if $G$ is a labeled connected bipartite
$(r,s,r+s-1+k)$-graph, then $\mathcal{B}(G)$ should satisfy the
conditions \eqref{eq:numofsppoints}-\eqref{eq:a_1tod}.
Now we denote the set of all possible basic graphs having
$k$ cycles by $BG_k$,
i.e.,
\[
BG_k := \{\mathcal{B}(G) : \text{$G$ is a labeled connected bipartite
$(r,s,r+s-1+k)$-graph for some $r,s$}\}.
\]
It follows from \eqref{eq:a_1tod} that $BG_k$ is a
\textit{finite} set.
For fixed $\mathcal{B} \in BG_k$,
let $j_{\mathcal{B}}(r,s)$ be the number of labeled connected bipartite
$(r,s,r+s-1+k)$-graphs $G$ such that $\mathcal{B}(G) = \mathcal{B}$.
We define the exponential generating function of
$j_{\mathcal{B}}(r,s)$ as
\begin{align*}
J_{\mathcal{B}}=J_{\mathcal{B}}(x,y):=\sum_{r,s=0}^{\infty}j_{\mathcal{B}}(r,s)\frac{x^ry^s}{r!s!}.
\end{align*}
We will show below that $J_{\mathcal{B}}(x,y)$ is expressed by a
rational function of $T_x$ and $T_y$.
To this end, we count $j_{\mathcal{B}}(r,s)$ by reversing the
procedure of contraction above, i.e., by adding pairs of a normal point and its
adjacent edge in $\mathcal{B}$ and rearranging labels of $(r,s)$
vertices.
We construct bipartite $(r,s,r+s-1+k)$-graphs from $\mathcal{B}$ by two steps as
follows. \\
Step 1:
Take $\mathcal{B} \in BG_k$. Let $V'' = (V_1'', V_2'')$ be the vertex set of $\mathcal{B}$ and
$M:=a_1+a_2+b_1+b_2+c_1+c_2$
be the number
of all minimal special paths and cycles in $\mathcal{B}$ except $\delta$-paths.
Take $t$ and $u$ such that $|V_1''| \le t \le r$ and
$|V_2''| \le u \le s$.
We label all minimal $\alpha_i$-cycles, $\beta_j$-paths and
$\gamma_k$-paths in $\mathcal{B}$, say, $\mathsf{s}_1,\mathsf{s}_2,\dots,\mathsf{s}_M$,
and we add pairs of a normal point and its adjacent edge in these special paths/cycles.
By the structure of bipartite graphs, for every $j=1,2,\dots, M$, the number of added pairs in
each $\mathsf{s}_j$ is even, and the numbers of added normal
points in $V_1''$ and $V_2''$ are equal, which we denote by $m_j$.
Hence, a necessary condition for the numbers of added vertices in $V_1''$ and
$V_2''$ is $t-|V_1''|=u-|V_2''| =\sum_{j=1}^M m_j$.
Combining \eqref{eq:numofV_1''} and \eqref{eq:numofV_2''} with the
necessary condition, the non-negative integers $\{m_j\}_{j=1}^M$ satisfy
\begin{align}
m_1+m_2+\cdots +m_M &= t-(r_{\rm sp}+a_1+2a_2+b_2+c_2),
\label{eq:addingptsinV_1''}\\
m_1+m_2+\cdots +m_M &= u-(s_{\rm sp}+2a_1+a_2+b_1+c_2).
\label{eq:addingptsinV_2''}
\end{align}
Let $y_{\mathcal{B}}(t,u)=y_{\mathcal{B}}(t,u,r_{{\rm sp}},s_{{\rm sp}},a_1,a_2,b_1,b_2,c_1,c_2)$
be the number of the solutions $\{m_j\}_{j=1}^M$ of \eqref{eq:addingptsinV_1''} and
\eqref{eq:addingptsinV_2''}.
For each solution $\{m_j\}_{j=1}^M$, we obtain
an unlabeled connected bipartite $(t,u,t+u-1+k)$-graph,
and hence $y_{\mathcal{B}}(t,u)$ such graphs from $\mathcal{B}$. \\
Step 2:
Take one of the $y_{\mathcal{B}}(t,u)$ unlabeled connected bipartite
$(t,u,t+u-1+k)$-graphs and call its vertices $T_1,\dots, T_t$ and $U_1,\dots, U_u$.
Let $\mathcal{I}_{t,u}$ be the set of $\{(r_{1i}, s_{1i})\}_{i=1}^t$
and $\{(r_{2j}, s_{2j})\}_{j=1}^u$ such that
$r_{1i}\ge1, r_{2j}\ge0, s_{1i}\ge0, s_{2j}\ge 1$,
$\sum\limits_{i=1}^tr_{1i}+\sum\limits_{j=1}^ur_{2j}=r$ and
$\sum\limits_{i=1}^ts_{1i}+\sum\limits_{j=1}^us_{2j}=s$.
For each $\{(r_{1i}, s_{1i})\}_{i=1}^t$
and $\{(r_{2j}, s_{2j})\}_{j=1}^u$ in $\mathcal{I}_{t,u}$,
we attach a rooted tree of size $(r_{1i}, s_{1i})$ to
$T_i$ for $i=1,2,\dots,t$ and a rooted tree of size
$(r_{2j}, s_{2j})$ to $U_j$ for $j=1,2,\dots,u$,
respectively.
Let $\phi(r,s,t,u)$ be the number of these
bipartite $(r,s,r+s-1+k)$-graphs. Then, by counting $t$
rooted trees whose roots are in $V_1$ and $u$ rooted trees
whose roots are in $V_2$, we have
\begin{align}\label{eq:phi}
\phi(r,s,t,u)
&=\sum\nolimits'\binom{r}{r_{11},\dots,r_{1t},r_{21},\dots,r_{2u}}\binom{s}{s_{11},\dots,s_{1t},s_{21},\dots,s_{2u}}
\nonumber \\
&\quad \times
\prod_{i=1}^tr_{1i}^{s_{1i}}s_{1i}^{r_{1i}-1}\prod_{j=1}^ur_{2j}^{s_{2j}-1}s_{2j}^{r_{2j}},
\end{align}
where the summation $\sum\nolimits'$ is taken over the set $\mathcal{I}_{t,u}$. \\
By the above two steps, we obtain all labeled connected bipartite $(r,s,r+s-1+k)$-graphs from
the basic graph $\mathcal{B}$. However, not all of them are distinct,
since the labels $\mathsf{s}_1, \dots, \mathsf{s}_M$ are forgotten after attaching labeled rooted trees to all vertices.
Indeed, if $g_{\mathcal{B}}$ is the number of automorphisms of
$\mathcal{B}$, then every graph appears exactly $g_{\mathcal{B}}$ times.
Hence, we have
\begin{align*}
j_{\mathcal{B}}(r,s)=\sum_{\substack{|V_1''| \le t \le r \\ |V_2''| \le u \le s}}\frac{y_{\mathcal{B}}(t,u)\phi(r,s,t,u)}{g_{\mathcal{B}}}.
\end{align*}
Using this, we have
\begin{align}\label{eq:J_B(x,y)}
J_{\mathcal{B}}(x,y)=\frac{1}{g_{\mathcal{B}}}\sum_{\substack{|V_1''|\le t\\ |V_2''|\le u}}y_{\mathcal{B}}(t,u)\sum_{\substack{t\le r \\ u \le s}}\phi(r,s,t,u)\frac{x^ry^s}{r!s!}.
\end{align}
For the summation in $r$ and $s$, by \eqref{eq:phi}, we have
\begin{align*}
\sum_{\substack{t\le r\\ u \le s}}\phi(r,s,t,u)\frac{x^ry^s}{r!s!}
&=\sum_{\substack{t\le r\\ u \le s}}\sum\nolimits'\prod_{i=1}^tr_{1i}^{s_{1i}}s_{1i}^{r_{1i}-1}\frac{x^{r_{1i}}y^{s_{1i}}}{r_{1i}!s_{1i}!}\prod_{j=1}^ur_{2j}^{s_{2j}-1}s_{2j}^{r_{2j}}\frac{x^{r_{2j}}y^{s_{2j}}}{r_{2j}!s_{2j}!}\nonumber\\
&=T_x^tT_y^u.
\end{align*}
On the other hand, by a straightforward calculation, we have
\begin{align}\label{eq:sumofy_{B}(t,u)T_x^tT_y^u}
\sum_{\substack{|V_1''|\le t\\ |V_2''|\le
u}}y_{\mathcal{B}}(t,u)T_x^tT_y^u
&=\sum_{\substack{|V_1''|\le t\\ |V_2''|\le u}}\ \sum_{\substack{m_1,\dots,m_M\\
\sum m_j=t-|V_1''|=u-|V_2''|}}T_x^tT_y^u\nonumber\\
&=T_x^{|V_1''|}T_y^{|V_2''|}\sum_{\substack{|V_1''|\le t\\
|V_2''|\le u}}\ \sum_{\substack{m_1,\dots, m_M\nonumber\\
\sum m_j=t-|V_1''|=u-|V_2''|}}(T_xT_y)^{\sum_{j=1}^{M} m_j}\\
&=T_x^{|V_1''|}T_y^{|V_2''|}\sum_{n\ge0}\ \sum_{\substack{m_1,\dots,m_M\\
\sum m_j=n}}(T_xT_y)^{\sum_{j=1}^{M} m_j}\nonumber\\
&=T_x^{|V_1''|}T_y^{|V_2''|}\prod_{j=1}^{M}\Bigl(\sum_{m_j\ge0}(T_xT_y)^{m_j}\Bigr)\nonumber\\
&=T_x^{|V_1''|}T_y^{|V_2''|}(1-T_xT_y)^{-M}.
\end{align}
Combining \eqref{eq:numofV_1''}, \eqref{eq:numofV_2''}, \eqref{eq:J_B(x,y)} and \eqref{eq:sumofy_{B}(t,u)T_x^tT_y^u}, we obtain \eqref{eq:J_B}. Since non-isomorphic basic graphs with $k$ cycles lead to distinct labeled connected bipartite $(r,s,r+s-1+k)$-graphs, summing $J_{\mathcal{B}}$ over $\mathcal{B} \in BG_k$, we obtain \eqref{eq:expressionofF_k}, which completes the proof.
\end{proof}
We give an example of Theorem~\ref{thm:expressionofF_k} for $k=2$.
\begin{ex}[$k=2$]
Let us consider all the basic graphs for $k=2$ and compute $F_2$.
From the conditions \eqref{eq:numofsppoints} and \eqref{eq:a_1tod}, we have
\begin{align*}
& r_{\mathrm{sp}} + s_{\mathrm{sp}} \leq 2, \\
& a_{1}+a_{2}+b_{1}+b_{2}+c_{1}+c_{2}+d =r_{\mathrm{sp}}+s_{\mathrm{sp}}+1 \leq 3.
\end{align*}
As a result, the possible combinations of
numbers of special points are
$(r_{\mathrm{sp}},s_{\mathrm{sp}})=(1,0),(0,1),(1,1),(2,0),(0,2)$.
We compute $J_{\mathcal{B}}$ for each of these cases.
For instance, the calculation procedure is described below for the case of $(r_{\mathrm{sp}},s_{\mathrm{sp}})=(1,0)$.
First, consider the numbers
of cycles and paths that make up the basic graphs.
By using
\begin{align*}
a_{1}+a_{2}+b_{1}+b_{2}+c_{1}+c_{2}+d =2,
\end{align*}
the only possibility is $(a_1,a_2,b_1,b_2,c_1,c_2,d)=(2,0,0,0,0,0,0)$, since a single special point admits no $\beta$-, $\gamma$-, or $\delta$-paths.
As a result, the basic graph is a combination of two $\alpha_1$-cycles.
We define this basic graph as $\mathcal{B}_1$.
Note that basic graphs are unlabeled.
Next, let us compute the number of graph automorphisms $g_{\mathcal{B}_1}$.
We label each of the vertices appropriately.
For the labeled basic graph, there are $2!$ ways to arrange the two $\alpha_1$-cycles.
There are two possible ways to label the vertices of each $\alpha_1$-cycle:
$1 \to 2 \to 3 \to 4 \to 1$ with the special point as 1, or in reverse $1 \to 4 \to 3 \to 2 \to 1$.
Therefore $g_{\mathcal{B}_1}=2!\times 2^2 = 8$.
Consequently, from \eqref{eq:J_B} we obtain
\begin{align*}
J_{\mathcal{B}_1}(x,y)=\frac{T_x^3T_y^4}{8(1 - T_xT_y)^2}.
\end{align*}
We can derive the others by the same calculation.
Therefore,
\begin{align*}
\sum_{\mathcal{B} \in B G_{2}} J_{\mathcal{B}}(x, y) =&\frac{T_x^3T_y^4}{8(1 - T_xT_y)^2} + \frac{T_x^4T_y^3}{8(1 - T_xT_y)^2} + \frac{T_x^4T_y^5}{8(1-T_x T_y)^3} \\
& +\frac{T_x^2T_y^3}{12(1-T_x T_y)^3} + \frac{T_x^5T_y^4}{8(1-T_x T_y)^3} + \frac{T_x^3T_y^2}{12(1-T_x T_y)^3}\\
&+\frac{T_x^4T_y^4}{6(1-T_x T_y)^3}+\frac{T_x^3T_y^3}{2(1-T_x T_y)^2} + \frac{T_x^4T_y^4}{4(1-T_x T_y)^3}\\
=&\frac{W^{2}(2+3 W)}{24(1-W)^{3}} Z+\frac{W^{3}(6-W)}{12(1-W)^{3}} \\
=&F_{2}(x, y),
\end{align*}
where the nine terms correspond to the nine basic graphs in Figure~\ref{fig:basic_graphs}, respectively.
Hence, the result of the calculation by using basic graphs is consistent with $F_{2}(x,y)=f_2(Z,W)$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.8\hsize]{basic_graphs_k=2.pdf}
\end{center}
\caption{Basic graphs for $k=2$}
\label{fig:basic_graphs}
\end{figure}
\end{ex}
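The nine-term identity in the example can also be checked by treating $T_x$ and $T_y$ as independent variables and using the relations $Z = T_x + T_y$ and $W = T_x T_y$ implicit in the computation; since both sides are then rational functions, comparing values at several rational points gives a strong consistency check:

```python
from fractions import Fraction as Fr

def f2(z, w):
    # f_2(z, w) from Theorem W2intro
    return w**2 * (2 + 3 * w) * z / (24 * (1 - w)**3) + w**3 * (6 - w) / (12 * (1 - w)**3)

def nine_term_sum(a, b):
    # the nine J_B terms, with a = T_x, b = T_y and u = 1 - T_x T_y
    u = 1 - a * b
    return (a**3 * b**4 / (8 * u**2) + a**4 * b**3 / (8 * u**2)
            + a**4 * b**5 / (8 * u**3) + a**2 * b**3 / (12 * u**3)
            + a**5 * b**4 / (8 * u**3) + a**3 * b**2 / (12 * u**3)
            + a**4 * b**4 / (6 * u**3) + a**3 * b**3 / (2 * u**2)
            + a**4 * b**4 / (4 * u**3))

# compare at several rational points, using Z = T_x + T_y and W = T_x T_y
for p, q in [(Fr(1, 3), Fr(1, 5)), (Fr(2, 7), Fr(3, 11)), (Fr(1, 2), Fr(1, 7))]:
    assert nine_term_sum(p, q) == f2(p + q, p * q)
print("ok")
```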
\begin{proof}[Proof of Corollary~\ref{cor:intro}]
From \eqref{eq:expressionofF_k}, \eqref{eq:J_B} and
\eqref{eq:a_1tod}, we see that
\begin{equation}
F_k(x,y)
= \frac{1}{(1-T_xT_y)^{3(k-1)}}
\sum_{\mathcal{B} \in BG_k} \frac{1}{g_{\mathcal{B}}} T_x^{\alpha_{\mathcal{B}}}
T_y^{\beta_{\mathcal{B}}} (1-T_xT_y)^{p_{\mathcal{B}}},
\label{eq:poly}
\end{equation}
where $\alpha_{\mathcal{B}} = r_{{\rm sp}}+a_1+2a_2+b_2+c_2$,
$\beta_{\mathcal{B}} = s_{{\rm sp}}+2a_1+a_2+b_1+c_2$, and
\begin{align}
p_{\mathcal{B}} &= 3(k-1) - (a_1+a_2+b_1+b_2+c_1+c_2) \nonumber\\
&= 2(k-1) - (r_{{\rm sp}} + s_{{\rm sp}}) + d \nonumber\\
&= \sum_{v \in \text{special points}} (\mathrm{deg}(v) -3) + d\nonumber\\
&\ge 0.
\label{eq:p_B}
\end{align}
Note that there are some basic graphs $\mathcal{B} \in BG_k$ such that $p_{\mathcal{B}}=0$.
For example,
we can construct a basic graph $\mathcal{B}^*$ with $r_{{\rm sp}}=2(k-1)$, $a_1=2$, $b_1=3k-5$ and other constants vanishing as follows:
we label all $2(k-1)$ special points in $V''_1$, say, $r_1, r_2, \dots, r_{2(k-1)}$. We attach an $\alpha_1$-cycle to each of $r_1$ and $r_{2(k-1)}$,
and then connect $r_{2j-1}$ with $r_{2j}$ ($j=1,2,\dots,k-1$) by a $\beta_1$-path
and $r_{2j}$ with $r_{2j+1}$ ($j=1,2,\dots,k-2$) by two $\beta_1$-paths.
Then, we obtain $\mathcal{B}^*$.
Remark that in the case of $k=2$, $\mathcal{B}^*$ corresponds to the top-right graph in Figure~\ref{fig:basic_graphs}.
Clearly, $s_{{\rm sp}}=d=0$ holds for all $k\ge2$, and the calculation in \eqref{eq:p_B} gives $p_{\mathcal{B}^*}=0$.
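Indeed, substituting these constants into the first line of \eqref{eq:p_B} yields
\[
p_{\mathcal{B}^*}=3(k-1)-(a_1+b_1)=3(k-1)-\bigl(2+(3k-5)\bigr)=0\,.
\]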
From this observation, the numerator of the right-hand side of \eqref{eq:poly} turns out to be a polynomial of the following form
\[
Q(x,y) = \sum_{i=1}^m C_i x^{a_i} y^{b_i} +\sum_{i=m+1}^{m+n} C_i x^{a_i} y^{b_i} (1- xy)^{p_i},
\]
for some positive integers $m$ and $n$. Here $a_i, b_i$ are non-negative integers, $p_i$ is a positive integer
and $C_i > 0$ for all $i$.
If $Q(x,y)$ had a factor $1-xy$,
plugging $y=x^{-1}$ into both sides would yield $0=\sum_{i=1}^m C_i x^{a_i-b_i}$, whose right-hand side is strictly positive for $x>0$; this is a contradiction. Hence,
$Q(x,y)$ does
not have the factor $1-xy$, which implies that
the numerator of the right-hand side of \eqref{eq:poly} does not have a factor $1-T_xT_y$.
\end{proof}
Finally, we remark on another proof of Proposition~\ref{prop:Uintro}, which uses an argument similar to that in the proof of
Theorem~\ref{thm:expressionofF_k} and
is a bipartite version of the combinatorial argument discussed in
\cite[Section 5]{W77}.
We use the same notation as above.
In the preliminary step, we repeatedly delete leaves together with their adjacent edges.
Since a connected bipartite $(r,s,r+s)$-graph is unicyclic, this procedure leaves the unique cycle, whose length is even, say $2t$.
Let $r,s \ge 2$ be fixed and let $V''
= (V_1'', V_2'')$ be a vertex set. Take $t$ such that $2 \le
t \le \min\{r,s\}$, and consider an unlabeled bipartite
unicyclic graph whose cycle has length $2t$; clearly,
$|V_1''|=|V_2''|=t$. To each vertex of this graph, we
attach a rooted tree in a similar way to Step 2 in the proof
of Theorem \ref{thm:expressionofF_k}. To create the $2t$ rooted
trees, we partition the $(r,s)$ vertices into $2t$ vertex sets,
and all of these partitions belong to $\mathcal{I}_{t,t}$. By this
procedure, we obtain $\phi(r,s,t,t)$ labeled connected
bipartite $(r,s,r+s)$-graphs, where $\phi(r,s,t,u)$ is
defined in \eqref{eq:phi}. Each of the obtained graphs admits
$2t$ automorphisms, due to the cycle and the labels of
the roots of the rooted trees. Let $j(r,s)$ be the number of labeled
connected bipartite $(r,s,r+s)$-graphs. Then, we have
\begin{align*}
j(r,s)=\sum_{2 \le t \le \min\{r,s\}}\frac{\phi(r,s,t,t)}{2t}.
\end{align*}
Let $J(x,y)$ be the exponential generating function for $j(r,s)$; then we have
\begin{align*}
J(x,y)&=\sum_{r,s=0}^{\infty}j(r,s)\frac{x^ry^s}{r!s!}
=\sum_{t=2}^{\infty}\frac{1}{2t}\sum_{\substack{t \le r \\ t\le s}}\frac{\phi(r,s,t,t)}{r!s!}x^ry^s\\
&=\frac{1}{2}\sum_{t=2}^{\infty}\frac{(T_xT_y)^t}{t}
=-\frac{1}{2}(\log(1-T_xT_y)+T_xT_y)=F_1(x,y),
\end{align*}
which completes the combinatorial proof of Proposition \ref{prop:Uintro}.
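The closed form in the last step relies on the power-series identity $\sum_{t\ge1}w^t/t=-\log(1-w)$, so that $\frac{1}{2}\sum_{t\ge2}w^t/t=-\frac{1}{2}(\log(1-w)+w)$. This can be checked order by order; a minimal sympy sketch, with $w$ standing for $T_xT_y$ as a formal variable:

```python
import sympy as sp

w = sp.symbols('w')  # w stands for T_x T_y, treated as a formal variable
N = 12               # order up to which the identity is checked

# partial sum (1/2) * sum_{t=2}^{N} w^t / t
partial = sp.Rational(1, 2) * sum(w**t / sp.Integer(t) for t in range(2, N + 1))

# Taylor expansion of -(1/2)(log(1-w) + w) up to order w^N
closed = sp.series(-sp.Rational(1, 2)*(sp.log(1 - w) + w), w, 0, N + 1).removeO()

assert sp.expand(partial - closed) == 0
```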
\section*{Acknowledgment}
This work was supported by JSPS KAKENHI Grant Numbers JP18H01124 and JP20K20884,
JSPS Grant-in-Aid for Transformative Research Areas (A) JP22H05105,
and JST CREST Mathematics (15656429).
TS was also supported in part by JSPS KAKENHI Grant Numbers, JP20H00119 and JP21H04432.
\section{Introduction}
The classical action of the Standard Model (SM) of particle physics is close to being conformally invariant. The only dimensionful coupling constants it features are given by the Higgs mass and its vacuum expectation value (vev), the latter setting the scale of electroweak (EW) symmetry breaking at $v\sim$ 246~GeV. Such a value is remarkably small compared to the Planck mass $M_{\rm P}\sim10^{19}$~GeV, which is set by the strength of the gravitational coupling. The huge gap between the two scales defines the hierarchy problem. A fourth dimensionful parameter, the cosmological constant, is responsible for the observed late acceleration of the Universe. The cosmological constant scale is $10^{-123}$ times smaller than the Planck scale, leading to a second hierarchy problem in the SM coupled to gravity.
In this work, we embed the SM and General Relativity (GR) in a larger theory which exhibits local scale invariance classically. All couplings are therefore dimensionless. A mass scale arises through gauge fixing the conformal symmetry, from which all dimensionful couplings can be derived. Thus, all couplings which characterize fundamental physics at low energy scales are shown to have a common origin, in the same spirit as in Ref.~\cite{Bars:2013yba}. The role of EW symmetry breaking is crucial in this respect and is realized by means of a potential having the same form as the Higgs-dilaton potential, which was considered in Refs.~\cite{Bars:2013yba,Shaposhnikov:2008xb}.
A natural framework in which scale invariance can be realized as a local symmetry is given by a generalization of Riemannian geometry, known as Weyl geometry. A Weyl manifold is defined as an equivalence class of conformally equivalent Riemannian manifolds, equipped with a notion of parallel transport which preserves the metric only up to local rescalings~\cite{calderbank1997einstein}. Such non-Riemannian structures were first introduced by Weyl in pursuit of a unification of gravity and electromagnetism \cite{weyl:1918}. They were later reconsidered in an early paper by Smolin~\cite{Smolin:1979uz} in an attempt to reformulate gravity as a renormalizable quantum field theory. In this paper, as in Ref.~\cite{Smolin:1979uz}, Weyl geometry and conformal invariance are used to motivate the occurrence of new degrees of freedom in the gravitational sector and as guiding principles to build the action functional. Weyl geometry was later rediscovered independently by Cheng \cite{Cheng:1988zx}, who used it to formulate a model with no Higgs particle.
Conformal invariance imposes strong constraints on the terms that can appear in the action and enriches the gravitational sector with a scalar and a vector field. The theory thus obtained is a generalization of Brans-Dicke theory and of conformally invariant gravity theories, such as the one considered in Ref.~\cite{Bars:2013yba}. When the Weyl vector is pure gauge, the theory is equivalent to Brans-Dicke, of which it provides a geometric interpretation. This particular case has appeared in the literature under the name of Weyl Integrable Space-Time (WIST)~\cite{Romero:2012hs,Almeida:2013dba,Salim:1996ei}. However, in those works an additional assumption motivated by Ref.~\cite{Ehlers2012} is made about the free fall of test bodies, which marks a difference with Brans-Dicke. For applications of WIST to cosmology and to the study of spacetime singularities, see \emph{e.g.} Refs.~\cite{Lobo:2015zaa,Gannouji:2015vva}. Generalised scale invariant gravity theories were also obtained in Ref.~\cite{Padilla:2013jza}, by gauging the global conformal symmetry of (a subset of) the Horndeski action with the introduction of the Weyl vector.
Our framework is distinct from conformal gravity~\cite{Mannheim:1988dj,Maldacena:2011mk,Mannheim:2011ds}, where the affine connection is the Levi-Civita one also in the gravity sector. In that case, conformal symmetry is implemented by taking the square of the Weyl tensor as the Lagrangian. The Weyl tensor squared also appears in the bosonic spectral action in the context of noncommutative geometry~\cite{Kurkov:2014twa,Sakellariadou:2016dfl} and in the computation of the (formal) functional integral for quantum gravity~\cite{Hooft:2010ac}.
In this paper we construct an effective field theory with local conformal invariance and show how the SM of particle physics and GR are recovered from it by means of a two-stage spontaneous symmetry breaking. Our proposal is based on a generalisation of Riemannian geometry, namely Weyl geometry, which leads to the introduction of new gravitational degrees of freedom: a scalar field $\phi$ and the Weyl vector $B_{\mu}$. There has been a recent surge of interest in the role of conformal symmetry in gravitational physics, see \emph{e.g.} Refs.~\cite{Bars:2013yba,Hooft:2010ac,Gielen:2015uaa,Gielen:2016fdb}, suggesting that it may play a role in Quantum Gravity. It is therefore possible that the gravitational theory emerging in the classical limit would also display such a symmetry. In this sense, our work is motivated by similar considerations to the ones usually put forward for the introduction of modified gravity theories, see \emph{e.g.} Refs.~\cite{Sotiriou:2007yd,Sotiriou:2008rp,Capozziello:2011et}. In addition, we adopt local conformal invariance as a guiding principle in selecting the action functional \emph{and} the geometric structure of spacetime. The enriched gravitational sector is to be interpreted as purely classical. SM fields are quantized as usual on the classical curved background defined by $g_{\mu\nu}$ \emph{and} $\phi$, $B_\mu$. This can be considered as a generalization of what is usually done in conventional quantum field theory on curved spacetimes.
We would like to mention that the same geometric setting and symmetry breaking process were considered in an unpublished work by Nishino and Rajpoot\footnote{Courtesy of the authors.} \cite{Nishino:2004kb}, although their motivations were different. In that paper the authors point out issues with renormalisability and unitarity in their model. Other aspects of the quantum theory are discussed in Refs.~\cite{Nishino:2009in,Nishino:2011zz}. Furthermore, the authors of Ref.~\cite{Nishino:2004kb} claim that local conformal invariance ``inevitably leads to the introduction of General Relativity". We disagree with their statement. Local conformal invariance of the SM sector only leads to the introduction of the Weyl vector, which is also not enough to determine the affine connection of a Weyl spacetime. Moreover, in our approach there are no issues with renormalisability and unitarity since our model is a \emph{classical} effective field theory.
The plan of the paper is the following. In Section~\ref{sec:Weyl} we recall the fundamentals of Weyl geometry and introduce the notation. In Section~\ref{Theory} we formulate our effective field theory and discuss how the Higgs and the scalar fields couple to gravity. In Section~\ref{EW SSB} we discuss the EW symmetry breaking and show how the dimensionful couplings which govern low energy physics are determined from the parameters of the model and from the scale of ``broken'' conformal symmetry. In Section~\ref{Sec:CouplingSM} we study the other sectors of the SM and show that no further modification is needed to achieve compatibility with local conformal invariance. In Section~\ref{Sec:Fluids} we consider the approximate description of matter as a fluid, following from the underlying field theory of Section~\ref{Sec:CouplingSM}, and use it to derive the equations of motion of test bodies. In Section~\ref{Section:Alternative} we consider an alternative, phenomenological model for the motion of macroscopic test bodies.
We review our results in the Conclusion, Section~\ref{Conclusions}, where we also examine the relation between our proposal and earlier ones in the literature.
In Section~\ref{Sec:Discussion} we discuss important features of our results and point at directions for future work.
\section{\label{sec:Weyl} Weyl geometry}
We follow Ref.~\cite{Smolin:1979uz} to introduce the basic concepts and notation, although our conventions for the Riemann tensor are different and coincide with those in Ref.~\cite{Wald:1984rg}. A Weyl manifold is a conformal manifold, equipped with a torsionless connection, called Weyl connection, that preserves the conformal structure. We thus consider a torsion-free affine connection which satisfies the condition
\begin{equation}\label{eq:DefWeylConnection}
\nabla_{\lambda}g_{\mu\nu}=B_{\lambda}\;g_{\mu\nu}\; .
\end{equation}
Equation~(\ref{eq:DefWeylConnection}) defines the Weyl connection $\nabla_{\lambda}$, which is a particular case of a connection with non-metricity (see \emph{e.g.} Ref.~\cite{Sotiriou:2006qn}). The Levi-Civita connection will instead be denoted by $D_{\lambda}$.
The connection coefficients are given by
\begin{equation}\label{eq:ConnectionCoefficients}
\Gamma^{\sigma}_{\mu\nu}=\left\{ {\sigma \atop \mu\;\nu} \right\}-\frac{1}{2}\left(\delta^{\sigma}_{\mu}\,B_{\nu}+\delta^{\sigma}_{\nu}\,B_{\mu}-g_{\mu\nu}\,B^{\sigma}\right)\;.
\end{equation}
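As a consistency check, the coefficients in Eq.~(\ref{eq:ConnectionCoefficients}) reproduce the defining condition (\ref{eq:DefWeylConnection}): using $D_{\lambda}g_{\mu\nu}=0$, one finds
\[
\nabla_{\lambda}g_{\mu\nu}=D_{\lambda}g_{\mu\nu}+\frac{1}{2}\left(B_{\mu}g_{\lambda\nu}+B_{\lambda}g_{\mu\nu}-g_{\lambda\mu}B_{\nu}\right)+\frac{1}{2}\left(B_{\nu}g_{\lambda\mu}+B_{\lambda}g_{\mu\nu}-g_{\lambda\nu}B_{\mu}\right)=B_{\lambda}\,g_{\mu\nu}\,,
\]
since the terms involving $B_{\mu}$ and $B_{\nu}$ cancel pairwise.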
Under a local conformal transformation\footnote{Local conformal transformations are also known as Weyl rescalings.}
\begin{equation}\label{eq:LocalConformalMetric}
g_{\mu\nu}\rightarrow\tilde{g}_{\mu\nu}=\Omega^2g_{\mu\nu}~,
\end{equation}
the Weyl one-form $B_\mu$ transforms as an Abelian gauge field
\begin{equation}\label{eq:TransformationLawWeylVector}
B_\mu\rightarrow\tilde{B}_\mu=B_\mu+2\Omega^{-1}\nabla_{\mu}\Omega~,
\end{equation}
so that the condition given by Eq.~(\ref{eq:DefWeylConnection}) is preserved. The connection coefficients in Eq.~(\ref{eq:ConnectionCoefficients}) are by definition conformally invariant.
The components of the Riemann curvature tensor in a local chart are given by
\begin{equation}\label{eq:defineRiemann}
R_{\mu\nu\rho}^{\phantom{a}\ph\phantom{a}\sigma}=-\partial_\mu\Gamma^{\sigma}_{\nu\rho}+\partial_\nu\Gamma^{\sigma}_{\mu\rho}-\Gamma^{\sigma}_{\mu\kappa}\Gamma^{\kappa}_{\nu\rho}+\Gamma^{\sigma}_{\nu\kappa}\Gamma^{\kappa}_{\mu\rho}\;.
\end{equation}
The Riemann tensor satisfies the following properties, as in the standard case:
\begin{enumerate}[label=\alph*)]
\item ~$R_{\mu\nu\rho}^{\phantom{a}\ph\phantom{a}\sigma}=-R_{\nu\mu\rho}^{\phantom{a}\ph\phantom{a}\sigma}$~;
\item ~$R_{[\mu\nu\rho]}^{\phantom{a}\ph\phantom{a}\ph\sigma}=0$, which follows from the symmetry of the connection coefficients, \emph{i.e.} the vanishing of the torsion~;
\item ~$\nabla_{[\lambda}R_{\mu\nu]\rho}^{\phantom{a}\ph\phantom{a}\ph\sigma}=0$~.
\end{enumerate}
Antisymmetry over the last two indices, which holds in the standard case, is replaced by
\begin{equation}\label{eq:FourhtPropertyRiemann}
R_{\mu\nu\rho\sigma}=-R_{\mu\nu\sigma\rho}+H_{\mu\nu}\;g_{\rho\sigma}\;,
\end{equation}
where $H_{\mu\nu}$ is the field strength of $B_{\mu}$, defined as in electromagnetism
\begin{equation}\label{eq:FieldStrength}
H_{\mu\nu}=\nabla_{\mu}B_{\nu}-\nabla_{\nu}B_{\mu}=
\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}\; .
\end{equation}
The Riemann curvature of the Weyl connection, defined by Eq.~(\ref{eq:DefWeylConnection}), has the following expression\footnote{Square brackets denote antisymmetrization, as in $T_{[\mu\nu]}=\frac{1}{2}\left(T_{\mu\nu}-T_{\nu\mu}\right)$.}
\begin{equation}\label{eq:RiemannDefinition}
\begin{split}
R_{\mu\nu\rho}^{\phantom{a}\ph\phantom{a}\sigma}=&R_{\mu\nu\rho}^{0\phantom{a}\ph\sigma}+\delta^\sigma_{[\nu} D_{\mu]} B_\rho+\delta^\sigma_\rho D_{[\mu}B_{\nu]}-g_{\rho[\nu}D_{\mu]}B^{\sigma}\\ &-\frac{1}{2}\left(B_{[\mu}\,g_{\nu]\rho}B^{\sigma}+\delta^{\sigma}_{[\mu}\,B_{\nu]}B_{\rho}+g_{\rho[\mu}\,\delta^{\sigma}_{\nu]}B_\lambda B^\lambda \right)~.
\end{split}
\end{equation}
In the last equation, $R_{\mu\nu\rho}^{0\phantom{a}\ph\sigma}$ is the Riemann tensor of the Levi-Civita connection. It can be computed from Eq.~(\ref{eq:defineRiemann}), using the Christoffel symbols as the connection coefficients
\begin{equation}\label{eq:defineOrdinaryRiemann}
\begin{split}
R_{\mu\nu\rho}^{0\phantom{a}\ph\sigma}=-\partial_\mu\left\{ {\sigma \atop \nu\;\rho} \right\}+\partial_\nu\left\{ {\sigma \atop \mu\;\rho} \right\}-\left\{ {\sigma \atop \mu\;\kappa} \right\}\left\{ {\kappa \atop \nu\;\rho} \right\}\\+\left\{ {\sigma \atop \nu\;\kappa} \right\}\left\{ {\kappa \atop \mu\;\rho} \right\}\;.
\end{split}
\end{equation}
Defining the Ricci tensor by contracting the second and the fourth indices of the Riemann curvature in Eq.~(\ref{eq:RiemannDefinition})
\begin{equation}
R_{\mu\nu}=R_{\mu\sigma\nu}^{\phantom{a}\ph\phantom{a}\sigma}\; ,
\end{equation}
one has
\begin{equation}\label{eq:RicciTensorExpand}
\begin{split}
R_{\mu\nu}=R^0_{\mu\nu}+D_\mu B_\nu +\frac{1}{2}H_{\mu\nu}+\frac{1}{2}g_{\mu\nu}D_{\sigma}B^{\sigma}\\
+\frac{1}{2}\left(B_\mu B_\nu-g_{\mu\nu}B_\sigma B^\sigma\right)~.
\end{split}
\end{equation}
Note that, as a consequence of Eq.~(\ref{eq:FourhtPropertyRiemann}), the Ricci tensor is not symmetric. In fact, one has
\begin{equation}
R_{[\mu\nu]}=H_{\mu\nu}\; .
\end{equation}
The Riemann and the Ricci tensors are by definition conformally invariant.
The Ricci scalar is then defined as
\begin{equation}\label{eq:RicciScalarDefine}
R=g^{\mu\nu}R_{\mu\nu}~.
\end{equation}
Under a conformal transformation the Ricci scalar reads
\begin{equation}\label{eq:RescalingScalarCurvature}
R\rightarrow\tilde{R}=\Omega^{-2}R~.
\end{equation}
Substituting Eq.~(\ref{eq:RicciTensorExpand}) into Eq.~(\ref{eq:RicciScalarDefine}), the Ricci scalar is
\begin{equation}\label{eq:RicciScalarExpand}
R=R^0+3D_\mu B^\mu-\frac{3}{2}B_\mu B^\mu~,
\end{equation}
where $R^0$ is the Ricci scalar computed from the ordinary Riemann curvature, Eq.~(\ref{eq:defineOrdinaryRiemann}).
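Explicitly, contracting Eq.~(\ref{eq:RicciTensorExpand}) with $g^{\mu\nu}$ and using $g^{\mu\nu}H_{\mu\nu}=0$ (antisymmetry) together with $g^{\mu\nu}g_{\mu\nu}=4$, one finds
\[
R=R^0+D_{\mu}B^{\mu}+2D_{\sigma}B^{\sigma}+\frac{1}{2}\left(B_{\mu}B^{\mu}-4B_{\sigma}B^{\sigma}\right)=R^0+3D_{\mu}B^{\mu}-\frac{3}{2}B_{\mu}B^{\mu}\,,
\]
in agreement with Eq.~(\ref{eq:RicciScalarExpand}).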
\section{A geometric scalar-vector-tensor theory}\label{Theory}
\subsection{The simplest model}\label{sec:SimpleModel}
Our aim is to build an action functional for gravity which is conformally invariant. We will follow Smolin for its derivation~\cite{Smolin:1979uz}. From Eqs.~(\ref{eq:LocalConformalMetric}),~(\ref{eq:RescalingScalarCurvature}) we see that the simplest action displaying such property is
\begin{equation}\label{eq:Action}
S_{\rm g}=\int \mbox{d}^4 x\sqrt{-g}\; \xi_\phi\phi^2 R~,
\end{equation}
where $\xi_\phi$ is a coupling constant and $\phi$ is a real scalar field transforming under local rescalings, Eq.~(\ref{eq:LocalConformalMetric}), according to its canonical dimensions\footnote{Note that the transformation properties of the volume element $\mbox{d}^4 x\sqrt{-g}$ under conformal transformations are determined by those of the determinant of the metric (coordinates are not rescaled). From Eq.~(\ref{eq:LocalConformalMetric}) we have $\sqrt{-g}\to\Omega^4\sqrt{-g}$. This is important when checking conformal invariance of the action~(\ref{eq:Action}).}
\begin{equation}
\phi\rightarrow\tilde{\phi}=\Omega^{-1}\phi~.
\end{equation}
We impose the further requirements that the equations of motion shall contain no derivatives higher than second order and no inverse powers of the scalar field $\phi$ shall appear in the action. Equation~(\ref{eq:Action}) is therefore singled out as the unique action satisfying the above conditions, in the case of a single non-minimally coupled real scalar field. The scalar field contributes another term to the action
\begin{equation}\label{eq:PhiSector}
S_{\rm s}=\int \mbox{d}^4 x\sqrt{-g}\; \left[-\frac{\omega}{2}\; g^{\mu\nu}\left(\partial_\mu\phi+\frac{1}{2}B_{\mu}\phi\right) \left(\partial_\nu\phi+\frac{1}{2}B_{\nu}\phi\right)\right]~,
\end{equation}
where a minimal coupling to the Weyl one-form $B_{\mu}$ has been considered in order to make the action consistent with the principle of local conformal invariance, and $\omega$ is the Brans-Dicke parameter. Lastly, $B_{\mu}$ is made dynamical by adding a kinetic term to the action
\begin{equation}\label{eq:YMaction}
S_{\rm v}=\int \mbox{d}^4 x\sqrt{-g}\; \left[-\frac{1}{4f^2}\; H_{\mu\nu}H^{\mu\nu}\right]~,
\end{equation}
in complete analogy with electrodynamics. The field strength $H_{\mu\nu}$ of $B_\mu$ is defined as in Eq.~(\ref{eq:FieldStrength}). The action (\ref{eq:YMaction}) is the Yang-Mills action for an Abelian gauge field. It represents the most natural choice which is compatible with local scale invariance, since the Yang-Mills action is conformally invariant in four dimensions. The parameter $f$ is a universal coupling constant. The action $S_{\rm g}+S_{\rm s}+S_{\rm v}$ defines the extended gravitational sector of the theory.
The scalar field $\phi$ introduced above can be interpreted as a dilaton. In fact, it gives the strength of the gravitational coupling. However, since we are considering \emph{local} conformal symmetry, the dilaton $\phi$ can be eliminated by an appropriate gauge fixing, as we will show in the next section. Gauge fixing also yields a massive vector $B_\mu$ in the spectrum, thus preserving the total number of degrees of freedom. We should point out that there are other gauge choices in which $\phi$ is instead dynamical, such as those considered in Ref.~\cite{Bars:2015trh}.
\subsection{Coupling the Higgs field to gravity}
The theory given in the previous section can be immediately extended to include the Standard Model Higgs field. In fact, we will show that it is possible to embed the SM in a theory with local conformal invariance. As a result, all dimensionful parameters, such as the gravitational constant, the Higgs vev, the Higgs mass, and the cosmological constant, will have a common origin. The tensor sector is given by
\begin{equation}\label{eq:PhiHiggsTensorSector}
S_{\rm g}=\int \mbox{d}^4 x\sqrt{-g}\; \left(\xi_{\phi}\;\phi^2+2\xi_H\; H^{\dagger}H\right) R~,
\end{equation}
where $\xi_{\phi}$, $\xi_H$ are dimensionless couplings. The Higgs kinetic term, including a minimal coupling to the Weyl one-form, is given by
\begin{equation}\label{eq:HiggsSector}
\begin{split}
&S_{\rm H}=\\
&\int \mbox{d}^4 x\sqrt{-g}\; \left[-g^{\mu\nu}\left(\partial_\mu H^{\dagger}+ \frac{1}{2}B_{\mu} H^{\dagger}\right)\left(\partial_\nu H+ \frac{1}{2}B_{\nu} H\right)\right]~.
\end{split}
\end{equation}
When introducing Yang-Mills connections corresponding to the SM gauge group, partial and covariant derivatives are replaced by gauge covariant derivatives.
We can then introduce a Higgs-dilaton potential as in Ref.~\cite{Shaposhnikov:2008xb},
\begin{equation}\label{eq:HiggsDilatonPotential}
V(\phi,H)=\frac{\lambda}{4}\left(H^{\dagger}H-\kappa^2\phi^2\right)^2+\lambda^
{\prime}\phi^4~,
\end{equation}
where $\lambda$, $\lambda^{\prime}$, $\kappa$ are dimensionless parameters.
Fixing the gauge in such a way that $\phi$ takes a constant value $\phi_0$ everywhere in spacetime, the Higgs-dilaton potential takes the form of the usual Mexican hat potential, including a cosmological constant term, namely
\begin{equation}\label{eq:Higgs-Dilaton}
V(\phi_0,H)=\frac{\lambda}{4}\left(H^{\dagger}H-\kappa^2\phi_0^2\right)^2+\lambda^{\prime}\phi_0^4~.
\end{equation}
We can write the Higgs doublet in the unitary gauge
\begin{equation}
H=\frac{1}{\sqrt{2}}\begin{pmatrix} 0\\ h\end{pmatrix}~.
\end{equation}
It is then readily seen that EW symmetry breaking fixes the values of the gravitational coupling $G$, the Higgs vev $v$, as well as the Higgs mass $\mu$, and the cosmological constant $\Lambda$, in terms of the scale of conformal symmetry breaking $\phi_0$, as (cf. Ref.~\cite{Bars:2013yba})
\begin{equation}
\begin{split}
\frac{\Lambda}{8\pi G}=\lambda^{\prime}\phi_0^4~&,~ \hspace{1em} \frac{v^2}{2}=\kappa^2\phi_0^2~,\\~ \hspace{1em} \frac{1}{16\pi G}=\xi_{\phi}\;\phi_0^2+\xi_H\; v^2~&,~ \hspace{1em} \mu^2=-\lambda\kappa^2\phi_0^2~.
\end{split}
\end{equation}
The conformally invariant theory of gravity given here can be seen as a generalization of other theories with local conformal invariance proposed in the literature. Considering Eq.~(\ref{eq:RicciScalarExpand}), we can rewrite the total action given by the sum of the $S_{\rm g}$, $S_{\rm s}$, $S_{\rm H}$ and $S_{\rm v}$ contributions from Eqs.~(\ref{eq:PhiHiggsTensorSector}), (\ref{eq:PhiSector}), (\ref{eq:HiggsSector}) and (\ref{eq:YMaction}), respectively, and including the potential Eq.~(\ref{eq:HiggsDilatonPotential}) as
\begin{equation}
\begin{split}
S=&\int \mbox{d}^4 x\sqrt{-g}\; \bigg[\left(\xi_{\phi}\;\phi^2+2\xi_H\; H^{\dagger}H\right) R^0-\frac{\omega}{2}\partial^\mu\phi \partial_\mu\phi\\ &-\frac{1}{2}(\omega+12\xi_{\phi})\phi B^\mu\partial_\mu\phi -\frac{1}{8}(\omega+12\xi_\phi)\phi^2B_\mu B^\mu\\& -\partial^\mu H^{\dagger}\partial_\mu H-\frac{1}{2}(1+12\xi_H)B^\mu(H^{\dagger}\partial_\mu H+\partial_\mu H^{\dagger} H)\\&-\frac{1}{4}(1+12\xi_H)H^{\dagger}H\; B_\mu B^\mu-\frac{1}{4f^2}\; H_{\mu\nu}H^{\mu\nu}
\\&- \frac{\lambda}{4}\left(H^{\dagger}H-\kappa^2\phi^2\right)^2-\lambda^
{\prime}\phi^4\bigg]~,
\end{split}
\end{equation}
up to a surface term.
\section{EW symmetry breaking and the scalar-tensor-vector\\gravity}\label{EW SSB}
As a consequence of the spontaneous breakdown of conformal and EW symmetries, the vector $B^{\mu}$ acquires a mass. This can be seen by looking at Eqs.~(\ref{eq:PhiSector}),~(\ref{eq:PhiHiggsTensorSector}),~(\ref{eq:HiggsSector}) and taking into account Eq.~(\ref{eq:RicciScalarExpand}). In fact, excluding interactions with other matter fields and with the Higgs boson, the action of $B^{\mu}$ reads
\begin{equation}\label{eq:MassiveVectorAction}
S_{\rm v}=\int \mbox{d}^4 x\sqrt{-g}\; \left[-\frac{1}{4f^2}\; H_{\mu\nu}H^{\mu\nu}-\frac{1}{2}m_B^2\; B_{\mu}B^{\mu}\right]~,
\end{equation}
with
\begin{equation}
m_B^2= 3\left(\xi_{\phi}\;\phi_0^2+\xi_H\; v^2\right)+\frac{\omega}{4}\phi_0^2+\frac{v^2}{4}=\frac{3}{16\pi G}+\frac{v^2}{4}\left(\frac{\omega}{2\kappa^2}+1\right)~.
\end{equation}
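The second equality for $m_B^2$ follows from the relations $v^2=2\kappa^2\phi_0^2$ and $(16\pi G)^{-1}=\xi_{\phi}\phi_0^2+\xi_H v^2$ obtained above. A minimal sympy sketch verifying the algebra (with $3/(16\pi G)$ written through the second relation):

```python
import sympy as sp

# all couplings dimensionless; phi0 is the conformal symmetry-breaking scale
xi_phi, xi_H, omega, kappa, phi0 = sp.symbols(
    'xi_phi xi_H omega kappa phi0', positive=True)

v2 = 2*kappa**2*phi0**2              # v^2 = 2 kappa^2 phi0^2 (EW symmetry breaking)
inv16piG = xi_phi*phi0**2 + xi_H*v2  # 1/(16 pi G)

# first expression for m_B^2 ...
lhs = 3*(xi_phi*phi0**2 + xi_H*v2) + omega*phi0**2/4 + v2/4
# ... and the rewritten form 3/(16 pi G) + (v^2/4)(omega/(2 kappa^2) + 1)
rhs = 3*inv16piG + (v2/4)*(omega/(2*kappa**2) + 1)

assert sp.simplify(lhs - rhs) == 0
```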
It is possible to rewrite the action of the vector field in canonical form, by expanding the first term in Eq.~(\ref{eq:MassiveVectorAction}) and rescaling the field as $B^{\mu}\rightarrow f\; B^{\mu}$. We have
\begin{equation}\label{eq:ProcaActionBField}
\begin{split}
S_{\rm v}= \int \mbox{d}^4 x\sqrt{-g}\; \bigg[ -\frac{1}{2}\Big( (D^{\mu}B^{\nu})(D_{\mu}B_{\nu})&-(D^{\nu}B^{\mu})(D_{\mu}B_{\nu})\Big)\\ & -\frac{1}{2}f^2 m_B^2\; B_{\mu}B^{\mu}\bigg]~.
\end{split}
\end{equation}
Hence, the physical mass squared of the vector is given by
\begin{equation}
m_{\rm v}^2=f^2 m_B^2~.
\end{equation}
Equation~(\ref{eq:ProcaActionBField}) is the Proca action in a curved spacetime. Sources $j^{\mu}$ for the field $B_{\mu}$ come from the other sectors of the theory; they are covariantly conserved, $D_{\mu}j^{\mu}=0$, as a consequence of the minimal coupling prescription. From the equations of motion one gets the subsidiary condition $D_{\mu}B^{\mu}=0$ (since $m_v^2\neq0$), which restricts the number of degrees of freedom of the vector field to three, namely two transverse modes and a longitudinal mode. Hence, counting degrees of freedom before and after the breaking of conformal invariance gives the same result. In analogy with the Higgs mechanism, we can say that the vector field $B^{\mu}$ acquires a mass and a longitudinal polarization mode as a result of conformal symmetry breaking. The dilaton $\phi$ can be completely decoupled from the theory by choosing a suitable gauge, as it happens for the Goldstone boson in the unitary gauge (see however the remark at the end of Section~\ref{sec:SimpleModel}). In fact, a stronger result holds: the kinetic term of $\phi$ is identically vanishing, which makes the field non-dynamical. Only its constant value $\phi_0$ appears in all equations written in this gauge.
Before closing this section, we want to specify the connection between our model and the ones in the literature about conformal invariance in gravity and cosmology. For the particular choice of parameters $\xi_H=\frac{\xi_\phi}{\omega}=-\frac{1}{12}$, the Higgs and the dilaton fields are completely decoupled from the vector field, which yields the action
\begin{equation}\label{eq:BarsTurokAction}
\begin{split}
S=\int \mbox{d}^4 x\sqrt{-g}\; \bigg[&-\left(\frac{\omega}{12}\phi^2+\frac{1}{6}\; H^{\dagger}H\right) R^0\\ &-\frac{\omega}{2}\partial^\mu\phi \partial_\mu\phi -\partial^\mu H^{\dagger}\partial_\mu H
-V(\phi,H)\bigg]~,
\end{split}
\end{equation}
with $V(\phi,H)$ the Higgs-dilaton potential given from Eq.~(\ref{eq:HiggsDilatonPotential}).
Equation~(\ref{eq:BarsTurokAction}) is the action of two scalar fields with conformal coupling to curvature; it is the model considered in Ref.~\cite{Bars:2013yba}, for $\omega=-1$. Writing the Higgs field in the unitary gauge, the action Eq.~(\ref{eq:BarsTurokAction}) can also be seen as equivalent to the conformally invariant two-field model of Ref.~\cite{Kallosh:2013hoa} with $\mbox{SO(1,1)}$ symmetry.
\section{Coupling to SM fields}\label{Sec:CouplingSM}
So far, we have focused our attention on the gravitational sector of the theory, given by the fields $g_{\mu\nu}$, $B_\mu$ and $\phi$, and considered their couplings to the Higgs doublet. In this section we will focus on their couplings to SM fields and study whether the framework of Weyl geometry introduces any modifications to such sectors. We will discuss separately the cases of gauge bosons and spin-$1/2$ fermions (leptons and quarks).
Let us consider a gauge field $A_\mu^a$, where $a$ is an internal index labelling components in the Lie algebra of the gauge group. Its kinetic term is given by the square of its field strength\footnote{$g$ is the gauge coupling constant, $f^{abc}$ are the structure constants of the gauge group. In the Abelian case the second term in Eq.~(\ref{eq:GaugeFieldStrength}) vanishes.},
defined using the affine connection $\nabla_\mu$
\begin{equation}\label{eq:GaugeFieldStrength}
F_{\mu\nu}^a=\nabla_\mu A^a_\nu - \nabla_\nu A^a_\mu + g f^{abc} A_\mu^bA_\nu^c.
\end{equation}
It is well known that for all symmetric (\emph{i.e.} torsion-free) connections $\nabla_\mu$ the above can be rewritten as
\begin{equation}
\begin{split}
F_{\mu\nu}^a=&D_\mu A^a_\nu - D_\nu A^a_\mu + g f^{abc} A_\mu^bA_\nu^c\\
=&\partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A_\mu^bA_\nu^c~.
\end{split}
\end{equation}
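Indeed, since the connection is torsion-free, the connection coefficients drop out of the antisymmetrized derivative:
\[
\nabla_{\mu}A^{a}_{\nu}-\nabla_{\nu}A^{a}_{\mu}=\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}-\left(\Gamma^{\lambda}_{\mu\nu}-\Gamma^{\lambda}_{\nu\mu}\right)A^{a}_{\lambda}=\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}\,.
\]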
In particular, this is true in the case when $\nabla_\mu$ is the Weyl connection. Hence, there is no direct coupling between the Weyl vector and gauge bosons. The kinetic term of the gauge boson $A_\mu^a$ is given by the standard Yang-Mills action
\begin{equation}
S_{\rm YM} =-\frac{1}{4}\int \mbox{d}^4 x\sqrt{-g}\; F^a_{\mu\nu}F^{a\,\mu\nu}~,
\end{equation}
which is conformally invariant in four dimensions. %
The scalar field $\phi$ is real in our model; therefore, it does not couple to ordinary gauge fields through the minimal coupling prescription. Although it is certainly possible to generalize the model to allow for non-minimal couplings, they can potentially spoil conformal invariance or renormalizability of the SM (or both).
The description of the dynamics of fermions on curved spacetime requires the introduction of a tetrad and of a spin connection. The action of a massless Dirac spinor is given by (see \emph{e.g.} \cite{Parker:2009uva})
\begin{equation}\label{eq:ActionDirac}
S_{\rm Dirac}=\int \mbox{d}^4 x\sqrt{-g}\; i \overline{\psi}\gamma^c e^{\mu}_c\left(\partial_\mu+\frac{1}{8}[\gamma^a,\gamma^b]\,e_a^{\;\nu}(D_\mu e_{b\,\nu})\right)\psi~.
\end{equation}
Observe that Eq.~(\ref{eq:ActionDirac}) uses the Levi-Civita connection $D_\mu$. The reason for this choice will be clear from the following.
Latin indices are used for the Lorentzian frame defined pointwise by the tetrad $e^a_\mu$
\begin{equation}
e^a_{\,\mu} e_{a\,\nu}=g_{\mu\nu},\hspace{1em} e^a_{\,\mu} e^{b\,\mu}=\eta^{ab}~.
\end{equation}
$\eta_{ab}$ is the Minkowski metric ${\rm diag}(-1,1,1,1)$. The gamma matrices in Eq.~(\ref{eq:ActionDirac}) are the flat ones $\{\gamma^a,\gamma^b\}=2\eta^{ab}$.
Under a conformal transformation, each field in Eq.~(\ref{eq:ActionDirac}) transforms according to its conformal weight
\begin{equation}
\psi\rightarrow \tilde{\psi}=\Omega^{-3/2}\psi,\quad \overline{\psi}\rightarrow \tilde{\overline{\psi}}=\Omega^{-3/2}\overline{\psi},\quad e^a_{\,\mu}\rightarrow\tilde{e}^a_{\,\mu}=\Omega\, e^a_{\,\mu}~.
\end{equation}
It is possible to check by explicit computation that, under such a transformation, all terms involving derivatives of the function $\Omega$ cancel in Eq.~(\ref{eq:ActionDirac}). Hence, the action of a Dirac fermion defined using the Levi-Civita connection is conformally invariant. The same conclusion can also be reached by looking at the square of the Dirac operator defined by Eq.~(\ref{eq:ActionDirac}). In this way, one finds a generalization of the Klein-Gordon equation with a non-minimal coupling to curvature, which turns out to be conformally invariant \cite{Parker:2009uva,konno:1988}.
In Ref.~\cite{Cheng:1988zx} the action of a Dirac particle was defined by considering a generalization of Eq.~(\ref{eq:ActionDirac}) which makes both terms in the bracket separately conformally invariant, when acting on $\psi$. Namely, the Weyl connection is considered instead of the Levi-Civita connection and the coupling to the Weyl vector is included, with the appropriate coupling constant given by the conformal weight of the spinor
\begin{equation}\label{Eq:ActionDirac2}
\int \mbox{d}^4 x\sqrt{-g}\; i \overline{\psi}\gamma^c e^{\mu}_c\left(\partial_\mu+\frac{3}{4}B_\mu+\frac{1}{8}[\gamma^a,\gamma^b]\,e_a^{\;\nu}(\nabla_\mu e_{b\,\nu})\right)\psi~.
\end{equation}
However, it turns out that this action is equal to the one in Eq.~(\ref{eq:ActionDirac}), since the terms involving the Weyl vector cancel exactly. More details are given in the Appendix.
We conclude this section by stressing that the requirement of local conformal invariance does not introduce new direct couplings of the elementary matter fields (with the only exception of the Higgs) with the new fields $\phi$ and $B_\mu$. Their interactions with leptons, quarks and gauge bosons can only be mediated by the gravitational field $g_{\mu\nu}$ or the Higgs field. This has important implications for the dynamics of matter in a gravitational field.
\section{Motion of fluids and test particles}\label{Sec:Fluids}
In the previous section we showed that the dynamics of free vector and spinor fields is determined solely by the Levi-Civita connection. The only field in the gravitational sector with which they can interact directly is the metric tensor $g_{\mu\nu}$. A description of matter that is particularly convenient for applications to macroscopic physics (\emph{e.g.} astrophysics, cosmology) in certain regimes is in terms of perfect fluids. Following Ref.~\cite{Brown:1992kc}, the matter action for a perfect and isentropic fluid is given by
\begin{equation}\label{eq:ActionMatter}
S_{\rm matter}=-\int\mbox{d}^4 x\sqrt{-g} \left[\rho\left(\frac{|J|}{\sqrt{-g}}\right)+J^{\mu}(\partial_\mu\chi+\beta_A\partial_\mu\alpha^A)\right]~.
\end{equation}
$J^{\mu}$ represents the densitized particle number flux (with $|J|\equiv\sqrt{-J^{\mu}J_{\mu}}$~), which can be written as
\begin{equation}\label{eq:RelationFluxVelocity}
J^{\mu}=n\sqrt{-g}\,U^{\mu}~,
\end{equation}
where $n$ is the particle number density and $U^{\mu}$ the four-velocity of the fluid. Using Eq.~(\ref{eq:RelationFluxVelocity}) the particle number density can be computed as
\begin{equation}
n=\frac{|J|}{\sqrt{-g}}~.
\end{equation}
$\chi$ is a Lagrange multiplier enforcing particle number conservation. Additional constraints can be imposed. In fact, interpreting $\alpha^A$ ($A=1,2,3$) as Lagrangian coordinates for the fluid, the Lagrange multipliers $\beta_A$ impose the condition that the fluid flows along lines of constant $\alpha^A$. The stress-energy tensor obtained from the action Eq.~(\ref{eq:ActionMatter}) takes the form
\begin{equation}\label{eq:StressEnergy}
T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta S_{\rm matter}}{\delta g^{\mu\nu}}= (\rho+p)\, U_\mu U_\nu + p\, g_{\mu\nu}~,
\end{equation}
having defined the pressure as (see \cite{Brown:1992kc,Misner:1974qy})
\begin{equation}
p=n\frac{\partial\rho}{\partial n}-\rho~.
\end{equation}
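As a quick symbolic check of this relation (a side computation, not part of the original derivation; the constant $C$ is an arbitrary normalization): for $\rho\propto n^{4/3}$ one recovers the radiation equation of state $p=\rho/3$, while $\rho\propto n$ gives the pressureless case $p=0$.

```python
import sympy as sp

n, C = sp.symbols('n C', positive=True)

def pressure(rho):
    # p = n * d(rho)/dn - rho, as defined in the text
    return sp.simplify(n * sp.diff(rho, n) - rho)

rho_rad = C * n**sp.Rational(4, 3)   # radiation-like fluid
rho_dust = C * n                     # pressureless fluid

p_rad = pressure(rho_rad)    # equals rho_rad / 3
p_dust = pressure(rho_dust)  # equals 0
```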
The dynamics of the fluid is obtained by looking at the stationary points of the action (\ref{eq:ActionMatter}). In particular, diffeomorphism invariance implies that the stress-energy tensor is covariantly conserved
\begin{equation}\label{eq:ConservationStressEnergy}
D^\mu T_{\mu\nu}=0.
\end{equation}
Notice that the local conservation law Eq.~(\ref{eq:ConservationStressEnergy}) is formulated in terms of the Levi-Civita connection. We remark that, as is well known, the argument above leading to Eq.~(\ref{eq:ConservationStressEnergy}) applies to all matter fields (including the elementary ones considered in the previous section) as long as interactions with other species are negligible. The Higgs field represents an exception, since it has direct couplings to the Weyl vector.
Different regimes have to be considered for the dynamics of matter, depending on the energy scale. Above the scale of EW symmetry breaking (and regardless of whether conformal symmetry is broken or unbroken), all particles are massless and can be described as a perfect radiation fluid $\rho_{\rm rad}(n)\propto n^{4/3}$. At lower scales and after the spontaneous breakdown of EW symmetry, photons and neutrinos remain massless, while baryonic matter\footnote{As in the ordinary usage of the word by cosmologists, \emph{i.e.} including leptons and actual baryons.} is characterized by $\rho_{\rm bar}(n)\propto n$. As far as the dynamics of matter fields alone\footnote{Again, with the exception of the Higgs field.} is concerned, there is no difference from the corresponding equations obtained in GR. Interactions with $B_\mu$ and $\phi$ can only be mediated by the gravitational field $g_{\mu\nu}$ or the Higgs field $H$. As is well known, the dynamics of a small test body can be obtained from the conservation law Eq.~(\ref{eq:ConservationStressEnergy}) \cite{Geroch:1975uq}. This is readily seen for dust ($p=0$), in which case the worldline of each dust particle is a geodesic of the Levi-Civita connection, \emph{i.e.} the four-velocity satisfies the equation
\begin{equation}
U^\mu D_\mu U^\nu=0~.
\end{equation}
Geodesic motion of test bodies is a consequence of the coupled dynamics of the gravitational field and matter \cite{Geroch:1975uq}, not an independent physical principle. Hence, the connection that is used to define the parallel transport of \emph{physical} objects is \emph{not an independent prescription} fixed at the outset, but it is instead a consequence of the dynamics. Although this is a well-known result in General Relativity (see Ref.~\cite{Geroch:1975uq}), to the best of the authors' knowledge it has not been stressed previously in a non-Riemannian framework. In our case, the dynamics follows from an action principle which we built using local conformal invariance as an additional guiding principle. The Weyl connection is used as a tool to implement this principle in a natural way in the gravitational sector. It turns out that local conformal invariance in the sector of gauge bosons and spin-$1/2$ fermions does not require using a non-metric connection. The standard minimal coupling to the gravitational field is enough to ensure that conformal invariance holds as a local symmetry.
We would like to stress at this point that, although our approach is based on Weyl geometry as a framework for a dynamical theory of gravity, it differs from Weyl's original formulation in certain important respects. The main objection against Weyl geometry as a framework for gravitational physics is based on a criticism raised by Einstein against Weyl's original proposal.
Einstein's argument is the following. If a vector is parallel transported along a closed path, with parallel transport defined by the Weyl connection $\nabla_\mu$ instead of the Levi-Civita connection $D_\mu$, the norm of the vector changes as a result. This would have obvious physical consequences. In fact, considering any two paths in spacetime having the same starting and end points, rods' lengths and clocks' rates would depend on their histories\footnote{The same argument would also apply for parallel transport given by other non-metric connections $\nabla_\lambda g_{\mu\nu}=Q_{\lambda\mu\nu}$ with non-vanishing Weyl vector, defined as the trace of the non-metricity $B_\mu=\frac{1}{4}Q^{\phantom{a}\phantom{a}\lambda}_{\mu\lambda}$.}. This is known as the \emph{second clock effect}. Any theory leading to such effects is clearly non-physical\footnote{The Aharonov-Bohm effect is an analogue of this effect which is instead physical. In that case though, the gauging is not done in physical space, as in Weyl's original proposal, but in the internal space given by the phase of the wave-function.}.
It is worth stressing that this is an argument against the use of the Weyl connection as the one defining parallel transport of \emph{physical} objects, such as rods and clocks. This is clearly not the case in our model. In fact, the dynamics of all elementary matter fields (with the important exception of the Higgs) only involves the Levi-Civita connection $D_\mu$. Hence, it does not entail any direct couplings to the new fields in the gravitational sector. Classical test particles move along geodesics defined by $D_\mu$, as in GR.
\section{An alternative proposal}\label{Section:Alternative}
In this section we suggest an alternative possibility for the dynamics of matter in the extended geometric framework of Weyl geometry. The reader must be aware that this proposal is entirely different in spirit from the one discussed in Sections~\ref{Sec:CouplingSM},~\ref{Sec:Fluids}. In fact, we will put aside for the time being the problem of finding a conformally invariant extension of the SM, and only focus on some classical aspects of the extended geometric framework. In particular, we will consider a different model to describe the motion of matter as classical test bodies. We adopt a \emph{phenomenological} point of view and assume the existence of a conformal symmetry breaking mechanism from which mass scales originate. A specific coupling of classical test bodies with the Weyl vector is assumed, which is consistent with conformal symmetry in the unbroken phase.
We assume that the dynamics of a test particle is given by the following action
\begin{equation}\label{Eq:ActionTestParticle}
S_{\rm \scriptscriptstyle TP}=\frac{1}{2}\int\mbox{d} t\; \left[e^{-1}\dot{x}^\mu\dot{x}_\mu-m^2 e\right]-q\int\mbox{d} t\;B_\mu\dot{x}^\mu~.
\end{equation}
In Eq.~(\ref{Eq:ActionTestParticle}), $t$ is an arbitrary parameter on the world-line and the \emph{einbein} $e$ is a Lagrange multiplier. The second term represents the interaction of the test particle with the Weyl vector (with coupling $q$), which forms part of the extended gravitational background. In the conformally invariant phase, all dimensionful parameters must vanish. Hence, one has $m^2=0$ for all particles. The action (\ref{Eq:ActionTestParticle}) is then conformally invariant, with the metric and the Weyl vector transforming as in Eqs.~(\ref{eq:LocalConformalMetric}),~(\ref{eq:TransformationLawWeylVector}) and the einbein transforming as
\begin{equation}
e\to\tilde{e}= \Omega^2 e~.
\end{equation}
Variation of the action w.r.t. $e$ and $x^{\mu}$ in the massless case yields
\begin{align}
&\dot{x}^\mu\dot{x}_\mu=0~,\\
&\ddot{x}^\mu+\hspace{-5pt}\phantom{o}^{\rm \scriptscriptstyle LC}\Gamma_{\nu\kappa}^\mu\dot{x}^\nu\dot{x}^\kappa-eq H^{\mu}_{~\nu}\dot{x}^\nu=\left(\frac{\mbox{d}}{\mbox{d} t}\log e\right)\dot{x}^\mu~\label{eq:eom_massless}.
\end{align}
We can partially fix the world-line parametrization by requiring that the particle follows an affinely parametrized geodesic in the case $q=0$, which implies $\dot{e}=0$. We will denote the constant value of the einbein by $\hspace{-6pt}\phantom{a}^o e$. Making use of this additional assumption, Eq.~(\ref{eq:eom_massless}) thus reads
\begin{equation}\label{eq:eom_massless:simplified}
\ddot{x}^\mu+\hspace{-5pt}\phantom{o}^{\rm \scriptscriptstyle LC}\Gamma_{\nu\kappa}^\mu\dot{x}^\nu\dot{x}^\kappa-\hspace{-6pt}\phantom{a}^o e q H^{\mu}_{~\nu}\dot{x}^\nu=0~.
\end{equation}
Note that the coupling with the field strength in Eq.~(\ref{eq:eom_massless:simplified}) is entirely arbitrary. In fact, it depends on the time parametrization or, equivalently, on the choice of conformal frame. This freedom is essentially related to the fact that null curves are by definition invariant under conformal transformations, and to the absence of a basic time scale. We will come back to this issue later on.
In the broken-symmetry phase (\emph{i.e.} after conformal and EW SSB), mass scales are allowed. In this case, the dynamics is given by the action (\ref{Eq:ActionTestParticle}) with $m^2\neq0$. However, this is no longer conformally invariant. Solving the equations of motion for the einbein, the action (\ref{Eq:ActionTestParticle}) reduces to
\begin{equation}\label{Eq:ActionTestParticle1}
\phantom{a}^{\scriptscriptstyle 1}S_{\rm \scriptscriptstyle TP}=-m\int\mbox{d} t\; \sqrt{-\dot{x}^\mu\dot{x}_\mu}-q\int\mbox{d} t\;B_\mu\dot{x}^\mu~.
\end{equation}
Extremizing the action (\ref{Eq:ActionTestParticle1}) we obtain the equation of motion
\begin{equation}\label{Eq:EOMmassive}
\ddot{x}^\mu+\hspace{-5pt}\phantom{o}^{\rm \scriptscriptstyle LC}\Gamma_{\nu\kappa}^\mu\dot{x}^\nu\dot{x}^\kappa-\frac{q}{m} H^{\mu}_{~\nu}\dot{x}^\nu=0~.
\end{equation}
Note that in the massive case affine parametrization is automatically enforced when $q=0$. By comparing the equations of motion (\ref{Eq:EOMmassive}) and (\ref{eq:eom_massless:simplified}), we observe that there is in general a discontinuity in the coupling of the particle's velocity to the field strength $H^{\mu\nu}$ in the limit $m^2\to0$. In fact, for a fixed value of $q$ (\emph{i.e.} independent of $m$), the coefficient of $H^{\mu}_{~\nu}\dot{x}^\nu$ in Eq.~(\ref{Eq:EOMmassive}) diverges in the massless limit, whereas it is entirely arbitrary in the strictly massless case (Eq.~(\ref{eq:eom_massless:simplified})). However, if we assume that there is a continuous phase transition that gives rise to all mass scales, we can match the dynamics of the particle in the two phases by promoting $q$ to a function of the mass and requiring $q\propto m$. In principle, the proportionality constant can depend on the internal constitution of the test body. However, if we assume that it is universal, the motion of test bodies is the same as in the scalar-tensor-vector theory (MOG) of Ref.~\cite{Moffat:2005si} provided that $q/m=\kappa_g=\sqrt{\alpha G}$ (for the definition of the parameters\footnote{The parameter $\kappa_g$ used in Ref.~\cite{Moffat:2005si} should not be confused with the parameter $\kappa$ used in the rest of this paper, \emph{e.g.} in Eq.~(\ref{eq:Higgs-Dilaton}).} $\alpha$ and $\kappa_g$ see Ref.~\cite{Moffat:2005si}).
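In flat space the Christoffel symbols vanish and Eq.~(\ref{Eq:EOMmassive}) takes the form of the Lorentz force law. The following numerical sketch (illustrative only, with hypothetical parameter values and an assumed constant field-strength component $H_{12}=-H_{21}=h$) integrates the spatial part of the equation with a fourth-order Runge-Kutta step and verifies that the antisymmetric coupling does no work, so the speed is conserved:

```python
import numpy as np

# Illustrative values (not from the text)
q_over_m = 0.5   # coupling-to-mass ratio q/m
h = 2.0          # constant "magnetic-like" component H_{12} = -H_{21}

def acceleration(v):
    # a^i = (q/m) H^i_j v^j with only H^1_2 = h, H^2_1 = -h nonzero
    return q_over_m * np.array([h * v[1], -h * v[0]])

dt, steps = 1e-3, 5000
v = np.array([1.0, 0.0])   # initial velocity
for _ in range(steps):
    k1 = acceleration(v)
    k2 = acceleration(v + 0.5 * dt * k1)
    k3 = acceleration(v + 0.5 * dt * k2)
    k4 = acceleration(v + dt * k3)
    v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

speed = np.linalg.norm(v)  # remains 1: the velocity just rotates
```

The velocity rotates at angular frequency $\omega=(q/m)\,h$, in direct analogy with a charged particle in a constant magnetic field.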
Some remarks are in order:
\begin{enumerate}[label=$\roman*$)]
\item This approach, as the one discussed previously in Sections~\ref{Sec:CouplingSM},~\ref{Sec:Fluids}, is also immune from the second clock problem, but for a different reason. In fact, the motion of test particles in this case does not follow geodesics of the Levi-Civita connection, but is also influenced by the Weyl vector. However, only its field strength $H_{\mu\nu}$ appears in the equations of motion (\ref{eq:eom_massless}),~(\ref{eq:eom_massless:simplified}),~(\ref{Eq:EOMmassive}). Hence, the rate of a clock does not change by going around a closed path.
\item Despite the formal analogy of the action (\ref{Eq:ActionTestParticle1}) with that of a charged particle in classical electrodynamics, the Weyl vector $B_\mu$ should not be identified with the electromagnetic field potential $A_\mu$. In fact, although their transformation properties are similar (under local conformal transformations vs. gauge transformations), the other fields (\emph{e.g.} the metric tensor) do not behave in the same way under a conformal or an internal $\mbox{U}(1)$ transformation.
\item The approach followed in this Section is a phenomenological one, which assumes a strictly macroscopic point of view. No attempt is made to connect it to an underlying field theory which is compatible with conformal invariance. In fact, as shown in Sections~\ref{Sec:CouplingSM},~\ref{Sec:Fluids}, minimal conformally invariant extensions of the SM and GR do not lead to similar dynamics for test bodies. Rather, they imply that test bodies follow geodesics of the Levi-Civita connection as in GR.
\item It is not clear yet how a universal coupling to $B_\mu$ with coupling constant $q\propto m$ may arise from the point of view of quantum field theory, in a way which is at the same time compatible with the conformal symmetry breaking scenario outlined in the previous sections, and with the Higgs mechanism.
\end{enumerate}
\section{Conclusion}\label{Conclusions}
We considered an extension of GR and SM with local conformal invariance. The purpose is to provide a new framework for the study of conformal symmetry and its relation to fundamental physics at high energy scales. This is achieved by considering a generalization of Riemannian geometry, first introduced by Weyl and later proposed by Smolin~\cite{Smolin:1979uz}. The affine connection is no longer given by the Levi-Civita connection, as only the conformal structure of the metric is preserved by parallel transport. This leads to the introduction of a gauge vector $B_{\mu}$ in the gravitational sector: the Weyl vector. A scalar field $\phi$ is also needed in order to build a conformally invariant action functional. The framework is that of a classical effective field theory of gravity. The interpretation of our model is similar to that of quantum field theory in curved spacetime. SM fields can be quantized as usual, with $g_{\mu\nu}$, $B_{\mu}$ and $\phi$ representing \emph{classical} background fields\footnote{This is clearly the case for the metric $g_{\mu\nu}$ and the Weyl vector $B_{\mu}$ since they define the classical geometric structure of spacetime, see Eq.~(\ref{eq:DefWeylConnection}). In fact, either they are both classical or both quantum. The status of the field $\phi$ is a more subtle issue and both cases are possible \emph{a priori}. Only a careful analysis of the implications of the two possibilities can determine which one is correct.}.
Our model is a generalization of previous works in the scientific literature on conformal symmetry in gravity theories \cite{Bars:2013yba,Kallosh:2013hoa}, which can be recovered as a particular case of our model. The main difference in our approach is due to the introduction of a new geometric degree of freedom, represented by the Weyl vector field entering the definition of the Weyl connection. Suitable choices of some parameters of the theory lead to the decoupling of the Weyl vector from the Higgs and the scalar fields; in the general case, however, its dynamics cannot be neglected. After gauge fixing the conformal symmetry (which can be interpreted as a spontaneous symmetry breaking) and EW symmetry breaking, the Weyl vector acquires a mass and the scalar is completely decoupled from the theory. The relevance of the scalar for low energy physics lies in the fact that, through gauge fixing, it leads to the introduction of a physical scale in a theory which is scale-free at the outset. All dimensionful parameters of the SM and gravity can be expressed in terms of it and of the dimensionless parameters of the theory.
Einstein's criticism of Weyl's original proposal is addressed in our model, which is not affected by the \emph{second clock effect}. In fact, we showed in Sections~\ref{Sec:CouplingSM},~\ref{Sec:Fluids} that the affine connection that defines parallel transport of physical objects, such as \emph{e.g.} clocks and rods, is the Levi-Civita connection. Test particles move along Levi-Civita geodesics as in GR. We remark that this is not prescribed at the outset. It is instead a consequence of the dynamics, which has been formulated using conformal invariance as a guiding principle. The Weyl connection \emph{does} play a role in determining the gravitational sector of the theory, although it does not determine the motion of test particles\footnote{It is remarkable that essentially the same observation was made by Weyl in a reply to Einstein's comment to his original paper. We quote from the English translation contained in Ref.~\cite{ORaifeartaigh:1997dvq}: \emph{``It is to be observed that the mathematical ideal of vector-transfer {\rm (Authors' Note: \emph{i.e.}, parallel transport)}, on which the construction of the geometry is based, has nothing to do with the real situation regarding the movement of a clock, which is determined by the equations of motion''}.}. Furthermore, the introduction of the $B_\mu$ field is necessary in order to build a conformally invariant action functional for scalar fields in four dimensions, but has no (direct) effects on radiation and baryonic matter.
SM fields do not couple to the new fields in the gravitational sector, with the exception of the Higgs. Their interactions with $\phi$ and $B_\mu$ can only be mediated by gravity or the Higgs field.
\section{Discussion and Outlook}\label{Sec:Discussion}
We would like to stress that the present model does not necessarily offer a resolution of the naturalness (or hierarchy) problem. In fact, this problem is now translated into the fine-tuning of its dimensionless parameters. Namely, the hierarchy of the Planck versus EW scale leads to $\frac{v^2}{M_{Pl}^2}=\frac{\xi_\phi}{2\kappa^2}+\xi_H\sim 10^{-34}$. Nevertheless, classical conformal invariance of the extended SM sector is important as a guideline for model building, since it restricts the class of allowed couplings to those having dimensionless coupling constants \cite{Heikinheimo:2013fta}. Furthermore, the possibility of addressing the hierarchy problem in conformally invariant extensions of the SM has been considered in \emph{e.g.} Ref.~\cite{Meissner:2006zh} and in the earlier works Refs.~\cite{Buchmuller:1988cj,Buchmuller:1990pz}. In the models considered in those works, the EW and the Planck scales are determined by non-trivial minima of the one-loop effective potential in the Higgs-dilaton sector\footnote{The mechanism is a generalization of the one originally proposed by Coleman and Weinberg in Ref.~\cite{Coleman:1973jx}.}. It will be the subject of future work to study whether a similar mechanism could be implemented consistently within our framework. In fact, whereas it is clear that the Weyl vector cannot be quantized without also quantizing the metric, one may speculate that the scalar field $\phi$ should be treated on the same footing as the matter fields and be regarded as quantum. Hence, an analysis similar to those in the works cited above should be carried out to check the viability of this working hypothesis.
In the affirmative case, it would be possible to address the important point concerning the exact value of the scale $\phi_0$, which we regarded as a free parameter in this work\footnote{Similarly, the scale of ``conformal symmetry breaking'' (gauge fixing) $\phi_0$ is a free parameter in all models with classical conformal invariance, see \emph{e.g.} Refs.~\cite{Bars:2013yba,Kallosh:2013hoa}.}.
In future work we will explore the physical consequences of our model for cosmology and astrophysics. In particular, it would be interesting to study whether $B_\mu$ could represent a valid dark matter candidate, as first hinted at in \cite{Cheng:1988zx}. If this were the case, it would have a substantially different interpretation from standard dark matter. In fact, the Weyl vector should not be regarded, strictly speaking, as matter but as a property of the spacetime geometry. Important viability checks for the model require the determination of constraints on the couplings of $\phi$ and $B_\mu$ to the Higgs that may come from collider physics. The Weyl vector $B_\mu$ is a classical background field; hence, it can only contribute external lines to the diagrams describing known processes. This is true also for the scalar $\phi$, if this is to be regarded as classical. If, on the other hand, $\phi$ is treated as a quantum field, there will be a new scalar entering loop diagrams. In this case, it is crucial to determine which values of the coupling constants (such as \emph{e.g.} $\xi_\phi$, $\xi_H$) in the bare action are such that renormalizability of the SM is not spoiled (see \emph{e.g.} the analysis in Ref.~\cite{Coriano:2012nm}). Addressing this question may also help to shed some light on the ``naturalness'' of the particular choice of parameters\footnote{Commonly known as conformal couplings, since in the standard framework of Riemannian geometry these are the unique values that make the kinetic terms of $\phi$ and $H$ conformally invariant.} $\xi_H=\frac{\xi_\phi}{\omega}=-\frac{1}{12}$ within the broader framework of Weyl geometry. The phenomenology of the model should be studied in detail both in the case where $\phi$ is quantized and when it represents instead a classical background. Detailed studies of the consequences for gravitational experiments are also in order and will be the subject of future work.
It is also worth studying the possible relations between our model and modified gravity theories such as the scalar-tensor-vector theory (MOG) considered in Ref.~\cite{Moffat:2005si}. A preliminary study of the possibility of such a connection was carried out in Section~\ref{Section:Alternative}. This was done by considering a purely phenomenological model for the dynamics of test bodies, which could represent an alternative scenario to the theoretical model analyzed in Sections~\ref{Sec:CouplingSM},~\ref{Sec:Fluids}.
\begin{acknowledgement}
The research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the
Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through
the Ministry of Research and Innovation. The work of MS is also partially supported by STFC (UK) under the research grant
ST/L000326/1. MdC and MS would like to thank Simon Salamon for useful discussions and Perimeter Institute for the hospitality while this work was completed. MdC is also grateful to Roberto Oliveri and J\'er\'emie Quevillon for their constructive comments on the draft. JWM would like to thank Philip Mannheim for useful discussions. The authors would like to thank Subhash Rajpoot for drawing our attention to his work with Hitoshi Nishino. The authors wish to acknowledge helpful feedback received from the anonymous referees, that prompted us to clarify some crucial physical aspects of the model.
\end{acknowledgement}
\section{Introduction}
Phase-ordering dynamics after a cosmological phase transition
may explain the origin of structure in our universe \cite {ShelVil}.
Our interest is in the role of one class of theories - cosmic strings
\cite {HindKib}.
Recent attention has focused on cosmic strings formed during the
breaking of
the grand unified symmetry, which naturally generate perturbations
in the matter and radiation fields of the right order for structure
formation. However, to
make diagnostic predictions between cosmic strings, other
defects and inflation-based theories we need to use a detailed theory
of the cosmological evolution of perturbations.
There exists a well-founded theory
for the standard inflationary scenario of gaussian fluctuations
in the gravitational potential. In extending the theory to cover
defect-based
theories there are three principal issues to be faced.
Firstly, although the initial conditions are random, they are not
gaussian, which makes averaging over the ensembles of defects a
numerical problem. Secondly, the creation
of the defects must conserve energy-momentum: hence there are
compensating fluctuations
in the fluid components on super-horizon scales.
Thirdly, in the subsequent evolution, the defect stress-energy
$\Theta_{\mu\nu}$ must
be included in the Einstein equations. Defects form an unusual
component of the
total stress-energy, in that the metric perturbations
only affect the evolution of the defect network to second order, so in
the linear
approximation we can evolve the network in the unperturbed background
metric. Its stress-energy then acts as a source term for the familiar
set of differential equations governing the evolution of the
perturbations.
As demonstrated by Veeraraghavan and Stebbins \cite {VeeSte90}, the
solutions to the Einstein equations
describing the {\em subsequent} evolution of perturbation variables
with
a source term can be expressed as a convolution of a suitable Green's
function
(dependent on the usual cosmological parameters) with the source term
integrated
over time. For a set of perturbation variables $\Delta_a$ and a set of
source
terms $S_b$ related to $\Theta_{\mu\nu}$
\ben
\Delta_a(k,\eta)=\int_{\eta_i}^\eta d{\eta}'\,G_{ab}(\eta,\eta')
S_b(k,\eta')
\label{pert1}
\een
where $\eta$ and $\eta'$ are conformal times and {\em k} is the
wavenumber of the mode.
Calculating the power in these variables will then involve integrating
over the two-time correlator
\ben
\langle |\Delta_a(k,\eta)|^2\rangle=\int_{\eta_i}^{\eta} d{\eta}'\,d{\eta}''\,G_{ab}(\eta,\eta')
G_{ac}(\eta,\eta'') \langle S_b(k,\eta') S_c(k,\eta'')\rangle
\label{pert2}
\een
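As an illustration of how these expressions are used in practice, the following sketch discretizes the double integral for a single source component (a toy model, not a cosmological computation: the retarded Green's function of a forced oscillator $\ddot u + k^2 u = S$ stands in for the cosmological $G_{ab}$, and an assumed Gaussian two-time source correlator replaces $\langle S_b S_c\rangle$):

```python
import numpy as np

k = 5.0                          # wavenumber of the mode
eta = np.linspace(1.0, 10.0, 400)
d = eta[1] - eta[0]

def G(e1, e2):
    # Toy retarded Green's function of u'' + k^2 u = S
    return np.where(e2 <= e1, np.sin(k * (e1 - e2)) / k, 0.0)

def corr(e1, e2):
    # Assumed Gaussian two-time source correlator <S(k,e1) S(k,e2)>
    return np.exp(-0.5 * (e1 - e2)**2)

E1, E2 = np.meshgrid(eta, eta, indexing='ij')
# Double quadrature for <|Delta(k, eta)|^2> at the final time
power = np.sum(G(eta[-1], E1) * G(eta[-1], E2) * corr(E1, E2)) * d * d
```

In a realistic computation $G_{ab}$ is obtained from cosmological perturbation theory and the source correlator from string simulations, but the quadrature structure is the same.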
When using the Veeraraghavan-Stebbins formalism, the Green's functions
can be calculated
from standard cosmological perturbation theory:
analytically in the simplest cases, or numerically for a more realistic
universe.
However, in the absence of a workable analytic treatment of string
evolution
(although see \cite {AusCopKib} and \cite{Hind1})
we must rely for the moment on a numerical study of the two-time
correlators.
In a simplified model the Universe consists of two fluids: a
pressureless CDM component
and a tightly-coupled baryon-photon fluid. In the convenient
synchronous gauge, the sources are $\Theta_{00}$ and $i\hat
k_j\Theta_{0j}$ (which we
shall hereafter call $U$), when the equations are rewritten
using the energy-momentum pseudotensor introduced by Veeraraghavan and
Stebbins \cite {VeeSte90}. In the Newtonian gauge they are also the
sources
of curvature fluctuations \cite{HuSpeWhi96}.
It is therefore the two-time correlation
functions of these variables that we study.
Such a study is particularly timely in view of recent work
on the power spectrum of Cosmic Microwave Background around the degree
scale from
strings \cite {MACF1,MACF2} and from global textures \cite
{DuGaSa,CritTur}.
As emphasised by Albrecht {\em et\, al\,} \cite {ACFM}, the distinctive
appearance
of ``Doppler'' peaks and troughs seen in inflationary calculations and
texture
models depends sensitively on the temporal coherence of the sources. If
one assumes little coherence the peaks are washed out; on the other
hand
an assumption of total coherence preserves them. Magueijo {\em et\,
al\,}
in \cite {MACF1} assumed that strings were
effectively incoherent and obtained a rather featureless CMB power
spectrum
at large multipole $l$. In \cite {MACF2} this assumption was justified
by a numerical study of the two-time energy-density correlator.
In contrast both Durrer {\em et\, al\,} \cite {DuGaSa} and Crittenden
and Turok \cite {CritTur} assumed maximum coherence for their texture
models and found
that the peaks were preserved, albeit in positions typical of
isocurvature
models \cite{HuSpeWhi96}. It is therefore clear that understanding
the temporal coherence of string sources is of great importance when
calculating their microwave background signals.
In this paper we present the
results of some numerical ``experiments'' evolving strings in
Minkowski space. Using Minkowski space rather than a Friedmann model
may seem
rather unrealistic. However, we expect a network of strings evolving in
Minkowski space to have all the essential features of one in a
Friedmann background. A tangled initial state consisting of a few
probably ``infinite'' strings plus a scale free distribution of loops
straightens out under tension and continually self intersects resulting
in the transfer of energy into very small fast moving loops. The
infinite
string approaches ``scaling'' in which it is characterised as a set
of more or less Brownian random walks. The step length of the walks
and their average separation are both approximately equal to an overall
network scale $\xi$, which increases linearly with time (as naive
dimensional analysis would suggest).
The length density of the infinite string
decreases as $\xi^{-2}$, again as dimensional analysis suggests.
In fact, $\xi$ is conventionally defined so that the length density
is precisely $1/\xi^2$ \cite {CopKibAus}.
The effect of an expanding background is to damp coherent motions on
super-horizon scales. However, the network scale is much less than the
horizon
size so one can argue that the expansion does not significantly affect
the
network dynamics. The great advantage of Minkowski space is that the
network evolution is very easy to simulate numerically: the code is
generally many times shorter than an equivalent code for a Friedmann
background \cite {AlbTur,FRWCodes}, and makes fewer demands
on both RAM and CPU time. We are therefore
able to get much better statistical significance from the data.
We present results from an extensive programme of numerical simulations
for the two-time
correlators of $\Theta_{00}$ and $U$, and use a simple model to explain
most of the features we observe. It turns out that, to a good
approximation,
the network can be thought of as consisting of randomly placed segments
of string with
random velocities. The segments are of length $\xi$
and number density $\xi^{-3}$, thus reproducing the correct behaviour
for the length density.
In this model it is easy to show that the coherence time-scale in a
Fourier mode
of wavenumber {\em k} is determined by the time segments take to travel
a
distance $k^{-1}$. The model in fact predicts that the correlations
between
the energy density at times $\eta$ and $\eta'$ decrease as
${\rm exp}( -\sixth k^2 \bar{v}^2 (\eta-\eta')^2)$, where $\bar v$ is the
r.m.s string
velocity.
Hence we can talk of a coherence time-scale $\eta_c=\sqrt{3}/(\bar{v} k)$,
which is the time over which the correlation function falls to
$e^{-\frac{1}{2}}$
of its equal time value.
Given that we find $\bar{v}^2=0.36$, the model predicts $\eta_c\simeq 3/k$.
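This estimate is simple to check numerically. The following sketch (with only the quoted r.m.s. velocity as input) evaluates $\eta_c$ in units of $k^{-1}$ and confirms that the model correlator has fallen to $e^{-1/2}$ there:

```python
import math

# Measured r.m.s. string velocity squared from the simulations
v2 = 0.36
vbar = math.sqrt(v2)            # vbar = 0.6

# Coherence time-scale eta_c = sqrt(3)/(vbar*k), quoted in units of 1/k
eta_c_times_k = math.sqrt(3.0) / vbar

# The model correlator decays as exp(-k^2 vbar^2 (eta-eta')^2 / 6);
# at eta - eta' = eta_c this equals exp(-1/2) by construction.
decay = math.exp(-v2 * eta_c_times_k**2 / 6.0)

print(eta_c_times_k)   # ~2.89, i.e. eta_c ~ 3/k
print(decay)           # exp(-1/2) ~ 0.607
```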
Our results actually indicate that at high {\em k}, $\eta_c$ decreases
faster
than $k^{-1}$, but we have some evidence that this behaviour is a
lattice artifact.
\section{Flat Space Strings}
We use a development of a code written by one of us some time ago
\cite{SakVil1,SakVil2}.
Initial string realisations are generated using the
Vachaspati-Vilenkin algorithm \cite{VV}. This mimics the Kibble
mechanism for the spontaneous
breaking of a scalar U(1) symmetry with a ``Mexican hat'' potential,
as the
Universe cools through a thermal phase transition.
This initial configuration is set up for evolution on a cubic lattice
with fundamental
lattice side $\delta$. We are free to alter the initial correlation
length $\xi_0$
in terms of $\delta$ and we may add structure to the network by placing
a percentage
$P_c$ of cusps randomly along the string. Cusps are string links which
are confined
to one lattice site and which move at the speed of light. Adding cusps
avoids peculiarities
arising from having long straight segments of string in the initial
conditions.
In Minkowski space the string equations of motion are
\ben
{\bf X}'' = \ddot {\bf X}
\label {WaveEq}
\een
where $\dot { }={\partial / \partial\eta}$ and $\ '={\partial / \partial\sigma}$; $\eta$
is time
and $\sigma$ is a space-like parameter along the string. If
we regard Minkowski space as a Friedmann-Robertson-Walker Universe
in the limit that the expansion rate goes to zero, then $\eta$ can be
identified as FRW conformal time.
${\bf X}={\bf X}(\sigma,\eta)$ is a position three-vector which
satisfies
\ben
{\bf X}'\cdot\dot {\bf X}=0
\label {GaugeCond1}
\een
\ben
{\bf X}'^2+\dot {\bf X}^2=1
\label {GaugeCond2}
\een
This is a convenient set of constraints, as (\ref {GaugeCond2})
ensures that the energy of a segment of string is proportional
to its length in $\sigma$.
We solve this using the Smith-Vilenkin algorithm \cite {SmithVil:alg}
which is based on the
exact finite difference solution to (\ref {WaveEq}),
\ben
{\bf X}(\sigma,\eta+\delta) = {\bf X}(\sigma+\delta,\eta) + {\bf
X}(\sigma-\delta,\eta) - {\bf X}(\sigma,\eta-\delta)
\label {SmithVil}
\een
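As an illustration, the update (\ref{SmithVil}) can be coded in a few lines. The sketch below (variable names are ours; for brevity it evolves a small closed loop without enforcing the gauge conditions (\ref{GaugeCond1}) and (\ref{GaugeCond2})) shows that points initialised on integer lattice sites remain on them:

```python
import numpy as np

def smith_vilenkin_step(X_prev, X_curr):
    """One step of X(s, t+d) = X(s+d, t) + X(s-d, t) - X(s, t-d).

    X_curr[i] holds the point at sigma = i*delta on a closed loop,
    so the sigma-shifts wrap around via np.roll."""
    return np.roll(X_curr, -1, axis=0) + np.roll(X_curr, 1, axis=0) - X_prev

# A small closed loop of integer lattice points (delta = 1), initially
# at rest (the two time levels coincide).
X0 = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
X1 = X0.copy()
X2 = smith_vilenkin_step(X0, X1)
print(X2)   # integer lattice sites again
```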
If the string points are initially defined on the sites of a cubic
lattice $(N\delta)^3$,
then (\ref {SmithVil}) ensures that they remain on the lattice at time
steps of $\delta$.
One can see from (\ref {SmithVil}) that stationary elementary segments
have length
(and hence energy) of $2\delta$ and point in one of six directions.
Because the string points lie on the sites of the lattice, identifying
crossing events
is easy. When two strings cross, they intercommute with a probability
which
is set to 1.
Loops with energy greater than or equal to a threshold value
of $E_c$ are allowed to leave the network,
while reconnections are forbidden for loops with energy equal to $E_c$.
Forbidding
reconnections allows
energy to leave the network fairly efficiently; otherwise it
takes much longer for the effect of the initial conditions to wear off.
The natural and usual value for $E_c$ is the minimum segment length,
$2\delta$. In fact, most of the energy
in the string network goes into the smallest possible loops.
In some sense, the cusps model the gravitational radiation of a
realistic string network:
they travel at the speed of light and do not subsequently interact with
the network.
Each realisation is evolved on a $(64\delta)^3$ or a $(128\delta)^3$
lattice with
approximately $7500$ or $60000$ points describing the string network.
For calculating
quantities like power spectra and two-time correlation functions we
typically
average over 50 realisations.
\section{Results}
The energy-momentum tensor for a cosmic string at a point {\bf x} is
\ben
\Theta^{\mu\nu}({\bf x})=\int d\sigma (\dot X^\mu \dot X^\nu - X'^\mu
X'^\nu)
\delta^{(3)}({\bf x}-{\bf X}(\sigma,\eta))
\label{emx}
\een
It is simple to calculate this on a cubic lattice for our string
realisations,
and use a Fast Fourier Transform to get a Fourier mode representation.
Here we want
to measure the power spectra of $\Theta_{00}(k)$ and $U(k)$. These
are calculated by averaging the amplitudes over all Fourier modes with
a wavenumber between
{\em k} and {\em k} + $2\pi/\delta$.
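The binning step can be sketched as follows (array and function names are ours; we bin into shells one fundamental wavenumber $2\pi/L$ wide, which is a choice of this sketch rather than a statement about the original code):

```python
import numpy as np

def shell_averaged_power(field, box_size):
    """Spherically averaged power spectrum <|f(k)|^2> of a real 3-d field,
    binned into shells of width k_f = 2*pi/box_size."""
    n = field.shape[0]
    fk = np.fft.fftn(field)
    kf = 2.0 * np.pi / box_size
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    shell = np.round(kmag / kf).astype(int)       # shell index of each mode
    power = np.abs(fk)**2
    nbin = shell.max() + 1
    counts = np.bincount(shell.ravel(), minlength=nbin)
    sums = np.bincount(shell.ravel(), weights=power.ravel(), minlength=nbin)
    return kf * np.arange(nbin), sums / np.maximum(counts, 1)

# Sanity check: uncorrelated (white) noise has a flat spectrum,
# <|f(k)|^2> = N^3 for unit-variance noise on an N^3 grid.
rng = np.random.default_rng(0)
k, P = shell_averaged_power(rng.standard_normal((16, 16, 16)), box_size=16.0)
```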
Previous work \cite {SakVil1,SakVil2,VHS} has tried to identify
those features of the network that are scaling, when a relevant length
scale grows with the
horizon. For example, we can define the familiar energy density scale
$\xi$ by $\xi^2=\mu/\rho_{\inf}$ where
$\rho_{\inf}$ is the density of string with energy greater than $\xi$.
(This
apparently circular definition works because we know the initial step
length $\xi_0$ and
to calculate $\rho_{\inf}$ we use $\xi$ from the previous time step).
Scaling is reached when $x=\partial \xi/\partial \eta$ is constant.
In the cases considered here $x=0.15$ ($E_c=2\delta$) and $x=0.12$
($E_c=4\delta$).
The usual definition for scaling, that $x=\xi/\eta$ is
constant, cannot be used here as throughout our simulations $\xi_0$ is
too large to be ignored.
For this reason, we express all scaling functions in terms of $\xi$,
rather than $\eta$. The
behaviour of $\xi$ is shown in Figure \ref{fig:xi}.
\begin{figure}
\centerline{\epsfig{file=xi.eps,width=3in,angle=0}}
\caption{The energy-density length scale $\xi$ over time $\eta$ for a
$128^3$ lattice, averaged over 50 realisations}
\label{fig:xi}
\end{figure}
The infinite string energy-momentum power spectra also exhibit scaling
behaviour, which we
express in terms of the scaling functions $P^{\rho}$ and $P^{U}$. We
also consider the equal time
cross correlation function $\la {U(k)\Theta^*_{00}(k)\ra}$ which is a real function within
the ensemble errors.
\ben
\la {|\Theta_{00}(k,\eta)|^2\ra} = { V P^{\rho}(k\xi) \over \xi }
\label {ps00sc1}
\een
\ben
\la {U(k)\Theta^*_{00}(k)\ra} = { V X^{\rho U}(k\xi) \over \xi }
\label {pscrosssc1}
\een
\ben
\la {| U(k,\eta)|^2\ra} = { V P^{U}(k\xi) \over \xi }
\label {ps0isc1}
\een
These scaling forms are fixed by the requirement that the density
fluctuations
obey the scaling law
\ben
\int_0^{\xi^{-1}}\,d^3k\, \la {|\Theta_{00}(k,\eta)|^2\ra} \propto \xi^{-4}
\label {rms_req}
\een
with a similar law for the other components of $\Theta_{\mu\nu}$.
We display our results for $P^{\rho}$, $X^{\rho U}$ and $P^{U}$ in
Figures \ref {fig:pseng},
\ref {fig:pscross} and \ref {fig:psmom} along with fits to the
following forms
\ben
P^{\rho}(z) = { a \over {(1-(bz)+(cz)^n)^{1/n} } }
\label {fform}
\een
\ben
X^{\rho U}(z) = { d \over {(1-(ez)+(fz)^m)^{2/m} } }
\label {Xform}
\een
\ben
P^{U}(z) = { g \over {(1-(hz)+(jz)^p)^{0.66/p} } }
\label {gform}
\een
which are motivated by a requirement of white noise at large scales
due to
uncorrelated initial conditions
and good single parameter fits to $P^{\rho} \sim (k\xi)^{-0.98 \pm
0.06}$,
$P^{U} \sim (k\xi)^{-0.66 \pm 0.03}$ and $X^{\rho U} \sim
(k\xi)^{-2.0 \pm 0.1}$
at small scales.
\begin{figure}
\centerline{\epsfig{file=ps00.eps,width=3in,angle=0}}
\caption{Scaling function for the energy density as defined by equation
(\ref{ps00sc1}).
The solid line is the measured scaling function with $1-\sigma$ error
bars from ensemble
averaging. The dashed line is the fitted function equation (\ref{fform}).
The dash-dot line is the predicted form from our model, the last line
in equation (\ref{fm3}). }
\label{fig:pseng}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=psX.eps,width=3in,angle=0}}
\caption{Scaling function for the equal time energy-momentum cross
correlator
as defined by equation (\ref {pscrosssc1}). The solid line is the
measured
scaling function with $1-\sigma$ error bars from ensemble
averaging. The dashed line is the fitted function equation (\ref
{Xform}).
The dash-dot line is the predicted form from our model,
equation (\ref{Xmodel}).}
\label{fig:pscross}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=ps0i.eps,width=3in,angle=0}}
\caption{Scaling function for the momentum density as defined by
equation (\ref {ps0isc1}).
The solid line is the measured scaling function with $1-\sigma$ error
bars from ensemble
averaging. The dashed line is the fitted function equation (\ref
{gform}).
The dash-dot line is the predicted form from our model, the last line
in equation (\ref {momden1}). }
\label{fig:psmom}
\end{figure}
The errors indicate ensemble standard deviation. The other parameters
are $a=1.18$, $b=0.25$, $c=0.24$, $n=6$, $d=0.23$, $e=0.3$, $f=0.5$,
$m=3$, $g=0.07$, $h=0.15$, $j=0.18$ and $p=1.59$.
We consistently get a peak for $P^{\rho}(z)$ and $P^{U}(z)$ at
$k\xi \simeq 3$ which corresponds to a physical wavelength of about
$2\xi$.
However, it is difficult to be certain about the initial rise in the
power spectrum
because the error bars are large. In the notation of Magueijo {\em
et\, al\,}
the peak corresponds to an $x_c=k\eta$ of approximately $20$.
As discussed above, the real importance of $\Theta_{00}$ and
$\Theta_{0i}$ in the context of perturbation
theory is how they are correlated over time. We performed $k$-space
measurements
of the two-time correlation functions
\ben
C^{\rho\rho}(k;\eta,\eta')=V^{-1}\la {\Theta_{00}(k,\eta)\Theta_{00}^*(k,\eta')\ra}
\label {TTengdef}
\een
\ben
C^{\rho U}(k;\eta,\eta')=V^{-1}\la {U(k,\eta)\Theta_{00}^*(k,\eta')\ra}
\label {TTcrossdef}
\een
\ben
C^{UU}(k;\eta,\eta')=V^{-1}\la {U(k,\eta)U^*(k,\eta')\ra}
\label {TTmomdef}
\een
During the scaling era they are well approximated by
\ben
C^{\rho\rho}={1 \over \sqrt {\xi\xi'}}
\sqrt{P^{\rho}(k\xi)P^{\rho}(k\xi')}
{\it e}^{-\frac{1}{2} \Upsilon^2 k^2 (1-(k\Delta)) (\eta-\eta')^2}
\label{2t00approx}
\een
\ben
C^{\rho U}={1 \over \sqrt {\xi\xi'}}
\sqrt{P^{\rho}(k\xi)P^{\rho}(k\xi')}
{\it e}^{-\frac{1}{2} \Upsilon'^2 k^2 (\eta-\eta')^2} ({\alpha \over
k\sqrt{\xi\xi'}}-\Upsilon'^2 k (\eta-\eta'))
\label{2tcrossapprox}
\een
\ben
C^{UU}={1 \over \sqrt {\xi\xi'}} \sqrt{P^{U}(k\xi)P^{U}(k\xi')}
{\it e}^{-\frac{1}{2} \Upsilon''^2 k^2 (\eta-\eta')^2} (1-\Upsilon''^2 k^2
(\eta-\eta')^2)
\label{2t0iapprox}
\een
where we have used the scaling forms of the power spectra and
cross-correlator
and $\xi=\xi(\eta)$, $\xi'=\xi(\eta')$. The imaginary component of
these correlators is consistent
with zero within the ensemble errors. The forms of these functions are
motivated by a model
which we describe in the next section.
The values for $\Upsilon$, $\Upsilon'$ and $\Upsilon''$ are given in
Table \ref{tab1}.
$\Delta$ is approximately $3\delta$. These parameters were determined
by minimisation of
the $\chi$-squared for each realisation. The normal distribution of
parameters
obtained allows an estimate to be made of each parameter and its
$1-\sigma$ errors.
We stress that we consider equations (\ref{2t00approx}),
(\ref{2tcrossapprox})
and (\ref{2t0iapprox}) to be an approximation, although a good one for
$|\eta-\eta'|\,\leq 8 k^{-1}$. For comparison, the energy and momentum
correlators fall to half their maximum value at $|\eta-\eta'|\,\simeq
3 k^{-1}$.
Outside the range given there are two effects
present in the measured correlators which invalidate
the model. The first is small $k$-dependent oscillations about zero as
predicted by Turok \cite{Turok}. The second is sharp peaks in the
correlators
at small scales which we take to be an effect of defining
strings on the lattice.
\begin{table}[ht]
\begin {center}
\begin {tabular} {|c|c|c|c|c|}
\hline
$E_c$ & $\Upsilon$ & $\alpha$ & $\Upsilon'$& $\Upsilon''$\\
\hline
$2\delta$ & $0.21 \pm 0.05$ & $0.19 \pm 0.05$ & $0.42 \pm 0.05 $ &
$0.36 \pm 0.07$\\
$4\delta$ & $0.18 \pm 0.05$ & $0.16 \pm 0.05$ & $0.42 \pm 0.05 $ &
$0.42 \pm 0.06$\\
\hline
\end {tabular}
\caption{Fitted parameters for the models in equations
(\ref{2t00approx}),
(\ref{2tcrossapprox}) and (\ref{2t0iapprox})}
\label{tab1}
\end{center}
\end{table}
Figures \ref {fig:ttcf00}, \ref {fig:ttcfcross} and
\ref {fig:ttcf0i} show the measured functions at a time
$\eta'=22\delta$ (which
is within the scaling era of our simulations).
\begin{figure}
\centerline{\epsfig{file=ttcf00.eps,width=3in,angle=0}}
\caption{Two-time correlation function $\la {\Theta_{00}(k,\eta)\Theta_{00}^*(k,\eta')\ra}$ for $\eta'=22\delta$}
\label{fig:ttcf00}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=ttcfcross.eps,width=3in,angle=0}}
\caption{Two-time correlation function $\la {U(k,\eta)\Theta_{00}^*(k,\eta')\ra}$ for $\eta'=22\delta$}
\label{fig:ttcfcross}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=ttcf0i.eps,width=3in,angle=0}}
\caption{Two-time correlation function $\la {U(k,\eta)U^*(k,\eta')\ra}$ for $\eta'=22\delta$}
\label{fig:ttcf0i}
\end{figure}
From the gaussian fall-off over time we see that for a given mode with
wavenumber {\em k}
the network energy density decorrelates on a characteristic time scale
which goes like
$1/(\Upsilon k \sqrt{1-k\Delta})$ and the momentum density varies as
$1/(\Upsilon''k)$ for $E_c=2\delta$.
\section{Theoretical Model}
\subsection{Power Spectra}
We can use a very simple model to account for the forms given in (\ref
{fform})
and (\ref{gform}), if not the precise details. From (\ref {emx}) we get
\ben
\Theta^{\mu\nu}({\bf k})=\int d\sigma\,(\dot X^\mu \dot X^\nu - X'^\mu
X'^\nu)
e^{i {\bf k}\cdot{\bf X}(\sigma,\eta)}
\label {emk}
\een
At any given time the power spectrum is
\ben
\la {|\Theta_{00}(k,\eta)|^2\ra}=\langle {\int d\sigma\, d\sigma'\, {\it e}^ {i {\bf k}\cdot({\bf
X}(\sigma,\eta)-{\bf X}(\sigma',\eta))} \rangle}
\label {fm1}
\een
We then make some assumptions about the statistics of the network.
Firstly, that for
each lag $\sigma_-=\sigma-\sigma'$, the quantities
$X_i(\sigma)-X_i(\sigma')$,\,$\dot X_i(\sigma)$ and $X'_i(\sigma)$
are gaussian random variables with zero mean. Secondly, that the
distribution of strings
is isotropic. Then
\begin{eqnarray}
\la {|\Theta_{00}(k)|^2\ra} & = & \int d\sigma\, d\sigma'\, {\it e}^ {-\sixth k^2 \langle{({\bf
X}(\sigma)-
{\bf X}(\sigma'))^2\rangle}}\nonumber\\[-3mm]\\
& = & \frac{1}{2}\int d\sigma_+\, d\sigma_-\, {\it e}^ {-\sixth k^2
\Gamma(\sigma_-)}\nonumber
\label{fm2}
\end{eqnarray}
where we have introduced the function $\Gamma(\sigma-\sigma')=\langle{({\bf
X}(\sigma)-{\bf X}(\sigma'))^2\rangle}$
and changed variables to $\sigma_-=\sigma-\sigma'$ and
$\sigma_+=\sigma+\sigma'$.
We will also use $\bar{t}^2=\la{{\bf X'}^2\ra}$. The third assumption is that the
network is described by a collection
of randomly placed string segments of length $\xi/{\bar t}$ and
total energy ${V \xi^{-2}}$, where $V$ is the volume of the simulation
box.
Then $\Gamma=\bar{t}^2 \xi^2 ({ \sigma_-/\xi } )^2$ and $L=\frac{1}{2}\int
d\sigma_+ = {V \xi^{-2}}$.
If we define $z=\sigma_-\bar t /\xi$ then
\begin{eqnarray}
\la {|\Theta_{00}(k,\eta)|^2\ra} & =& L \int d\sigma_-\, {\it e}^ {-\sixth k^2 \bar{t}^2 \xi^2
(\sigma_-/\xi)^2}\nonumber\\
& = & {L\xi \over {\bar t} } \int_{-\frac{1}{2}}^{\frac{1}{2}} dz\, {\it e}^
{-\sixth (k\xi)^2z^2 }\label {fm3}\\
& = & { V \over \xi {\bar t} }{2 \sqrt{6} \over k\xi } {\rm{erf}}
({k\xi \over 2 \sqrt{6}})\nonumber
\end{eqnarray}
Thus the scaling form of (\ref {ps00sc1}) emerges quite naturally.
If we compare (\ref {fm3}) with (\ref {ps00sc1}), we can write the
predicted form of the
scaling function $P^{\rho}(k\xi)$. The large and short wavelength
limits are
\ben
P^{\rho}(k\xi) = \left\{ \begin{array}{ll}
(k\xi)^0 & {\rm if} \: k\xi \ll 2\sqrt{6} \\
(k\xi)^{-1} & {\rm if} \: k\xi \gg 2\sqrt{6} \\
\end{array}
\right.
\label{gb}
\een
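It is straightforward to confirm these limits by direct quadrature of the model integral in (\ref{fm3}); the short sketch below (with the overall factor $V/(\xi\bar t)$ stripped off) does so:

```python
import numpy as np

def P_rho_model(kxi, nz=2001):
    """Bracketed integral in the model: int over z in [-1/2, 1/2] of
    exp(-(kxi)^2 z^2 / 6), evaluated by the trapezoidal rule."""
    z = np.linspace(-0.5, 0.5, nz)
    f = np.exp(-(kxi**2) * z**2 / 6.0)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

small = P_rho_model(0.01)                    # white-noise regime
big1, big2 = P_rho_model(50.0), P_rho_model(100.0)
print(small)         # ~1: flat for k*xi << 2*sqrt(6)
print(big1 / big2)   # ~2: P ~ (k*xi)^(-1) for k*xi >> 2*sqrt(6)
```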
This function is plotted along with the measured and fitted model in
Figure \ref {fig:pseng}.
It should be noted that the normalisation is given by the model along
with the
measurement for $\bar t$, and not introduced
by hand. This model does better at predicting the measured
power spectrum than a random walk model, in which the power
on large scales goes like $k^{-2}$. Therefore, although each individual
string is a random walk,
they must be highly (anti) correlated on large scales.
Similarly, the power spectrum for $U$ becomes
\begin{eqnarray}
\la {| U(k)|^2\ra} & = & \langle {\int d\sigma\, d\sigma'\, \hat k_i\hat k_j \dot
X^i(\sigma) \dot X^j(\sigma')
{\it e}^ {i {\bf k}\cdot({\bf X}(\sigma,\eta)-{\bf X}(\sigma',\eta))}
\rangle}\nonumber\\
& = & L \int d\sigma_-\, ({\textstyle\frac{1}{3}} {\cal V}(\sigma_-) - {\textstyle\frac{1}{9}} k^2
\Pi^2(\sigma_-))
{\it e}^ {-\sixth k^2 \Gamma(\sigma_-)}\label{momden1}\\
& = & {2 V \over 3 \xi {\bar t} } \int_{0}^{\frac{1}{2}} dz\,
(\tilde {\cal V}(z/\bar t) - {\textstyle\frac{1}{3}} (k\xi)^2 \tilde\Pi^2(z/\bar t))
{\it e}^ {-\sixth (k\xi)^2 z^2 }\nonumber
\end{eqnarray}
where ${\cal V}$ and $\Pi$ are the correlation functions
\ben
{\cal V}(\sigma)=\langle{ {\bf\dot X}(\sigma)\cdot{\bf\dot X}(0)\rangle}
\label{correlV}
\een
\ben
\Pi(\sigma)=\langle{ {\bf\dot X}(\sigma)\cdot({\bf X}(\sigma)-{\bf X}(0)
)\rangle}
\label{correlPI}
\een
and $\tilde {\cal V}(\sigma_-/\xi)$ and $\tilde\Pi(\sigma_-/\xi)$ are
their scaling forms.
We have measured these correlation functions elsewhere \cite{VHS},
and we can use them to calculate the scaling function $P^{U}$. The
result is plotted in Figure \ref {fig:psmom}. Although for very large
$k\xi$
this model predicts a $(k\xi)^{-1}$ behaviour, within the $k$-range of
our simulations
the model agrees well with our measurements giving a fit to
$(k\xi)^{-0.66}$, although
the normalisation is not so impressive as for the energy-density.
\vskip 0.1in
\subsection{Two-time correlation functions}
In order to calculate the two-time correlation functions in this
framework we need
the two-time correlation functions $\Gamma(\sigma,\sigma',\eta,\eta')$,
${\cal V}(\sigma,\sigma',\eta,\eta')$ and
$\Pi(\sigma,\sigma',\eta,\eta')$. However, we have not yet
measured these quantities and instead resort to crude modelling. If we
consider small
scales we may assume that each segment is moving at a velocity
$\bar{v}^2=\la{\dot {\bf X}^2\ra}$ in a
random direction orthogonal to the orientation of the segment so that
\ben
\Gamma(\sigma_-,\eta,\eta')=\bar{t}^2\sigma_-^2 + \bar{v}^2 (\eta-\eta')^2
\label {gam1}
\een
Using equations (\ref {fm2}) and (\ref {gam1}) we get
\ben
\la {\Theta_{00}(k,\eta)\Theta_{00}^*(k,\eta')\ra}=\la {|\Theta_{00}(k,\eta)|^2\ra} {\it e}^{-\sixth \bar{v}^2 k^2(\eta-\eta')^2}
\label {TTmod2}
\een
However this model does not reflect the fact that between $\eta$ and
$\eta'$ some energy
is lost and the integrand is no longer independent of $\sigma_+$. We
know
that $C^{\rho\rho}(\eta,\eta')=C^{\rho\rho}(\eta',\eta)$ and the most
natural way of respecting this condition is
to replace the power spectrum at $\eta$ with the square root of the
product of the
power spectra at the two times. Hence the final result to be compared
with equation
(\ref {2t00approx}) is
\ben
\la {\Theta_{00}(k,\eta)\Theta_{00}^*(k,\eta')\ra}={1 \over \sqrt {\xi\xi'}}
\sqrt{P^{\rho}(k\xi)P^{\rho}(k\xi')}\,{\it e}^{-\sixth \bar{v}^2
k^2(\eta-\eta')^2}
\label {TTmod3}
\een
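For orientation, the half-width of the model's Gaussian factor can be computed directly (a sketch using the measured $\bar{v}^2$; the comparison values quoted elsewhere in the text come from the fitted forms, not from this model):

```python
import math

v2 = 0.36                      # measured r.m.s. string velocity squared
vbar = math.sqrt(v2)

def coherence_factor(k, dt):
    """Model decorrelation factor exp(-vbar^2 k^2 (eta-eta')^2 / 6)."""
    return math.exp(-v2 * (k * dt)**2 / 6.0)

# Time difference at which the factor falls to one half, in units of 1/k
k = 1.0
dt_half = math.sqrt(6.0 * math.log(2.0)) / (vbar * k)
print(dt_half)                         # ~3.4/k
print(coherence_factor(k, dt_half))    # 0.5
```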
To model the other two-time correlators $\la {U(k,\eta)U^*(k,\eta')\ra}$ and $\la {U(k,\eta)\Theta_{00}^*(k,\eta')\ra}$ we must
be
careful about the conservation of energy-momentum as loops are created
and energy
is lost from the long string network. If we assume that the loop
production occurs
evenly along the string and that we are in a scaling regime we can
model in $k$-space
the rate of energy going into loops as
\ben
\Lambda(k,\eta)={\lambda(k\xi) \over \xi}\Theta_{00}(k,\eta)
\label {Lambda}
\een
The energy-momentum conservation equation becomes
\ben
U(k,\eta)={1 \over k}{\partial \over \partial \eta}\Theta_{00}+{\lambda \over k
\xi}\Theta_{00}(k,\eta)
\label {em1}
\een
and hence,
\begin {eqnarray}
\la {U(k,\eta)\Theta_{00}^*(k,\eta')\ra} & = & \int d\sigma\, d\sigma'\ ({1 \over k}{\partial \over \partial
\eta}
+{\lambda \over k \xi}){\it e}^{-\sixth k^2 (\bar{v}^2
(\eta-\eta')^2 + \bar{t}^2\sigma_-^2)}\nonumber\\
& = & \int d\sigma\, d\sigma'\ ({\lambda \over k \xi} -{\textstyle\frac{1}{3}} k
\bar{v}^2\eta_-)
{\it e}^{-\sixth k^2 (\bar{v}^2 (\eta-\eta')^2 +
\bar{t}^2\sigma_-^2)}\label{TTcrossmod1}\\
& = & \la {|\Theta_{00}(k,\eta)|^2\ra} ({\lambda \over k\xi}-{\textstyle\frac{1}{3}} k \bar{v}^2\eta_-)
{\it e}^{-\sixth \bar{v}^2 k^2(\eta-\eta')^2}\nonumber
\end {eqnarray}
To ensure that we do not pick out a preferred time $\eta$, we express $C^{\rho
U}(\eta,\eta')$ in the symmetrised form
\ben
\la {U(k,\eta)\Theta_{00}^*(k,\eta')\ra}=\la {\Theta_{00}(k,\eta)\Theta_{00}^*(k,\eta')\ra}\,({\lambda \over k\sqrt{(\xi\xi')}}-\frac{1}{3} k
\bar{v}^2\eta_-)
\label{TTcrossmod3}
\een
The time dependence of this function is plotted in Figure
\ref {fig:ttcf_model_timecross}, and should be compared with
Figure \ref {fig:ttcf_timecross}.
\begin{figure}
\noindent
\centering
\centerline{\epsfig{file=ttcf_model_timecross.eps,width=3in,angle=0}}
\caption{Model of time dependence of $\la {U(k,\eta)\Theta_{00}^*(k,\eta')\ra}$}
\label{fig:ttcf_model_timecross}
\end{figure}
\begin{figure}
\noindent
\centering
\centerline{\epsfig{file=ttcf_timecross.eps,width=3in,angle=0}}
\caption{Measured time dependence of $\la {U(k,\eta)\Theta_{00}^*(k,\eta')\ra}$}
\label{fig:ttcf_timecross}
\end{figure}
These plots assume a
value of $\lambda$ measured from the simulations.
We find it to be roughly constant at large
$k\xi$ at $\lambda\simeq 0.32\,\, (\pm 0.03)$. Including loop
production
shifts where the cross-correlator goes to zero from $\eta=\eta'$ to
$\lambda-\frac{1}{3} k^2 \bar{v}^2\eta_-\sqrt{\xi\xi'}=0$.
The effect of energy loss through
loop production on the momentum of the long string may also account for
the form of the equal time cross-correlator.
If we take $\eta=\eta'$ in equation (\ref {TTcrossmod3}), which in
terms
of this model gives the correlation between the long string
energy and the reaction momentum from loop production, we get
\ben
X^{\rho U}(k\xi)={\lambda \over k\xi}P^{\rho}(k\xi)
\label{Xmodel}
\een
This neatly explains the $(k\xi)^{-2}$ dependence in equation
(\ref{Xform}),
although the measured normalisation is only $60\%$ of that given by the
independently measured $\lambda$ and equation (\ref{Xmodel}). One
reason for this
discrepancy may be that our model assumes loop production occurs evenly
along the string
at a constant rate,
whereas we observe loop production in more discrete bursts.
The prediction from equation (\ref{Xmodel}) is plotted in Figure
\ref {fig:pscross}, along with the measured scaling function.
We find that the effect of the loop production term is insignificant
for $\la {U(k,\eta)U^*(k,\eta')\ra}$ and we will drop it in the following expression:
\begin {eqnarray}
\la {U(k,\eta)U^*(k,\eta')\ra} & = & {1 \over k^2} \int d\sigma\, d\sigma'\ {\partial \over \partial
\eta}{\partial \over \partial \eta'}
{\it e}^{-\sixth k^2 (\bar{v}^2 (\eta-\eta')^2 +
\bar{t}^2\sigma_-^2)}\nonumber\\[-3mm]\\
& = & \la {|\Theta_{00}(k,\eta)|^2\ra} {\bar{v}^2 \over 3} (1-{\bar{v}^2 \over 3} k^2 (\eta-\eta')^2)
{\it e}^{-\sixth\bar{v}^2 k^2 (\eta-\eta')^2} \nonumber
\label{TTmommod}
\end {eqnarray}
Again to ensure $C^{UU}(\eta,\eta')=C^{UU}(\eta',\eta)$, we use the
product of the square root of the power
spectra, so that
\ben
\la {U(k,\eta)U^*(k,\eta')\ra} = {1 \over \sqrt {\xi\xi'}}
\sqrt{P^{\rho}(k\xi)P^{\rho}(k\xi')}\,
{\bar{v}^2 \over 3} (1-{\bar{v}^2 \over 3} k^2 (\eta-\eta')^2) {\it
e}^{-\sixth\bar{v}^2 k^2 (\eta-\eta')^2}
\label{TTmommod2}
\een
Equation (\ref{TTmommod2}) incorrectly predicts $U$ to have the same
$k\xi$ dependence in its power spectrum as the energy, with a relative
normalisation given by $\bar{v}^2/3$.
We expect a proper treatment in terms
of the two-time on-string correlators for $\Pi$, ${\cal V}$ and
$\Gamma$
to remedy this problem. The time dependence of this function is plotted
in
Figure \ref {fig:ttcf_model_time0i}. As one can verify by comparison
with
Figure \ref {fig:ttcf_time0i}, the simplest of assumptions has provided
a reasonable approximation to the measured time dependence.
\begin{figure}
\centerline{\epsfig{file=ttcf_model_time0i.eps,width=3in,angle=0}}
\caption{Model of time dependence of $\la {U(k,\eta)U^*(k,\eta')\ra}$ }
\label{fig:ttcf_model_time0i}
\end{figure}
\begin{figure}
\centerline{\epsfig{file=ttcf_time0i.eps,width=3in,angle=0}}
\caption{Measured time dependence of $\la {U(k,\eta)U^*(k,\eta')\ra}$}
\label{fig:ttcf_time0i}
\end{figure}
In this section we have motivated the forms in equations
(\ref{2t00approx}),
(\ref{2tcrossapprox}) and (\ref{2t0iapprox}). From the model, and using
the measurements $\bar{t}^2=0.637 \pm 0.010$ and $\bar{v}^2=0.363 \pm 0.010$,
we can make predictions for the coherence parameters $\Upsilon$,
$\Upsilon'$
and $\Upsilon''$, and the relative normalisation $R=(\la {U(k,\eta)U^*(k,\eta')\ra} / \la {\Theta_{00}(k,\eta)\Theta_{00}^*(k,\eta')\ra})^{1/2}$.
The measured values are for $E_c=2\delta$.
\begin{table}[ht]
\begin {center}
\begin {tabular} {|c|c|c|}
\hline
quantity & model & measured\\
\hline
$\Upsilon$ & $0.35 \pm 0.02$ & $0.21 \pm 0.05$\\
$\Upsilon'$ & $0.35 \pm 0.02$ & $0.42 \pm 0.05$\\
$\Upsilon''$ & $0.35 \pm 0.02$ & $0.36 \pm 0.07$\\
$R$ & $0.35 \pm 0.02$ & $0.38 \pm 0.1$\\
\hline
\end {tabular}
\caption{Parameters for the forms given in equations
(\ref{2t00approx}), (\ref{2tcrossapprox})
and (\ref{2t0iapprox}) as predicted by our model and measured from the
simulations. Note
a complication in comparing the values of $\Upsilon$, due to a lattice
effect in
equation (\ref{2t00approx}) as discussed in the text.}
\end {center}
\end{table}
The comparison for $\Upsilon$ is complicated by an
apparent lattice effect accounted for in equation (\ref{2t00approx}) by
the length scale $\Delta$.
We assume that as $\Delta$ goes to zero the form in
equation (\ref{2t00approx}) approaches that in equation (\ref{TTmod3}),
allowing a comparison to be made.
Not surprisingly, the model is not wholly satisfactory. We know that
the momentum density does not have the same $(k\xi)$ dependence as the
energy density,
which is predicted by (\ref {TTmommod2}). Also from (\ref {2t00approx})
we see that the strings
decorrelate on a time scale $\sim 1/(k\sqrt{1-k\Delta})$ and not
$k^{-1}$.
One might think that both features are understandable as it is well
known that strings move faster
on smaller scales. This gives faster decorrelation and greater power
in the momentum density on small scales than the simple model would
suggest.
Having said this, we note that the scale $\Delta$ introduced in
equation
(\ref {2t00approx}) is close to the lattice scale and is fairly
constant throughout the simulation. Consequently,
the departure from a $k^{-1}$ coherence time may be due to the fact
that the strings
are defined at all times on the lattice. It is puzzling that this
effect does not show up in the other correlators. Furthermore, the
model does not account for the
oscillations about zero in the two-time correlators. The oscillations
are
present in all three two-time correlators, but are most obvious
in Figures \ref {fig:ttcf_timecross} and \ref {fig:ttcf_time0i}. As
Turok has recently pointed out \cite {Turok}, such oscillations should
appear in the Fourier transforms of two-time spatial correlators
because of a
causality constraint which sets the correlator to zero for causally
disconnected points.
\section{Conclusions}
We have demonstrated the scaling properties of the power
spectra and cross correlator of two important energy-momentum
quantities, the
density $\Theta_{00}$ and the velocity $U$, and
given the large {\em k} behaviour. We have also studied the two-time
correlators and measured the time coherence in the network.
We find that the energy and the momentum power spectra are peaked at
around
$k\xi\simeq 3$, where $\xi\simeq 0.15\eta$, thereafter decaying as
$(k\xi)^{-1}$ and $(k\xi)^{-0.66}$ respectively over the range
of our simulations. The cross-correlator decays as $(k\xi)^{-2}$ and is
peaked at
$k\xi\simeq 2$.
For a mode of spatial frequency $k$, the two-time correlation functions
of $\Theta_{00}$
and $U$ display correlations over time scales of $\eta_c\simeq
4.7k^{-1}$
and $\eta_c\simeq 2.4k^{-1}$ respectively. The squared time scale $\eta_c^2$ is
the variance
in the Gaussian fall-off as a function of the time difference (see
equations
(\ref{2t00approx}) and (\ref{2t0iapprox})).
We have presented simple models to explain the qualitative results,
predicting the power spectra and cross correlator including
normalisations,
to a reasonable accuracy,
and accounted for the features of the two-time correlation functions.
The model describes the string network as a set of randomly placed
segments of length $\xi/\bar t$, where $\bar t=(1-\bar{v}^2)^{\frac{1}{2}}$,
with random velocities. We assume that relevant
quantities such as velocities and extensions between points with a
given separation
in $\sigma$ are gaussian random variables. We can then reduce ensemble
averaging to the study of
two-point correlations along the string, which we must model or
measure.
From the model we can show that the characteristic coherence time
scale for a mode of spatial frequency $k$ is $\eta_c\simeq 3/k$.
Our simulations give a similar result, although at very small scales
$\eta_c$ decreases faster than $k^{-1}$. This we believe to be a
lattice effect.
There are potential pitfalls in taking the Minkowski space string
network
as a source for the fluid perturbation variables in a Friedmann model.
For example,
the energy conservation equation is modified to
\ben
\dot\Theta_{00} + {\dot a \over a}(\Theta_{00}+\Theta)-kU = -\Lambda
\label {emconFRW}
\een
where $a(\eta)$ is the scale factor and $\Theta=\Theta_{ii}$ the trace
of the spatial components of $\Theta_{\mu\nu}$. Since $\Theta$ is
unconstrained by any conservation equation, its fluctuations could
drive a
fluctuating component in $\Theta_{00}$ whose time scale would go like
${a /\dot a}$ \cite {Stebbins}. However, we find that
${\langle |\Theta|^2\rangle}\ll{\langle |\Theta_{00}|^2\rangle}$ and believe
the introduction of the $\dot a / a$ term to be a small effect.
If we take the simulations at face value, the implications for the
appearance of the Doppler peaks are not entirely clear-cut. The
coherence time is smaller than, but of the same order of magnitude as,
the period of acoustic oscillations in the photon--baryon fluid at
decoupling, which is roughly $11/k$ \cite{Peeb}. This is in turn
smaller
than the time at which the power in the energy and velocity sources
peaks,
approximately $20/k$. The computations of Magueijo {\em et al.}
\cite{MACF2} of the Microwave Background angular power spectrum for
various source models suggest small or absent secondary peaks. However,
our string correlation functions can be
used as realistic sources to settle the issue.
\section*{Acknowledgements}
We wish to thank Andy Albrecht, Nuno Antunes, Pedro Ferreira, Paul
Saffin
and Albert Stebbins for useful discussions.
GRV and MBH are supported by PPARC, by studentship number 94313367,
Advanced
Fellowship number B/93/AF/1642 and grant number GR/K55967. MS is
supported by the Tomalla Foundation. Partial support is
also obtained from the European Commission under the Human Capital and
Mobility programme,
contract no. CHRX-CT94-0423.
\begin {thebibliography} {99}
\bibitem{ShelVil} A. Vilenkin and E.P.S. Shellard, {\em Cosmic Strings
and other Topological Defects}
(Cambridge University Press, Cambridge, 1994)
\bibitem{HindKib} M. Hindmarsh and T. Kibble {\em Rep. Prog. Phys.} {\bf 58},
477 (1994)
\bibitem{VeeSte90} S. Veeraraghavan and A. Stebbins {\em Ap. J.}
{\bf 365}, 37 (1990)
\bibitem{AusCopKib} D. Austin, E. J. Copeland and T. W. B. Kibble {\em
Phys. Rev.}
{\bf D48}, 5594 (1993)
\bibitem{Hind1} M. Hindmarsh SUSX-TH-96-005 {\tt hep-th/9605332} (unpublished)
\bibitem{HuSpeWhi96} W. Hu, D. N. Spergel and M. White {\tt astro-ph/9605193}
\bibitem{MACF1} J. Magueijo, A. Albrecht, D. Coulson and P. Ferreira
{\em Phys. Rev. Lett.} {\bf 76} 2617 (1996)
\bibitem{MACF2} J. Magueijo, A. Albrecht, D. Coulson and P. Ferreira
{\em MRAO-1917} {\tt astro-ph/9605047}
\bibitem{DuGaSa} R. Durrer, A. Gangui and M. Sakellariadou
{\em Phys. Rev. Lett.} {\bf 76} 579 (1996)
\bibitem{CritTur} R. G. Crittenden and N. Turok {\em Phys. Rev. Lett.}
{\bf 75}, 2642 (1995)
\bibitem{ACFM} A. Albrecht, D. Coulson, P. Ferreira and J. Magueijo
{\em Phys. Rev. Lett.} {\bf 76} 1413 (1996)
\bibitem{CopKibAus} E. J. Copeland, T. W. B. Kibble and D. Austin {\em
Phys. Rev.} {\bf D45} (1992)
\bibitem{AlbTur} A. Albrecht and N. Turok {\em Phys. Rev.} {\bf D40},
973 (1989)
\bibitem{FRWCodes} D. P. Bennett, in ``Formation and Evolution of
Cosmic Strings'',
eds. G. Gibbons, S. Hawking and T. Vachaspati, (Cambridge University
Press, Cambridge.
1990); F. R. Bouchet {\it ibid.}; E. P. S. Shellard and B. Allen {\it
ibid.};
\bibitem{SakVil1} M. Sakellariadou and A. Vilenkin {\em Phys. Rev.}
{\bf D37}, 885 (1988)
\bibitem{SakVil2} M. Sakellariadou and A. Vilenkin {\em Phys. Rev.}
{\bf D42}, 349 (1990)
\bibitem{VV} T. Vachaspati and A. Vilenkin {\em Phys. Rev.} {\bf D30},
2036 (1984)
\bibitem{SmithVil:alg} A.G. Smith and A. Vilenkin {\em Phys. Rev.} {\bf
D36}, 990 (1987)
\bibitem{VHS} M. Hindmarsh, M.Sakellariadou and G. Vincent (in
preparation)
\bibitem{Hind2} M. Hindmarsh {\em Ap. J.} {\bf 431}, {534} (1994)
\bibitem{Hind3} M. Hindmarsh {\em Nucl. Phys. B (Proc. Suppl.)}
{\bf 43}, 50 (1995)
\bibitem{Turok} N. Turok {\tt astro-ph/9604172} (1996)
\bibitem{Stebbins} A. Stebbins, private communication (1996)
\bibitem{Peeb} P. J. E. Peebles {\em The Large Scale Structure of the
Universe}
(Princeton University Press, Princeton, 1980)
\end {thebibliography}
\end{document}
\section{Symmetry between the contraction process and the expansion process}
Eq.~\ref{eq:exactSoln} gives the following expression for the transition probability from $\vert m^A\rangle$ to $\vert n^B\rangle$ during the expansion process:
\begin{equation}
\begin{split}
&P(n^B \vert m^A) =\left \vert \sum_{l=1}^{\infty}\frac{2}{A} \int_{0}^{A} e^{-i v x^2/2 A} \sin \left(\frac{l \pi x}{A}\right) \sin \left(\frac{m \pi x}{A}\right) {\rm d}x \right.\\
&\times \left.
\exp \left[ -i \frac{\pi^2 l^2 (B-A)}{2ABv} \right] \,
\frac{2}{B} \int_{0}^{B} e^{i v x^2/2 B} \sin \left( \frac{n \pi
x}{B} \right) \sin \left( \frac{l \pi x}{B} \right) {\rm d}x \right \vert^{2}.
\end{split}
\label{expansion}
\end{equation}
For the contraction process, the transition probability from $\vert n^B\rangle$ to $\vert m^A\rangle$ is obtained from this result by making the replacements $m \leftrightarrow n$,
$A \leftrightarrow B$, and $v \rightarrow -v$:
\begin{equation}
\begin{split}
&\bar{P}(m^A \vert n^B) =\left \vert \sum_{l=1}^{\infty}\frac{2}{B} \int_{0}^{B} e^{i v x^2/2 B} \sin \left(\frac{l \pi x}{B}\right) \sin \left(\frac{n \pi x}{B}\right) {\rm d}x \right.\\
&\times \left.
\exp \left[ i \frac{\pi^2 l^2 (A-B)}{2BAv} \right] \,
\frac{2}{A} \int_{0}^{A} e^{-i v x^2/2 A} \sin \left( \frac{m \pi
x}{A} \right) \sin \left( \frac{l \pi x}{A} \right) {\rm d}x \right \vert^{2}.
\end{split}
\label{contraction}
\end{equation}
Comparing these expressions factor by factor, the summand of Eq.~(\ref{contraction}) contains exactly the same three factors as that of Eq.~(\ref{expansion}), so it is straightforward to verify that they are equal:
\begin{equation}
P(n^B \vert m^A) = \bar{P}(m^A \vert n^B)
\label{a3}
\end{equation}
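The equality (\ref{a3}) can also be checked numerically by truncating the sums over $l$ and evaluating the integrals by quadrature. The sketch below (Python) uses arbitrary illustrative values of $m$, $n$, $A$, $B$ and $v$ and a simple trapezoidal rule; it is a consistency check, not part of the derivation.

```python
import numpy as np

def bracket(width, sign, v, p, q, npts=2001):
    # (2/width) * int_0^width exp(sign*i*v*x^2/(2*width))
    #           * sin(p*pi*x/width) * sin(q*pi*x/width) dx  (trapezoidal rule)
    x = np.linspace(0.0, width, npts)
    f = (np.exp(sign * 1j * v * x**2 / (2.0 * width))
         * np.sin(p * np.pi * x / width) * np.sin(q * np.pi * x / width))
    return (2.0 / width) * np.sum(f[1:] + f[:-1]) * 0.5 * (x[1] - x[0])

def P_expansion(m, n, A, B, v, lmax=60):
    # truncated version of Eq. (expansion)
    s = 0.0j
    for l in range(1, lmax + 1):
        s += (bracket(A, -1, v, l, m)
              * np.exp(-1j * np.pi**2 * l**2 * (B - A) / (2.0 * A * B * v))
              * bracket(B, +1, v, n, l))
    return abs(s) ** 2

def P_contraction(n, m, B, A, v, lmax=60):
    # Eq. (contraction) = Eq. (expansion) with m <-> n, A <-> B, v -> -v
    return P_expansion(n, m, B, A, -v, lmax)

m, n, A, B, v = 1, 2, 1.0, 2.0, 1.0
p_exp = P_expansion(m, n, A, B, v)
p_con = P_contraction(n, m, B, A, v)
print(p_exp, p_con)   # equal up to rounding
```

The agreement is exact term by term, reflecting the microreversibility of the two processes.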
\section{Introduction}
\noindent \textit{\textbf{Introduction.}}---The observed cosmological constant (CC), $\Lambda\simeq (2\times 10^{-3}\,{\rm eV})^4$~\cite{Planck:2018vyg}, is one of the biggest mysteries in nature. One may ask a natural question: is it a true constant, or the potential energy of a scalar field?
In this letter, we pursue the latter scenario because, if true, it may provide deep insight into quantum gravity~\cite{Dvali:2018fqu, Obied:2018sgi,tHooft:2006uhw,Lin:2022khg}. In this case, the mass of the scalar boson must be extremely small, $\sim 10^{-33}\,{\rm eV}$, in order to keep the boson away from the minimum of its potential until the present. A unique candidate is the Nambu-Goldstone boson (referred to here as a quintessence axion~\cite{Fukugita:1994hq,Fukugita:1995nb,Frieman:1995pm,Kolda:1998wq,Choi:1999wv,Kim:2002tq,Kaloper:2005aj,Bonnefoy:2018ibr}), since it can maintain such a small mass against possible radiative corrections. However, non-perturbative quantum-gravity corrections may easily generate a larger mass for the axion, since they explicitly break any global symmetry~\cite{Banks:2010zn}. If this happens, the axion can no longer explain the present CC. We call this the quality problem of the quintessence axion.
Interestingly, there is another candidate for a light particle: the QCD axion. The QCD axion~\cite{Wilczek:1977pj, Weinberg:1977ma} has long attracted attention, since it provides a dynamical solution to the strong CP problem \cite{Peccei:1977hh}. However, due to the stringent constraint on the QCD vacuum angle from neutron EDM measurements, the QCD axion faces a similar quality problem~\cite{Georgi:1981pu,Barr:1992qq,Kamionkowski:1992mf,Holman:1992us,Ardu:2020qmo}.
Another open question is the origin of both axions in UV theories. String theories are expected to be such UV theories, and indeed they contain many candidates for massless axions whose masslessness is guaranteed at tree level by shift symmetries.
However, world-sheet instantons and/or gravitational instantons might generate large breakings of the shift symmetries~\cite{Svrcek:2006yi}, in which case the axions do not survive at low energies.
It is therefore very important to search for UV completions within the framework of quantum field theory~\cite{Fukuda:2017ylt,Ibe:2018hir,Choi:2022fha}.
In this letter, we point out that candidates for the quintessence and QCD axions often exist in a large parameter space of a class of chiral $U(1)$ gauge theories. Remarkably, the quality of the axions required to explain the observed vacuum energy (equivalently the CC) and/or to solve the strong CP problem is guaranteed by the $U(1)$ gauge symmetries. Moreover, our mechanism can also be extended to include the Fuzzy dark matter (DM) axion scenario.\\
\noindent \textit{\textbf{Chiral $U(1)$ gauge theories.}}---The new sector consists of two Higgs fields $\phi_1$, $\phi_2$ and $N$ pairs of chiral fermions $\{Q_i, \overline{Q}_i\}$, $i=1,2,\cdots,N$. We assume $Q_i \in (\boldsymbol{3},\boldsymbol{1},0)$ and $\overline{Q}_i\in (\boldsymbol{3^*},\boldsymbol{1},0)$ under the $SU(3)_{\rm c} \times SU(2)_{\rm L} \times U(1)_{\rm Y}$ gauge group. The two Higgs fields imply two global $U(1)$ symmetries associated with their phase rotations. As shown in Ref.~\cite{Fukuda:2017ylt}, one linear combination of the two $U(1)$s, dubbed $U(1)_g$, can be gauged, while the orthogonal combination, dubbed $U(1)_a$, can be the origin of the axion.
Since $U(1)_g$ is a gauge symmetry, two anomaly-cancellation conditions must be fulfilled, coming from the gravitational $[U(1)_g]\times [{\rm graviton}]^2$ anomaly and the $[U(1)_g]^3$ anomaly, i.e.,
\begin{align}
\sum_{i=1}^N U(1)_g^{Q_i} + U(1)_g^{\overline{Q}_i} & = 0\;, \nonumber \\
\sum_{i=1}^N \left(U(1)_g^{Q_i}\right)^3 + \left(U(1)_g^{\overline{Q}_i}\right)^3 & = 0\;,
\label{eq:anomaly_cancel}
\end{align}
where $U(1)_g^{Q_i}$ ($U(1)_g^{\overline{Q}_i}$) denotes the $U(1)_g$ charge of $Q_i$ ($\overline{Q}_i$). Note that all these charges should be rational numbers; otherwise a principle of quantum gravity is violated~\cite{Banks:2010zn}. By a suitable normalization we can take them all to be integers.
In addition, the assignment of $U(1)_g^{Q_i}$ and $U(1)_g^{\overline{Q}_i}$ must ensure that there is no gauge-invariant mass term; otherwise the fermions acquire Planck-scale masses and become irrelevant at low energies.
We demand that all fermions acquire mass through Yukawa couplings, so the $U(1)_g$ charges of the two Higgs fields, $q_{1,2}$, are determined by gauge invariance.
In this letter, we always take $q_{1,2}>0$, which can be arranged by interchanging the definitions of $\phi_i$ and $\phi^*_i$.
As mentioned above, high quality is crucial for both the quintessence and the QCD axion; that is, the global $U(1)_a$ should be a good symmetry.
In our framework, the lowest-order non-renormalizable operator that respects the gauge $U(1)_g$ symmetry but breaks the global $U(1)_a$ symmetry is
\begin{equation}
\mathcal{O} \sim \frac{1}{n! m!}\frac{\phi_1^n \phi_2^{*m}}{M_{\rm Pl}^{n+m-4}} + \text{h.c.}\;,
\label{eq:PQbreak}
\end{equation}
where $(n,m)=(q_2/n_{\rm gcd},q_1/n_{\rm gcd})$ and $n_{\rm gcd}$ is the greatest common divisor of $(q_1,q_2)$. Therefore, $n$ and $m$ are relatively prime integers.
Clearly, varying degrees of quality can be achieved by adjusting the values of $m$ and $n$.
After spontaneous symmetry breaking, one can expand the two Higgs fields as $\phi_1 = (f_1/\sqrt{2}) \exp{(i\tilde{a}/f_1)}$ and $\phi_2 = (f_2/\sqrt{2}) \exp{(i \tilde{b}/f_2)}$, where $f_i$ is the vacuum expectation value of $\phi_i$. Since we focus on the two Nambu-Goldstone modes $\tilde{a}$ and $\tilde{b}$, the radial modes are neglected. One linear combination of them, $b$, is absorbed by the gauge boson of $U(1)_g$, while the orthogonal mode, $a$, is the axion. They are related by~\cite{Fukuda:2017ylt}
\begin{equation}
\begin{pmatrix}
a\\
b
\end{pmatrix}
= \frac{1}{\sqrt{q_1^2 f_1^2 + q_2^2 f_2^2}}
\begin{pmatrix}
q_2f_2 & -q_1 f_1 \\
q_1f_1 & q_2 f_2
\end{pmatrix}
\begin{pmatrix}
\tilde{a}\\
\tilde{b}
\end{pmatrix}\;.
\label{eq:transform}
\end{equation}
Therefore, one has
\begin{equation}
\phi_1^n \phi_2^{*m} = \frac{f_1^n f_2^m}{\sqrt{2}^{n+m}} e^{(ia/ F_a)}\;, \quad F_a = \frac{f_1 f_2}{\sqrt{m^2 f_1^2 + n^2 f_2^2}}\;.
\end{equation}
Clearly, $\mathcal{O}$ breaks the continuous shift symmetry of $a$ and grants the axion a mass. As expected, the eaten mode $b$ does not appear in $\mathcal{O}$, since the operator is gauge invariant.
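The statement that $b$ drops out of $\mathcal{O}$ while $a$ appears with coefficient $1/F_a$ can be checked numerically by inverting Eq.~(\ref{eq:transform}) (the matrix is orthogonal). A sketch in Python; the charges $(q_1,q_2)=(18,37)$ and the vevs below are arbitrary illustrative inputs:

```python
import math

q1, q2 = 18, 37                 # illustrative U(1)_g Higgs charges
g = math.gcd(q1, q2)
n, m = q2 // g, q1 // g         # exponents in the operator O
f1, f2 = 0.7, 1.3               # arbitrary vevs (units irrelevant here)

N = math.sqrt(q1**2 * f1**2 + q2**2 * f2**2)
# phase of phi_1^n phi_2^{*m} is n*at/f1 - m*bt/f2; inverting the transform
# gives at = (q2*f2*a + q1*f1*b)/N, bt = (-q1*f1*a + q2*f2*b)/N, so the
# coefficients of a and b in the phase are:
coeff_a = (n * q2 * f2 / f1 + m * q1 * f1 / f2) / N
coeff_b = (n * q1 * f1 / f1 - m * q2 * f2 / f2) / N   # = (n*q1 - m*q2)/N
F_a = f1 * f2 / math.sqrt(m**2 * f1**2 + n**2 * f2**2)
print(coeff_a * F_a, coeff_b)   # expect 1 and 0 (up to rounding)
```

Since $n q_1 - m q_2 = (q_2 q_1 - q_1 q_2)/n_{\rm gcd} = 0$, the gauge mode cancels identically for any vevs.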
In the following content, we will show that this formalism can always provide us with a proper quintessence axion and/or QCD axion candidate.\\
\noindent \textit{\textbf{High-quality quintessence axion.}}---For the $N=4$ case it is easy to find a consistent model: $Q_2$, $Q_4$ ($\overline{Q}_2$, $\overline{Q}_4$) carry charges opposite to those of $Q_1$, $Q_3$ ($\overline{Q}_1$, $\overline{Q}_3$), respectively. We assign the charges $\alpha$ and $\beta$ to $Q_1$ and $Q_3$, and $\gamma$ and $\delta$ to $\overline{Q}_1$ and $\overline{Q}_3$ (see Table~\ref{tab:charges}). With these $U(1)_g$ charge assignments, it is easy to verify that the theory is free of gauge anomalies and that there is no gauge-invariant mass term unless $\alpha, \beta = \pm\delta, \pm \gamma$.
Without loss of generality, we can always take $\{\alpha,\beta,\gamma,\delta\}$ to be positive integers.
It is straightforward to extend our model to any even number ($>4$) of pairs of new fermions.
\begin{table}
\caption{Symmetric charge assignment.}
\label{tab:charges}
\begin{ruledtabular}
\begin{tabular}{ccccc}
$i$ & $1$ & $2$ & $3$ & $4$ \\
\midrule
$Q_i$ & $\alpha$ & $-\alpha$ & $\beta$ & $-\beta$ \\[0.4em]
$\overline{Q}_i$ & $\gamma$ & $-\gamma$ & $\delta$ & $-\delta$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
In principle, both the QCD instanton effect and non-renormalizable operators could explicitly break the global $U(1)_a$ symmetry and grant the axion a mass. In this ``symmetric'' charge assignment, the global $U(1)_a$ is anomaly-free, so the QCD instanton effect does not contribute to the axion mass (see Supplemental Material for details). Thus, the axion mass is generated only by the higher-order symmetry-breaking terms (see Eq.~\eqref{eq:PQbreak}).
The potential from $\mathcal{O}$ is given by
\begin{align}
V & = \frac{2}{\sqrt{2}^{n+m} n! m!} \frac{f_1^n f_2^m}{M_{\rm Pl}^{n+m-4}}\left( 1- \cos{\frac{a}{F_a}} \right) \\
& = \frac{1}{2} m_a^2 a^2 + \cdots \nonumber\;.
\end{align}
The second line is an expansion around the minimum of this potential, and the axion mass is
\begin{equation}
m_a = \frac{M_{\rm Pl}^2}{F_a}\sqrt{\frac{2}{\sqrt{2}^{n+m} n!m!} \frac{f_1^n f_2^m}{M_{\rm Pl}^{n+m}}}\;.
\label{eq:ma}
\end{equation}
The equation of motion of the axion within the Friedmann–Lemaître–Robertson–Walker metric is given by
\begin{equation}
\ddot{a} + 3H(t) \dot{a} + \partial_a V =0\;,
\end{equation}
where $H(t)$ is the Hubble parameter and the dot denotes a derivative with respect to cosmic time $t$. Usually one takes $\partial_a V \simeq m_a^2 a$, so the mass and the Hubble parameter determine the evolution of the axion.
Axion quintessence requires the axion mass to be light enough that the field is still frozen by the present Hubble rate, $H_0\sim 10^{-33}\,{\rm eV}$, or is just starting to roll down towards its vacuum. To explain the CC, one needs a large enough $F_a$ to compensate for the smallness of the mass, according to $\Lambda\simeq m_a^2 F_a^2$~\cite{Nomura:2000yk}.
To quantitatively discuss the quality of the quintessence axion, we take
\begin{equation}
f_1=f_2=2\times 10^{17}\,{\rm GeV}\;
\label{eq:f_i_quintessence}
\end{equation}
as a benchmark, which gives $F_a<M_{\rm Pl}$, keeping the Planck-suppressed expansion of Eq.~\eqref{eq:PQbreak} valid, and is large enough to avoid the axion instability~\cite{Ibe:2018ffn,Choi:2021aze}. Because $F_a$ is bounded from above, the mass has a lower bound; otherwise the CC could not be explained. Since we neglect the coupling constant in Eq.~\eqref{eq:PQbreak}, the mass may deviate from the quintessence requirement by one or two orders of magnitude. We therefore consider the axion to have good quality if its mass satisfies
\begin{equation}
10^{-34} \,{\rm eV} \lesssim m_a \lesssim 10^{-32} \;{\rm eV}\;.
\label{eq:ma_range}
\end{equation}
Naively, taking $f_i\sim M_{\rm Pl}$ in Eq.~\eqref{eq:ma} gives $m_a \sim M_{\rm Pl}/\sqrt{\sqrt{2}^{n+m} n!m!}$. A light enough axion mass therefore requires large $n$ and $m$, which are determined by the Higgs charges $q_{1,2}$, themselves fixed by the charges of the four pairs of chiral fermions. For a given set of fermion charges in Table~\ref{tab:charges}, two scenarios should be considered.
\begin{enumerate}
\item All charges are different. Then, there are eight possible charge assignments of two Higgs $(q_1,q_2)$, which are $(|\alpha \pm \gamma|,~|\beta \pm \delta|)$ and $(|\alpha \pm \delta|,~|\beta \pm \gamma|)$.
\item For $\alpha = \beta$ (or $\gamma = \delta$), there are only four possible charge assignments, namely $(|\alpha \pm \gamma|,~|\beta \pm \delta|)$.
\end{enumerate}
Since the fermion charges come in opposite-sign pairs, the anomaly-cancellation conditions~\eqref{eq:anomaly_cancel} are trivially satisfied, so $\{\alpha,\beta,\gamma,\delta\}$ can be any positive integers.
It is then easy to assign charges to the fermions such that a good-quality quintessence axion appears. For example, consider
\begin{equation}
\alpha= 1\;,\quad \beta= 11\;,\quad \gamma =17\;,\quad \delta=26\;.
\end{equation}
Taking $q_1=\alpha+\gamma=18$ and $q_2=\beta+\delta =37$, i.e.\ $n=37$ and $m=18$, Eq.~\eqref{eq:ma} gives the axion mass $m_a\simeq 8\times 10^{-34}\,{\rm eV}$. This value changes if one chooses $f_i$ different from Eq.~\eqref{eq:f_i_quintessence}.
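The quoted mass follows from Eq.~\eqref{eq:ma} with a few lines of arithmetic. The sketch below (Python) assumes the reduced Planck mass $M_{\rm Pl}=2.4\times 10^{18}\,{\rm GeV}$ and drops the unknown $O(1)$ coupling of $\mathcal{O}$:

```python
import math

M_pl = 2.4e18                    # reduced Planck mass in GeV (assumption)
f1 = f2 = 2e17                   # GeV, the benchmark of Eq. (f_i_quintessence)
n, m = 37, 18                    # from (q1, q2) = (18, 37)

F_a = f1 * f2 / math.sqrt(m**2 * f1**2 + n**2 * f2**2)   # ~5e15 GeV
m_a_GeV = (M_pl**2 / F_a) * math.sqrt(
    2.0 / (math.sqrt(2.0)**(n + m) * math.factorial(n) * math.factorial(m))
    * (f1 / M_pl)**n * (f2 / M_pl)**m)
m_a_eV = 1e9 * m_a_GeV
print(f"m_a = {m_a_eV:.1e} eV")  # roughly 8e-34 eV, inside the window of Eq. (ma_range)
```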
Suppose that the maximum integer charge of the fermions is $C_{\rm max}$. We scan over all charge combinations allowed by the chiral structure; some of them yield good-quality quintessence axions. We define the quality rate as
\begin{equation}
P = \frac{\text{No. of good quality axions}}{\text{No. of $\{\overline{Q}_i,Q_i,\phi_{1,2}\}$ charge combinations}}\;.
\end{equation}
As shown in Fig.~\ref{fig:rate_quintessence}, for $C_{\rm max}>15$ good-quality quintessence axions appear in the parameter landscape of fermion charges.
We also find that for $C_{\rm max}>30$ the quality rate $P$ begins to decrease, because the axion masses tend to become smaller as $C_{\rm max}$ increases.
Here, we have assumed that all pairs of new fermions, $Q_i$ and $\overline{Q}_i$, obtain their masses through Yukawa couplings to the Higgs fields $\phi_{1,2}$. The quality rate $P$ would easily increase if higher-dimensional operators such as $(\phi_1 \phi_2/M_{\rm Pl}) Q_i \overline{Q}_i$ generated masses for some pairs of the fermions.\\
\begin{figure}
\includegraphics[width=8cm]{quality_rate.pdf}
\caption{The quality rate $P$ as a function of the maximum fermion charge $C_{\rm max}$. The red circles and black triangles correspond to the quintessence axion and Fuzzy DM axion cases, respectively.}
\label{fig:rate_quintessence}
\end{figure}
\noindent \textit{\textbf{High-quality Fuzzy dark matter axion.}}--- Fuzzy Dark Matter (DM) of mass $10^{-21}$--$10^{-19}\,{\rm eV}$~\cite{Irsic:2017yje,Armengaud:2017nkf,Ferreira:2020fam,Hui:2021tkt} is very attractive, since its de Broglie wavelength naively accounts for the size of galaxies. Furthermore, it may evade small-scale problems such as the cusp-core problem. Interestingly, the initial field value required to explain the DM density by its coherent oscillation corresponds to $F_a\simeq 10^{16}\,{\rm GeV}$, which is close to the decay constant of the quintessence axion discussed above~\footnote{A recent proposal of mixed Fuzzy and cold DM model is constructed from electroweak axions~\cite{Qiu:2022uvt}.}. Thus, it is natural to accommodate both axions in the present framework. This is in fact possible if we introduce four new pairs ($N=4$) of fermions, $Q'_i$ and $\overline{Q'}_i$, with different gauge $U(1)$ charges. However, we have to take care of operator mixing among the Higgs fields.
This mixing can be avoided by introducing a new chiral $U(1)_g'$ gauge theory to which $Q'_i$ and $\overline{Q'}_i$ couple. This is an exact copy of the previous model, but the new fermions carry only the new $U(1)_g'$ gauge charges, so there is no operator mixing. As in the quintessence case, we compute the quality rate for the Fuzzy DM scenario. Here, good quality means that the axion has a suitable mass, $10^{-21}$--$10^{-19}\,{\rm eV}$, and we take $f_1=f_2=10^{16}\,{\rm GeV}$ as the benchmark. Figure~\ref{fig:rate_quintessence} shows a higher quality rate for the Fuzzy DM axion, since the mass constraint is much weaker than for the quintessence axion. \\
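For concreteness, Eq.~\eqref{eq:ma} with the benchmark $f_1=f_2=10^{16}\,{\rm GeV}$ can be evaluated for exponents landing in the Fuzzy DM window. The pair $(n,m)=(14,17)$ below is our own illustrative choice (not one quoted in the text), again assuming $M_{\rm Pl}=2.4\times10^{18}\,{\rm GeV}$ and dropping $O(1)$ couplings:

```python
import math

def axion_mass_eV(n, m, f1, f2, M_pl=2.4e18):
    # Eq. (ma): m_a = (M_pl^2/F_a) * sqrt(2/(sqrt(2)^(n+m) n! m!) * f1^n f2^m / M_pl^(n+m))
    F_a = f1 * f2 / math.sqrt(m**2 * f1**2 + n**2 * f2**2)
    m_GeV = (M_pl**2 / F_a) * math.sqrt(
        2.0 / (math.sqrt(2.0)**(n + m) * math.factorial(n) * math.factorial(m))
        * (f1 / M_pl)**n * (f2 / M_pl)**m)
    return 1e9 * m_GeV

m_fuzzy_eV = axion_mass_eV(14, 17, 1e16, 1e16)   # hypothetical (n, m)
print(f"m_a = {m_fuzzy_eV:.1e} eV")              # ~2e-21 eV, in the Fuzzy DM window
```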
\noindent \textit{\textbf{High-quality QCD axion.}}--- The currents of the axions discussed in the previous sections carry no gauge anomalies, and hence these axions cannot be identified with the QCD axion. A QCD axion model was proposed based on a chiral $U(1)_g$ gauge theory in which five pairs ($N=5$) of chiral fermions, $Q_i$ and $\overline{Q}_i$, have ``asymmetric'' $U(1)_g$ charges. A known example is $\{-9,-5,-1,7,8\}$ for both $Q_i$ and $\overline{Q}_i$, for which all gauge anomalies cancel~\cite{Nakayama:2011dj}. The two Higgs fields $\phi_{1,2}$ carry the $U(1)_g$ charges $10$ and $-15$ to give masses to all fermions~\cite{Choi:2020vgb}. This is a consistent model for the QCD axion, since the axion couples to the QCD Chern-Simons term. However, its quality is not sufficiently high to solve the strong CP problem~\footnote{An extremely high-quality QCD axion model was, recently, constructed based on this five-pair fermion model with help of supersymmetry and $R$ symmetries~\cite{Choi:2022fha}.}.
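The quoted charge set can be checked against the conditions of Eq.~\eqref{eq:anomaly_cancel} and against Yukawa gauge invariance: with Higgs charges $10$ and $-15$, each $Q$--$\overline{Q}$ pair must satisfy $q_Q+q_{\overline{Q}}\in\{-10,15\}$. A sketch in Python:

```python
from itertools import permutations

charges = [-9, -5, -1, 7, 8]     # U(1)_g charges of Q_i (and of Qbar_i)
# Eq. (anomaly_cancel): each of Q_i and Qbar_i carries this same set,
# so both anomalies are twice the sums below
lin = sum(charges)               # [U(1)_g] x [graviton]^2
cub = sum(c**3 for c in charges) # [U(1)_g]^3
print(lin, cub)                  # both 0

# a perfect Q--Qbar pairing exists with every pair sum in {-10, 15},
# so every fermion gets a Yukawa mass from phi_1 or phi_2
pairing_ok = any(
    all(a + b in (-10, 15) for a, b in zip(charges, perm))
    for perm in permutations(charges))
print(pairing_ok)
```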
In this section, we extend the above model by introducing more fermions to obtain a high-quality QCD axion. There are various possible extensions that solve the quality problem; here we consider a special case with $N=3+2k$ pairs of chiral fermions, $Q''_i$ and $\overline{Q''}_i$, whose $U(1)_g$ charge assignment is shown in Table~\ref{tab:asymmetriccharges}. Note that the fermion charges may now be negative. The $U(1)_g$ charges of the two Higgs fields, $q_1$ and $q_2$, are
\begin{align}
q_1 & = |\alpha + \beta|= |2 \gamma| \;, \\
q_2 & = |\delta_1 + \eta_1| = |\delta_2 + \eta_2| = \cdots =|\delta_k + \eta_k| \;. \nonumber
\end{align}
We can prove that $q_1/q_2 = 2k/3$ and that the $[U(1)_a]\times[SU(3)_{\rm c}]^2$ anomaly is nonzero (see Supplemental Material for details).
The higher-order operator in Eq.~\eqref{eq:PQbreak} shifts the global minimum of the axion potential, and therefore contributes to the QCD angle $\bar{\theta}$, i.e.,
\begin{align}
\label{eq:delttheta}
\delta\bar{\theta} & \sim \frac{2}{{\sqrt{2}}^{n+m} n!m!} \frac{f_1^n f_2^m}{M_{\rm Pl}^{n+m-4} m_\pi^2 F_\pi^2} \\
& \sim \frac{2}{n!m!}\times10^{-(18.38-x)(n+m)+77}\left(\frac{f_a}{10^{x}~\text{GeV}}\right)^{n+m}\;, \nonumber
\end{align}
where $m_\pi$ and $F_\pi$ are the mass and decay constant of the pion, and we have set $ f_1/\sqrt{2} = f_2/\sqrt{2} \equiv f_a \sim 10^{x}~\text{GeV}$. High quality requires $\delta\bar{\theta}<10^{-10}$~\cite{Pospelov:1999mv}.
This shows that a larger $f_a$ requires larger $n$ and $m$ to achieve good quality. Here we consider two cases, $f_a = 10^{9}\,{\rm GeV}$ and $f_a = 10^{12}\,{\rm GeV}$: the former is motivated by the star-cooling constraint \cite{Paul:2018msp}, while in the latter case the axion is the dominant DM~\cite{Marsh:2015xka}.
For $f_a = 10^{12}\,{\rm GeV}$, the minimum value of $k$ is 5, with $n=3$ and $m=10$; Eq.~\eqref{eq:delttheta} then gives $\delta\bar{\theta}\sim 10^{-13}$. One set of solutions is $\{-11, -9, -10, -9, 15, 2, 4, 2, 4, 2, 4, 3, 3\}$.
For $f_a = 10^{9}\,{\rm GeV}$, the minimum value of $k$ is $4$, with $n=3$ and $m=8$, which yields an extremely high quality, $\delta\bar{\theta}\sim 10^{-31}$. One set of solutions is $\{-5, -3, -4, -3, 6, 1, 2, 1, 2, 1, 2\}$.\\
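The two benchmark qualities follow from the first line of Eq.~\eqref{eq:delttheta}. The sketch below (Python) assumes $M_{\rm Pl}=2.4\times10^{18}\,{\rm GeV}$, $m_\pi\simeq0.135\,{\rm GeV}$ and $F_\pi\simeq0.092\,{\rm GeV}$, and drops $O(1)$ couplings:

```python
import math

def delta_theta(n, m, f_a, M_pl=2.4e18, m_pi=0.135, F_pi=0.092):
    # first line of Eq. (delttheta) with f1 = f2 = sqrt(2)*f_a (all in GeV)
    f1 = f2 = math.sqrt(2.0) * f_a
    return (2.0 / (math.sqrt(2.0)**(n + m) * math.factorial(n) * math.factorial(m))
            * f1**n * f2**m / (M_pl**(n + m - 4) * m_pi**2 * F_pi**2))

d12 = delta_theta(3, 10, 1e12)   # ~1e-13, as quoted in the text
d9 = delta_theta(3, 8, 1e9)      # ~1e-31, as quoted in the text
print(f"{d12:.1e}  {d9:.1e}")
```

Both benchmarks comfortably satisfy the quality bound $\delta\bar{\theta}<10^{-10}$.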
\begin{table}
\caption{Asymmetric charge assignment.}
\label{tab:asymmetriccharges}
\begin{ruledtabular}
\begin{tabular}{cccccccccc}
$i$ & $1$ & $2$ & $3$ & $4$ & $5$ & $\cdots$ & $2k+2$ &$2k+3$ \\
\midrule
$Q''_i$ & $\beta$ & $\alpha$ & $\gamma$ & $\delta_1$ & $\eta_1$ & $\cdots$ & $\delta_k$ & $\eta_k$ \\[0.4em]
$\overline{Q''}_i$ & $\alpha$ & $\beta$ & $\gamma$ & $\eta_1$ & $\delta_1$ & $\cdots$ & $\eta_k$ & $\delta_k$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
\noindent \textit{\textbf{Discussion and conclusions.}}---In this letter, we have proposed a simple unified framework for high-quality axions, including the QCD axion, the Fuzzy DM axion, and the quintessence axion, based on chiral $U(1)_g$ gauge theories.
Their high qualities are guaranteed by the $U(1)_g$ gauge symmetries and therefore free from non-perturbative corrections of quantum gravity.
Specifically, for $N=4$ with the symmetric $U(1)_g$ charge assignment, our model provides excellent quintessence and Fuzzy DM axion candidates with a satisfactory quality rate of $\sim2\%$.
For $N=3+2k$ with asymmetric $U(1)_g$ charge assignment, we find that $k=5~(4)$ is the minimum case to provide high-quality QCD axions with $f_a = 10^{12}~(10^{9})$ GeV.
We emphasize that we have provided a general framework that admits many further extensions. For example,
if we apply the $N=3+2k$ fermion model to the quintessence and/or the Fuzzy DM axion and replace the fermions with weak $SU(2)_{\rm L}$ doublets and anti-doublets, the axions couple to the weak $SU(2)_{\rm L}$ instantons. The instantons then generate axion masses, and if these dominate over the non-renormalizable higher-order terms the result is called the electroweak axion~\cite{Nomura:2000yk,Lin:2022niw}.
One can construct ultra-light bosons over a broad mass range with the symmetric charge assignment of fermions in our framework, with their qualities protected by $U(1)_g$. Such light bosons, $10^{-20}$--$10^{-10}\,{\rm eV}$, may form clouds around astrophysical black holes through the superradiance instability~\cite{Brito:2015oca}, which could be further studied by gravitational collider physics~\cite{Baumann:2019ztm}.
If the $U(1)_{g}$ gauge symmetry is the gauged $U(1)_{B-L}$, it is possible to identify the two Higgs fields as inflatons, since one of them can decay into standard model particles and create the thermal bath after inflation. This provides a natural particle-physics motivation for multi-stream inflation~\cite{Li:2009sp} if the scalar potential takes the proper form.
We can also introduce more than two Higgs bosons, giving many global $U(1)$ symmetries whose spontaneous breaking generates many axions. Some of them have high quality and some do not; in any case, we obtain multiple axion-like particles, which may be regarded as a generic prediction of our framework.
The robust prediction of our framework is the presence of many massive fermions and $U(1)_g$ gauge bosons. Generally, they are too heavy to be detected. However, if one of the gauge bosons has a very small gauge coupling and a very small mass, it becomes a good candidate for DM. Since it can in general mix with the photon, its mass must lie below the electron-positron pair threshold. One-loop diagrams may generate a kinetic mixing between this new gauge boson and the weak boson $Z^0$, but the mixing is strongly suppressed, and the decay into a pair of neutrinos is suppressed enough for the gauge boson to be sufficiently long-lived to be the DM. Such a light DM is produced during inflation~\cite{Graham:2015rva}, and the correct abundance is obtained for an inflationary Hubble scale in the range $H_{\rm inf} = 10^{11}$--$10^{12}\,{\rm GeV}$~\cite{Lin:2022xbu, Lin:2022mqe}. The details of this DM gauge boson scenario will be discussed elsewhere.\\
\begin{acknowledgements}
T. T. Y. is supported in part by the China Grant for Talent Scientific Start-Up Project and by Natural Science Foundation of China (NSFC) under grant No. 12175134 as well as by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan.
\end{acknowledgements}
\chapter{QFA data set sample, solving the logistic growth model and random effects model R code\label{cha:appendix_A}}
\section{\label{app:QFA_set_sam}\emph{cdc13-1} Quantitative Fitness Analysis data set sample}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img/data}
\caption[\emph{cdc13-1} QFA data set sample]{\emph{cdc13-1} Quantitative Fitness Analysis data set sample.
Notable columns include ``ORF'', ``Expt.Time'' and ``Growth''. ``ORF'' indicates which $\emph{orf}\Delta$ strain the row corresponds to. ``Expt.Time'' indicates the time in days from the $\emph{orf}\Delta$ strain being spotted \citep{QFA1}.
``Growth'' gives an adjusted measure of cell culture density from the image analysis for a given $\emph{orf}\Delta$ strain and time point.
}
\end{figure}
\clearpage
\section{\label{app:solving_log_gro}Solving the logistic growth model}
The solution to the logistic growth ODE (\ref{eq_det}) can be obtained as follows.
First we separate variables in (\ref{eq_det}), dividing both sides by ${x(t)}\left(1 - \frac{{x(t)}}{K}\right)$, to give:
\begin{equation*}
\frac{d{x(t)}}{{x(t)}\left(1 - \frac{{x(t)}}{K}\right)}=rdt.
\end{equation*}
We now rearrange further using a partial fractions expansion and integrate over both sides of the equation:
\begin{equation}\label{eq:log_int}
\int\frac{d{x(t)}}{{x(t)}}+\int\frac{\frac{1}{K}\,d{x(t)}}{1 - \frac{{x(t)}}{K}}=\int r\,dt.
\end{equation}
Integrating the first component on the left side of (\ref{eq:log_int}) we obtain the following, where $c_1$ is an unknown constant:
\begin{equation*}
\int\frac{d{x(t)}}{{x(t)}}=\log({x(t)})+c_1.
\end{equation*}
Integrating the second component on the left side of (\ref{eq:log_int}) we obtain the following, where $c_2$ is an unknown constant:
\begin{equation*}
\frac{1}{K}\int\frac{d{x(t)}}{1-\frac{{x(t)}}{K}}=-\log\left(1-\frac{{x(t)}}{K}\right)+c_2.
\end{equation*}
Integrating the right side of (\ref{eq:log_int}) we obtain the following, where $c_3$ is an unknown constant:
\begin{equation*}
\int rdt=rt+c_3.
\end{equation*}
Solving the integrals in (\ref{eq:log_int}) we obtain the following, where $c_4=c_3-c_1-c_2$ is an unknown constant:
\begin{equation*}
\log\left(\frac{{x(t)}}{1-\frac{{x(t)}}{K}}\right)=rt+c_4.
\end{equation*}
Rearranging our equation, we obtain the following:
\begin{equation*}
\frac{{x(t)}}{1-\frac{{x(t)}}{K}}=e^{rt+c_4}.
\end{equation*}
We now apply the initial condition ${x(0)}=P$ and rearrange to obtain an expression for $c_4$:
\begin{equation*}
c_4=\log\left(\frac{P}{1-\frac{P}{K}}\right).
\end{equation*}
We now substitute in our expression for $c_4$ to give:
\begin{equation*}
\log\left(\frac{{x(t)}}{1-\frac{{x(t)}}{K}}\right)=rt+\log\left(\frac{P}{1-\frac{P}{K}}\right).
\end{equation*}
Finally, we rearrange to give (\ref{eq:logistic}).
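The closed-form solution can be checked against a direct numerical integration of (\ref{eq_det}). The sketch below (Python) assumes (\ref{eq:logistic}) takes the standard logistic form $x(t)=KPe^{rt}/\left(K+P(e^{rt}-1)\right)$ and uses a classical fourth-order Runge--Kutta step; the parameter values are arbitrary:

```python
import math

def rk4_logistic(P, r, K, T, steps=4000):
    # integrate dx/dt = r x (1 - x/K) with the classical Runge--Kutta method
    h = T / steps
    f = lambda x: r * x * (1.0 - x / K)
    x = P
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x

def logistic(t, P, r, K):
    # closed form derived above: x(t) = K P e^{rt} / (K + P (e^{rt} - 1))
    e = math.exp(r * t)
    return K * P * e / (K + P * (e - 1.0))

P, r, K, T = 0.01, 1.5, 1.0, 4.0
x_num = rk4_logistic(P, r, K, T)
x_exact = logistic(T, P, r, K)
print(x_num, x_exact)   # agree to numerical precision
```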
\clearpage
\section{Random effects model R code\label{app:remcode}}
{\fontsize{7.6}{7.6}\selectfont
\begin{verbatim}
library(lme4) #http://cran.r-project.org/web/packages/lme4/index.html
#http://research.ncl.ac.uk/colonyzer/AddinallQFA/Logistic.zip and extract zip file
#alternatively http://research.ncl.ac.uk/colonyzer/AddinallQFA/
#"Table S8 Logistic Output Files - 36MB .zip file"
aa<-read.delim("cSGA_v2_r1_Logistic.txt",header=T,skip=1,sep="\t")
#...
bb<-read.delim("Adam_cdc13-1_SDLV2_REP1_Logistic.txt",header=T,skip=0,sep="\t")
#...
aa<-aa[aa$Treatments==27,]
bb<-bb[bb$Treatments==27,]
aa<-aa[!aa$Row==1,]
aa<-aa[!aa$Row==16,]
aa<-aa[!aa$Col==1,]
aa<-aa[!aa$Col==24,]
bb<-bb[!bb$Row==1,]
bb<-bb[!bb$Row==16,]
bb<-bb[!bb$Col==1,]
bb<-bb[!bb$Col==24,]
ORFuni=ORFuni_a=unique(aa$ORF)
ORFuni_b=unique(bb$ORF)
L=length(ORFuni_a)
NoORF_a=NoORF_b=aaa=bbb=numeric()
for (i in 1:L){
NoORF_a[i]=nrow(aa[aa$ORF==ORFuni[i],])
NoORF_b[i]=nrow(bb[bb$ORF==ORFuni[i],])
aaa<-rbind(aaa,aa[aa$ORF==ORFuni[i],])
bbb<-rbind(bbb,bb[bb$ORF==ORFuni[i],])
}
a=b=numeric(0)
K_lm=aaa$Trimmed.K
P_a=43
r_lm=aaa$Trimmed.r
for (i in 1:length(r_lm)){
if(K_lm[i]<=2*P_a){K_lm[i]=2*P_a+0.01;r_lm[i]=0;}
a[i]=(r_lm[i]/log(2*max(0,K_lm[i]-P_a)/max(0,K_lm[i]-2*P_a)))*(log(K_lm[i]/P_a)/log(2));
}
K_lmb=bbb$Trimmed.K
P_b=43
r_lmb=bbb$Trimmed.r
for (i in 1:length(r_lmb)){
if(K_lmb[i]<=2*P_b){K_lmb[i]=2*P_b+0.01;r_lmb[i]=0;}
b[i]=(r_lmb[i]/log(2*max(0,K_lmb[i]-P_b)/max(0,K_lmb[i]-2*P_b)))*(log(K_lmb[i]/P_b)/log(2));
}
condition<-factor(c(rep("a",length(a)),rep("b",length(b))))
subject=numeric()
for (i in 1:L){
subject=c(subject,rep(i,NoORF_a[i]))
}
for (i in 1:L){
subject=c(subject,rep(i,NoORF_b[i]))
}
subcon=subject
subcon[1:length(a)]=0
subcon<-factor(subcon)
subject<-factor(subject)
f=c(a,b)
data=data.frame(f,subject,condition,subcon)
data$lf=log(data$f+1)
data$subcon<-C(data$subcon,sum)
bk<-contrasts(data$subcon)
contrasts(data$subcon)=bk[c(nrow(contrasts(data$subcon)),1:(nrow(contrasts(data$subcon))-1)),]
model1<-lmer(lf~subcon+(1|subject),data=(data),REML=F)
\end{verbatim}
}
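The fitness transformation inside the two loops above converts the logistic parameters $K$ and $r$ into an $MDR\times MDP$ fitness. A minimal Python sketch of the same calculation (an illustration with a function name of our choosing, not part of the analysis pipeline) is:

```python
import math

def mdr_mdp_fitness(K, r, P=43.0):
    """MDR x MDP style fitness from logistic parameters K and r,
    mirroring the loop in the R code (P is the inoculum density, 43 above)."""
    if K <= 2 * P:          # same guard as the R code: treat the strain as non-growing
        K = 2 * P + 0.01
        r = 0.0
    mdr = r / math.log(2 * max(0.0, K - P) / max(0.0, K - 2 * P))
    mdp = math.log(K / P) / math.log(2)
    return mdr * mdp

print(mdr_mdp_fitness(150.0, 2.5))   # a growing strain: positive fitness
print(mdr_mdp_fitness(50.0, 2.5))    # K below twice the inoculum: fitness forced to zero
```

The guard for $K\leq2P$ mirrors the R code: such strains never double from the inoculum density, so $r$ is set to zero and the fitness vanishes.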
\clearpage
\chapter{Bayesian hierarchical modelling\label{cha:appendix}}
\section{Hyper-parameter values for Bayesian hierarchical modelling}
\begin{table}[h!]
\caption[Hyper-parameter values for Bayesian hierarchical modelling of quantitative fitness analysis data]{
Hyper-parameter values for Bayesian hierarchical modelling of quantitative fitness analysis data. Hyper-parameter values for the separate hierarchical model (SHM), interaction hierarchical model (IHM) and joint hierarchical model (JHM) are provided. \label{tab:SHM_priors}}
\centering
\resizebox{!}{0.9in}{%
\npdecimalsign{.}
\nprounddigits{2}
\centering
\begin{tabular}{c n{2}{2} c n{2}{2} c n{2}{2} c n{2}{2} c n{2}{2}}
\hline
\noalign{\vskip 0.4mm}
\multicolumn{2}{c}{SHM \& JHM} &\multicolumn{2}{c}{SHM \& JHM}& \multicolumn{2}{c}{JHM} & \multicolumn{2}{c}{IHM} & \multicolumn{2}{c}{JHM-B \& JHM-T}\\
Parameter Name & \multicolumn{1}{c}{Value} & Parameter Name & \multicolumn{1}{c}{Value} & Parameter Name & \multicolumn{1}{c}{Value}& Parameter Name & \multicolumn{1}{c}{Value} &Parameter Name & \multicolumn{1}{c}{Value}\\ \hline
$\tau^{K,\mu}$ & 2.20064039227566 & $\eta^{r,p}$ & 0.133208648543871 &$\alpha^{\mu}$ & 0 & $Z_{\mu}$ & 3.65544229414228 &$\kappa^p$ & 0\\
$\eta^{\tau,K,p}$ & 0.0239817523340161 & $\nu^{\mu}$ & 19.8220570630669 & $\eta^{\alpha}$ & 0.25 & $\eta^{Z,p}$ & 0.697331530063874 & $\eta^\kappa$& 1.166666666666\\
$\eta^{K,o}$ & -0.79421175992029 & $\eta^{\nu,p}$ & 0.0174869367984725 &$\beta^{\mu}$ & 0 & $\eta^{Z}$ & 0.104929506383255 &$\lambda^p$ & 0\\
$\psi^{K,o}$ & 0.610871036009521 & $P^{\mu}$ & -9.03928728018792 & $\eta^{\beta}$ & 0.25 & $\psi^{Z}$ & 0.417096744759774 & $\eta^\lambda$& 1.166666666666\\
$\tau^{r,\mu}$ & 3.64993037268256 & $\eta^{P}$ & 0.469209463148874 & $p$ & 0.05 & $\eta^{\nu}$ & 0.101545024587153 & $\phi^{shape}$ & 100\\
$\eta^{\tau,r,p}$ & 0.0188443648965434 &&&$\eta^{\gamma}$ & -0.79421175992029 & $\psi^{\nu}$ & 2.45077729037385 & $\phi^{scale}$ & 0.01\\
$\eta^{r,o}$ & 0.468382435659566 &&& $\psi^{\gamma}$ & 0.610871036009521 & $\nu^{\mu}$ & 2.60267545154548 & $\chi^{shape}$ & 100\\
$\psi^{r,o}$ & 0.0985295312016232 &&& $\eta^{\omega}$ & 0.468382435659566 & $\eta^{\nu,p}$ &0.0503202367841729 & $\chi^{scale}$ & 0.01\\
$\eta^{\nu}$ & -0.834166609695065 &&& $\psi^{\omega}$ & 0.0985295312016232 & $\alpha^{\mu}$ & 0 &&\\
$\psi^{\nu}$ & 0.855886535578262 &&& $\eta^{\tau,K}$ & 2.20064039227566& $\eta^{\alpha}$ & 0.309096075088720 &&\\
$K^{\mu}$ & -2.01259579112252 &&& $\psi^{\tau,K}$ & 0.0239817523340161 & $p$ & 0.05 &&\\
$\eta^{K,p}$ & 0.032182397822033 &&& $\eta^{\tau,r}$ & 3.64993037268256 & $\eta^{\gamma}$ & 0.104929506383255&& \\
$r^{\mu}$ & 0.97398228941848 &&& $\psi^{\tau,r}$ & 0.0188443648965434 & $\psi^{\gamma}$ & 0.417096744759774&& \\
\hline
\end{tabular}
\npnoround
}
\end{table}
\clearpage
\section{\label{app:GO_fit}\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C fitness plots with gene ontology terms highlighted}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/GO_4_tel}
\caption[Alternative fitness plots with \emph{orf}$\Delta$ posterior mean fitnesses and labels for the ``telomere maintenance'' gene ontology term]{Alternative fitness plots with \emph{orf}$\Delta$ posterior mean fitnesses. Labels for the ``telomere maintenance'' gene ontology term are highlighted in blue.
A) Non-Bayesian, non-hierarchical fitness plot, based on Table~S6 from Addinall et al. (2011) $(F=MDR\times MDP)$.
B) Non-Bayesian, hierarchical fitness plot, \hl{from fitting REM to data} in Table~S6 from Addinall et al. (2011) $(F=MDR\times MDP)$.
C) IHM fitness plot with $\emph{orf}\Delta$ posterior mean fitness $(F=MDR\times MDP)$.
D) JHM fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on analysis of growth parameter $r$.
Further fitness plot explanation and notation is given in Figure~\ref{fig:old}.
}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/GO_4_age}
\caption[Alternative fitness plots with \emph{orf}$\Delta$ posterior mean fitnesses and labels for the ``ageing'' gene ontology term]{Alternative fitness plots with $\emph{orf}\Delta$ posterior mean fitnesses. Labels for the ``ageing'' gene ontology term are highlighted in blue.
A) Non-Bayesian, non-hierarchical fitness plot, based on Table~S6 from Addinall et al. (2011) $(F=MDR\times MDP)$.
B) Non-Bayesian, hierarchical fitness plot, \hl{from fitting REM to data} in Table~S6 from Addinall et al. (2011) $(F=MDR\times MDP)$.
C) IHM fitness plot with $\emph{orf}\Delta$ posterior mean fitness $(F=MDR\times MDP)$.
D) JHM fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on analysis of growth parameter $r$.
Further fitness plot explanation and notation is given in Figure~\ref{fig:old}.
}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/GO_4_dna}
\caption[Alternative fitness plots with \emph{orf}$\Delta$ posterior mean fitnesses and labels for the ``response to DNA damage'' gene ontology term]{Alternative fitness plots with $\emph{orf}\Delta$ posterior mean fitnesses. Labels for the ``response to DNA damage'' gene ontology term are highlighted in blue.
A) Non-Bayesian, non-hierarchical fitness plot, based on Table~S6 from Addinall et al. (2011) $(F=MDR\times MDP)$.
B) Non-Bayesian, hierarchical fitness plot, \hl{from fitting REM to data} in Table~S6 from Addinall et al. (2011) $(F=MDR\times MDP)$.
C) IHM fitness plot with $\emph{orf}\Delta$ posterior mean fitness $(F=MDR\times MDP)$.
D) JHM fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on analysis of growth parameter $r$.
Further fitness plot explanation and notation is given in Figure~\ref{fig:old}.
}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/GO_4_pex}
\caption[Alternative fitness plots with \emph{orf}$\Delta$ posterior mean fitnesses and labels for the ``peroxisomal organisation'' gene ontology term]{Alternative fitness plots with $\emph{orf}\Delta$ posterior mean fitnesses. Labels for the ``peroxisomal organisation'' gene ontology term are highlighted in blue.
A) Non-Bayesian, non-hierarchical fitness plot, based on Table~S6 from Addinall et al. (2011) $(F=MDR\times MDP)$.
B) Non-Bayesian, hierarchical fitness plot, \hl{from fitting REM to data} in Table~S6 from Addinall et al. (2011) $(F=MDR\times MDP)$.
C) IHM fitness plot with $\emph{orf}\Delta$ posterior mean fitness $(F=MDR\times MDP)$.
D) JHM fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on analysis of growth parameter $r$.
Further fitness plot explanation and notation is given in Figure~\ref{fig:old}.
}
\end{figure}
\clearpage
\section{\label{app:interactions}Lists of top genetic interactions for the two-stage and one-stage Bayesian approaches}
\input{tables/IHM_interactions}
\input{tables/JHM_interactions}
\clearpage
\section{\label{app:alternative_JHM}\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C fitness plots for the joint hierarchical model in terms of carrying capacity and growth rate parameters}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM_K}
\caption[Joint hierarchical model carrying capacity fitness plot]{Joint hierarchical model (JHM) carrying capacity fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on carrying capacity parameter $K$.
Significant interactors have posterior probability $\Delta>0.5$.
To compare fitness plots, labelled genes are those belonging to the following gene ontology terms in Table~\ref{tab:sup_enh}: ``telomere maintenance'', ``ageing'', ``response to DNA damage stimulus'' or ``peroxisomal organization'', as well as the genes identified as interactions only in $K$ with the JHM (see Figure~\ref{fig:JHM_only}) (blue), genes interacting only in $r$ with the JHM (cyan) and the MRX complex genes (pink).
\label{fig:JHM_K}
}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM_r}
\caption[Joint hierarchical model growth rate fitness plot]{Joint hierarchical model (JHM) growth rate fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on growth parameter $r$.
Significant interactors have posterior probability $\Delta>0.5$.
To compare fitness plots, labelled genes are those belonging to the following gene ontology terms in Table~\ref{tab:sup_enh}: ``telomere maintenance'', ``ageing'', ``response to DNA damage stimulus'' or ``peroxisomal organization'', as well as the genes identified as interactions only in $K$ with the JHM (see Figure~\ref{fig:JHM_only}) (blue), genes interacting only in $r$ with the JHM (cyan) and the MRX complex genes (pink).
\label{fig:JHM_r}
}
\end{figure}
\FloatBarrier
\clearpage
\section{Gene ontology term enrichment analysis in R\label{app:GOstats}}
{\fontsize{9}{9}\selectfont
\begin{verbatim}
source("http://bioconductor.org/biocLite.R")
biocLite("GOstats")
biocLite("org.Sc.sgd.db")
###################
library(GOstats) # GO testing tool package
library(org.Sc.sgd.db) # yeast gene annotation package
genes=read.table("JHM_strip.txt", header=T)
UNIVSTRIP=genes[,2]
genes<-as.vector(genes[genes[,3]>0.5,2])
genes<-unique(genes)
ensemblIDs=as.list(org.Sc.sgdPMID2ORF)
univ=unlist(ensemblIDs)
univ=univ[!is.na(univ)]
length(univ)
length(unique(univ))
univ=unique(univ)
all=as.vector(univ)
all=all[!is.na(all)] # assumed reconstruction of a truncated line: drop missing entries
length(all)
ontology=c("BP")
vec<-genes%in%all # assumed reconstruction of a truncated line: restrict to the gene universe
genes<-genes[vec]
params_temp=new("GOHyperGParams", geneIds=genes,
universeGeneIds=all,
annotation="org.Sc.sgd.db", categoryName="GO",
ontology=ontology, pvalueCutoff=1,
testDirection = "over")
results=hyperGTest(params_temp)
results=summary(results)
results$qvalue<-p.adjust(results$Pvalue,method="BH")
\end{verbatim}
}
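The \texttt{hyperGTest} call above performs a one-sided hypergeometric over-representation test for each GO term, followed by Benjamini--Hochberg adjustment via \texttt{p.adjust}. The underlying calculations can be sketched in pure Python (illustrative function names, not GOstats internals):

```python
from math import comb

def hypergeom_over_pvalue(k, n_drawn, n_marked, n_total):
    """P(X >= k): over-representation p-value when k of the n_drawn selected
    genes carry an annotation held by n_marked genes in a universe of n_total."""
    upper = min(n_drawn, n_marked)
    denom = comb(n_total, n_drawn)
    return sum(comb(n_marked, i) * comb(n_total - n_marked, n_drawn - i)
               for i in range(k, upper + 1)) / denom

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, as in R's p.adjust(method="BH")."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    prev = 1.0
    for rank_idx in range(m - 1, -1, -1):   # step down from the largest p-value
        i = order[rank_idx]
        prev = min(prev, pvals[i] * m / (rank_idx + 1))
        adj[i] = prev
    return adj
```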
\clearpage
\section{Code for Just Another Gibbs Sampler software\label{app:jags_code}}
\subsection{Separate hierarchical model code}
{\fontsize{7.4}{7.4}\selectfont
\begin{verbatim}
model {
for (l in 1:N){
for (m in 1:NoORF[l]){
for (n in 1:NoTime[(NoSum[l]+m)]){
y[m,n,l] ~ dnorm(y.hat[m,n,l], exp(nu_l[l]))
y.hat[m,n,l] <- (K_lm[(NoSum[l]+m)]
*P*exp(r_lm[(NoSum[l]+m)]*x[m,n,l]))
/(K_lm[(NoSum[l]+m)]+P*(exp(r_lm[(NoSum[l]+m)]*x[m,n,l])-1))
}
K_lm[(NoSum[l]+m)]<- exp(K_lm_L[(NoSum[l]+m)])
K_lm_L[(NoSum[l]+m)] ~ dnorm(K_o_l_L[l],exp(tau_K_l[l]))T(,0)
r_lm[(NoSum[l]+m)]<- exp(r_lm_L[(NoSum[l]+m)])
r_lm_L[(NoSum[l]+m)] ~ dnorm(r_o_l_L[l],exp(tau_r_l[l]))T(,3.5)
}
K_o_l_L[l]<- log(K_o_l[l])
K_o_l[l] ~ dt( exp(K_p), exp(sigma_K_o),3)T(0,)
r_o_l_L[l]<- log(r_o_l[l])
r_o_l[l] ~ dt( exp(r_p), exp(sigma_r_o),3)T(0,)
nu_l[l] ~ dnorm(nu_p, exp(sigma_nu) )
tau_K_l[l]~dnorm(tau_K_p,exp(sigma_tau_K))T(0,)
tau_r_l[l]~dnorm(tau_r_p,exp(sigma_tau_r))
}
K_p ~ dnorm(K_mu,eta_K_p)
r_p ~ dnorm(r_mu,eta_r_p)
nu_p ~ dnorm(nu_mu,eta_nu_p)
P<-exp(P_L)
P_L ~ dnorm(P_mu,eta_P)
tau_K_p ~ dnorm(tau_K_mu,eta_tau_K_p)
sigma_tau_K ~ dnorm(eta_tau_K,psi_tau_K)
tau_r_p ~ dnorm(tau_r_mu,psi_tau_r)
sigma_tau_r ~ dnorm(eta_tau_r,psi_tau_r)
sigma_nu~dnorm(eta_nu,psi_nu)
sigma_K_o ~ dnorm(eta_K_o,psi_K_o)
sigma_r_o ~ dnorm(eta_r_o,psi_r_o)
}
\end{verbatim}
}
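The \texttt{y.hat} line in the model above is the closed-form logistic solution evaluated at each observation time. A small Python check of this mean function (illustrative parameter values only) is:

```python
import math

def logistic_mean(K, r, P, t):
    """Expected cell density at time t: the y.hat expression in the JAGS model."""
    return (K * P * math.exp(r * t)) / (K + P * (math.exp(r * t) - 1.0))

# sanity checks: the curve starts at the inoculum density P and saturates at K
print(logistic_mean(0.15, 2.5, 0.001, 0.0))
print(logistic_mean(0.15, 2.5, 0.001, 10.0))
```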
\subsection{Interaction hierarchical model code}
{\fontsize{7.4}{7.4}\selectfont
\begin{verbatim}
model {
for (l in 1:N){
for (c in 1:2){
for (m in 1:NoORF[l,c]){
y[m,c,l]~ dnorm(exp(alpha_c[c]
+delta_l[l,c]*gamma_cl_L[l,c])*Z_l[l],exp(nu_cl[l+(c-1)*N]))
}
nu_cl[l+(c-1)*N]~dnorm(nu_p,exp(sigma_nu))
}
Z_l[l]~dt(exp(Z_p),exp(sigma_Z),3)T(0,)
delta_l[l,1]<-0
delta_l[l,2]~dbern(p)
gamma_cl_L[l,1]<-0
gamma_cl_L[l,2]<-log(gamma_l[l])
gamma_l[l]~dt(1,exp(sigma_gamma),3)T(0,)
}
alpha_c[1]<-0
alpha_c[2]~dnorm(alpha_mu,eta_alpha)
Z_p~dnorm(Z_mu,eta_Z_p)
nu_p~dnorm(nu_mu,eta_nu_p)
sigma_Z~dnorm(eta_Z,psi_Z)
sigma_nu~dnorm(eta_nu,psi_nu_p)
sigma_gamma~dnorm(eta_gamma,psi_gamma)
}
\end{verbatim}
}
\clearpage
\subsection{Joint hierarchical model code}
{\fontsize{7.6}{7.6}\selectfont
\begin{verbatim}
model {
for (l in 1:N){
for (c in 1:2){
for (m in 1:NoORF[l,c]){
for (n in 1:NoTime[NoSum[l,c]+m,c]){
y[m,n,l,c] ~ dnorm(y.hat[m,n,l,c],exp(nu_cl[l+(c-1)*N]))
y.hat[m,n,l,c] <- (K_clm[(SHIFT[c]+NoSum[l,c]+m)]
*P*exp(r_clm[(SHIFT[c]+NoSum[l,c]+m)]*x[m,n,l,c]))
/(K_clm[(SHIFT[c]+NoSum[l,c]+m)]+P*(exp(r_clm[(SHIFT[c]+NoSum[l,c]+m)]
*x[m,n,l,c])-1))
}
K_clm[(SHIFT[c]+NoSum[l,c]+m)]<-exp(K_clm_L[(SHIFT[c]+NoSum[l,c]+m)])
K_clm_L[(SHIFT[c]+NoSum[l,c]+m)] ~ dnorm(alpha_c[c]+K_o_l_L[l]
+(delta_l[l,c]*gamma_cl_L[l,c]),exp(tau_K_cl[l+(c-1)*N]))T(,0)
r_clm[(SHIFT[c]+NoSum[l,c]+m)]<-exp(r_clm_L[(SHIFT[c]+NoSum[l,c]+m)])
r_clm_L[(SHIFT[c]+NoSum[l,c]+m)] ~ dnorm(beta_c[c]+r_o_l_L[l]
+(delta_l[l,c]*omega_cl_L[l,c]),exp(tau_r_cl[l+(c-1)*N]))T(,3.5)
}
tau_K_cl[l+(c-1)*N]~dnorm(tau_K_p_c[c],exp(sigma_tau_K_c[c]))T(0,)
tau_r_cl[l+(c-1)*N]~dnorm(tau_r_p_c[c],exp(sigma_tau_r_c[c]))
nu_cl[l+(c-1)*N]~dnorm(nu_p,exp(sigma_nu))
}
K_o_l_L[l]<- log(K_o_l[l])
K_o_l[l] ~ dt(exp(K_p),exp(sigma_K_o),3)T(0,)
r_o_l_L[l]<- log(r_o_l[l])
r_o_l[l] ~ dt(exp(r_p),exp(sigma_r_o),3)T(0,)
delta_l[l,1]<-0
delta_l[l,2]~dbern(p)
gamma_cl_L[l,1]<-0
gamma_cl_L[l,2]<-log(gamma_l[l])
gamma_l[l]~dt(1,exp(sigma_gamma),3)T(0,)
omega_cl_L[l,1]<-0
omega_cl_L[l,2]<-log(omega_l[l])
omega_l[l]~dt(1,exp(sigma_omega),3)T(0,)
}
alpha_c[1]<-0
alpha_c[2]~dnorm(alpha_mu,eta_alpha)
beta_c[1]<-0
beta_c[2]~dnorm(beta_mu,eta_beta)
K_p~dnorm(K_mu,eta_K_p)
r_p~dnorm(r_mu,eta_r_p)
nu_p~dnorm(nu_mu,eta_nu_p)
P <- exp(P_L)
P_L ~dnorm(P_mu,eta_P)
sigma_K_o~dnorm(eta_K_o,psi_K_o)
sigma_r_o~dnorm(eta_r_o,psi_r_o)
tau_K_p_c[1]~dnorm(tau_K_mu,eta_tau_K_p)
tau_K_p_c[2]~dnorm(tau_K_mu,eta_tau_K_p)
tau_r_p_c[1]~dnorm(tau_r_mu,eta_tau_r_p)
tau_r_p_c[2]~dnorm(tau_r_mu,eta_tau_r_p)
sigma_tau_K_c[1]~dnorm(eta_tau_K,psi_tau_K)
sigma_tau_K_c[2]~dnorm(eta_tau_K,psi_tau_K)
sigma_tau_r_c[1]~dnorm(eta_tau_r,psi_tau_r)
sigma_tau_r_c[2]~dnorm(eta_tau_r,psi_tau_r)
sigma_nu~dnorm(eta_nu,psi_nu)
sigma_gamma~dnorm(eta_gamma,psi_gamma)
sigma_omega~dnorm(eta_omega,psi_omega)
}
\end{verbatim}
}
\clearpage
\section{Additional \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C fitness plots\label{app:alt_fitness}}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img/REM}
\caption[Alternative non-Bayesian, hierarchical random effects model fitness plot]{Alternative non-Bayesian, hierarchical fitness plot, \hl{from fitting the random effects model (REM) to data} in Table~S6 from \cite{QFA1} $(F=MDR\times MDP)$.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted in red and green for suppressors and enhancers respectively.
$\emph{orf}\Delta$s without significant evidence of interaction are in grey and have no \emph{orf} name label.
Significant interactors are classified as those with FDR corrected p-values $<0.05$.
}
\label{fig:REM_app}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/IHM_nocross}
\caption[Alternative interaction hierarchical model fitness plot]{Alternative interaction hierarchical model (IHM) fitness plot with $\emph{orf}\Delta$ posterior mean fitness.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted in red and green for suppressors and enhancers respectively $(F=MDR\times MDP)$.
Solid and dashed grey fitted lines are for the IHM linear model fit.
$\emph{orf}\Delta$s without significant evidence of interaction are in grey and have no \emph{orf} name label.
Significant interactors have posterior probability $\Delta>0.5$.
\label{fig:IHM_app}
}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM_URA_CDC13-1_27_27}
\caption[Alternative joint hierarchical model fitness plot]{Alternative joint hierarchical model (JHM) fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
The JHM does not make use of a fitness measure such as $MDR\times{MDP}$ but the fitness plot is given in terms of $MDR\times{MDP}$ for comparison with other approaches which do.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on one of the two parameters used to classify genetic interaction, growth parameter $r$; this means that occasionally strains can be more fit in the query experiment in terms of $MDR\times MDP$ but be classified as enhancers (green).
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted in red and green for suppressors and enhancers respectively.
$\emph{orf}\Delta$s without significant evidence of interaction are in grey and have no \emph{orf} name label.
Significant interactors have posterior probability $\Delta>0.5$.
\label{fig:JHM_app}
}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM_URA_CDC13-1_27_27_K}
\caption[Alternative joint hierarchical model carrying capacity fitness plot]{Joint hierarchical model (JHM) carrying capacity fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on carrying capacity parameter $K$.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted in red and green for suppressors and enhancers respectively.
$\emph{orf}\Delta$s without significant evidence of interaction are in grey and have no \emph{orf} name label.
Significant interactors have posterior probability $\Delta>0.5$.
\label{fig:JHM_K_full}
}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM_URA_CDC13-1_27_27_r}
\caption[Alternative joint hierarchical model growth rate fitness plot]{Joint hierarchical model (JHM) growth rate fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on growth parameter $r$.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted in red and green for suppressors and enhancers respectively.
$\emph{orf}\Delta$s without significant evidence of interaction are in grey and have no \emph{orf} name label.
Significant interactors have posterior probability $\Delta>0.5$.
\label{fig:JHM_r_full}
}
\end{figure}
\clearpage
\FloatBarrier
\section{Correlation between methods\label{app:corr}}
The Addinall et al. (2011) approach has its highest correlation with the IHM, followed by the JHM and then the REM.
The REM correlates least well with the JHM while showing the same correlation with both the Addinall et al. (2011) approach and the IHM.
The correlation between the IHM and the JHM is the largest observed between any of the methods, demonstrating the similarity of our Bayesian hierarchical methods.
\begin{table}[h!]
\caption[Spearman's rank correlation coefficients for magnitudes from genetic independence, between approaches]{Spearman's rank correlation coefficients for magnitudes from genetic independence, between Addinall et al. (2011), random effects approach (REM), interaction hierarchical model (IHM) and joint hierarchical model (JHM) approaches \label{tab:spearman}}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{*{5}{c}}
\\
\hline
\\
\emph{Method} &\multicolumn{4}{c}{\emph{Method}}\\
&&& \\ \cline{2-5}
&&&\\
& \emph{Addinall et al. (2011)} & \emph{REM} & \emph{IHM} & \emph{JHM QFA} \\
& \emph{QFA} & \emph{QFA} & \emph{QFA} & \emph{($MDR\times MDP$)} \\
\\
\hline
\\
Addinall et al. (2011) QFA, &1 & 0.77 & 0.89 & 0.88 \\
REM QFA, & &1 & 0.77 & 0.75 \\
IHM QFA, & & &1 & 0.95 \\
JHM QFA ($MDR\times MDP$), & & & & 1 \\
\\
\hline
\end{tabular}
}
\end{table}
The $MDR\times MDP$ correlation plot of the JHM versus the Addinall et al. (2011) approach demonstrates the similarity (Pearson correlation=0.90) and differences between the two approaches in terms of $MDR\times MDP$.
We can see how the results differ between the JHM and Addinall et al. (2011), with a kink at the origin due to the JHM allowing shrinkage of non-interacting genes towards the fitted line.
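For reference, the two correlation statistics quoted in this section (Pearson for the plot and Spearman for Table~\ref{tab:spearman}) can be computed as follows. This is an illustrative pure-Python sketch rather than the code used for the analysis; ties receive no special handling in the rank computation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    return pearson(ranks(xs), ranks(ys))
```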
\clearpage
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img/cor_JHM_ADD}
\caption[$MDR\times MDP$ genetic interaction correlation plot of the joint hierarchical model versus Addinall et al. (2011)]{ $MDR\times MDP$ genetic interaction correlation plot of JHM versus Addinall et al. (2011) (Pearson correlation=0.90).
}
\label{app:correlation_JHM_ADD}
\end{figure}
\clearpage
\FloatBarrier
\chapter{Stochastic logistic growth modelling}
\input{sections_SDE/appendix.tex}
\section{\label{lit:yeast_bio}Yeast biology}
\emph{Saccharomyces cerevisiae} is a species of budding yeast widely used to study genetics. \emph{S. cerevisiae} was the first eukaryote to have its genome completely sequenced \citep{yeast6000}. Yeast is ideal for high-throughput experimentation as it is easy to use and arrayed libraries of genetically modified yeast strains are readily available or obtainable for experiments \citep{yeast}. Many observable traits are available with \emph{S. cerevisiae}, such as size, opacity and density. There are about 6,000 genes in the \emph{S. cerevisiae} genome, of which about 5,800 are believed to be true functional genes \citep{sgd}.
Yeasts are ideal for genome-wide analysis of gene function as genetic modification of yeast cells is relatively straightforward and yeast cultures grow quickly.
Epistasis identified within a species of yeast may exist in the analogous
genes within the human genome \citep{yeastorg}. Therefore, finding genes involved in
epistasis within yeast is of great interest outside the particular experimental species in question.
\subsection{\label{lit:telomere}Telomeres}
Telomeres are the ends of linear chromosomes and are found in most eukaryotic organisms \citep{telo}. Telomeres permit cell division and some researchers claim that telomere-induced replicative senescence is an important component of human ageing \citep{endrep}. They cap (or seal) the chromosome end to ensure genetic stability and are believed to prevent cancer \citep{telo_sen}.
\begin{figure}[h!]
\centering
\includegraphics[width=13cm]{img/tel}
\caption[Telomere at a chromosome end]{Telomere at a chromosome end (diagram and legend taken from \citet{james}). The telomere cap is evolutionarily conserved. Telomeres are nucleoprotein caps present at the ends of most eukaryotic chromosomes, consisting of double-stranded DNA (dsDNA) with a single-stranded DNA (ssDNA) overhang, bound by dsDNA- and ssDNA-binding proteins. Collectively, the telomere binding proteins ``cap'' the telomere and serve to regulate telomerase activity and inhibit the DNA damage response (DDR). In budding yeast, the telomeric dsDNA is bound by Rap1, which recruits the accessory factors Rif1 and Rif2. In humans, the telomeric dsDNA is bound by TRF1 and TRF2 (held together by TIN2) and TRF2 recruits RAP1 to telomeres. In budding yeast, Cdc13 binds the telomeric ssDNA and recruits Stn1 and Ten1 to form the CST (Cdc13-Stn1-Ten1) complex, while in humans, the telomeric ssDNA is bound by POT1. In human beings, POT1 and TRF1-TRF2-TIN2 are linked together by TPP1, which may permit the adoption of higher-order structures. In both budding yeast and humans, the Ku complex, a DDR component that binds to both telomeres and Double-strand breaks (DSBs), also binds and plays a protective role.}
\label{fig:lit:telomere}
\end{figure}
In Figure~\ref{fig:lit:telomere}, a \emph{S. cerevisiae} chromosome is shown with the telomere single-stranded DNA (ssDNA) at the end, where DNA binding proteins such as Cdc13 are bound.
Figure~\ref{fig:lit:telomere} also shows how telomere maintenance compares between \emph{Homo sapiens} (\emph{H. sapiens}) and \emph{S. cerevisiae} chromosomes.
\\
Telomere length decreases with each division of a cell until telomere length is very short and the cell enters senescence \citep{hay}, losing the ability to divide.
Some cancerous cells up-regulate the enzyme called telomerase which can prevent shortening of telomeres or elongate them, potentially allowing cancerous cells to live indefinitely \citep{immortal}.
\\
It is believed that telomeres are partly responsible for ageing; without the enzyme telomerase, a fixed limit to the number of times the cell can divide is set by the telomere shortening mechanism because of the end replication problem \citep{ageing}.
\subsection{\label{lit:end_rep}The end replication problem}
In eukaryote cell replication, shown in Figure~\ref{fig:lit:drprob}, new strands of DNA are synthesised in the $5^\prime$ to $3^\prime$ direction (red arrows); the leading strand is therefore completed in one section, whereas the lagging strand must be formed via backstitching with smaller sections known as Okazaki fragments \citep{endrep}.
Figure \ref{fig:lit:drprob} shows how the lagging strand is left with a $3^\prime$ overhang after removal of the terminal primer, and how the leading strand is left with a blunt end \citep{endrep2}. Telomerase fixes this problem by extending the $3^\prime$ end to maintain telomere length \citep{ageing}. Without telomerase, the leading strand is shortened \citep{short_strand} and telomere capping proteins such as Cdc13 in yeast bind to the ssDNA that remains.
Most eukaryotic cells have telomerase activated and may maintain DNA replication indefinitely. Not all mammalian cells have telomerase activated; in these cells the end replication problem is believed to lead to the shortening of their telomeres and ultimately senescence.
\begin{figure}[h]
\centering
\includegraphics[width=7cm]{img/drprob}
\caption[The end replication problem]{The end replication problem (diagram and legend taken from \citet{endrep}).
(A) Telomeres in all organisms contain a short $3^\prime$ overhang on the G rich strand. (B) A replication fork moving towards the end of the chromosome. (C) The newly replicated, lagging C strand, will generate a natural $3^\prime$ overhang when the ribonucleic acid (RNA) primer is removed from the final Okazaki fragment, or if the lagging strand replication machinery cannot reach the end of the chromosome. In the absence of nuclease activity the unreplicated $3^\prime$ strand will be the same length as it was prior to replication. (D) The newly replicated leading G strand will be the same length as the parental $5^\prime$ C strand, and blunt ended if the replication fork reaches the end of the chromosome. Therefore the newly replicated $3^\prime$ G strand will be shorter than the parental $3^\prime$ strand and unable to act as a substrate for telomerase because it does not contain a $3^\prime$ overhang. If the leading strand replication fork does not reach the end of the chromosome a $5^\prime$ rather than $3^\prime$ overhang would be generated, but this would not be a suitable substrate for telomerase.}
\label{fig:lit:drprob}
\end{figure}
\subsection{\label{lit:cdc13-1}\emph{CDC13} and \emph{cdc13-1}}
\emph{CDC13} is an essential telomere-capping gene in \emph{S. cerevisiae} \citep{essential}.
The protein Cdc13, encoded by \emph{CDC13}, binds to telomeric DNA (see Figure~\ref{fig:lit:telomere}), forming a nucleoprotein structure \citep{cdc13_summary}. Cdc13 regulates telomere capping and is part of the CST complex with Stn1 and Ten1 \citep{cdc_cst}. This provides protection from degradation by exonucleases such as Exo1.
\mbox{\emph{cdc13-1}} is a temperature-sensitive allele of the \emph{CDC13} gene; above $26\,^{\circ}\mathrm{C}$ the capping ability of the encoded protein is reduced \citep{cdc131}.
By inducing the temperature sensitivity of \emph{cdc13-1}, telomere maintenance is disrupted.
Much research into telomere integrity focuses on the CST complex, and \emph{cdc13} mutations such as \emph{cdc13-1} and \emph{cdc13-5} are often considered \citep[see, for example,][]{cdc,MRX}.
\subsection{\label{lit:ura3}\emph{URA3}}
\emph{URA3} is a gene that encodes orotidine $5^\prime$-phosphate decarboxylase (ODCase) \citep{URA3_again}.
\emph{URA3} is used as a genetic marker for DNA transformations, allowing both positive and negative selection depending on the choice of media \citep{URA3}.
In \citet{QFA1} \emph{ura3}$\Delta$ is used as a control mutation because it is neutral under the experimental conditions.
For a QFA comparison, constructing a query mutation such as \emph{cdc13-1} typically involves adding selection markers to the genome.
To ensure that the same selection markers are found in both the query and control strains, and that the control and query screens can be carried out in comparable environments, a neutral mutation such as \emph{ura3}$\Delta$ can be introduced to the control strain.
Deleting \emph{URA3} causes a loss of the encoded enzyme, ODCase, which leads to a reduction in cell growth unless uracil is added to the media \citep{ura3end}.
\citet{QFA1} include uracil in their media so that \emph{ura3}$\Delta$ is effectively a neutral deletion, approximating wild-type fitness.
As a control deletion, \emph{URA3} is not expected to interact with the query mutation, the library of \emph{orf}$\Delta$s in the control and query screen or any experimental condition of interest such as temperature.
\subsection{\label{lit:synthetic_gen_arr}High-throughput methodology for Quantitative Fitness Analysis}
To collect enough data to perform QFA \citep{QFA1}, a methodology such as high-throughput screening is required \citep{high,HTP}.
High-throughput screening is most notably used in biology for genome-wide suppressor/enhancer screening and drug discovery.
The automation of experimental procedures through robotics, software, sensors and controls allows a researcher to carry out large scale experimentation quickly and more consistently.
Hundreds of microbial strains with various gene deletions need to be systematically created, cultured and then have measurable traits quantified.
The repeatability of microbial culture growth is ideal for providing sufficient sample sizes to identify both variation and significance in high-throughput experimentation \citep{microbes_HTP}.
The quality of the quantitative data is critical for identifying significantly interacting genes.
To measure the phenotypes of different mutant strains of a micro-organism such as yeast \citep{yeast}, a process called \emph{spotting} is used.
This process is different to a typical SGA experiment where \emph{pinning} would be used (see, for example, \cite{sgaboone}).
Pinning is a quicker but less quantitative process where the microbial strains are typically pinned directly onto a plate in 1536-format and allowed to grow until image analysis starts.
Spotting on the other hand has a stage where the cultures are diluted and then the dilute culture is spotted in 384 format to give a more accurate reading in image analysis.
This in turn gives rise to much more accurate time series data for modelling.
Figure~\ref{fig:int:spot} illustrates the spotting process.
An image opacity measure is typically used as a proxy for the density of microbial colonies.
Time lapse photographs are taken of the 384-spot plates after incubation, using high resolution digital cameras, to measure growth.
A software package such as Colonyzer \citep{Colonyzer} can then be used to determine a quantitative measure of fitness from the photographs taken of the cultures grown on the plates.
To ensure a consistent method to capture images of microbial colonies, all cameras should be of the same make and model.
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img/plate}
\caption[The spotting procedure]{The spotting procedure for robotic inoculation of yeast strains in 384-spot format (diagram and legend taken from \citet{jove}).
This procedure begins with 1536 independent cultures per plate (left).
In this typical example, colonies at positions 1,1; 1,2; 2,1 and 2,2 (colored red) are four replicates of the same genotype.
\emph{his3::KANMX} cultures in yellow, growing on the edge of the plate, have a growth advantage due to lack of competition and are therefore not examined by Quantitative Fitness Analysis.
One of these replicates (e.g. 1,1) is inoculated into liquid growth media in 96-well plates using a 96-pin tool which inoculates 96 out of 1536 colonies each time.
In order to inoculate one replicate for each of 384 gene deletions, four different ``quadrants'' (indicated as red, blue, green and purple) are inoculated into four different 96-well plates containing growth media.
After growth to saturation (e.g. 3 days at 20 °C), cultures are diluted in water, then the four quadrants from one repeat are spotted in 384-format onto a solid agar plate (right) in the same pattern as the original Synthetic Genetic Array plate (as indicated by color).
The process can be repeated to test other replicates: 1,2; 2,1 and 2,2. Example time-lapse images on the right were captured 0.5, 2 and 3.5 days after inoculation.
}
\label{fig:int:spot}
\end{figure}
\section{\label{lit:comparing_res}Comparing lists of genes}
Upon completing a QFA screen comparison, a list of genes ordered by genetic interaction strength can be obtained.
Lists of ordered genes can be used to compare two different statistical approaches for a QFA screen comparison.
A comparison of two lists can be carried out through standard statistical similarity measures such as the Jaccard Index or Spearman's rank correlation coefficient.
Observing only the subset of genes showing significant evidence of genetic interaction, two lists of genes can be compared using the Jaccard Index \citep{Jaccard}, see Section~\ref{lit:jaccard_ind}.
The Jaccard index does not account for the ordering of genes, and it depends on the cut-off used to classify genes as showing significant evidence of interaction, a choice made or influenced by the experimenter.
These undesirable properties make the Jaccard index inappropriate for an unbiased comparison of statistical methods.
The Spearman's rank correlation coefficient \citep{spearman_book} is able to account for the ordering of genes and is able to account for the whole list of genes available, see Section~\ref{lit:spearmans_cor}.
Gene ontology (GO) term enrichment can be used to suggest which list of genetic interactions has the most biological relevance \citep{GOterm2}.
There are many other alternative approaches available for the comparison of two gene lists \citep{compare_genes,compare_genes2}.
Using both Spearman's correlation coefficient and GO term enrichment analysis of gene lists allows for both an unbiased statistical and biological comparison of two lists of ordered genes.
\subsection{\label{lit:jaccard_ind}Jaccard index}
For two sample sets, the Jaccard index \citep{Jaccard_origin,Jaccard} gives a measure of similarity.
Where $A$ and $B$ are two sample sets of interest, the Jaccard Index is as follows:
\begin{equation*}
J(A,B) = {{|A \cap B|}\over{|A \cup B|}}.
\end{equation*}
The value of $J(A,B)$ ranges from 0 to 1, with larger values indicating greater similarity.
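A minimal Python sketch of the computation (the gene names are purely hypothetical):

```python
def jaccard_index(a, b):
    """Jaccard index J(A, B) = |A intersect B| / |A union B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention for two empty sets
    return len(a & b) / len(a | b)

# Hypothetical hit lists from two analyses of the same QFA comparison.
hits_method1 = {"RAD9", "EXO1", "MRE11", "CHK1"}
hits_method2 = {"RAD9", "EXO1", "CHK1", "DUN1", "RAD24"}
similarity = jaccard_index(hits_method1, hits_method2)  # 3 shared of 6 total
```

Note that the index is symmetric in its two arguments, as required of a similarity measure.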
\subsection{\label{lit:spearmans_cor}Spearman's rank correlation coefficient}
Spearman's rank correlation coefficient \citep{spearman,spearman_book} allows comparison of two paired variables $X_i$ and $Y_i$, $i=1,...,n$.
First, $X_i$ and $Y_i$ are each converted into ranks $x_i$ and $y_i$.
Where there are tied or duplicate values, each is assigned the rank equal to the average of the positions it occupies.
The Spearman's rank correlation coefficient is as follows:
\begin{equation*}
\rho = \frac{\sum_i(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i (x_i-\bar{x})^2 \sum_i(y_i-\bar{y})^2}}.
\end{equation*}
The value of $\rho$ ranges from $-1$ to $1$; the closer the relationship between the two variables is to a monotonic function, the larger the magnitude of $\rho$.
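A minimal Python sketch of the computation, including the averaging of tied ranks described above (the interaction-strength scores are hypothetical):

```python
def average_ranks(values):
    """Rank values 1..n, assigning ties the average of the positions they occupy."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0  # average of 1-based positions i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation of the two rank vectors, as in the formula above."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical interaction-strength scores for five genes under two methods.
rho = spearman_rho([2.1, 0.5, 3.3, 1.0, 1.0], [1.9, 0.2, 2.8, 0.7, 1.1])
```

A perfectly monotone increasing relationship gives $\rho=1$ and a monotone decreasing one gives $\rho=-1$.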
\subsection{\label{lit:GO_term}Gene ontology term enrichment analysis}
Gene ontology (GO) term enrichment analysis can give insight to the biological functions of a list of genes \citep{GOterm2}.
A list of GO terms can be acquired from a list of genes. For yeast the Saccharomyces Genome Database (SGD) \citep{sgd} can be used to find GO term associations for each gene in the genome.
A statistical analysis is carried out to determine which GO terms are most prevalent in a list of genes.
The experimenter can then look at GO terms of interest, find out which genes they correspond to and how many are identified in the list.
An unbiased GO term enrichment analysis on a list of genes can be carried out using the software R \citep{rprog} and the Bioconductor package GOstats \citep{GOstats}.
There are many other software packages and online services available to carry out a GO term enrichment such as the Database for Annotation, Visualization and Integrated Discovery (DAVID) \citep{DAVID,DAVID2} or the Gene Ontology Enrichment Analysis and Visualization tool (GOrilla) \citep{Gorilla,Gorilla2}.
A GO term clustering analysis is a statistical approach that can be used to follow up a GO term analysis.
Information on the relation of GO terms is used in a clustering analysis to find functionally related groups of GO terms.
The bioinformatics tool DAVID \citep{DAVID,DAVID2} can be used to carry out GO term clustering (\url{david.abcc.ncifcrf.gov/}).
\section{\label{lit:bayesian_inf}Bayesian inference}
A classical (or frequentist) statistical approach typically assumes that unknown model parameters are constants and uses the likelihood function to make inference.
An alternative methodology is a Bayesian approach \citep{Bayth,BayPriors}, named after Thomas Bayes \citep{bayes1763}.
In a Bayesian setting, a parametric model similar to the frequentist approach can be assumed but model parameters are treated as random variables.
This feature allows any \emph{prior} knowledge for a given parameter to be incorporated into inference by building a \emph{prior} distribution to describe the information available.
We are interested in the \emph{posterior} distribution, that is the probability of the parameters given the evidence.
Moreover, where $D$ is the observed data, $\theta$ is the set of parameters of interest, we are interested in calculating the \emph{posterior density} $\pi(\theta|D)$.
\emph{A priori} knowledge of $\theta$ is described by $\pi(\theta)$ and the likelihood of the data by $L(D|\theta)$. Using Bayes' theorem we obtain the following:
\begin{flalign*}
&& \pi(\theta|D)&\propto\pi(\theta)L(D|\theta)&\\
\text{or} && \text{posterior}&\propto \text{prior}\times\text{likelihood}.
\end{flalign*}
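To make the proportionality concrete, the sketch below (Python, with hypothetical numbers) evaluates $\pi(\theta)L(D|\theta)$ for a Beta prior and Binomial likelihood on a grid of $\theta$ values and normalises:

```python
from math import comb

# Hypothetical: theta is the probability a culture grows; prior Beta(2, 2),
# data D: k = 7 successes in n = 10 trials.
a, b, n, k = 2, 2, 10, 7

grid = [i / 1000.0 for i in range(1, 1000)]
prior = [t ** (a - 1) * (1 - t) ** (b - 1) for t in grid]       # pi(theta)
lik = [comb(n, k) * t ** k * (1 - t) ** (n - k) for t in grid]  # L(D|theta)
unnorm = [p * l for p, l in zip(prior, lik)]                    # pi(theta)L(D|theta)
total = sum(unnorm)
post = [u / total for u in unnorm]                              # normalised posterior

post_mean = sum(t * p for t, p in zip(grid, post))
# The exact posterior here is Beta(a + k, b + n - k) with mean
# (a + k)/(a + b + n) = 9/14, which the grid estimate should approach.
```

The grid approximation is only for illustration; in practice the normalising constant is usually intractable, which motivates the MCMC methods below.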
\subsection{\label{lit:markov_cha_mon_car}Markov chain Monte Carlo}
In Bayesian inference we are typically interested in sampling from the posterior distribution or one of its marginals, but often this is difficult.
Markov Chain Monte Carlo (MCMC) methods are used for sampling from probability distributions \citep{MCMC,MCMC2}. The Monte Carlo name describes the repeated random sampling used to compute results.
A Markov chain can be constructed with an equilibrium distribution that is the \emph{posterior} distribution of interest.
A Markov chain $\{X_{n},n\in\mathbb{N}^0\}$ is a stochastic process which satisfies the Markov property (or ``memoryless'' property):
for $A\subseteq{S}$, where $S$ is the continuous state space s.t. $X_n\in{S}$,
\begin{equation*}
P(X_{n+1}\in{A}|X_n=x,X_{n-1}=x_{n-1},...,X_0=x_{0})=P(X_{n+1}\in{A}|X_n=x),
\end{equation*}
$\forall x,x_{n-1},...,x_0\in{S}$.
The equilibrium distribution $\pi(x)$ is a limiting distribution of a Markov chain with the following two properties.
First, there must exist a distribution $\pi(x)$ which is stationary.
This condition is guaranteed when the Markov chain satisfies detailed balance
\begin{equation*}
\pi(x)p(x,y)=\pi(y)p(y,x),\qquad{\forall}x,y,\end{equation*}
where $p(x,y)$ is the transition density kernel of the chain.
Secondly, the stationary distribution $\pi(x)$ must be unique.
This is guaranteed by the ergodicity of the Markov process; see \citet{MCMC} for a definition and sufficient conditions.
\subsection{\label{lit:metropolis-has_alg}Metropolis-Hastings algorithm}
The Metropolis-Hastings algorithm \citep{met,hastings} is an MCMC method for obtaining a random sample from a probability distribution of interest (the target or stationary distribution) \citep{met_hast}.
With the following procedure a sample from the stationary distribution of the Markov chain can be obtained:\\
\\
1) Initialise counter $i = 0$ and initialise $X_0=x_0$.\\
\\
2) From the current position $X_i=x$, generate a candidate value $y^*$ from a proposal density $q(x,y)$.\\
\\
3) Calculate a probability of acceptance $\alpha(x,y^*)$, where
\begin{equation*}
\alpha(x,y)=
\begin{cases}
\min\left\{1,\frac{\pi(y)q(y,x)}{\pi(x)q(x,y)}\right\} & \text{if } \pi(x)q(x,y)>0\\
1 & \text{otherwise.}\\
\end{cases}
\end{equation*}
\\
4) Accept the candidate value with probability $\alpha(x,y^*)$ and set $X_{i+1}=y^*$, otherwise reject and set $X_{i+1}=x$.\\
\\
5) Store $X_{i+1}$ and iterate $i=i+1$.\\
\\
6) Repeat steps 2-5 until the sample size required is obtained.
\\
\\
The choice of proposal density is important in determining how many iterations are needed to converge to a stationary distribution.
There are many choices of proposal distribution \citep{MCMC}; the simplest case is the symmetric chain.
The symmetric chain involves choosing a proposal where $q(x,y)=q(y,x)$, such that the acceptance probability in step three simplifies to give the following:
\begin{equation*}
\alpha(x,y)=
\begin{cases}
\min\left\{1,\frac{\pi(y)}{\pi(x)}\right\} & \text{if } \pi(x)>0\\
1 & \text{otherwise.}\\
\end{cases}
\end{equation*}
More general cases are random walk chains and independence chains.
For a random walk chain, the proposed value at stage $i$ is given by $y^*=x_i+w_i$, where $w_i$ are i.i.d. random variables.
The distribution for $w_i$ must therefore be chosen, and is typically a Normal or Student's $t$ distribution centred at zero.
If the distribution for $w_i$ is symmetric, the random walk is a special case of symmetric chains.
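As a minimal illustration (not tied to any model in this thesis), a random walk Metropolis sampler with Normal innovations, targeting a standard Normal density, can be sketched in Python:

```python
import math
import random

def random_walk_metropolis(log_target, x0, n_iters, step_sd, seed=1):
    """Random walk Metropolis: y* = x + w with w ~ N(0, step_sd^2); the
    proposal is symmetric, so alpha = min(1, pi(y)/pi(x))."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_iters):
        y = x + rng.gauss(0.0, step_sd)            # step 2: propose y*
        log_alpha = log_target(y) - log_target(x)  # step 3: log accept ratio
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = y                                  # step 4: accept, else stay
        chain.append(x)                            # step 5: store
    return chain

# Target: standard Normal (log density up to an additive constant).
chain = random_walk_metropolis(lambda x: -0.5 * x * x, 0.0, 20000, 2.5)
sample = chain[5000:]  # discard burn-in
```

Working on the log scale avoids numerical underflow when the target density is small; `step_sd` plays the role of the tuning parameter discussed below.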
For an independence chain, the proposed transition is formed independently of the previous position of the chain, thus $q(x,y)=f(y)$ for some density $f(.)$:
\begin{equation*}
\alpha(x,y)=
\begin{cases}
\min\left\{1,\frac{\pi(y)f(x)}{\pi(x)f(y)}\right\} & \text{if } \pi(x)f(y)>0\\
1 & \text{otherwise.}\\
\end{cases}
\end{equation*}
Parameters within our proposal distribution are known as tuning parameters. They are typically used to adjust the probability of acceptance or improve mixing and must be chosen through some automatic procedure or manually, see Section~\ref{lit:convergence_iss}.
\subsection{\label{lit:gibbs_sam}Gibbs sampling}
The Gibbs sampler \citep{Gibbs_origin,gibbs2} is an MCMC algorithm for obtaining a random sample from a multivariate probability distribution of interest $\pi(\theta)$, where $\theta=(\theta^1,\theta^2,...,\theta^d)$.
Consider that the full conditional distributions $\pi(\theta^i|\theta^{1},...,\theta^{i-1},\theta^{i+1},...,\theta^{d})$, $i=1,...,d$ are available.
Where it is simpler to sample from these conditional distributions than to marginalise by integrating over a joint distribution, the Gibbs sampler is applicable.
The following procedure sequentially samples from the full conditional distribution for each parameter, resulting in a sample from the probability distribution of interest.
The algorithm is as follows:\\ \\
1) Initialise counter $i=1$ and parameters $\theta_{(0)}=(\theta_{(0)}^1,\theta_{(0)}^2,...,\theta_{(0)}^d)$.\\ \\
2) Simulate $\theta_{(i)}^1\text{ from }\theta_{(i)}^1\sim \pi(\theta^1|\theta_{(i-1)}^2,...,\theta_{(i-1)}^d)$.\\ \\
3) Simulate $\theta_{(i)}^2\text{ from }\theta_{(i)}^2\sim \pi(\theta^2|\theta_{(i)}^1,\theta_{(i-1)}^3,...,\theta_{(i-1)}^d)$.\\ \\
4) $...$\\ \\
5) Simulate $\theta_{(i)}^d \text{ from }\theta_{(i)}^d\sim \pi(\theta^d|\theta_{(i)}^1,...,\theta_{(i)}^{d-1})$.\\ \\
6) Store $\theta_{(i)}=(\theta_{(i)}^1,\theta_{(i)}^2,...,\theta_{(i)}^d)$ and iterate $i=i+1$.\\ \\
7) Repeat steps 2-6 until the sample size required is obtained.\\
\\
To ensure the full conditional distributions for each parameter in a Bayesian model are known and easy to handle, conjugate priors can be used.
A prior is conjugate when the resulting posterior belongs to the same family of distributions as the prior.
Conjugacy is induced by the choice of prior: for example, if the likelihood is Normal with known variance, a Normal prior over the mean ensures that the posterior for the mean is also Normal.
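The procedure above can be sketched in Python for a toy target whose full conditionals are known exactly: a bivariate standard Normal with correlation $\rho$, for which $\theta^1|\theta^2\sim\operatorname{N}(\rho\theta^2,1-\rho^2)$ and symmetrically for $\theta^2$ (an illustration only, not a model from this thesis):

```python
import random

def gibbs_bivariate_normal(rho, n_iters, seed=42):
    """Gibbs sampler for a bivariate standard Normal with correlation rho."""
    rng = random.Random(seed)
    t1, t2 = 0.0, 0.0                 # step 1: initialise theta_(0)
    sd = (1.0 - rho * rho) ** 0.5
    chain = []
    for _ in range(n_iters):
        t1 = rng.gauss(rho * t2, sd)  # step 2: draw theta1 | theta2
        t2 = rng.gauss(rho * t1, sd)  # step 3: draw theta2 | theta1
        chain.append((t1, t2))        # step 6: store theta_(i)
    return chain

chain = gibbs_bivariate_normal(rho=0.8, n_iters=20000)
sample = chain[2000:]  # discard burn-in
```

Unlike Metropolis-Hastings, every draw is accepted; the price is that the full conditionals must be available in closed form.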
\subsection{\label{lit:convergence_iss}Convergence issues}
To accept output from MCMC algorithms, all chains are required to have reached convergence \citep{MCMC,converge}.
Convergence is a requirement to gain unbiased samples of a posterior distribution.
Visual and statistical tests can be used to determine if chains have converged, see Section~\ref{lit:convergence_dia}.
Other issues that we must consider for MCMC sampling algorithms are choice of tuning parameters, burn-in period, sample size and thinning, if required.
Achieving high acceptance rates and good mixing requires a careful choice of the tuning parameters of the proposal distribution.
There are many schemes available for choosing tuning parameters \citep{tuning}; typically they are determined during a burn-in period.
The burn-in period is a number of iterations which an algorithm must be run for in order to converge to equilibrium.
Sample size depends on how many iterations from the posterior are required for both inference and testing convergence.
Thinning involves retaining only every $k$th iteration of an MCMC algorithm's output, in order to give less dependent realizations from the posterior distribution.
Extending the length of the burn-in period, sample size and thinning leads to increased computational time.
With large data sets and models with a large number of parameters, computation time can become a problem.
With a Bayesian modelling approach, the computational time associated with MCMC can be much longer than that of a simpler least squares approach.
This problem is exacerbated when coupled with poor mixing and is likely to lead the experimenter to simplify their modelling procedure, consequently sacrificing the quality of inference, in order to complete their analysis within a shorter time frame.
\subsection{\label{lit:convergence_dia}Convergence diagnostics}
To determine whether chains are true samples from their target distributions, tests for lack of convergence or mixing problems \citep{MCMC,converge} must be carried out.
Typically multiple tests are used to give confidence that the output has converged.
There are many convergence diagnostics for testing chains for convergence, for example the Heidelberg-Welch \citep{Heidelberger} and Raftery-Lewis \citep{Raftery} tests.
For many convergence diagnostics, summary statistics such as p-values can be used to decide whether convergence has been reached.
Visual inspection of diagnostic plots can also be used to determine if convergence has been reached.
Trace plots are used to check that samples from the posterior distribution have settled within a fixed region of plausible values and are mixing well, rather than still drifting across the range.
ACF (auto-correlation function) plots are used to determine serial correlation between sample values of the posterior distribution in order to check for the independence of observations.
Density plots are used to check whether a sample posterior distribution is restricted by the choice of prior distribution and determine whether choice of prior is appropriate.
Running multiple instances of our MCMC algorithm and comparing chains can also help us decide whether our chains have converged.
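To illustrate what an ACF plot summarises, the sample autocorrelation can be computed directly from a trace; an AR(1) series with coefficient $\phi$, a simple stand-in for serially correlated MCMC output, has autocorrelation approximately $\phi^{\mathrm{lag}}$:

```python
import random

def acf(chain, max_lag):
    """Sample autocorrelation function of a trace at lags 0..max_lag."""
    n = len(chain)
    mean = sum(chain) / n
    c0 = sum((x - mean) ** 2 for x in chain) / n
    out = []
    for lag in range(max_lag + 1):
        ck = sum((chain[i] - mean) * (chain[i + lag] - mean)
                 for i in range(n - lag)) / n
        out.append(ck / c0)
    return out

# Synthetic AR(1) trace: x_t = phi * x_{t-1} + e_t with e_t ~ N(0, 1).
rng = random.Random(0)
phi, x, trace = 0.9, 0.0, []
for _ in range(50000):
    x = phi * x + rng.gauss(0.0, 1.0)
    trace.append(x)
rho = acf(trace, 5)
```

Slowly decaying autocorrelations like these indicate that thinning, or a longer run, is needed to obtain approximately independent posterior draws.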
\subsection{\label{lit:computer_lan}Computer programming}
To ensure results and inference are reproducible, it is useful to create a computer package so that an analysis can be made in the future without all the code required being re-written.
Using freely available software such as the statistical program R \citep{rprog}, scripts and commands can be built and shared for easy implementation of code.
Where fast inference is of importance, the choice of programming language is an important consideration.
The software package R can also be used as an interface for running code in the C programming language.
Statistical code written in the C programming language is typically much faster than using standard R functions or code written in many other programming languages \citep{ccode}.
\section{\label{lit:hierarchical_mod}Hierarchical modelling}
Hierarchical modelling is used to describe the structure of a problem where we believe some population-level distribution exists, describing a set of unobserved parameters \citep{BayPriors}.
Examples include pupils nested within classes, children nested within families and patients nested within hospitals.
With the pupil-class relationship (2 level-hierarchy), for a given class there may be a number of pupils.
We may believe that by being in the same class, pupils will perform similarly in an exam as they are taught by the same teacher.
Further, we may have a pupil-class-school relationship (3 level-hierarchy).
For a given school, multiple classes exist and in each class there is a number of pupils.
We may believe that being within the same school, classes would perform similarly in an exam as they share the same head teacher or school principal.
Hierarchical modelling is used to describe a parent/child relationship \citep{GelmanMultilevel}. Repeating the parent/child relationship allows multiple levels to be described.
Where a hierarchical structure is known to exist, describing this experimental structure avoids confounding of effects with other sources of variation.
There are many different hierarchical models available, depending on what the experimenter is most interested in \citep{mixedeffects,BayHi}.
Sharing of information can be built into hierarchical models through shared parameters.
Allowing parameters to vary at more than one level allows an individual child (subject) effect to be examined.
A typical frequentist hierarchical model is built with random effects and has limited distributional assumptions available, whereas a Bayesian hierarchical model is flexible to describe various distributions \citep{Gelmanprior}, see Section~\ref{lit:distributional_ass}.
Plate diagrams allow hierarchical models to be represented graphically \citep{DAGbook,oldDAGbook}.
Nodes (circles) are used to describe parameters and plates (rectangles) to describe repeating nodes.
The use of multiple plates allows nesting to be described.
\subsection{\label{lit:distributional_ass}Distributional assumptions}
The flexibility of the Bayesian paradigm allows for models to be built that are otherwise not practical in the frequentist paradigm.
More appropriate assumptions can therefore be made to better describe experimental structure and variation in a Bayesian setting \citep{BayPriors}.
For example, inference for a hierarchical \emph{t}-distribution or hierarchical variable selection model in a frequentist context is difficult in practice without using MCMC methods, which are a more natural fit with Bayesian approaches.
The use of prior distributions allows information from the experimenter and experimental constraints to be incorporated, for instance if a parameter is known to be strictly positive then a positive distribution can be used to enforce this.
Truncation can be used to avoid searching areas of the posterior with extremely low probability.
\subsection{\label{lit:indicator var}Indicator variables}
Indicator variables are used in variable selection models to describe binary variables \citep{indicator}.
A Bernoulli distributed indicator variable can take the value $0$ or $1$ to indicate the absence or presence of an effect and can be used to describe binary outcomes such as gender.
\subsection{\label{lit:t_dist}The three parameter \emph{t}-distribution }
The Student's \emph{t}-distribution has one parameter, namely the degrees of freedom parameter $\nu$ which controls the kurtosis of the distribution \citep{tdist1}.
The Student's \emph{t}-distribution is as follows:
\begin{equation} \label{eq:t_dist}
t_1(x;\nu)=\frac{\Gamma \left(\frac{\nu+1}{2} \right)} {\sqrt{\nu\pi}\,\Gamma \left(\frac{\nu}{2} \right)} \left(1+\frac{x^2}{\nu} \right)^{-\frac{\nu+1}{2}}, x\in\mathbb{R},\nu\in\mathbb{R}^+.
\end{equation}
Decreasing the degrees of freedom $\nu$ has the effect of increasing the heaviness of the distribution's tails.
Adding an additional location parameter $\mu$ and scale parameter $\sigma$ allows further flexibility with the shape of the distribution \citep{tdist}.
The $\sigma$ scale parameter does not correspond to a standard deviation but does control the overall scale of the distribution.
The three parameter \emph{t}-distribution (or scaled \emph{t}-distribution) is then as follows:
\begin{equation*}
t_3(x;\mu,\nu,\sigma)=\frac{1}{\sigma} t_1\left(\frac{(x - \mu)}{\sigma}; \nu\right), x\in\mathbb{R},\nu\in\mathbb{R}^+,
\end{equation*}
where $t_1$ is given in (\ref{eq:t_dist}).
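A direct Python transcription of these densities (standard library only; the log-gamma function avoids overflow for large $\nu$):

```python
import math

def t1_pdf(x, nu):
    """Student's t density t_1(x; nu), as in the equation above."""
    log_c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
             - 0.5 * math.log(nu * math.pi))
    return math.exp(log_c) * (1 + x * x / nu) ** (-(nu + 1) / 2)

def t3_pdf(x, mu, nu, sigma):
    """Three-parameter t: t_3(x; mu, nu, sigma) = t_1((x - mu)/sigma; nu)/sigma."""
    return t1_pdf((x - mu) / sigma, nu) / sigma

# nu = 1 recovers the standard Cauchy density, 1 / (pi (1 + x^2)).
cauchy_at_zero = t1_pdf(0.0, 1.0)
```

Setting $\mu=0$ and $\sigma=1$ recovers $t_1$, and the density is symmetric about $\mu$.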
\section{\label{lit:generalised_log}Generalisations of the logistic growth model}
Where more flexibility is required, the logistic growth model (\ref{eq_det}) can be extended by adding parameters \citep{analysisoflogistic,theoryoflogisticgro}.
A common extension of the logistic growth model is Richards' growth model \citep{GenLog,logisticrevisited}, which adds a single parameter for changing the shape of growth.
A more general case than both the logistic and Richards' growth models is the generalised logistic growth model.
Similarly to the logistic growth model (\ref{eq_det}) and its stochastic counterpart (\ref{eq_det_sde_2}), these more general equations can be extended to diffusion equations if required.
\subsection{\label{lit:richards_gro}Richards' growth model}
Richards' growth model \citep{GenLog} adds an extra parameter $\beta$ to the logistic growth equation (\ref{eq_det}). The parameter $\beta$ affects where maximum growth occurs and consequently the relative growth rate \citep{analysisoflogistic}.
Richards' growth model is as follows:
\begin{align}\label{eq:richards}
\frac{dx_t}{dt}&=rx_t\left[1-\left(\frac{x_t}{K}\right)^\beta\right].
\end{align}
The ODE has the following analytic solution:
\begin{align*}
x_t&=\frac{K}{(1+Qe^{-r\beta t})^{\frac{1}{\beta}}},\\
\text{where }Q&=\left[\left(\frac{K}{P}\right)^\beta-1\right]e^{r\beta t_0},
\end{align*}
$P=x_{t_0}$ is the initial population size, $\beta$ is a positive real number and $t\geq{t_0}$.
When $\beta=1$, Richards' growth model is equivalent to the logistic growth equation.
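A Python sketch of the analytic solution; here $P$ is assumed to denote the initial population size $x_{t_0}$, and setting $\beta=1$ should recover the logistic solution:

```python
import math

def richards(t, r, K, beta, P, t0=0.0):
    """Analytic solution of Richards' model with x(t0) = P; note the rate r
    appears alongside beta in the exponent of Q."""
    Q = ((K / P) ** beta - 1.0) * math.exp(r * beta * t0)
    return K / (1.0 + Q * math.exp(-r * beta * t)) ** (1.0 / beta)

def logistic(t, r, K, P, t0=0.0):
    """Analytic logistic growth solution, the beta = 1 special case."""
    Q = (K / P - 1.0) * math.exp(r * t0)
    return K / (1.0 + Q * math.exp(-r * t))
```

The solution satisfies $x(t_0)=P$ and approaches the carrying capacity $K$ as $t\rightarrow\infty$.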
\subsection{\label{lit:generalised_log_model}Generalised logistic growth model}
The generalised logistic growth model adds extra parameters $({\alpha},{\beta},{\gamma})$ to the logistic growth equation (\ref{eq_det}). The extra parameters $({\alpha},{\beta},{\gamma})$ affect where maximum growth occurs, the relative growth rate \citep{analysisoflogistic} and give a greater selection of curve shapes than the Richards' growth model (\ref{eq:richards}).
The generalised logistic growth model is as follows:
\begin{align}
\frac{dx_t}{dt}&=rx_t^{\alpha}\left[1-\left(\frac{x_t}{K}\right)^\beta\right]^\gamma,
\end{align}
where $({\alpha},{\beta},{\gamma})$ are positive real numbers and $t\geq{t_0}$.
The generalised logistic growth model cannot in general be integrated to give an analytical solution for $x_t$.
When ${\alpha}=1$, ${\beta}=1$ and $ {\gamma}=1$, the generalised logistic growth model is equivalent to the logistic growth equation.
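Since an analytic solution is generally unavailable, a numerical scheme such as Euler's method can be used; the Python sketch below checks the $\alpha=\beta=\gamma=1$ case against the analytic logistic solution:

```python
import math

def gen_logistic_euler(r, K, alpha, beta, gamma, x0, t_end, n_steps):
    """Euler integration of dx/dt = r x^alpha (1 - (x/K)^beta)^gamma."""
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x += dt * r * x ** alpha * (1.0 - (x / K) ** beta) ** gamma
    return x

# With alpha = beta = gamma = 1 the model is plain logistic growth, so the
# numerical solution should track the analytic logistic solution.
r, K, x0, T = 1.0, 10.0, 0.5, 5.0
numeric = gen_logistic_euler(r, K, 1, 1, 1, x0, T, 100000)
analytic = K / (1.0 + (K / x0 - 1.0) * math.exp(-r * T))
```

A small step size keeps the $O(\Delta t)$ Euler error well below the scale of the solution.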
\section{\label{lit:state_spa_mod}State space models}
A state space model describes the probabilistic dependence between a measurement process $Y_t$ and a state process $X_t$ \citep{dynamicmodels,statespace2}.
The most basic case of a state space model is as follows:
\begin{align}\label{eq:state_space}
\begin{split}
\left(X_t|X_{t-1}=x_{t-1}\right)&\sim f(t,x_{t-1}),\\
\left(Y_t|X_{t}=x_t\right)&\sim g(t,x_t),
\end{split}
\end{align}
where $f$ and $g$ are known.
A state space model with a linear Gaussian structure has the advantage of allowing us to carry out more efficient MCMC by integrating out latent states with a Kalman filter, instead of imputing all states.
The probabilistic representation and the ability to incorporate prior information makes Bayesian inference an appropriate choice for parameter estimation of a state space model.
State space representation provides a general framework for analysing stochastic dynamical systems observed through a stochastic process.
A state space model allows us to include both an internal state variable and an output variable in our model.
The state-space representation of a stochastic process with measurement error can be given by (\ref{eq:state_space}) where $f$ is the transition density of the process and $g$ is the assumed measurement error.
Inference methods are also readily available to carry out estimation of state space models.
\subsection{\label{lit:sde}Stochastic differential equations}
An ordinary differential equation (ODE) can be used to model a system of interest.
For systems with an inherently stochastic nature we require a stochastic model.
A stochastic differential equation (SDE) is a differential equation where one or more terms include a stochastic process \citep{wilkinson2012stochastic,sdebook}.
An SDE differs from an ODE by the addition of a diffusion term, typically driven by a Wiener process, used to describe the intrinsic noise of a given process.
A Wiener process (or standard Brownian motion) is a continuous-time stochastic process.
A Wiener process $W(t)$, $t\geq{0}$, has the following three properties \citep{wiener}:\\
1) $W(0)=0$.\\
2) The function $t\rightarrow W(t)$ is almost surely everywhere continuous.\\
3) $W(t)$ has independent increments with $W(t)-W(s)\sim \operatorname{N}(0, t-s)$, for $0 \leq s < t$.\\
\\
Intrinsic noise from a Wiener process perturbs the system dynamics of a differential equation.
The intrinsic noise is able to propagate through the process, unlike measurement noise.
Rather than inappropriately modelling intrinsic noise as measurement noise, an SDE allows system and measurement noise to be modelled separately.
The simplest case of a stochastic differential equation is of the form:
\begin{equation*}
dX(t)=\mu dt+\sigma dW(t),
\end{equation*}
where $W$ denotes a Wiener process.
Parameters $\mu$ and $\sigma$ may depend on time and correspond to the drift and diffusion coefficients respectively.
The transition density of a stochastic process describes the movement from one state to the next and can be found from the solution of the process.
\subsection{\label{lit:em}The Euler-Maruyama method}
The Euler-Maruyama method provides an approximate numerical solution of an SDE \citep{embook}.
Consider a stochastic process of the form:
\begin{equation*}
dX_t=f(X_t)dt+g(X_t)dW_t,
\end{equation*}
where functions $f$ and $g$ are given and $W_t$ is a Wiener process.
Given an initial condition $X_0=x_0$ we can build an Euler-Maruyama approximation of $X$ over an interval $[0,T]$.
The Markov chain $Y$ defined below is an Euler-Maruyama approximation to the true solution of $X$.
First we set the initial condition $Y_0=x_0$.
Next, the interval $[0,T]$ is partitioned into $N$ equal subintervals of width $\Delta{t}>0$.
The Euler-Maruyama approximation is then recursively defined for $0\leq{i}\leq{N-1}$ as follows:
\begin{equation*}
Y_{i+1}=Y_{i}+f(Y_i)\Delta{t}+g(Y_i)\Delta{W_i},
\end{equation*}
where $\Delta{W_i}={W_{t_{i+1}}}-{W_{t_i}}\sim\operatorname{N}(0,\Delta{t})$.
The Euler-Maruyama approximation $Y$ becomes a better approximation to the true process $X$ as $N$ increases and the step size $\Delta{t}$ correspondingly decreases.
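A Python sketch of this recursion; with $g\equiv0$ the scheme reduces to the deterministic Euler method, which gives a simple sanity check against a known ODE solution:

```python
import random

def euler_maruyama(f, g, x0, T, N, seed=0):
    """Euler-Maruyama approximation of dX_t = f(X_t)dt + g(X_t)dW_t on [0, T]."""
    rng = random.Random(seed)
    dt = T / N
    y = x0
    path = [y]
    for _ in range(N):
        dW = rng.gauss(0.0, dt ** 0.5)   # Delta W_i ~ N(0, dt)
        y = y + f(y) * dt + g(y) * dW    # Y_{i+1} = Y_i + f(Y_i)dt + g(Y_i)dW_i
        path.append(y)
    return path

# With zero diffusion, dX = -X dt from X_0 = 1 should approximate exp(-T).
path = euler_maruyama(lambda x: -x, lambda x: 0.0, 1.0, T=1.0, N=100000)
```

With a nonzero diffusion function `g`, repeated calls with different seeds give independent realisations of the approximate process.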
\subsection{\label{lit:kalman_fil}Kalman filter}
The Kalman filter \citep{kalmanoriginal,kalman} is a recursive algorithm that can be used to estimate the state of a dynamic system from a series of incomplete and noisy measurements.
The main assumptions of the Kalman filter are that the underlying system is a linear dynamical system and that the noise has known first and second moments. Gaussian noise satisfies the second assumption, for example.
Inference for a state space model (\ref{eq:state_space}) (see Section~\ref{lit:state_spa_mod}), where both $f$ and $g$ are Gaussian, can be carried out using a Kalman filter.
If all noise is zero-mean, uncorrelated and white, then the Kalman filter represents an optimal linear filter \citep{kalman_optimal}, even if the noise is not Gaussian.
An application of the Kalman filter is given in Section~\ref{app:kalman_fil} of the Appendix.
The Kalman filter algorithm is derived as follows:
$X_{{t_{i}}}$ and $Y_{t_{i}}$ are the state and measurement processes respectively.
$w_t$ and $u_t$ are the state and measurement errors respectively, where $w_t$ and $u_t$ are i.i.d., $E[w_t]=0$, $E[u_t]=0$, $E[w_t{w_t}^T]=W_t$ and $E[u_t{u_t}^T]=U_t$.
The Kalman filter can be extended where $w_t$ and $u_t$ are not zero mean.
The unobserved latent process is driven by:
\begin{equation*}
X_{{t_{i}}}|X_{{t_{i-1}}}\sim\operatorname{N}(G_{{t_{i}}}X_{{t_{i-1}}},W_{t_{i}})
\end{equation*}
and the measurement error distribution, relating the latent variable to the observed is given by
\begin{equation*}
Y_{t_{i}}|X_{{t_{i}}}\sim\operatorname{N}(F^T_{t_i}X_{{t_{i}}},U_{t_i}),
\end{equation*}
where matrices $F_{t_i}$, $G_{t_i}$, $U_{t_i}$ and $W_{t_i}$ are all given.
Now, suppose that:
\begin{equation*}
X_{t_{i-1}}|Y_{1:{t_{i-1}}}\sim \operatorname{N}(m_{t_{i-1}},C_{t_{i-1}}).
\end{equation*}
Incrementing time with $X_{t_i}=G_{t_i}X_{t_{i-1}}+w_{t_{i}}$ and conditioning on $Y_{1:{t_{i-1}}}$ gives:
\begin{align*}
X_{t_{i}}|Y_{1:{t_{i-1}}}&=G_{t_i}X_{t_{i-1}}|Y_{1:{t_{i-1}}}+w_{t_i}|Y_{1:{t_{i-1}}}\\
&=G_{t_i}X_{t_{i-1}}|Y_{1:{t_{i-1}}}+w_{t_{i}},
\end{align*}
as $w_{t_i}$ is independent of $Y_{1:{t_{i-1}}}$.
We can then show the following using standard multivariate theory:
\begin{equation*}
X_{t_{i}}|Y_{1:{t_{i-1}}}\sim \operatorname{N}(a_{t_{i}},R_{t_{i}}),
\end{equation*}
where $a_{t_{i}}=G_{{t_{i}}}m_{{t_{i-1}}}$ and $R_{t_{i}}=G_{{t_{i}}}C_{{t_{i-1}}}G_{t_{i}}^T+W_{t_{i}}$.
Similarly, as $Y_{t_i}=F_{t_i}^TX_{t_{i}}+u_{t_i}$, conditioning on $Y_{1:{t_{i-1}}}$ gives:
\begin{align*}
Y_{t_i}|Y_{1:{t_{i-1}}}&=F_{t_i}^TX_{t_{i}}|Y_{1:{t_{i-1}}}+u_{t_i}|Y_{1:{t_{i-1}}}\\
&=F_{t_i}^TX_{t_{i}}|Y_{1:{t_{i-1}}}+u_{t_i},
\end{align*}
as $u_{t_i}$ is independent of $Y_{1:{t_{i-1}}}$.
We can then show the following using standard multivariate theory:
\begin{equation*}
Y_{t_{i}}|Y_{1:{t_{i-1}}}\sim\operatorname{N}(F^T_{t_i}{a_{t_i}},F^T_{t_i}{R_{t_i}}F_{t_i}+U_{t_i}).
\end{equation*}
Conditional on $Y_{1:{t_{i-1}}}$, $X_{t_{i}}$ and $Y_{t_{i}}$ are therefore jointly Gaussian with the following mean and covariance:\\
\begin{equation*}
\begin{pmatrix}
X_{{t_{i}}} \\
Y_{{t_{i}}}
\end{pmatrix}
\sim
MVN\left(
\begin{pmatrix}
a_{t_i} \\
F^T_{t_i}a_{t_i}
\end{pmatrix},
\begin{pmatrix}
R_{t_i} & {R_{t_i}}F_{t_i}\\
F^T_{t_i}{R_{t_i}} & F^T_{t_i}{R_{t_i}}F_{t_i}+U_{t_i}
\end{pmatrix}
\right).
\end{equation*}
Finally, the following standard result for conditioning in the multivariate normal distribution is used:
\begin{align*}
\text{if }\begin{pmatrix}
Y_1 \\
Y_2
\end{pmatrix}
&\sim
MVN\left(
\begin{pmatrix}
\mu_1 \\
\mu_2
\end{pmatrix},
\begin{pmatrix}
\Sigma_{11} & \Sigma_{12}\\
\Sigma_{21} & \Sigma_{22}
\end{pmatrix}
\right),\\
\text{then }Y_1| Y_2=y_2
&\sim
MVN\left(
\mu_1+\Sigma_{12}\Sigma^{-1}_{22}(y_2-\mu_2),\Sigma_{11}-\Sigma_{12}\Sigma^{-1}_{22}\Sigma_{21}
\right),
\end{align*}
to obtain the following:
\begin{align}
\label{eq:recursive}
\begin{split}
X_{{t_{i}}}|Y_{1:{{t_{i}}}}&\sim \operatorname{N}(m_{t_{i}},C_{t_{i}}),\\
\text{where } m_{t_{i}}&=a_{t_{i}}+R_{t_{i}}F_{t_i}(F^{T}_{t_i}R_{{t_{i}}}F_{t_i}+U_{t_i})^{-1}[Y_{t_{i}}-F^{T}_{t_i}a_{t_{i}}]\\
\text{and }C_{t_{i}}&=R_{t_{i}}-R_{t_{i}}F_{t_i}(F^T_{t_i}R_{t_{i}}F_{t_i}+U_{t_i})^{-1}F^{T}_{t_i}R_{t_{i}}.
\end{split}
\end{align}
Parameters $m_0$ and $C_0$ must be initialised first, then using the equations in (\ref{eq:recursive}), $m_{t_i}$ and $C_{t_i}$ can be recursively estimated.
Typically, the Kalman filter is used to make inference for a hidden state process, but it can be used to reduce computational time in algorithms for inferring process hyper-parameters by recursively computing the marginal likelihood $\pi(y_{t_{1:N}})$ \citep{dynamicmodels}, where
\begin{equation*}
\pi(y_{t_{1:N}})=\prod^N_{i=1}\pi(y_{t_{i}}|y_{t_{1:(i-1)}})
\end{equation*}
and $\pi(y_{t_{i}}|y_{t_{1:(i-1)}})=\int_{X}\pi(y_{t_{i}},x_{t_{i}}|y_{t_{1:(i-1)}})dx_{t_{i}}=\int_{X}\pi(y_{t_{i}}|x_{t_{i}})\pi(x_{t_{i}}|y_{t_{1:(i-1)}})dx_{t_{i}}$ gives a tractable Gaussian integral.
The procedure for computing the marginal likelihood $\pi(y_{t_{1:N}})$ using the Kalman filter algorithm is as follows:\\
\\
1) Initialise with prior knowledge for $X_{0}$ and set $i=1$.\\
\\
2) Prediction step from $X_{t_{i-1}}|Y_{1:{t_{i-1}}}$ to $X_{t_i}|Y_{1:{t_{i-1}}}$ (giving $\pi (x_{t_i}|y_{1:t_{i-1}})$).\\
\\
3) Calculate and store $\pi (y_{t_i}|y_{1:t_{i-1}})$.\\
\\
4) Update step to give $X_{t_i}|Y_{1:{t_i}}$, then iterate $i=i+1$.\\
\\
5) Repeat steps 2-4 (and compute $\pi (y_{t_{1:N}})$ ).\\
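The filtering recursion and marginal likelihood computation above can be sketched in Python. This is a minimal univariate sketch: the matrices $G_{t_i}$, $F_{t_i}$, $W_{t_i}$ and $U_{t_i}$ are taken to be time-invariant scalars \texttt{g}, \texttt{f}, \texttt{w}, \texttt{u}, an assumption made purely for illustration.

```python
import math

def kalman_filter(ys, g, f, w, u, m0, c0):
    """Scalar Kalman filter for the state space model
    X_t = g*X_{t-1} + N(0, w),  Y_t = f*X_t + N(0, u).
    Returns filtered means m_t, variances C_t and the log marginal
    likelihood log pi(y_{1:N})."""
    m, c = m0, c0
    ms, cs, loglik = [], [], 0.0
    for y in ys:
        # Prediction step: X_t | y_{1:t-1} ~ N(a, r)
        a = g * m
        r = g * c * g + w
        # Marginal likelihood contribution: Y_t | y_{1:t-1} ~ N(f*a, f*r*f + u)
        s = f * r * f + u
        loglik += -0.5 * (math.log(2.0 * math.pi * s) + (y - f * a) ** 2 / s)
        # Update step: X_t | y_{1:t} ~ N(m, c)
        k = r * f / s                     # Kalman gain
        m = a + k * (y - f * a)
        c = r - k * f * r
        ms.append(m)
        cs.append(c)
    return ms, cs, loglik
```

Accumulating the one-step-ahead predictive densities inside the loop is exactly the factorisation $\pi(y_{t_{1:N}})=\prod^N_{i=1}\pi(y_{t_{i}}|y_{t_{1:(i-1)}})$ used for hyper-parameter inference.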
\subsection{\label{lit:LNA}Linear noise approximation}
The linear noise approximation (LNA) \citep{kurtz1,kurtz2,van} reduces a non-linear SDE to a linear SDE with additive noise, which can be solved \citep{LNA,komorowski}.
The LNA assumes the solution of a diffusion process $Y_t$ can be written as ${Y_t = v_t+Z_t}$ (a deterministic part $v_t$ and stochastic part $Z_t$), where $Z_t$ remains small for all $t\in\mathbb{R}_{\geq 0}$.
The LNA is useful when a tractable solution to a SDE cannot be found.
Typically the LNA is used to reduce an SDE to an Ornstein-Uhlenbeck process, which can be solved explicitly.
Ornstein-Uhlenbeck processes are Gaussian; time discretising the resulting LNA therefore gives a linear Gaussian state space model with an analytically tractable transition density.
The LNA can be viewed as a first order Taylor expansion of an approximating SDE about a deterministic solution (higher order approximations are possible \citep{gardiner2010stochastic}).
We can also view the LNA as an approximation of the chemical Langevin equation \citep{wallace}.
Applications of the LNA to non-linear SDEs are given in Section~\ref{sec:LNAM}~and~\ref{sec:LNAA}.
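For concreteness, the general shape of the approximation can be summarised as follows. This is a sketch for a generic SDE with drift $\alpha$ and diffusion matrix $\beta$; the notation here is illustrative rather than tied to a particular model in this thesis.

```latex
% For dX_t = \alpha(X_t)\,dt + \sqrt{\beta(X_t)}\,dW_t, write X_t = v_t + Z_t,
% where v_t solves the deterministic ODE and Z_t satisfies a linear SDE:
\begin{align*}
\frac{dv_t}{dt} &= \alpha(v_t), \\
dZ_t &= J_t Z_t\,dt + \sqrt{\beta(v_t)}\,dW_t,
\qquad J_t = \left.\frac{\partial \alpha}{\partial x}\right|_{x = v_t},
\end{align*}
% so that Z_t is a (generally time-inhomogeneous) Ornstein-Uhlenbeck process,
% Gaussian with mean and variance obtained by solving a pair of ODEs.
```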
\begin{comment}
A LNA of an SDE $dX_t$ can be derived as follows:
firstly assume $X_t$ can be split into a stochastic part $Z_t$ and a deterministic part $v_t$, where $X_t=v_t+Z_t$ and consequently $dX_t=dv_t+dZ_t$.
Typically, the drift component of $dX_t$ is chosen as $dv_t$ and the solution of $dv_t$ used as $v_t$.
With derivations for $dX_t$ and $dv_t$, $dZ_t$ can be obtained using $dZ_t=dX_t-dv_t$.
The LNA is then made by linearising $dZ_t$, through approximation, to give $\hat{Z}_t$, which we can solve.
Using the derivations of $v_t$ and $\hat{Z}_t$, the solution to the LNA SDE $\hat{X}_t$ can be written as $\hat{X}_t=v_t+\hat{Z}_t$.
\end{comment}
\end{chapter}
\section{\label{case:intro}Introduction}
In this chapter, the new Bayesian models developed in Chapter~\ref{cha:modelling_den_int} are applied to previously analysed QFA screen data.
The one-stage and two-stage Bayesian approaches are compared with the two-stage \citet{QFA1} and random effects model (REM) approaches for a QFA screen comparison designed to inform the experimenter about telomere biology in \emph{S. cerevisiae}.
After comparing the approaches developed, the one-stage Bayesian joint hierarchical model (JHM) is found to best model a QFA screen comparison.
The JHM is then applied to further examples of \emph{S. cerevisiae} QFA screen data to demonstrate the JHM's ability to model different experiments.
Two extensions of the JHM are then considered, to account for a batch effect and a transformation effect within a QFA screen comparison.
Fitness plots for the further case studies and extensions of the JHM are included for further investigation and research.
The new one-stage Bayesian QFA will be used at first to help identify genes that are related to telomere activity, but the analysis is general enough to be applicable to any high-throughput study of arrayed microbial cultures (including experiments such as drug screening).
\section{\label{sec:ura3_cdc13-1_27_27}\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C suppressor/enhancer data set}
The following analysis is for a QFA experiment comparing query \mbox{\emph{cdc13-1}} strains with control \emph{ura3$\Delta$} strains at~${27}^{\circ}$C, previously analysed by \citet{QFA1}, to identify genes that show evidence of genetic interaction with the query mutation \mbox{\emph{cdc13-1}}.
The ability of the Cdc13 protein produced by \mbox{\emph{cdc13-1}} strains to cap telomeres is reduced at temperatures above $26\,^{\circ}\mathrm{C}$ \citep{cdc131}, inducing a fitness defect.
The experimental data used are freely available at \sloppy\url{http://research.ncl.ac.uk/colonyzer/AddinallQFA/}.\sloppy
\cite{QFA1} present a list of interaction strengths and p-values for significance of interaction, together with a fitness plot for this experiment. We will compare lists of genes classified as interacting with \mbox{\emph{cdc13-1}} by the non-hierarchical frequentist approach presented by \cite{QFA1} and the hierarchical REM with those classified as interacting by our hierarchical Bayesian approaches.
4,294 non-essential $\emph{orf}\Delta$s were selected from the yeast deletion collection and used to build the corresponding double deletion query and control strains.
Independent replicate culture growth curves (time course observations of cell density) were captured for each query and control strain.
The median and range for the number of replicates per $\emph{orf}\Delta$ are 8 and $[8,144]$ respectively.
There are 66 $\emph{orf}\Delta$ strains that have greater than 8 replicates (for both the control and query screen).
More replicates have been tested for this subset of $\emph{orf}\Delta$s as a quality control measure to check if 8 replicates are sufficient to generate a stable fitness summary for each $\emph{orf}\Delta$. $\emph{orf}\Delta$s with high replicate number include a small number of mutations whose phenotypes are well understood in a telomere-defective background, together with some controls and a range of mutations randomly selected from the deletion library.
Including genotypes with well characterised phenotypes allows us to leverage expert, domain-specific knowledge to assess the quality of experimental results.
The modelling approaches considered can accommodate different numbers of replicates for each $\emph{orf}\Delta$, so we do not expect systematic bias from the number of repeats.
The range for the number of time points for growth curves captured in the control experiment is $[7,22]$ and $[9,15]$ in the query experiment.
An example of raw \mbox{\emph{cdc13-1}}~${27}^{\circ}$C time series data is given in Figure~\ref{app:appendix_label}.
As in the \cite{QFA1} analysis, a list of 159 genes are stripped from our final list of genes for biological and experimental reasons.
Prior hyper-parameters for the models used throughout this chapter are provided in Table~\ref{tab:SHM_priors}.
\hl{Although our priors are informed by frequentist estimates of historical QFA data sets, we ensure our priors are sufficiently diffuse that all plausible parameter values are well represented and that any given QFA data set can be fit appropriately.}
\hl{The Heidelberg-Welch \citep{Heidelberger}\sloppy and Raftery-Lewis \sloppy\citep{Raftery}\sloppy convergence diagnostics are used to determine whether convergence has been reached for all parameters.
Posterior and prior densities are compared by eye to ensure that sample posterior distributions are not restricted by the choice of prior distribution.
ACF (auto-correlation) plot diagnostics are checked visually to ensure that serial correlation between sample values of the posterior distribution is low, ensuring that the effective sample size is similar to the actual sample size.}
To assess how well the logistic growth model \hl{describes cell density observations} we generate plots of raw data with fitted curves overlaid.
Figures~\ref{fig:diagABC}A, \ref{fig:diagABC}B and \ref{fig:diagABC}C show time series data for three different mutant strain repeats at~${27}^{\circ}$C\hl{, together with fitted logistic curves}.
We can see that each $\emph{orf}\Delta$ curve fit well represents the repeat level estimates as each $\emph{orf}\Delta$ level (red) curve lies in the region where most repeat level (black) curves are found.
Sharing information between $\emph{orf}\Delta$s will also affect each $\emph{orf}\Delta$ curve fit, increasing the probability of the $\emph{orf}\Delta$ level parameters being closer to the population parameters.
Comparing Figures~\ref{fig:diagABC}A, \ref{fig:diagABC}B and \ref{fig:diagABC}C shows that the separate hierarchical model (SHM) \hl{captures heterogeneity at} both the repeat and $\emph{orf}\Delta$ levels.
Figure~\ref{fig:diagABC}D demonstrates the hierarchy of information about \hl{the logistic model parameter $K$} generated by the \hl{SHM} for the $\emph{rad50}\Delta$ control mutant strain (variation decreases going from population level down to repeat level).
Figure~\ref{fig:diagABC}D \hl{also shows that the posterior distribution for $K$ is much more peaked than the prior, demonstrating that we have learned about the distribution of both the population and $\emph{orf}\Delta$ parameters.}
Learning more about the repeat level parameters \hl{reduces} the variance of our $\emph{orf}\Delta$ level estimates.
The posterior for the first time-course repeat \hl{ $K_{clm}$ } parameter shows exactly how much uncertainty there is for this particular repeat in terms of carrying capacity $K$.
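The fitted curves discussed above come from the logistic growth model. A minimal sketch of the standard logistic solution is given below, with carrying capacity $K$, growth rate $r$ and inoculum density $P$ following the growth-parameter notation used here; the closed-form expression is the textbook solution of the logistic ODE, assumed for illustration.

```python
import math

def logistic_growth(t, K, r, P):
    """Standard closed-form solution of the logistic ODE
    dx/dt = r*x*(1 - x/K) with initial cell density x(0) = P:
    x(t) = K*P*exp(r*t) / (K + P*(exp(r*t) - 1))."""
    ert = math.exp(r * t)
    return (K * P * ert) / (K + P * (ert - 1.0))
```

The curve starts at $P$, grows at rate $r$ and saturates at $K$, which is why $(K,r)$ (or summaries such as $MDR\times MDP$ derived from them) serve as fitness measures.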
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img/diagABC}
\caption[Separate hierarchical model logistic growth curve fits]{
Separate hierarchical model (SHM) logistic growth curve fits.
Data for $\emph{orf}\Delta$ repeats have been plotted in A, B and C, with SHM fitted curves overlaid in black for repeat level parameters and red for the $\emph{orf}\Delta$ level parameter fit.
A) SHM scatter plot for 144 \emph{his3}$\Delta$ \emph{ura3$\Delta$} repeats at~${27}^{\circ}$C.
B) SHM scatter plot for 48 \emph{rad50}$\Delta$ \emph{ura3$\Delta$} repeats at~${27}^{\circ}$C.
C) SHM scatter plot for 56 \emph{exo1}$\Delta$ \emph{ura3$\Delta$} repeats at~${27}^{\circ}$C.
D) SHM density plot of posterior predictive distributions for \emph{rad50}$\Delta$ \emph{ura3$\Delta$}
carrying capacity $K$ hierarchy.
The prior distribution for $K^p$ is in black.
The posterior predictive for $e^{K^o_l}$ is in blue and for $K_{clm}$ in green.
The posterior distribution of the first time-course repeat $K_{clm}$ parameter is in red.
\hl{Parameters $K^p$, $e^{K^o_l}$ and $K_{clm}$ are on the same scale as the observed data.}
}
\label{fig:diagABC}
\end{figure}
\FloatBarrier
\subsection{\label{sub:two_sta_fre_app}Frequentist approach}
Figure~\ref{fig:old}A is a $MDR\times MDP$ fitness plot from \cite{QFA1} \hl{where growth curves and evidence for genetic interaction are modelled using} the non-hierarchical frequentist methodology discussed in Section~\ref{int:QFAqfa}.
Figure~\ref{fig:REM}B is a $MDR\times MDP$ fitness plot for the frequentist hierarchical approach REM, described in Table~\ref{tab:REM}, applied to the logistic growth parameter estimates used in \cite{QFA1}.
The number of genes identified as interacting with \emph{cdc13-1} by \cite{QFA1} and by the REM are 715 and 315 respectively (Table~\ref{tab:sup_enh}).
The REM has highlighted many strains which have low fitness. In order to fit a linear model to the fitness data and interpret results in terms of the multiplicative model we apply a log transformation to the fitnesses, thereby affecting the distribution of $\emph{orf}\Delta$ level variation.
The REM accounts for between subject variation and allows for the estimation of a query mutation and $\emph{orf}\Delta$ effect to be made simultaneously, unlike the model presented by \cite{QFA1}.
Due to the limitations of the frequentist \hl{hierarchical} modelling framework, the REM model assumes equal variances for all \emph{orf}$\Delta$s and incorrectly describes \emph{orf}$\Delta$ level variation as Log-normal, assumptions that are not necessary in our new Bayesian approaches.
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/GO_4}
\caption[Fitness plots with \emph{orf}$\Delta$ posterior mean fitnesses]{Fitness plots with \emph{orf}$\Delta$ posterior mean fitnesses.
Mean $\emph{orf}\Delta$ level fitness are plotted for the control strains against the corresponding query strains.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted in red and green for suppressors and enhancers respectively.
A) Non-Bayesian, non-hierarchical fitness plot, based on Table~S6 from \cite{QFA1} $(F=MDR\times MDP)$.
B) Non-Bayesian, hierarchical fitness plot, \hl{from fitting the REM to data} in Table~S6 from \cite{QFA1} $(F=MDR\times MDP)$.
C) IHM fitness plot with $\emph{orf}\Delta$ posterior mean fitness.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted on the plot as red and green for suppressors and enhancers respectively $(F=MDR\times MDP)$.
D) JHM fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
$\emph{orf}\Delta$ strains for the JHM plot are classified as being a suppressor or enhancer based on analysis of growth parameter $r$, meaning occasionally strains can be more fit in the query experiment in terms of $MDR\times MDP$ but be classified as enhancers (green).
For panels A and B significant interactors are classified as those with FDR corrected p-values $<0.05$.
For panels C and D significant interactors have posterior probability $\Delta>0.5$.
To compare fitness plots, labelled genes are those belonging to the following GO terms in Table~\ref{tab:sup_enh}: ``telomere maintenance'', ``ageing'', ``response to DNA damage stimulus'' or ``peroxisomal organization'', as well as the genes identified as interactions only in $K$ with the JHM (see Figure~\ref{fig:JHM_only}) (blue), genes interacting only in $r$ with the JHM (cyan) and the MRX complex genes (pink).
Solid and dashed grey fitted lines are for the 1-1 line and linear model fits respectively.
Alternative fitness plots with each of the GO terms highlighted are given in Section~\ref{app:GO_fit} of the Appendix.
\vspace{0.2in}
}
\label{fig:old}
\label{fig:REM}
\label{fig:IHM}
\label{fig:JHM}
\end{figure}
\clearpage
\begin{table}
\caption[Number of genes interacting with \emph{cdc13-1} at ${27}^{\circ}$C]{\label{tab:sup_enh}Number of genes interacting with \emph{cdc13-1} at ${27}^{\circ}$C identified using each of four approaches: Add \citep{QFA1}, REM, IHM and JHM.
Number of genes annotated with four example GO terms (telomere maintenance, ageing, response to DNA damage stimulus and peroxisome organisation) are also listed.
For the \citet{QFA1} and REM approach, significant interactors are classified as those with FDR corrected p-values (q-values) $<0.05$.
The label ``half data'' denotes analyses where only half of the available experimental observations are used.
The JHM uses a $MDR\times MDP$ summary after model fitting to classify suppressors and enhancers, comparable with the other three approaches.
The full lists of GO terms for each approach considered are given in a spreadsheet document, freely available online at \sloppy\url{http://research.ncl.ac.uk/qfa/HeydariQFABayes/}.\sloppy
}
\centering
\includegraphics[width=14cm]{img_fit/Supp_Enh}
\end{table}
\begin{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{../paper/BayesQFA/img/GO_ADD_plot}
\caption[Non-Bayesian, non-hierarchical \citet{QFA1} fitness plot]{Non-Bayesian, non-hierarchical fitness plot, based on Table~S6 from \cite{QFA1} $(F=MDR\times MDP)$.
Mean $\emph{orf}\Delta$ level fitness are plotted for the control strains against the corresponding query strains.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted in red and green for suppressors and enhancers respectively.
In order to compare fitness plots we label genes which have shown evidence of genetic interaction with the JHM approach and are either a PEX gene or identified as being over-represented by at least one of the following GO terms with bioinformatics tool DAVID \citep{DAVID}: Telomere maintenance, Aging, or Response to DNA damage stimulus.
Significant interactors are classified as those with FDR corrected p-values (q-values) $<0.05$.
Solid and dashed grey lines are for a simple linear model fit (corresponding to a model of genetic independence) and \hl{the line of equal fitness} respectively.
An alternative fitness plot with gene labels for those with evidence of genetic interaction is given in Figure~\ref{fig:old_first}.}
\label{fig:old}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{../paper/BayesQFA/img/GO_RE_plot}
\caption[Non-Bayesian, hierarchical random effects model fitness plot]{Non-Bayesian, hierarchical random effects model (REM) fitness plot, \hl{from fitting REM to data} in Table~S6 from \cite{QFA1} $(F=MDR\times MDP)$.
Fitness plot notation is given in Figure~\ref{fig:old_first}.
An alternative fitness plot with gene labels for those with evidence of genetic interaction is given in Figure~\ref{fig:REM_app}.}
\label{fig:REM}
\end{figure}
\end{comment}
\FloatBarrier
\subsection{\label{sub:two_sta_bay_app}Two stage Bayesian approach}
Figure \ref{fig:IHM}C is an interaction hierarchical model (IHM) fitness plot with $\emph{orf}\Delta$ level fitness measures generated using the new Bayesian two-stage methodology with fitness in terms of $MDR\times MDP$.
576 genes are identified by the IHM as genetic interactions (Table~\ref{tab:sup_enh}).
Logistic parameter posterior means are used to generate fitness measures.
For a gene $(l)$ from the gene deletion library, $(e^{Z_{l}})$ is the fitness for the control and $(e^{\alpha_{1}+Z_{l}+\delta_{l}\gamma_{c,l}})$ for the query in the IHM.
For a gene $(l)$ in the query screen, with no evidence of genetic interaction i.e. $\delta_{l}=0$, fitness will be a linear transformation from the control counterpart $(e^{\alpha_{1}+Z_{l}})$.
Similar to Figures~\ref{fig:old}A~and~\ref{fig:REM}B, Figure~\ref{fig:IHM}C shows how the majority of control strains are more fit than their query strain counterparts, with a mean fitted line lying below the line of equal fitness.
Comparing the fitted lines in Figures~\ref{fig:old}A~and~\ref{fig:REM}B with Figure~\ref{fig:IHM}C, the IHM shows the largest deviation between the fitted line and the line of equal fitness; this is largely due to the difference in $P$ estimated with the SHM for the control and query data sets being scaled out by the parameter $\alpha_{1}$.
If we fix $P$ in our Bayesian models, similar to the frequentist approach, genetic interactions identified are largely the same, but we then have the problem of choosing $P$. We recommend estimating $P$ simultaneously with the other model parameters because if the choice of $P$ is not close to the true value, growth rate $r$ estimates must compensate and do not give accurate estimates for time courses with low carrying capacity $K$.
It can be seen that many of the interacting $\emph{orf}\Delta$s have large deviations from the genetic independence line.
This is because of the indicator variable in the model, used to describe genetic interaction.
When there is enough evidence for interaction the Bernoulli variable is set to 1\hl{, otherwise it is} set to 0.
It is interesting to note that non-significant $\emph{orf}\Delta$s, marked by grey points, lie amongst some of the significant strains.
Many \hl{such points} have high variance and therefore we are \hl{less confident that these interact with the query mutation}.
This \hl{feature} of our new approach \hl{is an improvement over that} presented in \cite{QFA1}, which always shows evidence for an epistatic effect when mean distance from the genetic independence line is large, regardless of strain fitness variability.
An extract from the list of top interactions identified by the IHM is included in Table~\ref{app:IHM_interactions}.
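The role of the interaction indicator can be illustrated with a small sketch: given posterior samples of the Bernoulli variable $\delta_{l}$ for each gene, the posterior probability of interaction is just the sample mean, and genes are flagged when it exceeds 0.5, the threshold used for the Bayesian fitness plots. The gene names and indicator samples below are invented for illustration only.

```python
def classify_interactions(delta_samples, threshold=0.5):
    """Given posterior samples of the 0/1 interaction indicator delta_l
    for each gene, return the genes whose posterior probability of
    interaction exceeds the threshold (0.5 here)."""
    hits = {}
    for gene, samples in delta_samples.items():
        prob = sum(samples) / len(samples)   # posterior P(delta_l = 1)
        if prob > threshold:
            hits[gene] = prob
    return hits

# Hypothetical posterior indicator samples for three genes.
samples = {
    "GENE_A": [1, 1, 1, 0, 1, 1, 1, 1],   # strong evidence of interaction
    "GENE_B": [0, 0, 0, 1, 0, 0, 0, 0],   # little evidence
    "GENE_C": [1, 0, 1, 1, 0, 1, 1, 1],   # moderate evidence
}
hits = classify_interactions(samples)
```

This is why strains with large mean deviation from the independence line but high variability can remain unclassified: their indicator samples are split, so the posterior probability stays below the threshold.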
\begin{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{../paper/BayesQFA/img/GO_IHM_plot}
\caption[Interaction hierarchical model fitness plot]{Interaction hierarchical model (IHM) fitness plot with $\emph{orf}\Delta$ posterior mean fitness.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted on the plot as red and green for suppressors and enhancers respectively $(F=MDR\times MDP)$.
Solid and dashed grey fitted lines are for the IHM linear model fit.
Further fitness plot notation is given in Figure~\ref{fig:old}.
An alternative fitness plot with gene labels for those with evidence of genetic interaction is given in Figure~\ref{fig:IHM_app}.}
\label{fig:IHM}
\end{figure}
\end{comment}
\FloatBarrier
\subsection{\label{sub:one_sta_app}One stage Bayesian approach}
Figure~\ref{fig:JHM}D is a JHM $MDR\times MDP$ fitness plot using the new, \hl{unified} Bayesian methodology.
The $MDR\times MDP$ fitness plot given in Figure~\ref{fig:JHM}D is for visualisation and comparison with the $MDR\times MDP$ fitness plots of the other approaches considered: the JHM does not make use of a fitness measure.
939 genes are identified by the JHM as genetic interactions (Table~\ref{tab:sup_enh}).
Posterior means of model parameters are used to obtain the following fitness measures.
With the JHM we can obtain an \emph{orf}$\Delta$ level estimate of the carrying capacity and growth rate $(K,r)$ for a gene ($l$).
For a gene ($l$) from the gene deletion library, carrying capacity and growth rate $(e^{K^{o}_{l}},e^{r^{o}_{l}})$ are used to evaluate the fitness for the control and $(e^{\alpha_{1}+K^{o}_{l}+\delta_{l}\gamma_{c,l}},e^{\beta_{1}+r^{o}_{l}+\delta_{l}\omega_{c,l}})$ for the query.
For a gene $(l)$ in the query screen, with no evidence of genetic interaction i.e. $\delta_{l}=0$, carrying capacity and growth rate will be linear transformations from the control counterpart $(e^{\alpha_{1}+K^{o}_{l}},e^{\beta_{1}+r^{o}_{l}})$.
Instead of producing a fitness plot in terms of $MDR\times MDP$, it can also be useful to analyse carrying capacity $K$ and growth rate $r$ fitness plots as\hl{, in the JHM,} evidence for genetic interaction \hl{comes from both of} these parameters \hl{simultaneously}, see Figures~\ref{fig:JHM_K}~and~\ref{fig:JHM_r}.
Fitness plots in terms of logistic growth parameters are useful for identifying some unusual characteristics of $\emph{orf}\Delta$s.
For example, an $\emph{orf}\Delta$ may be defined as a suppressor in terms of $K$ but an enhancer in terms of $r$.
\hl{To enable direct} comparison with the \cite{QFA1} analyses we generated a $MDR\times MDP$ fitness plot, Figure~\ref{fig:JHM}D.
An extract from the list of top interactions identified by the JHM is included in Table~\ref{app:JHM_interactions}.
\begin{comment}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{../paper/BayesQFA/img/GO_JHM_plot}
\caption[Joint hierarchical model fitness plot]{Joint hierarchical model (JHM) $MDR\times{MDP}$ fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
The JHM does not make use of a fitness measure such as $MDR\times{MDP}$ but the fitness plot is given in terms of $MDR\times{MDP}$ for comparison with other approaches which do.
$\emph{orf}\Delta$ strains are classified as being a suppressor or enhancer based on one of the two parameters used to classify genetic interaction, growth parameter $r$, this means occasionally strains can be more fit in the query experiment in terms of $MDR\times MDP$ but be classified as enhancers (green).
Further fitness plot explanation and notation is given in Figure~\ref{fig:old}.
An alternative fitness plot with gene labels for those with evidence of genetic interaction is given in Figure~\ref{fig:JHM_app}.
Fitness plots in terms of carrying capacity $K$ and growth rate $r$ are given in Figures~\ref{fig:JHM_K}~and~\ref{fig:JHM_r}.
}
\label{fig:JHM}
\end{figure}
\FloatBarrier
\end{comment}
\FloatBarrier
\section{\label{Application3}Comparison with previous analysis}
\subsection{\label{individual_interactions}Significant genetic interactions}
Of the genes identified as interacting with \emph{cdc13-1} (1038, see Table~\ref{tab:overlap}A) some are identified consistently across all four approaches (215 out of 1038, see Table~\ref{tab:overlap}A). Of the hits identified by the JHM (939), the majority (639) are common with those in the previously published \citet{QFA1} approach. However, 231 of 939 are uniquely identified by the JHM and could be subtle interactions which are the result of previously unknown biological processes.
To examine the evidence for some interactions uniquely identified by the JHM in more detail we compared the growth curves for three examples from the group of interactions identified only by the JHM. These examples (\emph{chz1}$\Delta$, \emph{pre9}$\Delta$ and \emph{pex6}$\Delta$) are genetic interactions which can be identified in terms of carrying capacity $K$, but not in terms of growth rate $r$ (see Figure~\ref{fig:JHM_only}).
By observing the difference between the fitted growth curve (red) and the expected growth curve, given no interaction (green) in Figure~\ref{fig:JHM_only}A, \ref{fig:JHM_only}B and \ref{fig:JHM_only}C we test for genetic interaction. Since the expected growth curves in the absence of genetic interaction are not representative of either the data or the fitted curves on the repeat and \emph{orf}$\Delta$ level, there is evidence for genetic interaction.
\begin{table}
\caption[Overlap between methods for genes interacting with \emph{cdc13-1} at $\boldsymbol{{27}^{\circ}}$C and gene ontology terms over-represented in lists of interactions]{\label{tab:overlap}Genes interacting with \emph{cdc13-1} at $\boldsymbol{{27}^{\circ}}$C and GO terms over-represented in the list of interactions according to each approach A) Number of genes identified for each approach (Add \cite{QFA1}, REM, IHM and JHM) and the overlap between the approaches. 4135 genes from the \emph{S. cerevisiae} single deletion library tested overall.
B) Number of GO terms identified for each approach (Add \cite{QFA1}, REM, IHM and JHM) and the overlap between the approaches. 6107 \emph{S. cerevisiae} GO Terms available.}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{*{6}{c}}
\multicolumn{1}{l}{\bf{A.}}& & \multicolumn{2}{c}{\emph{REM:0}} & \multicolumn{2}{c}{\emph{REM:1}}\\
\cline{3-6}
& &\emph{Add:0} &\emph{Add:1} &\emph{Add:0} &\emph{Add:1}\\\hline
\multirow{2}{*}{\emph{IHM:0}} &\emph{JHM:0} &3097&54&31&10\\
& \emph{JHM:1} &231&78&29&29\\\hline
\multirow{2}{*}{\emph{IHM:1}} &\emph{JHM:0} &1&2&1&0\\
&\emph{JHM:1} &30&327&0&215\\\hline
\end{tabular}
\qquad
\begin{tabular}{*{6}{c}}
\multicolumn{1}{l}{\bf{B.}}& & \multicolumn{2}{c}{\emph{REM:0}} & \multicolumn{2}{c}{\emph{REM:1}}\\
\cline{3-6}
& &\emph{Add:0} &\emph{Add:1} &\emph{Add:0} &\emph{Add:1}\\\hline
\multirow{2}{*}{\emph{IHM:0}} &\emph{JHM:0} &5813&21&58&7\\
& \emph{JHM:1} &46&8&6&10\\\hline
\multirow{2}{*}{\emph{IHM:1}} &\emph{JHM:0} &20&15&3&12\\
&\emph{JHM:1} &13&54&2&147\\\hline
\end{tabular}
}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img/JHM_only}
\caption[Joint hierarchical model logistic growth curve fits]{
Joint hierarchical model (JHM) logistic growth curve fitting. JHM data for $\emph{orf}\Delta$ repeats have been plotted in A, B and C, with fitted curves overlaid in black for repeat level parameters, red for the $\emph{orf}\Delta$ level query parameter fit and green for the expected $\emph{orf}\Delta$ level query parameter fit with no genetic interaction.
A) JHM scatter plot for 8 \emph{chz1}$\Delta$ \emph{cdc13-1} repeats.
B) JHM scatter plot for 8 \emph{pre9}$\Delta$ \emph{cdc13-1} repeats.
C) JHM scatter plot for 8 \emph{pex6}$\Delta$ \emph{cdc13-1} repeats.
}
\label{fig:JHM_only}
\end{figure}
We chose a prior for the probability $p$ of a gene interacting with the background mutation as 0.05. We therefore expected to find 215 genes interacting. The Bayesian models, for which a prior is applicable (IHM and JHM), find more genes than expected (576 and 939 interactions respectively, Table~\ref{tab:sup_enh}), demonstrating that information in this dataset can overcome prior expectations. The JHM identifies the highest proportion of genes as hits out of all methods considered, particularly identifying suppressors of \emph{cdc13-1} (Table~\ref{tab:sup_enh}). In fact, the JHM identifies more hits than the \citet{QFA1} approach, even when constrained to using only half of the available data.
An important advantage to our new Bayesian approach is
that we no longer have the difficulty of choosing a q-value threshold.
For the \citet{QFA1} approach to have similar numbers of interactions to the JHM, a less stringent q-value threshold would have to be justified \emph{a posteriori} by the experimenter.
\subsection{\label{Application5}Previously known genetic interactions}
In order to compare the quality of our new, Bayesian hierarchical models with existing, frequentist alternatives, we examined the lists of genetic interactions identified by all the methods discussed and presented here.
Comparing results with expected or \hl{previously} known lists of interactions from the relevant literature, we find that genes coding for the MRX complex (\emph{MRE11}, \emph{XRS2} \& \emph{RAD50}), which are known to interact with \emph{cdc13-1} \citep{MRX}, are identified by all four approaches considered and can be seen in a similar position in all four fitness plots (Figure~\ref{fig:old}A, \ref{fig:REM}B, \ref{fig:IHM}C and \ref{fig:JHM}D).
By observing the genes labelled in Figure~\ref{fig:old}A~and~\ref{fig:REM}B we can see that the frequentist approaches are unable to identify many of the interesting genes identified by the JHM as these methods are unable to detect interactions for genes close to the genetic independence line.
The JHM has extracted more information from deletion strain fitnesses observed with high variability than the \cite{QFA1} approach by sharing more information between levels, consequently improving our ability to identify interactions for genes close to the line of genetic independence (subtle interactions). \emph{CTI6}, \emph{RTC6} and \emph{TGS1} are three examples of subtle interactors identified only by the JHM (interaction in terms of $r$ but not $K$) which all have previously known telomere-related functions \citep{TGS1,CTI6,RTC6}.
We tested the biological relevance of results from the various approaches by carrying out unbiased Gene Ontology (GO) term enrichment analyses on the hits (lists of genes classified as having a significant interaction with \emph{cdc13-1}) using the {Bioconductor package GOstats \citep{GOstats}}.
For the GO term enrichment analysis R code used, see Section~\ref{app:GOstats} of the Appendix.
All methods identify a large proportion of the genes in the yeast genome annotated with the GO terms ``telomere maintenance'' and ``response to DNA damage stimulus'' (see Table~\ref{tab:sup_enh}), which were the targets of the original screen, demonstrating that they all correctly identify previously known hits of biological relevance. Interestingly, the JHM identifies many more genes annotated with the ``ageing'' GO term, which we also expect to be related to telomere biology (though the role of telomeres in ageing remains controversial) suggesting that the JHM is identifying novel, relevant interactions not previously identified by the \citet{QFA1} screen (see Table~\ref{tab:sup_enh}). Similarly, the JHM identifies a much larger proportion of the PEX ``peroxisomal'' complex (included in GO term: ``peroxisome organisation'') as interacting with \emph{cdc13-1} (see Table~\ref{tab:sup_enh}) including all of those identified in \citet{QFA1}. Many of the PEX genes show large variation in both $K$ and $r$, an example can be seen in Figure~\ref{fig:JHM_only}C for \emph{pex6$\Delta$}. Members of the PEX complex cluster tightly, above the fitted line in the fitness plot Figure~\ref{fig:JHM}D {(fitness plots with highlighted genes for GO terms in Table~\ref{tab:sup_enh} are given in Section~\ref{app:GO_fit} of the Appendix)}, demonstrating that although these functionally related genes are not strong interactors, they do behave consistently with each other, suggesting that the interactions are real. The results of tests for significant over-representation of all GO terms are given in a spreadsheet document, freely available online at \sloppy\url{http://research.ncl.ac.uk/qfa/HeydariQFABayes/}.\sloppy
Overall, within the genes interacting with \emph{cdc13-1} identified by the \citet{QFA1}, REM, IHM and JHM approaches, 274, 245, 266 and 286 GO terms were significantly over-represented respectively (out of 6235 possible GO terms, see Table \ref{tab:overlap}B). 147 were common to all approaches and examples from the group of GO terms over-represented in the JHM analysis and not in the \citet{QFA1} analysis seem internally consistent (e.g. ``peroxisome organisation'' GO term) and consistent with the biological target of the screen, telomere biology (significant GO terms for genes identified only by the JHM are also included in the spreadsheet document).
{Extracts from the list of top interactions identified by both the IHM and JHM are provided in Section~\ref{app:interactions}.
Files including the full lists of genetic interactions for the IHM and JHM are freely available online at \sloppy\url{http://research.ncl.ac.uk/qfa/HeydariQFABayes/}.}\sloppy
Alternative fitness plots to Figure~\ref{fig:old}A, B, C \& D with gene labels for those showing significant evidence of genetic interaction are provided in Figure~\ref{fig:old_first} and Section~\ref{app:alt_fitness}.
As suppressors and enhancers in the JHM may be in terms of both $K$ and $r$, fitness plots in terms of $K$ and $r$ with gene labels for those showing significant evidence of genetic interaction are given in Figure~\ref{fig:JHM_K_full} and Figure~\ref{fig:JHM_r_full} respectively.
To further compare the similarity of the Bayesian hierarchical models and frequentist analysis, a table of Spearman's rank correlation coefficients \citep{spearman} between genetic strengths and a $MDR\times MDP$ correlation plot of the JHM versus the \citet{QFA1} are given in Section~\ref{app:corr} of the Appendix.
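Spearman's rank correlation compares the ordering, rather than the raw values, of the genetic interaction strengths from two methods: rank-transform both vectors and take the Pearson correlation of the ranks. The following minimal sketch (illustrative only; the thesis analysis itself is carried out in R, and this version assumes tie-free data) shows the computation:

```python
def ranks(values):
    """Return 0-based ranks of values (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n - 1) / 2  # mean of the ranks 0..n-1
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = sum((a - mean) ** 2 for a in rx)  # both rank vectors share this spread
    return num / den
```

Any perfectly monotone relationship gives $\pm 1$ regardless of nonlinearity, which is why rank correlation is suitable for comparing interaction strength scales produced by different methods.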
\subsection{\label{Application6}Hierarchy and model parameters}
\hl{The hierarchical structure and model choices included in the Bayesian JHM and IHM are derived from the known experimental structure of QFA.
Different levels of variation for different $\emph{orf}\Delta$s are expected and can be observed \hl{by comparing} distributions of frequentist estimates or by visual inspection of yeast culture images.}
\hl{The direct relationship between experimental and model structure, together with the richness of detail and number of replicates included in QFA experimental design, reassures us that overfitting is not an issue in this analysis.}
For the \emph{ura3$\Delta$}~${{27}^{\circ}}$C and \mbox{\emph{cdc13-1}}~${27}^{{\circ}}$C experiment with 4294 $\emph{orf}\Delta$s, the JHM has ${\sim}1.25$ times as many parameters ($\sim$200,000) as the two stage REM approach ($\sim$160,000), but compared with the large number of pairs of data points ($\sim$830,000) there are sufficient degrees of freedom to justify our proposed Bayesian models.
\subsection{\label{Application7}Computing requirements}
Our Bayesian hierarchical models require significant computational time.
{As expected, the mixing of chains in our models is weakest at population level parameters such as $K_p$ and $\alpha_c$.}
For the \emph{ura3$\Delta$} ${{27}^{\circ}}$C and \mbox{\emph{cdc13-1}} ${27}^{{\circ}}$C \hl{dataset}, the JHM takes ${\sim}2$ weeks to converge and produce a sufficiently large sample. The two stage Bayesian approach takes one week (with the IHM part taking ${\sim}1$ day), whereas the REM takes ${\sim}3$ days and the \cite{QFA1} approach takes ${\sim}3$ hours.
\hl{A QFA experiment can take over a month from start to finish, so the analysis time is acceptable compared to the time taken to create the data set, but it is still a notable inconvenience.}
{We expect that with further research effort, computational time can be decreased by using an improved inference scheme and that inference for the JHM could be completed in less than a week without parallelisation.}
MCMC algorithms are inherently sequential, so parallelisation is not completely trivial; it may be considered for future development.
Parallelisation may reduce computational time by partitioning the state space into segments that can be updated in parallel \citep{parallel}.
For the JHM it may be possible to partition by QFA screens to reduce computational time.
Further, parallelisation may be possible across $\emph{orf}\Delta$s for even further reduction to computational time.
\FloatBarrier
\subsection{\label{conv_diag}Convergence diagnostics}
Evidence of convergence for our Bayesian models in Section~\ref{sub:two_sta_bay_app}~and~\ref{sub:one_sta_app} can be shown by observing posterior samples from the MCMC samplers used.
Figures~\ref{fig:SHMdiag}, \ref{fig:IHMdiag} and \ref{fig:JHMdiag} show evidence of convergence for a subset of population level parameters from the SHM, IHM and JHM respectively.
Posterior samples of 1000 particles are obtained after a burn-in period of 800k and a thinning of every 100 observations for the SHM, IHM and JHM.
Population level parameters are found to have the worst mixing in our models, due to the large number of lower level parameters upon which their sampling distributions are conditioned.
We demonstrate how our population parameters have converged with Trace plots, ACF and density plots in Figures~\ref{fig:SHMdiag}, \ref{fig:IHMdiag} and \ref{fig:JHMdiag}.
Trace plots show that the posterior samples remain within a fixed range of values, indicating convergence.
Auto-correlation functions have no large peaks above the dashed blue line that would indicate significant dependence, showing that sequential samples from the posterior distributions are largely uncorrelated with previous values, so the effective sample size is similar to the actual sample size.
ACF plots in Figures~\ref{fig:IHMdiag}~and~\ref{fig:JHMdiag} do show some dependence within our posterior samples, but as the ACF decays rapidly before a lag of 5, the remaining dependence is small and will not be a problem for inference.
Density plots show that there is enough information within the models to give sufficiently peaked single modes, concentrated around a fixed region of plausible values.
Table~\ref{tab:modelconv} gives diagnostic statistics for the population parameters considered in Figures~\ref{fig:SHMdiag}, \ref{fig:IHMdiag} and \ref{fig:JHMdiag}.
We can see in Table~\ref{tab:modelconv} that the lowest effective sample size among our model parameters is $324$, for the JHM $P$ parameter, followed by $378$ for the SHM $P$ parameter.
Of all our model parameters, $P$ has the lowest effective sample size, but its sample is still large enough for our inference.
Heidelberger and Welch P-values do not show evidence against the stationarity of our chains, using a cut-off of $0.10$.
The above statistics are calculated for all model parameters and are used to identify where mixing is poor and if our model has reached convergence.
All chains are accepted for parameter posterior samples in Section~\ref{sub:two_sta_bay_app}~and~\ref{sub:one_sta_app} as effective sample sizes are found to be greater than $300$ and Heidelberger and Welch P-values greater than $0.10$ for every chain.
\input{tables/diag_convergence}
\begin{figure}[h!]
\centering
\resizebox{\columnwidth}{!}{%
\includegraphics[width=14cm]{img_fit/SHM_diag}
}
\caption[Convergence diagnostics for the separate hierarchical model]{Convergence diagnostics for the separate hierarchical model (SHM). Trace, auto-correlation and density plots for the SHM parameter posteriors (sample size = 1000, thinning interval = 100 and burn-in = 800000), see Section~\ref{sub:two_sta_bay_app}. Posterior (black) and prior (red) densities are shown in the right hand column. \label{fig:SHMdiag}
}
\end{figure}
\begin{figure}[h!]
\centering
\resizebox{\columnwidth}{!}{%
\includegraphics[width=14cm]{img_fit/IHM_diag}
}
\caption[Convergence diagnostics for the interaction hierarchical model]{Convergence diagnostics for the interaction hierarchical model (IHM). Trace, auto-correlation and density plots for the IHM parameter posteriors (sample size = 1000, thinning interval = 100 and burn-in = 800000), see Section~\ref{sub:two_sta_bay_app}. Posterior (black) and prior (red) densities are shown in the right hand column.\label{fig:IHMdiag}
}
\end{figure}
\begin{figure}[h!]
\centerline{
\resizebox{\columnwidth}{!}{%
\includegraphics[width=14cm]{img_fit/JHM_diag}
}
}
\caption[Convergence diagnostics for the joint hierarchical model]{Convergence diagnostics for the joint hierarchical model (JHM). Trace, auto-correlation and density plots for the JHM parameter posteriors (sample size = 1000, thinning interval = 100 and burn-in = 800000), see Section~\ref{sub:one_sta_app}. Posterior (black) and prior (red) densities are shown in the right hand column.\label{fig:JHMdiag}
}
\end{figure}
\FloatBarrier
\subsection{\label{sim_study}Simulation study}
A simulation study was carried out to compare the performance of the different approaches considered for a simulated QFA screen comparison from the JHM.
We believe that the JHM closely models a QFA screen comparison and so by simulating a QFA screen comparison data set from the JHM we will obtain a data set for which we know the full set of true genetic interactions.
Simulated JHM data will include important features of QFA screen comparison data, such as a hierarchical structure and genetic interaction in terms of both $K$ and $r$.
Two simulated QFA screens were generated: a control screen and a query screen with some condition effect in the query.
Each screen consists of 4300 \emph{orf}$\Delta$s and 8 logistic growth time-course repeats for each \emph{orf}$\Delta$.
Each time-course consists of 10 measurements, evenly distributed across 6 days.
430 genes were set as genetic interactors in the query screen.
True population level parameters are chosen from frequentist estimates for 10 historic data sets; \emph{orf}$\Delta$ and repeat level parameters are then generated from the JHM structure in Table~\ref{tab:JHM}, and growth time-course data are simulated.
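As a concrete illustration of the final simulation step, growth time courses can be generated from the closed-form logistic solution $x(t) = K P e^{rt} / (K + P(e^{rt} - 1))$ plus measurement noise. The sketch below is a simplification (a single additive Gaussian noise term rather than the JHM's full hierarchical error structure, and illustrative parameter values):

```python
import math
import random

def logistic_growth(t, K, r, P):
    """Closed-form logistic solution: carrying capacity K, rate r, inoculum P."""
    e = math.exp(r * t)
    return (K * P * e) / (K + P * (e - 1.0))

def simulate_timecourse(K, r, P, days=6, n_obs=10, noise_sd=0.005):
    """n_obs noisy observations evenly distributed across `days` days."""
    times = [days * i / (n_obs - 1) for i in range(n_obs)]
    return [(t, logistic_growth(t, K, r, P) + random.gauss(0, noise_sd))
            for t in times]
```

Repeating this for each of the 8 repeats of each of the 4300 \emph{orf}$\Delta$s, with $(K, r, P)$ drawn from the hierarchy, yields the simulated control and query screens.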
Table~\ref{tab:simstudy} shows the number of true genetic interactions identified, suppressors and enhancers, as well as false positives (FPs) and false negatives (FN) for each of the approaches considered.
As expected, the JHM identifies the largest number of true genetic interactions.
The number of suppressors identified by the JHM is higher than for the \citet{QFA1}, REM and IHM approaches, but for enhancers all methods perform very similarly.
Performance of the different methods can be observed through the FP and FN rates.
From Table~\ref{tab:simstudy} we can calculate FP and FN rates, where FP rate$=1-$``specificity'' and FN rate$=1-$``sensitivity''.
FP rates for the \cite{QFA1}, REM, IHM and JHM are $0.078$, $0.042$, $0.006$ and $0.002$ respectively.
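These rates follow directly from the confusion-matrix counts. The sketch below uses hypothetical counts chosen only to be consistent with 430 true interactors among 4300 genes and the JHM rates quoted above; they are not the actual counts from Table~\ref{tab:simstudy}:

```python
def fp_fn_rates(tp, fp, tn, fn):
    """FP rate = FP/(FP+TN) = 1 - specificity;
    FN rate = FN/(FN+TP) = 1 - sensitivity."""
    return fp / (fp + tn), fn / (fn + tp)

# Hypothetical JHM-like counts: 430 true interactors, 3870 non-interactors.
fpr, fnr = fp_fn_rates(tp=314, fp=8, tn=3862, fn=116)
print(round(fpr, 3), round(fnr, 3))  # → 0.002 0.27
```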
The JHM has the lowest FP rate when compared to the other approaches available.
Frequentist approaches \citet{QFA1} and REM have large FP rates when compared to the two Bayesian approaches.
The \citet{QFA1} approach has more false positives than true genetic interactions.
FN rates for the \cite{QFA1}, REM, IHM and JHM are $0.488$, $0.570$, $0.593$ and $0.270$ respectively.
Two-stage approaches \citet{QFA1}, REM and IHM have large FN rates when compared to the JHM.
The \cite{QFA1}, REM and IHM have ${\sim}200$ false negatives, approximately double the number identified by the JHM (${\sim}100$).
Observing the genes that have been missed by the two-stage approaches, we find that they often fail to identify genetic interactions when evidence is weak in only $K$ or $r$, even if there is sufficient evidence in the other parameter such that the JHM can identify the genetic interaction.
From our simulation study we have shown that the frequentist approaches have high false positive rates and that all the two-stage approaches have high false negative rates.
From the number of false positives identified for each method, we can see that the non-hierarchical \citet{QFA1} approach has the worst performance, followed by the hierarchical two-stage approaches.
As expected, the JHM is the best approach when we consider a simulated hierarchical data set with genetic interaction in terms of $K$ and $r$, as the two-stage approaches fail to capture more subtle genetic interactions.
\input{tables/simstudy}
\FloatBarrier
\section{\label{sec:candjags}Bayesian inference code comparison}
Inference for the Bayesian hierarchical models in this thesis is carried out using code written in the C programming language.
To see how our code compares to commonly used software for carrying out inference for Bayesian models, we have compared posterior samples from our C code with those from equivalent code using the Just Another Gibbs Sampler (JAGS) software (written in C++) \citep{Plummer2003}.
We carry out our JAGS analysis within the R package ``rjags'' \citep{rjags} which provides a more familiar framework for an R user implementing the JAGS software.
The BUGS (Bayesian inference Using Gibbs Sampling) language \citep{BUGS} is used to describe models in JAGS.
The SHM, IHM and JHM have each been described with the BUGS language in Section~\ref{app:jags_code} of the Appendix.
For the following comparison we use a subset from the \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C suppressor/enhancer data set described in Section~\ref{sec:ura3_cdc13-1_27_27}.
A subset of 50 \emph{orf}$\Delta$s (for both the control and query) are chosen, each with 8 time-course repeats.
With a smaller data set we are able to collect large posterior sample sizes, sufficient to carry out a comparison between posterior samples.
Density plots are used to visually compare the similarity of the posterior samples from the C and JAGS code.
The Kolmogorov–Smirnov test \citep{hubergoodness} and unpaired two-sample Student's t-test \citep{degrootprobability} are used to test for significant difference between posterior samples from our C and JAGS code.
A comparison of posterior samples for our most sophisticated model, the JHM, is given below.
Posterior samples of 100k particles are obtained after a burn-in period of 1000k and a thinning of every 100 observations for both the C and JAGS code.
Computational time for the C and JAGS code is $\sim{30}$ hours and $\sim{400}$ hours respectively.
The minimum effective sample size per second (ESS\textsubscript{min}/sec) for the C and JAGS code is $\sim${1} and $\sim${0.1} respectively, demonstrating that the C code is $\sim{10}\times$ faster.
\begin{table}
\caption[Unpaired t-test and Kolmogorov-Smirnov p-values comparing posterior samples from the joint hierarchical model using both C programming language and Just Another Gibbs Sampler software]{Unpaired t-test and Kolmogorov-Smirnov p-values comparing posterior samples from the joint hierarchical model (JHM) using both C and Just Another Gibbs Sampler (JAGS) software. An extract of JHM parameters is given for both the C programming language and JAGS software. Posterior means are also included for both approaches. t-tests are carried out on the log posterior samples, i.e.\ $\hat{K_p}$ in place of $e^{\hat{K_p}}$, so that approximate normality can be assumed. \label{tab:jagscompare}}
\centering
\resizebox{14cm}{!}{%
\npdecimalsign{.}
\nprounddigits{3}
\begin{tabular}{c n{1}{3} n{1}{3} n{1}{3} n{1}{3}}
\hline
\emph{Parameter}&\multicolumn{1}{c}{\emph{C Code posterior mean}}&\multicolumn{1}{c}{\emph{JAGS posterior mean}}&\multicolumn{1}{c}{\emph{t-test (with log posterior samples)}}&\multicolumn{1}{c}{\emph{Kolmogorov-Smirnov test}} \\
\hline
$e^{\hat{K_p}}$&0.1432044&0.1431957&0.4522&0.4005\\
$e^{\hat{r_p}}$&4.639228&4.640523&0.4236&0.4815\\
$e^{\hat{P}}$&2.53738e04&2.516994e04&0.1366&0.1162\\
$e^{\hat{\nu_p}}$&7.401522e04&7.416073e04&0.2497&0.1901\\
$e^{\hat{\alpha_c}}$&0.3038192&0.3037929&0.2034&0.1401\\
$e^{\hat{\beta_c}}$&0.384091&0.3841624&0.1563&0.1462\\
\hline
\end{tabular}
}
\end{table}
Figure~\ref{tab:jagscompareplot} gives density plots for an extract of JHM parameters for the C and JAGS software.
Visually there is no significant difference between the posterior sample density plots in Figure~\ref{tab:jagscompareplot}. Of the parameters shown, the smallest effective sample size ($\sim${80000} ESS) is for the initial inoculum parameter $P$, but this is still large enough to test whether the posterior samples show a significant difference.
Table~\ref{tab:jagscompare} demonstrates further that there is no significant difference found between the parameters shown.
The unpaired t-test (carried out on log posterior samples so that approximate normality can be assumed) and Kolmogorov-Smirnov test p-values are all greater than 0.10 for the parameters given, including the inoculum density parameter $P$.
Overall we find no significant evidence against the C code and JAGS code sampling from the same posterior distributions.
\\
\\
As carrying out inference using C is $\sim${10} times faster than with the equivalent JAGS code, we prefer the C code for our Bayesian hierarchical models.
When obtaining sufficiently large, approximately independent posterior samples for a larger data set of $\sim${4000} \emph{orf}$\Delta$s, we estimate our C code to be more than $\sim${50}$\times$ faster than the equivalent JAGS code, as we find the computational cost of the JAGS code grows approximately exponentially as the data set grows.
JAGS remains very useful for model exploration, as complex models are quick and simple to describe in the BUGS language.
However, the JAGS software is prohibitively slow for the JHM: an experimenter would be likely to abandon such inference in favour of a simpler or faster method, justifying our use of the C programming language.
Further improvements such as the introduction of parallelisation may lead to more favourable computational times in the future.
\begin{figure}
\centering
\includegraphics[width=9.4cm]{lateximg/jagscompare}
\caption[Density plots for posterior samples from the joint hierarchical model using the C programming language and Just Another Gibbs Sampler software]{Density plots for posterior samples from the joint hierarchical model (JHM) using the C programming language (red) and Just Another Gibbs Sampler (black) software. Density plots for the JHM parameter posteriors (sample size = 100000, thinning interval = 100 and burn-in = 1000000).
\label{tab:jagscompareplot}
}
\end{figure}
\clearpage
\section{\label{sec:fur_case_stu}Further case studies}
In this section we briefly introduce different data sets that may be considered for further investigation and research.
We can also see how the JHM performs under different experimental conditions by applying it to different QFA screen comparisons; see the $MDR{\times}MDP$ fitness plots in Figures~\ref{JHM_CDC13-1_CDC13-1EXO1_27_27}-\ref{JHM_URA_URA_20_37_reversefix}.
The data sets used in Figures~\ref{JHM_CDC13-1_CDC13-1EXO1_27_27}-\ref{JHM_URA_URA_20_37_reversefix} are currently unpublished data from the Lydall lab.
For each of the data sets, the JHM in Table~\ref{tab:JHM} is applied with the prior hyper-parameters in Table~\ref{tab:SHM_priors}.
Posterior samples of 1000 particles are obtained after a burn-in period of 800k, and a thinning of every 100 observations.
Similarly to Section~\ref{conv_diag}, chains from our MCMC sampler are accepted where the effective sample sizes are greater than $300$ and Heidelberger and Welch P-values are greater than $0.10$ for every chain.
As in the \cite{QFA1} analysis, each experiment has a list of 159 genes stripped from our final list of genes for biological and experimental reasons.
Results for the \emph{cdc13-1}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C and \emph{cdc13-1}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C experiments have further genes removed for biological and experimental reasons, 23 and 13 genes respectively (a total of 182 and 172 genes respectively).
Figure~\ref{JHM_CDC13-1_CDC13-1EXO1_27_27} is a \emph{cdc13-1}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C suppressor/enhancer analysis for finding genes that interact with $\emph{exo1}$ in a telomere maintenance defective background (\emph{cdc13-1} at $\boldsymbol{{27}^{\circ}}$C).
Similarly, Figure~\ref{JHM_CDC13-1RAD_CDC13-1_27_27} is a \emph{cdc13-1}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C suppressor/enhancer analysis for finding genes that interact with $\emph{rad9}$ in a telomere maintenance defective background.
Figure~\ref{JHM_URA_YKU70_37_37} is a \emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C suppressor/enhancer analysis for finding genes that interact with \emph{yku70} at high temperature.
Figure~\ref{JHM_URA_URA_20_37_reversefix} is an example of a temperature sensitivity experiment, for finding genes that interact with the high temperature of $\boldsymbol{{37}^{\circ}}$C.
Figures~\ref{JHM_CDC13-1_CDC13-1EXO1_27_27}-\ref{JHM_URA_URA_20_37_reversefix} demonstrate that the JHM can capture different linear relationships that are above or below the 1-1 line.
Curvature of the data in Figures~\ref{JHM_CDC13-1_CDC13-1EXO1_27_27}-\ref{JHM_URA_URA_20_37_reversefix} suggests that the linear relationships modelled by the JHM may be improved through linearising transformations of the data. Extending the JHM to account for the curvature in the data may improve our model fit and allow us to better determine which genes interact significantly.
Table~\ref{tab:JHM_hits} compares the number of suppressors and enhancers estimated for each of the experiments considered.
The experiments in Table~\ref{tab:JHM_hits} have similar numbers of genetic interactions, ranging from 358 to 511, but much lower than the \emph{cdc13-1}$\boldsymbol{{27}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C experiment which has $939$.
The experiments introduced in this section also differ from the \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C experiment as they have more enhancers than suppressors, further demonstrating the JHM's ability to model different experimental situations and the non-restrictive choice of priors (Table~\ref{tab:SHM_priors}).
\input{tables/JHM_model_hits}
Table~\ref{tab:overlap_further}A shows the overlap in genes with significant evidence of genetic interactions between the different QFA comparisons considered.
The largest number of overlapping genetic interactions is found with the \mbox{\emph{cdc13-1}}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \emph{ura}$\Delta$~$\boldsymbol{{27}^{\circ}}$C experiment, which overlaps with 301 and 263 genes from the \mbox{\emph{cdc13-1}}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \mbox{\emph{cdc13-1}}~$\boldsymbol{{27}^{\circ}}$C and \mbox{\emph{cdc13-1}}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \mbox{\emph{cdc13-1}}~$\boldsymbol{{27}^{\circ}}$C experiments respectively.
The \mbox{\emph{cdc13-1}}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \emph{ura}$\Delta$~$\boldsymbol{{27}^{\circ}}$C, \mbox{\emph{cdc13-1}}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \mbox{\emph{cdc13-1}}~$\boldsymbol{{27}^{\circ}}$C and \mbox{\emph{cdc13-1}}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \mbox{\emph{cdc13-1}} $\boldsymbol{{27}^{\circ}}$C experiments are expected to overlap most as they are designed to find genes interacting in a \mbox{\emph{cdc13-1}} background.
The smallest number of overlapping genetic interactions is found between the \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C and \emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C experiments.
The \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C and \emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C experiments are expected to have the least overlap as they are not designed to find genes interacting in a \mbox{\emph{cdc13-1}} background.
The \emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C experiment is designed to look at telomeres, but instead of disrupting the telomere capping protein Cdc13 using \mbox{\emph{cdc13-1}}, a \emph{yku70}$\Delta$ mutation is made such that the protein Yku70 (a telomere binding protein which guides the enzyme telomerase to the telomere \citep{QFA1}) is no longer produced by the cell.
Further \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C is designed to investigate temperature sensitivity only.
Table~\ref{tab:overlap_further}B shows the overlap in significant GO terms between the different QFA comparisons considered.
The largest number of overlapping significant GO terms is found with the \mbox{\emph{cdc13-1}}$\Delta$~$\boldsymbol{{27}^{\circ}}$C experiment, which overlaps with $\sim${150} GO terms for each other experiment.
Its smallest overlap is 110 GO terms, with the \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C experiment.
The smallest numbers of overlapping GO terms are for the \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C experiment, followed by \emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C, with $\sim${110} and $\sim${120} GO terms overlapping with the other experiments respectively.
Similarly to the overlap of genes with significant evidence of genetic interaction, the overlap of significant GO terms shows that our \mbox{\emph{cdc13-1}} background experiments share the most GO terms and that the temperature sensitivity experiment \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C has the least overlap.
We have shown that the JHM can successfully model different experimental data sets; Figures~\ref{JHM_CDC13-1_CDC13-1EXO1_27_27}-\ref{JHM_URA_URA_20_37_reversefix} are included as a reference for further research.
Of the different experiments we can see that \mbox{\emph{cdc13-1}}~$\boldsymbol{{27}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C is the most dissimilar to the other experiments due to the large number of genetic interactions, 939 in total (see Table~\ref{tab:JHM_hits}).
The next largest number of genetic interactions is 511, for the \emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C experiment, which is approximately half the number found for the \mbox{\emph{cdc13-1}}~$\boldsymbol{{27}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C experiment.
Tables~\ref{tab:overlap_further}A and \ref{tab:overlap_further}B show that the overlap between QFA comparisons is as expected using the JHM, with the closer related experiments sharing the most overlap.
To account for the curvature of the data observed in Figures~\ref{JHM_CDC13-1_CDC13-1EXO1_27_27}-\ref{JHM_URA_URA_20_37_reversefix} we introduce a JHM with linearising transformations in the next section.
Further research may include developing models that can incorporate multiple QFA comparisons to find evidence of genetic interactions between query screens and incorporate more information within our models.
\begin{table}
\caption[Overlap between different QFA comparisons for genes interacting and gene ontology terms over-represented in lists of interactions]{\label{tab:overlap_further}Overlap between different QFA comparisons for genes interacting and gene ontology terms over-represented in lists of interactions.
For a fair comparison, any genes removed from the results of a QFA comparison for biological and experimental reasons are removed for all experiments,
therefore results for all experiments have a list of 195 genes (159+23+13, see Table~\ref{tab:JHM_hits}) removed from the final list of interactions for biological and experimental reasons.
A) Number of genes identified for each QFA comparison and the overlap between QFA comparisons. 4099 genes from the \emph{S. cerevisiae} single deletion library are considered.
B) Number of GO terms identified for each approach and the overlap between QFA comparisons. 6094 \emph{S. cerevisiae} GO Terms available.}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{*{6}{c}}
\multicolumn{1}{l}{\bf{A.}}&\emph{cdc13-1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C &\emph{cdc13-1}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C&\emph{cdc13-1}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C&\emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C&\emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C\\
&vs \emph{ura}$\Delta$~$\boldsymbol{{27}^{\circ}}$C&vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C&vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C&vs \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C &vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C\\\hline
\emph{cdc13-1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \emph{ura}$\Delta$~$\boldsymbol{{27}^{\circ}}$C&926&N/A&N/A&N/A&N/A\\
\emph{cdc13-1}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C&301&386&N/A&N/A&N/A\\
\emph{cdc13-1}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C&263&245&355&N/A&N/A\\
\emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C&252&155&146&506&N/A\\
\emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C&223&152&149&164&455\\\hline
\end{tabular}
}
\\ \qquad
\\ \qquad
\\
\resizebox{\columnwidth}{!}{%
\begin{tabular}{*{6}{c}}
\multicolumn{1}{l}{\bf{B.}}&\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C &\emph{cdc13-1}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C&\emph{cdc13-1}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C&\emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C&\emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C\\
&vs \emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C&vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C&vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C&vs \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C &vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C\\\hline
\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C&282&N/A&N/A&N/A&N/A\\
\emph{cdc13-1}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C&142&188&N/A&N/A&N/A\\
\emph{cdc13-1}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C vs \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C&151&130&212&N/A&N/A\\
\emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C&150&119&125&245&N/A\\
\emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C vs \emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C&110&100&112&119&195\\\hline
\end{tabular}
}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM_CDC13-1_CDC13-1EXO1_27_27}
\caption[\emph{cdc13-1}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C joint hierarchical model fitness plot]{\emph{cdc13-1}\emph{exo1}$\Delta$~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C joint hierarchical model (JHM) fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
The JHM does not make use of a fitness measure such as $MDR\times{MDP}$, but the fitness plot is given in terms of $MDR\times{MDP}$ for comparison with other approaches which do.
$\emph{orf}\Delta$ strains are classified as suppressors or enhancers based on one of the two parameters used to classify genetic interaction, the growth parameter $r$; this means strains can occasionally be more fit in the query experiment in terms of $MDR\times MDP$ but be classified as enhancers (green).
Further fitness plot explanation and notation is given in Figure~\ref{fig:JHM}.\label{JHM_CDC13-1_CDC13-1EXO1_27_27}
}
\end{figure}
\clearpage
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM_CDC13-1RAD_CDC13-1_27_27}
\caption[\emph{cdc13-1}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C joint hierarchical model fitness plot]{\emph{cdc13-1}\emph{rad9}$\Delta$~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C joint hierarchical model (JHM) fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
The JHM does not make use of a fitness measure such as $MDR\times{MDP}$, but the fitness plot is given in terms of $MDR\times{MDP}$ for comparison with other approaches which do.
$\emph{orf}\Delta$ strains are classified as suppressors or enhancers based on one of the two parameters used to classify genetic interaction, the growth parameter $r$; this means strains can occasionally be more fit in the query experiment in terms of $MDR\times MDP$ but be classified as enhancers (green).
Further fitness plot explanation and notation is given in Figure~\ref{fig:JHM}.\label{JHM_CDC13-1RAD_CDC13-1_27_27}
}
\end{figure}
\clearpage
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM_URA_YKU70_37_37}
\caption[\emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C joint hierarchical model fitness plot]{\emph{yku70}$\Delta$~$\boldsymbol{{37}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C joint hierarchical model (JHM) fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
The JHM does not make use of a fitness measure such as $MDR\times{MDP}$, but the fitness plot is given in terms of $MDR\times{MDP}$ for comparison with other approaches which do.
$\emph{orf}\Delta$ strains are classified as suppressors or enhancers based on one of the two parameters used to classify genetic interaction, the growth parameter $r$; this means strains can occasionally be more fit in the query experiment in terms of $MDR\times MDP$ but be classified as enhancers (green).
Further fitness plot explanation and notation is given in Figure~\ref{fig:JHM}.\label{JHM_URA_YKU70_37_37}
}
\end{figure}
\clearpage
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM_URA_URA_20_37_reversefix}
\caption[\emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C joint hierarchical model fitness plot]{\emph{ura3}$\Delta$~$\boldsymbol{{37}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{20}^{\circ}}$C joint hierarchical model (JHM) fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
The JHM does not make use of a fitness measure such as $MDR\times{MDP}$, but the fitness plot is given in terms of $MDR\times{MDP}$ for comparison with other approaches which do.
$\emph{orf}\Delta$ strains are classified as suppressors or enhancers based on one of the two parameters used to classify genetic interaction, the growth parameter $r$; this means strains can occasionally be more fit in the query experiment in terms of $MDR\times MDP$ but be classified as enhancers (green).
Further fitness plot explanation and notation is given in Figure~\ref{fig:JHM}.\label{JHM_URA_URA_20_37_reversefix}
}
\end{figure}
\clearpage
\section{\label{sec:batch_eff}Extensions of the joint hierarchical model}
In this section we briefly introduce two new extensions of the JHM for further investigation and research.
An extension to the JHM (given in Table~\ref{tab:JHM}) is to consider a batch effect.
Batch effects are technical sources of variation from the handling of experimental cultures \citep{batch1,batch2}.
Batch effects can be confounded with the biology of interest, leading to misleading results and conclusions.
A QFA screen comparison is carried out between two QFA screens.
Each QFA screen consists of multiple 384-spot plates grown over time (see Figure~\ref{fig:int:spot}), typically with each \emph{orf}$\Delta$ repeat on a different 384-spot plate.
For the \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C experiment, each QFA screen is built from 120 separate 384-spot plates (240 unique plates in total).
Each 384-spot plate is created sequentially and may be created by a different experimenter.
The 384-spot plates may therefore differ due to factors that the experimenters do their best to control, such as the amount of nutrition in a plate, temperature, or other environmental effects.
Where \emph{orf}$\Delta$ repeats are carried out across multiple plates, differences in plates can therefore be captured by introducing a batch effect into the model.
Through careful planning and improved experimental design, batch effects can be reduced or removed.
When we are unable to improve our experimental design any further we may be interested in accounting for a batch effect within our model.
By introducing parameters to model batch effects in our experiment, we can account for any differences between the 240 384-spot plates.
A JHM with batch effects (JHM-B), described in Table~\ref{tab:BATCH}, will be able to improve inference by including more of the experimental structure.
The model in Table~\ref{tab:BATCH} introduces a batch effect $\kappa_b$ and $\lambda_b$, for a plate $b$, to capture any batch effect in carrying capacity $K$ and growth rate $r$ respectively.
A batch effect will be estimated within the model and consequently any confounding with \emph{orf}$\Delta$ level carrying capacity $K$ and growth rate $r$ parameters will be removed.
Using frequentist estimates of the batch effects in the QFA screens, a normal prior was chosen to describe batch effect parameters, allowing either a positive or negative effect to be incorporated for each \emph{orf}$\Delta$ repeat in terms of $K$ and $r$.
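The confounding that the JHM-B batch parameters are intended to absorb can be sketched numerically. The following is a minimal simulation, not the JHM-B itself: plate count, offset sizes and noise levels are all assumed for illustration.

```python
import random
import statistics

random.seed(1)

# Hypothetical sketch: each orfDelta repeat sits on a different 384-spot plate b,
# and a plate-level offset kappa_b shifts its log carrying capacity K
# (an analogous offset lambda_b would shift log growth rate r).
n_plates = 8
kappa = [random.gauss(0.0, 0.10) for _ in range(n_plates)]  # batch effects on log K

log_K_orf = 2.5  # orfDelta-level log carrying capacity (illustrative value)

# Observed repeat-level parameters: orfDelta level + plate offset + small noise.
log_K_rep = [log_K_orf + kappa[b] + random.gauss(0.0, 0.02) for b in range(n_plates)]

# Subtracting the plate offsets shrinks the apparent between-repeat spread;
# without the batch parameters this extra spread is confounded with the
# orfDelta-level carrying capacity estimate.
sd_raw = statistics.stdev(log_K_rep)
sd_adj = statistics.stdev([log_K_rep[b] - kappa[b] for b in range(n_plates)])
assert sd_raw > sd_adj
```

Here the offsets are known by construction; in the JHM-B they are instead estimated from the data alongside the other model parameters.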
\\
\\
Another extension of the JHM is to consider a transformation to linearise the relationship describing genetic independence.
When carrying out linear regression we may be interested in linearising the data to improve the linear relationship \citep{trans1}.
There are many different transformations used for linearising data, the most common are log and power transformations.
Power transformations are families of power functions that are typically used to stabilise variance and to make data closer to a Normal distribution.
For a variable $x$, a power function is of the form $f:x\mapsto cx^r$, where $c,r\in\mathbb{R}$ are constants.
The Box-Cox transformation \citep{trans2} is a particular case of power transformation that is typically used to transform data and linearise a relationship within a data set.
Without linearising our data, we may not be describing genetic independence within our model correctly, leading to misleading results and conclusions.
A JHM with transformations (JHM-T), described in Table~\ref{tab:TRANS}, will be able to improve inference by ensuring a more linear relationship is made between the control and query screen.
Genetic independence within the JHM is described as a linear relationship (see Sections~\ref{int:defining_epi}~and~\ref{joi:JHM}) for both carrying capacity $K$ and growth rate $r$.
We may not believe there to be a perfectly linear relationship between the control and query for both $K$ and $r$.
Introducing a power transformation for the model of genetic independence in terms of $K$ and $r$ can allow us to linearise the relationship and better model genetic independence.
The model in Table~\ref{tab:TRANS} introduces the transformation parameters $\phi$ and $\chi$ at an \emph{orf}$\Delta$ level for both the carrying capacity $K$ and growth rate $r$ respectively, where $\phi>0$ and $\chi>0$.
The ``vanilla'' JHM assumes an additive model of epistasis with $(\alpha_{c}+K^{o}_{l}+\delta_{l}\gamma_{cl},\beta_{c}+r^{o}_{l}+\delta_{l}\omega_{cl})$, where $\alpha_{c}$ and $\beta_{c}$ are the scale parameters, as we are considering log \emph{orf}$\Delta$ parameters.
The ``vanilla'' JHM is therefore effectively assuming a multiplicative model on the original scale of the data, i.e. $(e^{\alpha_{c}}e^{K^{o}_{l}+\delta_{l}\gamma_{cl}},e^{\beta_{c}}e^{r^{o}_{l}+\delta_{l}\omega_{cl}})$.
By introducing new parameters $\phi$ and $\chi$ to scale the control and query data, $\left(\frac{\alpha_{c}+K^{o}_{l}+\delta_{l}\gamma_{cl}}{\phi},\frac{\beta_{c}+r^{o}_{l}+\delta_{l}\omega_{cl}}{\chi}\right)$, we obtain a power transformation of the control and query on the original scale of the data, $\left[\left(e^{\alpha_{c}}e^{K^{o}_{l}+\delta_{l}\gamma_{cl}}\right)^{\frac{1}{\phi}},\left(e^{\beta_{c}}e^{r^{o}_{l}+\delta_{l}\omega_{cl}}\right)^{\frac{1}{\chi}}\right]$.
The transformation parameters give the same transformation to both the control and query screens.
Our model will learn about $\phi$ and $\chi$, adjusting the relationship of genetic independence and consequently those identified as genetic interaction.
Choosing to include a multiplicative transformation parameter where the model describes genetic independence (as an additive model) will give the model the flexibility to adjust the linear relationship between the control and query screens.
The transformation parameters must be strictly positive, with a value of $1$ corresponding to no transformation effect, and so a gamma prior with a mean of $1$ is chosen for both $\chi$ and $\phi$.
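The equivalence between dividing the log-scale linear predictor by $\phi$ and applying a $1/\phi$ power transformation on the original scale can be checked numerically. All parameter values below are assumed purely for illustration.

```python
import math

# Hypothetical values: control scale alpha_c, orfDelta log carrying capacity K_o,
# and an interaction term delta * gamma (all illustrative, not fitted estimates).
alpha_c, K_o, delta_gamma = 0.3, 2.0, 0.15
phi = 1.2  # transformation parameter (phi > 0); phi = 1 recovers the vanilla JHM

# Log-scale predictor divided by phi ...
log_pred = (alpha_c + K_o + delta_gamma) / phi

# ... corresponds to a 1/phi power transformation on the original scale.
power_pred = (math.exp(alpha_c) * math.exp(K_o + delta_gamma)) ** (1.0 / phi)

assert abs(math.exp(log_pred) - power_pred) < 1e-9
```

The same identity holds for the growth rate predictor with $\chi$ in place of $\phi$.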
\\
\\
Figures~\ref{fig:BATCH}~and~\ref{fig:TRANS}~show JHM-B and JHM-T $MDR{\times}MDP$ fitness plots respectively, for the \emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C experiment.
Prior hyper-parameter choices for the models are given in Table~\ref{tab:SHM_priors}.
Bayesian inference and MCMC methods for the JHM in Table~\ref{tab:JHM} are carried out similarly for both the JHM-B and JHM-T.
Posterior samples of 1000 particles are obtained after a burn-in period of 800k iterations, with thinning to every 100th observation.
Similarly to Section~\ref{conv_diag}, chains from our MCMC sampler are accepted where the effective sample sizes are greater than $300$ and Heidelberger and Welch p-values are greater than $0.10$ for every chain.
Similarly to the other modelling approaches considered previously (including the ``vanilla'' JHM), a list of 159 genes is stripped from our final list of genes for biological and experimental reasons.
The JHM-B fit in Figure~\ref{fig:BATCH} shows far fewer interactions than the ``vanilla'' JHM fitness plot; this may be evidence that a plate effect exists.
The JHM-T fit in Figure~\ref{fig:TRANS} is largely the same as the ``vanilla'' JHM fitness plot.
It is worth noting that the JHM-T model fit in Figure~\ref{fig:TRANS} has posterior mean estimates of $\hat{\phi}=0.96$ and $\hat{\chi}=0.87$ (2 d.p.), suggesting that a transformation may only be needed in terms of $r$.
Table~\ref{tab:JHM_hits} compares the number of suppressors and enhancers estimated for the two extensions of the JHM.
The JHM-B reduces the number of genetic interactions from the ``vanilla'' JHM from $939$ to $553$, and similarly reduces the number of suppressors and enhancers.
Therefore from the ``vanilla'' JHM to the JHM-B, there is approximately a $41\%$ reduction in the number of genes identified as showing significant evidence of genetic interaction, providing strong evidence for the presence of a batch effect.
The JHM-T is more similar to the JHM with $901$ interactions, reducing both suppressors and enhancers by a small amount.
Therefore from the ``vanilla'' JHM to the JHM-T, there is approximately a $4\%$ reduction of genes identified as showing significant evidence of genetic interaction, a much smaller reduction from the JHM than that observed with the JHM-B.
Table~\ref{tab:overlap_ext}A shows that the number of genes that overlap with the genes identified by the ``vanilla'' JHM is 531 and 886 for the JHM-B and JHM-T respectively.
Therefore the number of genes identified as interacting by the ``vanilla'' JHM and now no longer identified is $408$ and $53$ for the JHM-B and JHM-T respectively.
This further demonstrates the large reduction in genetic interactions when using the JHM-B, suggesting that a batch effect is present within the data.
The number of genes newly identified as showing significant evidence of genetic interaction by the JHM-B and JHM-T is $22$ and $15$ respectively.
These numbers are small relative to the number of genes that are no longer identified, indicating that the biggest change from the ``vanilla'' JHM is that the JHM-B and JHM-T are more stringent for determining significant genetic interactions.
Table~\ref{tab:overlap_ext}A shows that the ``vanilla'' JHM and JHM-T have similar overlap with the \citet{QFA1}, REM and IHM approaches. The JHM-B has much less overlap with the \citet{QFA1} approach than the ``vanilla'' JHM does, reducing the overlap from $649$ to $498$, indicating that the changes lead to an approach that is even more dissimilar from the \citet{QFA1} approach.
Table~\ref{tab:overlap_ext}B shows that the overlap in significant GO terms with the JHM is 204 for the JHM-B and 267 for the JHM-T.
There are 286 (see Table~\ref{tab:overlap_ext}B) significant GO terms found with the ``vanilla'' JHM, meaning there is a reduction of approximately $29\%$ and $7\%$ with the JHM-B and JHM-T respectively, demonstrating the difference of our new approaches from the ``vanilla'' JHM.
Table~\ref{tab:overlap_ext}B also shows that the ``vanilla'' JHM, JHM-B and JHM-T all have a similar number of overlapping significant GO terms with the \citet{QFA1}, REM and IHM approaches.
We have introduced two potential ways of further extending the JHM to better model a QFA screen comparison; Figures~\ref{fig:BATCH}~and~\ref{fig:TRANS} are included as a reference for further research.
The JHM-B has made large changes to our results by reducing the number of hits, see Table~\ref{tab:JHM_hits}.
Further research may involve investigating the behaviour of an alternative JHM-B with tighter priors for the batch effect parameters so we can see how the additional parameters affect the model fit in more detail.
Further research for the JHM-T would involve developing an alternative JHM-T where different transformations are made for the control and query screens.
We find that the largest difference with the JHM-B and JHM-T is that they are more stringent for determining genetic interactions than the ``vanilla'' JHM.
Currently we prefer the ``vanilla'' JHM until further model exploration and analysis such as simulation studies are carried out to further investigate how the JHM-B and JHM-T affect our results.
\begin{table}
\caption[Overlap with joint hierarchical model extensions for genes interacting with \emph{cdc13-1} at $\boldsymbol{{27}^{\circ}}$C and gene ontology terms over-represented in lists of interactions]{\label{tab:overlap_ext}Genes interacting with \emph{cdc13-1} at $\boldsymbol{{27}^{\circ}}$C and GO terms over-represented in the list of interactions according to each approach. A) Number of genes identified for each approach (Add \cite{QFA1}, REM, IHM, JHM, JHM-B and JHM-T) and the overlap between the approaches. 4135 genes from the \emph{S. cerevisiae} single deletion library are considered.
B) Number of GO terms identified for each approach (Add \cite{QFA1}, REM, IHM, JHM, JHM-B and JHM-T) and the overlap between the approaches. 6107 \emph{S. cerevisiae} GO Terms available. See Tables~\ref{tab:overlap}A and~\ref{tab:overlap}B for further details on the overlap between the ``vanilla'' models (Add \cite{QFA1}, REM, IHM, JHM).}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{*{7}{c}}
\multicolumn{1}{l}{\bf{A.}}&\emph{Add} &\emph{REM} &\emph{IHM} &\emph{JHM}&\emph{JHM-B}&\emph{JHM-T}\\\hline
\emph{JHM}&649&273&572&939&N/A&N/A\\
\emph{JHM-B}&498&239&468&531&553&N/A\\
\emph{JHM-T}&628&276&572&886&535&901\\\hline
\end{tabular}
\qquad
\begin{tabular}{*{7}{c}}
\multicolumn{1}{l}{\bf{B.}}&\emph{Add} &\emph{REM} &\emph{IHM} &\emph{JHM}&\emph{JHM-B}&\emph{JHM-T}\\\hline
\emph{JHM}&219&165&216&286&N/A&N/A\\
\emph{JHM-B}&223&170&217&204&265&N/A\\
\emph{JHM-T}&215&160&219&267&206&293\\\hline
\end{tabular}
}
\end{table}
\FloatBarrier
\begin{table}
\caption[Description of the joint hierarchical model with batch effects]{Description of the joint hierarchical model with batch effects. $b$ identifies the batch which an \emph{orf}$\Delta$ repeat belongs to. Further model notation is defined in Table~\ref{tab:JHM}\label{tab:BATCH}}
\input{models/JHM-BATCH}
\end{table}
\begin{table}
\caption[Description of the joint hierarchical model with transformations]{Description of the joint hierarchical model with transformations. Model notation is defined in Table~\ref{tab:JHM}\label{tab:TRANS}}
\input{models/JHM-TRANS}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM-BATCH_URA_CDC13-1_27_27}
\caption[\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C joint hierarchical model with batch effects fitness plot]{\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C joint hierarchical model with batch effects (JHM-B) fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
The JHM does not make use of a fitness measure such as $MDR\times{MDP}$, but the fitness plot is given in terms of $MDR\times{MDP}$ for comparison with other approaches which do.
$\emph{orf}\Delta$ strains are classified as suppressors or enhancers based on one of the two parameters used to classify genetic interaction, the growth parameter $r$; this means strains can occasionally be more fit in the query experiment in terms of $MDR\times MDP$ but be classified as enhancers (green).
Further fitness plot explanation and notation is given in Figure~\ref{fig:JHM}.\label{fig:BATCH}
}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/JHM-TRANS_URA_CDC13-1_27_27}
\caption[\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C joint hierarchical model with transformations fitness plot]{\emph{cdc13-1}~$\boldsymbol{{27}^{\circ}}$C~vs~\emph{ura3}$\Delta$~$\boldsymbol{{27}^{\circ}}$C joint hierarchical model with transformations (JHM-T) fitness plot with $\emph{orf}\Delta$ posterior mean fitnesses.
The JHM does not make use of a fitness measure such as $MDR\times{MDP}$, but the fitness plot is given in terms of $MDR\times{MDP}$ for comparison with other approaches which do.
$\emph{orf}\Delta$ strains are classified as suppressors or enhancers based on one of the two parameters used to classify genetic interaction, the growth parameter $r$; this means strains can occasionally be more fit in the query experiment in terms of $MDR\times MDP$ but be classified as enhancers (green).
Further fitness plot explanation and notation is given in Figure~\ref{fig:JHM}.\label{fig:TRANS}
}
\end{figure}
\end{chapter}
\section{\label{int:QFA}Quantitative Fitness Analysis}
Genome-wide Quantitative Fitness Analysis (QFA) is a robot-assisted high-throughput \hl{laboratory} workflow, combining systematic \hl{genetic} techniques \hl{to generate arrays of genetically distinct microbial cultures with} quantification and modelling of growth curves to estimate fitnesses \citep{jove, QFA1}.
A QFA screen can be used to compare the fitnesses of cultures with distinct genotypes in order to quantify genetic interaction.
\hl{Genetic interaction strengths are typically estimated by comparing fitnesses in two QFA screens: a control screen and a query screen.
QFA output includes fitness estimates for all \hl{microbial} cultures in an arrayed library including replicate cultures. For example, such a library could be a systematic collection of all non-essential, single gene deletion strains in the \hl{model eukaryote} \emph{Saccharomyces cerevisiae} (\emph{S. cerevisiae}, brewer's yeast).}
All strains within a query screen differ from their control screen counterparts by a \hl{common} condition such as \hl{a background} gene mutation, drug treatment, temperature \hl{or other treatment}.
\hl{To identify strains that show interaction with the query condition, corresponding fitness responses for each strain in the library under the query and control conditions can be compared.}
An example of the procedure to create mutant strains to test for genetic interaction using QFA screens is as follows.
First a suitable query mutation is chosen, which is relevant to an area of biology of particular interest \hl{(e.g. \emph{cdc13-1} for its relevance to telomere capping processes)}.
Next, a library of strains is chosen, within which to search for strains interacting with the query mutation (e.g. a genome-wide library of independent strains with individual, non-essential genes deleted: $\emph{orf}\Delta$s).
Finally, an appropriate, neutral control background mutation is chosen \hl{(e.g.} \emph{ura3}$\Delta$) to allow the separation of the effect of background condition from that of the library strains.
In most cases, control and query mutations are crossed with the chosen library using \hl{Synthetic Genetic Array} (SGA) technology \citep{sgaboone}.
Independent replicate cultures are inoculated and grown across several plates for each strain under each condition to capture biological \hl{and technical} heterogeneity.
\hl{Cultures are grown simultaneously and time course images captured by photography.
Robotic assistance is required for both culture inoculation and image capture during genome-wide screens which can include approximately 5,000 independent genotypes.}
Raw QFA data (photographs) are converted into cell density estimates using the image analysis software Colonyzer \citep{Colonyzer}.
Observed changes in cell density over time are converted to fitness estimates for both the control and query strain by fitting logistic growth curves to data.
Genetic interactions are identified by finding mutants in the query screen whose fitnesses deviate significantly from predictions given by a theoretical model of genetic independence.
\citet{QFA1} describe using QFA to infer genetic interactions with telomere-specific query mutations. They use least squares methods to fit logistic growth curves to culture time courses, then generate a univariate fitness estimate for each time course.
They use a linear model predicting query strain fitness given control strain fitness, consistent with \hl{Fisher's} multiplicative model of genetic independence, to test for genetic interaction between the query mutation and each \emph{orf}$\Delta$.
Deviation from the predicted linear relationship between the query and control fitnesses is evidence for genetic interaction between $\emph{orf}\Delta$ and the query mutation.
The significance of observed interactions is assigned using a simple frequentist linear modelling approach.
One of the major limitations of the statistical model used in \cite{QFA1} is that it assumes each $\emph{orf}\Delta$ fitness has the same variance.
It is expected that explicit modelling of heterogeneity will allow \hl{more robust identification of interactions, particularly where variability for a particular strain is unusually high (e.g. due to experimental or technical difficulties).}
\subsection{\label{int:quantifying_fit}Quantifying fitness}
\hl{Observing changes in cell number in a microbial culture is the most direct way to estimate culture growth rate, an important component of microbial culture fitness.
Direct counting of cell number on a high-throughput scale is not practical and so cell density estimates are made instead from culture photographs taken during QFA.
Estimates of the integrated optical density (IOD) generated by the image analysis tool Colonyzer \citep{Colonyzer} are used to capture cell density dynamics in independent cultures during QFA.}
Density estimates, scaled to normalise for camera resolution, are gathered for each culture and a dynamic model of population growth, the logistic model $\dot{x}=rx(1 - x/K)$ \citep{Verhulst1847} (see Section~\ref{int:logistic_gro}), is fit to the data.
Example photographic images of two yeast colonies inoculated by QFA, growing over time, along with corresponding quantitative measures of growth can be seen in Figure~\ref{fig:spot2}.
\begin{figure}[h!]
\centering
\includegraphics[width=13cm]{img/3daysplate}
\caption[Example 384-spot plate image from a yeast quantitative fitness analysis screen]{
Example 384-spot plate image from a yeast quantitative fitness analysis screen, taken approximately 3 days after inoculation.
Yeast cultures are spotted and grown in regular arrays on solid agar plates.
}
\label{fig:3daysplate}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=13cm]{img/zoomplate}
\caption[Cropped image of 15 out of 384 spotted yeast cultures from a 384-spot plate]{
Cropped image of 15 out of 384 spotted yeast cultures from a 384-spot plate, taken from a quantitative fitness analysis screen. Image taken approximately 3 days after inoculation.
Yeast cultures are spotted and grown in regular arrays on solid agar plates.
}
\label{fig:zoomplate}
\end{figure}
For a QFA screen, cultures are typically grown on 384-spot plates over time, where a process called \emph{spotting} is used to inoculate microbial cultures on the plates.
The spotting process involves a stage where microbial cultures are first diluted and then the diluted culture is spotted to the plate.
Section~\ref{lit:synthetic_gen_arr} describes the spotting process and alternatives in further detail.
An example 384-spot plate of yeast cultures is given in Figure~\ref{fig:3daysplate}.
Yeast cultures in Figure~\ref{fig:3daysplate} are all alive and have similar culture size.
A cropped image of 15 yeast cultures from a 384-spot plate is given in Figure~\ref{fig:zoomplate}.
Yeast cultures in Figure~\ref{fig:zoomplate} have different culture sizes; the smaller cultures have grown slowly relative to the larger cultures.
An example of the raw time series data is given in the \hl{Appendix}, Figure~\ref{app:QFA_set_sam}.
Further detail on the QFA workflow and alternative 384-spot plate images can be found in \citet{jove} and at \url{http://research.ncl.ac.uk/qfa/}.
After logistic growth model fitting, estimated logistic growth parameter sets can then be used to determine the fitness of a culture. If required, a univariate fitness definition can be chosen to summarise a set of logistic growth parameters (see Section~\ref{int:fitness_def}).
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img/LawlessfigAB}
\caption[Observed yeast data and fitted logistic growth curves]{
\hl{
A) Timelapse images for two genetically modified \emph{S. cerevisiae} cultures with different genotypes (indicated) corresponding to the time series measurements plotted in panel B.
B) Time course cell density estimates derived from analysis of the timelapse images in panel A together with (least squares) fitted logistic growth curves.
}
}
\label{fig:spot2}
\end{figure}
\subsection{\label{int:logistic_gro}The logistic growth model}
The logistic model of population growth, an ordinary differential equation (ODE) describing the self-limiting growth of a population of size $x(t)$ at time $t$, was developed by \cite{Verhulst1847},
\begin{align}
\label{eq_det}
\frac{dx(t)}{dt}&=rx(t)\left(1-\frac{x(t)}{K}\right).
\end{align}
{The ODE has the following analytic solution:
\begin{equation}
x(t;\theta) = \frac{K P e^{rt}}{K + P \left( e^{rt} - 1\right)},
\label{eq:logistic}
\end{equation}
where $P=x(0)$ and $\theta=(K,r,P)$.}
The model describes a population growing from an initial size $P$ (culture inoculum density) with an intrinsic growth rate $r$, undergoing approximately exponential growth which slows as the availability of some critical resource (e.g. nutrients or space) becomes limiting \citep{theoryoflogisticgro}.
Ultimately, population density saturates at the carrying capacity (maximum achievable population density) $K$, once the critical resource is exhausted.
Appendix~\ref{app:solving_log_gro} shows how to derive the solution of (\ref{eq_det}), given in (\ref{eq:logistic}).
Two examples of different logistic growth trajectories are given by the solid lines in Figure~\ref{fig:spot2}B.
Where further flexibility is required, generalized forms of the logistic growth process \citep{analysisoflogistic,logisticrevisited} may be used instead (see Section~\ref{lit:generalised_log}).
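The analytic solution (\ref{eq:logistic}) can be checked numerically against the ODE (\ref{eq_det}); the parameter values below are illustrative, not fitted QFA estimates.

```python
import math

def logistic(t, K, r, P):
    """Analytic solution x(t) = K P e^{rt} / (K + P(e^{rt} - 1))."""
    e = math.exp(r * t)
    return K * P * e / (K + P * (e - 1.0))

# Illustrative parameter values (assumed, not from the thesis data).
K, r, P = 0.15, 4.0, 1e-4

# The solution starts at the inoculum density and saturates at K.
assert abs(logistic(0.0, K, r, P) - P) < 1e-12
assert abs(logistic(50.0, K, r, P) - K) < 1e-9

# Check dx/dt = r x (1 - x/K) by central finite differences at a few times.
for t in (0.5, 1.0, 2.0):
    h = 1e-5
    deriv = (logistic(t + h, K, r, P) - logistic(t - h, K, r, P)) / (2 * h)
    x = logistic(t, K, r, P)
    assert abs(deriv - r * x * (1.0 - x / K)) < 1e-5
```

In practice $\theta=(K,r,P)$ is estimated by fitting such a curve to the Colonyzer cell density time course for each culture.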
\subsection{\label{int:fitness_def}Fitness definitions}
Culture fitness is an important phenotype, indicating the health of a culture.
Several distinct quantitative fitness measures based on fitted logistic model parameters (\ref{eq:logistic}) can be constructed.
\cite{QFA1} present three univariate measures suitable for QFA: Maximum Doubling Rate $(MDR)$ and Maximum Doubling Potential $(MDP)$ detailed in (\ref{eq:MDRMDP}), and their product $MDR\times MDP$, where
\begin{equation}
\label{eq:MDRMDP}
MDR=\frac{r}{\log\left(2\frac{K-P}{K-2P}\right)}\:\text{ and }\:MDP=\frac{\log\left(\frac{K}{P}\right)}{\log(2)}.
\end{equation}
$MDR$ is the reciprocal of the minimum doubling time $T$, the time a cell population takes to reach $2x(0)$, assuming the exponential phase begins at $t=0$:
\begin{equation*}
\frac{x(T)}{x(0)}=2.
\end{equation*}
Substituting the analytic solution (\ref{eq:logistic}) at $t=T$ and rearranging gives the following expression for $MDR$:
\begin{equation*}
2P = \frac{KPe^{rT}}{K+P\left(e^{rT}-1\right)}
\quad\Rightarrow\quad
2(K-P) = e^{rT}(K-2P)
\quad\Rightarrow\quad
MDR=\frac{1}{T}=\frac{r}{\log\left(\frac{2(K-P)}{K-2P}\right)}.
\end{equation*}
MDP is the number of times the population size doubles before reaching saturation, assuming geometric progression:
\begin{equation*}
{x(0)}\times 2^{MDP}=K.
\end{equation*}
Rearranging gives the following:
\begin{equation*}
MDP=\frac{\log(\frac{K}{P})}{\log 2}.
\end{equation*}
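The two derivations above can be verified numerically. A short Python sketch with illustrative $(K,r,P)$ values (not taken from any QFA screen) computes $MDR$ and $MDP$ and checks that, at time $T=1/MDR$, the logistic solution (\ref{eq:logistic}) has indeed reached density $2P$:

```python
import math

def mdr(K, r, P):
    """Maximum doubling rate: reciprocal of the minimum doubling time T."""
    return r / math.log(2 * (K - P) / (K - 2 * P))

def mdp(K, r, P):
    """Maximum doubling potential: number of doublings from P to K."""
    return math.log(K / P) / math.log(2)

def logistic(t, K, r, P):
    return K * P * math.exp(r * t) / (K + P * (math.exp(r * t) - 1.0))

# Illustrative values (not taken from any QFA screen)
K, r, P = 0.2, 2.0, 0.0001

T = 1.0 / mdr(K, r, P)                 # minimum doubling time
doubled = logistic(T, K, r, P)         # should equal 2*P
fitness = mdr(K, r, P) * mdp(K, r, P)  # the MDR x MDP fitness score
```

The check `doubled == 2*P` follows directly from substituting $e^{rT}=\frac{2(K-P)}{K-2P}$ into (\ref{eq:logistic}), and $P\times 2^{MDP}=K$ holds by construction.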
$MDR$ captures the rate at which microbes divide when experiencing minimal intercellular competition or nutrient stress. A strain's growth rate largely dictates its ability to outcompete any neighbouring strains.
$MDP$ captures the number of divisions the culture is observed to undergo before saturation. A strain which can divide a few more times than its neighbours in a specific environment also has a competitive advantage.
The choice of a single overall fitness score depends on the aspects of microbial physiology most relevant to the biological question at hand.
Typically the fitness definition $MDR\times MDP$ is used in QFA to account for both attributes simultaneously.
Other fitness definitions available include cell count, expected generation number and their approximations \citep{expectgennum}.
\section{\label{int:epistasis}Epistasis}
Epistasis is the phenomenon where the effects of one gene are modified by those of one or several other genes \citep{epis4}.
Besides the multiplicative model, there are other definitions for epistasis such as additive, minimum and log \citep{epis2}.
Minimum is a suboptimal approach which may allow ``masking'' of interactions \citep{epis2}.
For a typical yeast QFA screen comparison, \citet{QFA1} assumes a multiplicative interaction model (\ref{eq:epistasis}), but when dealing with measurements on a log scale, it is effectively assuming \hl{an additive} interaction model \citep{epis3}.
\hl{This highlights the point that multiplicative and additive models are equivalent if fitness data are scaled appropriately \citep{cordell2002epistasis}.}
\subsection{\label{int:defining_epi}Defining epistasis}
\hl{As presented in \cite{QFA1}, this study assumes Fisher's multiplicative model of genetic independence (\ref{eq:epistasis}) \citep{cordell2002epistasis,epis1}, to represent the expected relationship between control strain fitness phenotypes and those of equivalent query strains in the absence of genetic interaction.
In this study, we interpret genotypes for which the query strain fitness deviates significantly from this model of genetic independence as interacting significantly with the query mutation.
Square bracket notation is used to represent a quantitative fitness measure.
For example $[wt]$ and $[query]$ represent wild-type and query mutation fitnesses respectively.
``Wild-type'' strictly refers to the genotype that is prevalent among individuals in a natural (or wild) population.
However, during laboratory cultivation of microbes it is more usual to introduce extra gene mutations to an ancestral lineage that is well established within the scientific community.
Working with established lineages allows direct comparison with results from the literature without the confounding effect of sampling genotypes from natural populations, which are considerably more heterogeneous.
Thus in context of this thesis, ``wild-type'' will refer to the reference strain, before additional mutations are introduced.
$\emph{orf}\Delta$ represents an arbitrary single gene deletion strain (i.e. a mutant from the control strain library).
$query:\emph{orf}\Delta$ represents an arbitrary single gene deletion from the query strain library (e.g. crossed with the query mutation).}
Fisher's multiplicative model of genetic independence is as follows:
\begin{eqnarray}
\label{eq:epistasis}
[query:\emph{orf}\Delta]\times [wt] &=& [query]\times [\emph{orf}\Delta]\\
\Rightarrow [query:\emph{orf}\Delta] &=& \frac{[query]}{[wt]}\times [\emph{orf}\Delta]. \label{eq:linear}
\end{eqnarray}
In (\ref{eq:linear}),~$\frac{[query]}{[wt]}$ is a constant for a given pair of QFA screens, meaning that if this model holds, there should be a linear dependence between $[query:\emph{orf}\Delta]$ and $[\emph{orf}\Delta]$ for all deletions $\emph{orf}\Delta$.
During genome-wide screens of thousands of independent $\emph{orf}\Delta$s, it can be assumed that the majority of gene mutations in the library do not interact with the chosen query mutations.
Therefore, even if the query or wild-type fitnesses are not available to us, the slope of this linear model can still be estimated by fitting it to all available fitness observations, before testing for strains which deviate significantly from the linear model.
Any extra background condition, such as a gene mutation common to both the control and query strains (e.g. triple instead of double deletion strains for the query and control data sets), may change the interpretation or definition of the type of genetic interaction but the same linear relationship is applicable.
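The slope-estimation idea above can be sketched concretely. The following Python illustration uses synthetic fitnesses and plain least squares through the origin with a simple residual-based flag; it is a toy, not the statistical procedure of \cite{QFA1}, and all names and threshold values are hypothetical:

```python
import random

random.seed(1)

# Synthetic fitnesses: most orfDeltas obey [query] = slope * [control] + noise
true_slope = 0.6
control = [random.uniform(20, 100) for _ in range(200)]
query = [true_slope * f + random.gauss(0, 1.0) for f in control]
query[0] = true_slope * control[0] + 25.0   # one planted suppressor

# Least-squares slope for a line through the origin: sum(xy) / sum(x^2)
slope = sum(x * y for x, y in zip(control, query)) / sum(x * x for x in control)

# Flag orfDeltas whose residual is large relative to the residual spread
residuals = [y - slope * x for x, y in zip(control, query)]
spread = (sum(e * e for e in residuals) / len(residuals)) ** 0.5
flagged = [i for i, e in enumerate(residuals) if abs(e) > 4 * spread]
```

Because most mutations are assumed non-interacting, the fitted slope recovers $[query]/[wt]$ without needing the query or wild-type fitnesses themselves, and the planted suppressor is the only strain flagged.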
\subsection{\cite{QFA1} Quantitative Fitness Analysis screen comparison\label{int:QFAqfa}}
\cite{QFA1} present QFA where the logistic growth model (\ref{eq:logistic}) is fit to experimental data by least squares to give parameter estimates $(\hat{K},\hat{r})$ for each culture time course (each $\emph{orf}\Delta$ replicate).
Inoculum density $P$ is assumed known and the same across all $\emph{orf}\Delta$s and their repeats.
\hl{After inoculating approximately 100 cells per culture, during the first several cell divisions there are so few cells that culture cell densities remain well below the detection threshold of cameras used for image capture and so, without sharing information across all $\emph{orf}\Delta$ repeats, $P$ cannot be estimated directly.}
It is therefore necessary to fix $P$ to the same value for both screens, using an average estimate of $P$ from preliminary least squares logistic growth model fits.
Fitting the model to each $\emph{orf}\Delta$ repeat separately means there is no sharing of information within an $\emph{orf}\Delta$ or between $\emph{orf}\Delta$s when determining $\hat{K}$ and $\hat{r}$.
By developing a hierarchical model to share information across $\emph{orf}\Delta$ repeats for each $\emph{orf}\Delta$ and between $\emph{orf}\Delta$s, estimates for every set of logistic growth curve parameters $(K,r)$ can be improved and therefore for every strain fitness.
Quantitative fitness scores ($F_{cm}$) for each culture were defined (\ref{eq:F}) (see (\ref{eq:MDRMDP}) for definitions of $MDR$ and $MDP$), where
\begin{equation}
\label{eq:F}
F_{cm} = MDR_{cm}\times MDP_{cm}.
\end{equation}
The index $c$ identifies the condition for a given $\emph{orf}\Delta$: $c=0$ for the control strain and $c=1$ for the query strain.
$m$~identifies an $\emph{orf}\Delta$ replicate.
Scaled fitness measures $\tilde{F}_{cm}$ are calculated for both the control and query screen such that the mean across all $\emph{orf}\Delta$s for a given screen is equal to 1.
After scaling, any evidence that $\tilde{F}_{0m}$ and $\tilde{F}_{1m}$ are significantly different will be evidence of genetic interaction.
The following linear model was fit to the control and query strain scaled fitness measure pairs $\tilde{F}_{cm}$ for each unique $\emph{orf}\Delta$ in the gene deletion library:
\begin{align}
\label{eq:lm}
\begin{split}
\tilde{F}_{cm} &= \mu+\gamma_{c}+\varepsilon_{cm}, \text{ where $\gamma_{0}=0$}\\
\varepsilon_{cm} &\sim \operatorname{N}(0,\sigma^{2}), \text{ where $ \varepsilon_{cm}$ is i.i.d.}
\end{split}
\end{align}
In (\ref{eq:lm}), $\gamma_{1}$ represents the \hl{estimated strength of genetic interaction} between the control and query strain.
If the scaled fitnesses for the control and query strain are equivalent for a particular $\emph{orf}\Delta$ such that they are both estimated by some $\mu$, i.e. no evidence of genetic interaction, we would expect $\gamma_{c}=0$.
The model was fit by maximum likelihood, using the R function ``lmList'' \citep{nlme} with variation assumed to be the same for all strains in a given screen and the same for both control and query screens.
So, for every gene deletion from the library an estimate of $\gamma_{1}$ was generated together with a p-value for whether it was significantly different from zero.
False discovery rate (FDR) corrected q-values were then calculated to determine levels of significance for each $\emph{orf}\Delta$.
\citet{QFA1} use the Benjamini-Hochberg test \citep{ben_hoc} for FDR correction.
This test is commonly used in genomic analyses: although it assumes independence of test statistics, it remains valid when tests are positively correlated, in which case FDR estimates are slightly conservative.
Finally a list of $\emph{orf}\Delta$ names, ranked by $\gamma$ magnitudes, was output and $\emph{orf}\Delta$s with q-values below a significance cut-off of 0.05 classed as showing significant levels of genetic interaction with the query mutation.
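The Benjamini-Hochberg adjustment used above is straightforward to implement. As an illustrative Python sketch (equivalent in spirit to the adjustment available in R via `p.adjust` with the BH method, which is what the analyses here would use in practice):

```python
def bh_qvalues(pvalues):
    """Benjamini-Hochberg FDR-adjusted q-values, returned in input order."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    q = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of q-values
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * n / rank)
        q[i] = running_min
    return q

qvals = bh_qvalues([0.005, 0.04, 0.03])
```

Strains whose q-value falls below the 0.05 cut-off would then be classed as showing significant genetic interaction.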
\subsection{\label{int:fit_pro}Fitness plots}
Fitness plots are used to show which $\emph{orf}\Delta$s show evidence of genetic interaction from a QFA screen comparison.
Figure~\ref{fig:old_first} shows an example fitness plot taken from \citep{QFA1}.
Fitness plots are typically mean $\emph{orf}\Delta$ fitnesses for control strains against the corresponding query strains.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted in the plot as red and green for suppressors and enhancers respectively.
$\emph{orf}\Delta$s without significant evidence of interaction are in grey.
Solid and dashed grey lines are for a simple linear model fit (corresponding to a model of genetic independence) and \hl{the line of equal fitness} respectively.
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_fit/old}
\caption[Fitness plot taken from \citet{QFA1}]{\label{fig:old_first} Fitness plot taken from \citet{QFA1}.
A yeast genome knock out collection was crossed to the \emph{cdc13-1} mutation, or as a control to the \emph{ura3$\Delta$} mutation. Eight replicate crosses were performed for the query and control strains.
$\emph{orf}\Delta$s with significant evidence of interaction are highlighted in red and green for suppressors and enhancers respectively.
$\emph{orf}\Delta$s without significant evidence of interaction are in grey and have no \emph{orf} name label.
Lenient and stringent classification of significant interaction is based on p-values $<0.05$ and FDR corrected p-values (q-values) $<0.05$ respectively.
For a further description on fitness plots, see Section~\ref{int:fit_pro}.
}
\end{figure}
\clearpage
\section{\label{int:stochastic_logistic_gro}The stochastic logistic growth model}
\input{sections_SDE/introduction}
\section{Outline of thesis}
A brief outline of the thesis is as follows.
Chapter~\ref{cha:background} gives background to the biological and statistical methods used throughout the thesis. Yeast biology related to the QFA data sets analysed in this study is given as well as an introduction to Bayesian inference.
In Chapter~\ref{cha:modelling_den_int} the SHM and IHM models for the new two-stage Bayesian QFA approach are presented.
Next, the JHM for the new one-stage Bayesian QFA approach is presented.
The chapter is concluded by introducing a two-stage frequentist QFA approach using a random effects model.
In Chapter~\ref{cha:case_stu} the new Bayesian approaches are applied to a previously analysed QFA data set for identifying genes interacting with a telomere defect in yeast.
The chapter is concluded with an analysis of further QFA data sets with the JHM and two extensions of the JHM; included for further investigation and research.
Chapter~\ref{cha:stochastic_app} begins by introducing an existing logistic growth diffusion equation by \citet{roman}.
Two new diffusion equations for carrying out fast, Bayesian parameter estimation for stochastic logistic growth data are then presented.
The chapter is concluded by comparing inference between the approximate models considered and with arbitrarily exact approaches.
Finally, Chapter~\ref{cha:conclusion} presents conclusions on the relative merits of the newly developed Bayesian approaches and stochastic logistic growth models. The chapter is concluded by discussing the broader implications of the results of the studies presented and scope for further research.
\end{chapter}
\section{\label{modelling:intro}Introduction}
In this chapter, alternative modelling approaches are developed to better model a QFA screen comparison than the current frequentist \citet{QFA1} approach.
Section~\ref{modelling:Bay_hie_mod_inf} presents the modelling assumptions for the development of a Bayesian approach.
Two Bayesian approaches are then presented in Sections~\ref{cha:two_stage}~and~\ref{cha:one_stage}, incorporating some model assumptions that are not convenient in a frequentist setting.
So that our Bayesian models can be compared with a frequentist hierarchical modelling approach, a random effects model is then presented in Section~\ref{two:REM}.
The models in this chapter are compared using previously analysed \emph{S. cerevisiae} QFA screen data in the next chapter.
Historic \emph{S. cerevisiae} QFA screen datasets are used to shape the model assumptions adopted in the following sections.
\section{\label{modelling:Bay_hie_mod_inf}Bayesian hierarchical model inference}
As an alternative to the maximum likelihood approach presented by \cite{QFA1}, we present a Bayesian, hierarchical methodology where \emph{a priori} uncertainty about each parameter value is described by probability distributions \citep{Bayth} and information about parameter distributions is shared across $\emph{orf}\Delta$s and conditions.
Plausible frequentist estimates from across 10 different historic QFA data sets, \hl{including a wide range of different background mutations and treatments} were used to \hl{quantify} \emph{a priori} uncertainty in model parameters.
Prior distributions describe our beliefs about parameter values. These should be diffuse enough to capture all plausible values (to capture the full range of observations in the datasets) while being restrictive enough to rule out implausible values (to ensure efficient inference).
Inappropriate choice of priors can result in chains drifting during mixing and becoming stuck in implausible regions.
Although using conjugate priors would allow faster inference, we find that the conjugate priors available for variance parameters \citep{Gelmanprior} are either too restrictive at low variance (Inverse-gamma), not restrictive enough at low variance (half-t family of prior distributions) or are non-informative or largely discard the prior information available (Uniform).
Our choice for the priors of precision parameters is the non-conjugate Log-normal as we find the distribution is only restrictive at extremely high and low variances.
We use three types of distribution to model parameter uncertainty: Log-normal, Normal and scaled t-distribution with three degrees of freedom. We use the Log-normal distribution to describe parameters which are required to be non-negative (e.g. parameters describing precisions, or repeat-level fitnesses) or parameter distributions which are found by visual inspection to be asymmetric. We use the Normal distribution to describe parameters which are symmetrically distributed (e.g. some prior distributions and the measurement error model) and we use the $t$-distribution to describe parameters whose uncertainty distribution is long-tailed (i.e. where using the Normal distribution would result in excessive shrinkage towards the mean).
A Normal distribution was considered for describing the variation in $\emph{orf}\Delta$s but was found to be inappropriate, failing to assign density at the extreme high and low fitnesses.
For example, after \hl{visual inspection of} frequentist $\emph{orf}\Delta$ level means about their population mean, we found many unusually fit, dead or missing $\emph{orf}\Delta$s and concluded that $\emph{orf}\Delta$ fitnesses would be well modelled by the t-distribution.
Instead of manually fixing the inoculum density parameter $P$ as in \cite{QFA1} our Bayesian hierarchical models deal with the scarcity of information about the early part of culture growth curves by estimating a single $P$ across all $\emph{orf}\Delta$s \hl{(and conditions in some of our models).}
Our new approach learns about $P$ from the data and gives us a posterior distribution to describe our uncertainty about its value.
\hl{The new, hierarchical structure implemented in our models \citep{BayHi} reflects the structure of QFA experiments.}
Information is shared efficiently among groups of parameters such as between repeat level parameters for a single mutant strain.
An example of the type of Bayesian hierarchical modelling which we use to model genetic interaction can be seen in \cite{hierarchical1}, where hierarchical models are used to account for group effects.
In \cite{epis1} the signal of genetic interaction is chosen to be ``strictly ON or OFF" when modelling gene activity.
\hl{We include this concept in our interaction models by using a Bernoulli distributed indicator variable \citep{indicator} to describe whether there is evidence of an $\emph{orf}\Delta$ interacting with the query mutation; the more evidence of interaction, the closer posterior expectations will be to one.}
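To make the indicator-variable idea concrete, the following Python sketch computes the posterior probability that a Bernoulli interaction indicator equals one, given Normal likelihoods for replicate query fitnesses with and without an interaction shift. It is a deliberately simplified, non-hierarchical illustration with hypothetical values, not the model of Section~\ref{cha:two_stage}:

```python
import math

def normal_logpdf(x, mean, sd):
    return -0.5 * math.log(2 * math.pi * sd * sd) - (x - mean) ** 2 / (2 * sd * sd)

def interaction_prob(query_fits, control_mean, gamma, sd, prior_p=0.05):
    """P(indicator = 1 | data): probability the query fitnesses are shifted
    by gamma away from the control-level mean, versus not shifted at all."""
    ll1 = sum(normal_logpdf(f, control_mean + gamma, sd) for f in query_fits)
    ll0 = sum(normal_logpdf(f, control_mean, sd) for f in query_fits)
    a = math.log(prior_p) + ll1          # log weight for "interaction"
    b = math.log(1 - prior_p) + ll0      # log weight for "no interaction"
    m = max(a, b)                        # subtract max for numerical stability
    return math.exp(a - m) / (math.exp(a - m) + math.exp(b - m))

# Replicates sitting near control_mean + gamma: strong evidence of interaction
p_hit = interaction_prob([1.48, 1.52, 1.51], control_mean=1.0, gamma=0.5, sd=0.1)
# Replicates sitting near control_mean: essentially no evidence
p_null = interaction_prob([1.02, 0.99, 1.01], control_mean=1.0, gamma=0.5, sd=0.1)
```

In the full hierarchical models these posterior indicator expectations are what summarise the evidence of interaction for each $\emph{orf}\Delta$.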
Failing to account for all sources of variation within the experimental structure, such as the difference in variation between the control and query fitnesses, may lead to inaccurate conclusions.
By incorporating more information into the model with prior distributions and a more flexible modelling approach, we will increase statistical power.
With an improved analysis it may then be possible for a similar number of genetic interactions to be identified with a smaller sample size, saving on the significant experimental costs associated with QFA.
Inference is carried out using Markov Chain Monte Carlo (MCMC) methods. The algorithm used is a Metropolis-within-Gibbs sampler where each full-conditional is sampled in turn either directly or using a simple Normal random walk Metropolis step.
Due to the large number of model parameters and large quantity of data from high-throughput QFA experiments, the algorithms used \hl{for carrying out inference} often have poor mixing and give highly auto-correlated samples, requiring thinning.
Posterior means are used to obtain point estimates where required.
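As a minimal, self-contained illustration of this sampling scheme (a toy Normal model with synthetic data and hypothetical tuning values, not the far larger qfaBayes implementation, which is written in C), a Metropolis-within-Gibbs loop with Normal random-walk proposals, burn-in and thinning might look as follows:

```python
import math
import random

random.seed(42)

# Synthetic data: y_i ~ N(mu, 1/tau) with mu = 2, tau = 1
y = [random.gauss(2.0, 1.0) for _ in range(100)]
n = len(y)

def log_post(mu, log_tau):
    """Unnormalised log posterior: N(0,100) prior on mu, N(0,1) on log(tau)."""
    tau = math.exp(log_tau)
    loglik = 0.5 * n * log_tau - 0.5 * tau * sum((yi - mu) ** 2 for yi in y)
    return loglik - mu ** 2 / (2 * 100.0) - log_tau ** 2 / 2.0

mu, log_tau = 0.0, 0.0
samples = []
for it in range(5000):
    # Update each parameter in turn with a Normal random-walk Metropolis step
    for which in (0, 1):
        prop_mu = mu + (random.gauss(0, 0.2) if which == 0 else 0.0)
        prop_lt = log_tau + (random.gauss(0, 0.2) if which == 1 else 0.0)
        if math.log(random.random()) < log_post(prop_mu, prop_lt) - log_post(mu, log_tau):
            mu, log_tau = prop_mu, prop_lt
    if it >= 1000 and it % 10 == 0:   # burn-in, then thin to reduce autocorrelation
        samples.append(mu)

post_mean = sum(samples) / len(samples)
```

With a nearly flat prior the posterior mean of $\mu$ tracks the sample mean of the data, and thinning keeps only every tenth post-burn-in draw.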
For the new Bayesian approaches (described in Sections~\ref{cha:two_stage} and~\ref{cha:one_stage}), model fitting is carried out using the techniques discussed above, implemented in C for computational speed, and is freely available in the R package ``qfaBayes'' at \sloppy\url{https://r-forge.r-project.org/projects/qfa}.
\section{\label{cha:two_stage}Two-stage Bayesian hierarchical approach}
\hl{In the following sections, a two-stage Bayesian, hierarchical modelling approach (see Section~\ref{two:SHM}~and~\ref{two:IHM}) is presented.
The following two-stage Bayesian approach generates $\emph{orf}\Delta$ fitness distributions and infers genetic interaction probabilities separately.}
For a QFA screen comparison, first the separate hierarchical model (SHM) given in Section~\ref{two:SHM}, is fit to each screen separately and a set of logistic growth parameter estimates obtained for each time-course.
Secondly, each set of logistic growth parameter estimates is converted into a univariate fitness summary and input to the interaction hierarchical model (IHM) given in Section~\ref{two:IHM}, to determine which genes show evidence of genetic interaction.
\subsection{\label{two:SHM}Separate hierarchical model}
The separate hierarchical model (SHM), presented in Table~\ref{tab:SHM}, models the growth of multiple yeast cultures using the logistic function described in (\ref{eq:logistic}).
In this first \hl{hierarchical model, the logistic model} is fit to the query and control strains separately.
In order to measure the variation between $\emph{orf}\Delta$s, parameters ($K^p$,$\sigma^{K}_{o}$) and ($r^p$,$\sigma^{r}_{o}$) are included at the population level of the hierarchy.
Within-$\emph{orf}\Delta$ variation is modelled by each set of $\emph{orf}\Delta$ level parameters ($K^{o}_{l}$,$\tau^{K}_{l}$) and ($r^{o}_{l}$,$\tau^{r}_{l}$).
Learning about these higher level parameters allows information to be shared across parameters lower in the hierarchy.
A three-level hierarchical model is applied to $(K^p,K^{o}_{l},K_{lm})$ and $(r^p,r^{o}_{l},r_{lm})$, sharing information on the repeat level and the $\emph{orf}\Delta$ level.
Note that $\emph{orf}\Delta$ level parameters ${K^{o}_{l}}$ and ${r^{o}_{l}}$ are on the log scale ($e^{K^{o}_{l}}$ and $e^{r^{o}_{l}}$ are on the scale of the observed data).
Assuming a \hl{Normal} error structure, random \hl{measurement} error is modelled by the $\nu_l$ parameters (one for each $\emph{orf}\Delta$).
Information on random error is shared across all $\emph{orf}\Delta$s by drawing \hl{$\log \nu_l$} from a normal distribution parameterised by ($\nu_p$,$\sigma^{\nu}$).
A two-level hierarchical structure is also used for both the $\tau_{l}^{K}$ and $\tau_{l}^{r}$ parameters.
Modelling logistic model parameter distributions on the log scale ensures that parameter values remain strictly positive (a realistic biological constraint). Truncating distributions allows us to implement further, realistic constraints on the data. Truncating $\log r_{lm}$ values greater than 3.5 corresponds to disallowing biologically unrealistic culture doubling times \hl{faster than about 30 minutes} and truncating of repeat level parameters $\log K_{lm}$ above 0 ensures that no carrying capacity estimate is greater than the maximum observable cell density, which is 1 after scaling.
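The correspondence between the truncation point for $\log r_{lm}$ and a 30 minute doubling time can be checked directly, assuming time is measured in days: the fastest allowed rate is $r=e^{3.5}$ per day, giving an exponential-phase doubling time of $\log(2)/r$.

```python
import math

# Assuming time is measured in days, the truncation log(r) <= 3.5
# caps the growth rate at r = exp(3.5) per day.
r_max = math.exp(3.5)

# Exponential-phase doubling time log(2)/r, converted to minutes
doubling_minutes = math.log(2) / r_max * 24 * 60
```

This evaluates to roughly 30 minutes, consistent with the biological constraint stated above.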
$\emph{orf}\Delta$ level parameters $e^{K^{o}_{l}}$ and $e^{r^{o}_{l}}$ are on the same scale as the observed data. Realistic biological constraints (positive logistic model parameters) are enforced at the repeat level; however, both $e^{K^{o}_{l}}$ and $e^{r^{o}_{l}}$, which are assumed to have scaled~$t$-distributions, are truncated below zero to keep these natural-scale parameters strictly positive.
Most $\emph{orf}\Delta$ level logistic growth parameters are distributed in a bell shape around some mean value; it is the unusually fit, dead or missing $\emph{orf}\Delta$s within a typical QFA screen that require the use of a long tailed distribution such as the scaled~$t$-distribution with 3 degrees of freedom.
The non-standard choice of a truncated scaled~$t$-distribution with 3 degrees of freedom ensures that the extreme high and low values have probability assigned to them regardless of the population level location and scale parameters for a given QFA screen.
\begin{table}
\caption[Description of the separate hierarchical model]{Description of the separate hierarchical model (SHM).
Dependent variable $y_{lmn}$ (scaled cell density measurements) and independent variable $t_{lmn}$ (time since inoculation) are data input to the SHM.
$x(t)$ is the solution to the logistic model ODE given in (\ref{eq:logistic}).
$l$~indicates a particular $\emph{orf}\Delta$ from the gene deletion library, $m$ indicates a repeat for a given $\emph{orf}\Delta$ and $n$ indicates the time point for a given $\emph{orf}\Delta$ repeat.\label{tab:SHM}}
\input{models/SHM}
\end{table}
Identifiability problems can arise for parameters $K_{lm}$ and $r_{lm}$ when observed cell densities are low and unchanging (consistent with growth curves for cultures which are very sick, dead or missing). In these cases, either $K_{lm}$ or $r_{lm}$ can take values near zero, allowing the other parameter to take any value without significantly affecting the model fit.
In the \citet{QFA1} approach, identification problems are handled in an automated post-processing stage: for cultures with low $K$ estimates (classified as dead), $r$ is automatically set to zero.
Without correcting for identification problems in our Bayesian models, misleading information from implausible values will be shared across our models.
Computing time wasted on such identifiability problems is reduced by truncating repeat level parameters $r_{lm}$, preventing the MCMC algorithms from becoming stuck in extremely low probability regions when $K_{lm}$ takes near zero values.
Similarly, $\log \tau^{K}_{l}$ parameters are truncated below 0 to overcome identifiability problems between parameters $K_{lm}$ and $r_{lm}$ when $r_{lm}$ takes near zero values.
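A minimal illustration of such truncation (a simple rejection sampler with made-up location and scale, not the samplers used for inference in this chapter):

```python
import math, random

rng = random.Random(42)

def t3_draw():
    # t-variate with 3 df: standard normal over sqrt(chi-squared_3 / 3).
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(3))
    return z / math.sqrt(chi2 / 3.0)

def truncated_t3(mu, sigma, lower=0.0):
    # Scaled t3 truncated below `lower`, by rejection; mirrors the kind
    # of truncation applied to repeat level parameters such as r_lm.
    while True:
        x = mu + sigma * t3_draw()
        if x > lower:
            return x

draws = [truncated_t3(1.0, 0.5) for _ in range(1000)]
print(min(draws))  # strictly positive by construction
```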
The SHM in Table~\ref{tab:SHM} is fit to both the query and control strains separately.
Means are taken \hl{to summarise} logistic growth parameter posterior distributions for each $\emph{orf}\Delta$ repeat.
Summaries $(\hat{K}_{lm},\hat{r}_{lm},\hat{P})$ for each $\emph{orf}\Delta$ repeat are converted to univariate fitnesses $F_{clm}$, where $c$~identifies the condition (query or control), with any given fitness measure e.g. $MDR\times MDP$ (see (\ref{eq:MDRMDP}) and \cite{QFA1}).
A problem with the two-stage approach is that we must choose the fitness definition most relevant to the experiment.
We choose the same definition used in \citet{QFA1}, $MDR{\times}MDP$, for the comparison of our methods.
An alternative choice of fitness definition could be used given sufficient biological justification.
Section~\ref{int:fitness_def} gives the derivations of $MDR$ and $MDP$.
The product of $MDR{\times}MDP$ is used as it accounts for the attributes of two definitions simultaneously.
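Assuming the definitions derived in Section~\ref{int:fitness_def} (doubling rate as the reciprocal of the time taken to grow from $P$ to $2P$ under the logistic model, and doubling potential as the number of doublings available between $P$ and $K$), the conversion from growth parameters to a univariate fitness can be sketched as below; the closed forms are our reading of those derivations, with illustrative parameter values.

```python
import math

def logistic(t, K, r, P):
    # Solution of the logistic ODE x' = r*x*(1 - x/K) with x(0) = P.
    return K * P * math.exp(r * t) / (K + P * (math.exp(r * t) - 1.0))

def mdr(K, r, P):
    # Maximal doubling rate: 1 / (time to grow from P to 2P); needs K > 2P.
    return r / math.log(2.0 * (K - P) / (K - 2.0 * P))

def mdp(K, P):
    # Maximal doubling potential: doublings achievable between P and K.
    return math.log(K / P, 2.0)

def fitness(K, r, P):
    # The MDR x MDP univariate fitness used for comparisons in the text.
    return mdr(K, r, P) * mdp(K, P)

K, r, P = 0.15, 2.5, 0.0001
print(fitness(K, r, P))
# Sanity check: after 1/MDR time units the culture has doubled.
print(logistic(1.0 / mdr(K, r, P), K, r, P) / P)  # ~2
```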
The flow of information within the model and how each parameter is related to the data can be seen \hl{from} the \hl{plate diagram} in Figure~\ref{fig:SHMDAG} \citep{DAGbook}.
\begin{figure}[t]
\centering
\makebox{\includegraphics[width=12cm]{lateximg/DAGSHM}}
\caption[Plate diagram for the separate hierarchical model]{\hl{Plate diagram} for the separate hierarchical model, described in Section~\ref{two:SHM}. This figure shows the four levels of hierarchy in the SHM model, population, $\emph{orf}\Delta$ ($l$), repeat ($m$) and time point ($n$).
Prior hyperparameters for the population parameters are omitted.
A circular node represents a parameter in the model.
An arrow from a source node to a target node indicates that the source node parameter is a \hl{prior hyperparameter} for the target node parameter.
Each rectangular box corresponds to a level of the hierarchy.
Nodes within multiple boxes are nested and their parameters are indexed by corresponding levels of the hierarchy. The node consisting of two concentric circles corresponds to the model's fitted values.
The rectangular node represents the observed \hl{data}.}
\label{fig:SHMDAG}
\end{figure}
\FloatBarrier
\subsection{\label{two:IHM}Interaction hierarchical model}
\hl{After the SHM fit, the IHM, presented in Table~\ref{tab:IHM}, can then be used to model estimated fitness scores $F_{clm}$ and determine, for each $\emph{orf}\Delta$, whether there is evidence for interaction.}
\hl{Fitnesses are passed to the IHM where query screen fitnesses are compared with control screen fitnesses, assuming genetic independence.
Deviations from predicted fitnesses are evidence for genetic interaction.}
The flow of information within the IHM and how each parameter is related to the data can be seen \hl{from} the \hl{plate diagram} in Figure~\ref{fig:IHMDAG}.
\begin{table}
\caption[Description of the interaction hierarchical model]{Description of the interaction hierarchical model (IHM).
$F_{clm}$ are the observed fitness scores, where $c$~identifies the condition for a given $\emph{orf}\Delta$, $l$~identifies a particular $\emph{orf}\Delta$ from the gene deletion library and $m$ identifies a repeat for a given $\emph{orf}\Delta$.\label{tab:IHM}}
\input{models/IHM}
\end{table}
The interaction model accounts for between $\emph{orf}\Delta$ variation with the set of parameters ($Z^{p}$,$\sigma_{Z}$) and within $\emph{orf}\Delta$ variation by the set of parameters ($Z_{l}$,$\nu_{l}$).
A linear \hl{relationship} between the control and query $\emph{orf}\Delta$ level parameters is \hl{specified} with a scale parameter $\alpha_{1}$.
\hl{Any deviation from this relationship (genetic interaction) is accounted for by the term $\delta_{l}\gamma_{1,l}$.}
$\delta_{l}$ is a binary indicator of genetic interaction for \emph{orf}$\Delta$ $l$.
A scaling parameter $\alpha_{1}$ allows any effects due to differences in the control and query data sets to be scaled out, such as differences in \hl{genetic background}, incubator temperature or inoculum density.
The linear relationship between the control and query fitness scores, consistent with the multiplicative model of genetic independence, described in (\ref{eq:linear}), is implemented in the IHM as: $\hat{F} = e^{\alpha_{c}+Z_{l}+\delta_{l}\gamma_{cl}}=e^{\alpha_{c}}e^{Z_{l}+\delta_{l}\gamma_{cl}}$.
\hl{Strains whose fitnesses lie} along the linear relationship \hl{defined by} the scalar $\alpha_{1}$ \hl{show no evidence for interaction with the query condition}. \hl{On the other hand, deviation from the linear relationship, represented} by the posterior mean of $\delta_{l}\gamma_{1,l}$ \hl{is} evidence for genetic interaction.
The larger the posterior mean for $\delta_{l}$, the \hl{higher} the probability or evidence for interaction, \hl{while $\gamma_{1,l}$ is a measure of the strength of interaction}.
Where the query condition has a negative effect (i.e. \hl{decreases fitness on average, compared to the control condition}), query fitnesses which are above and below the linear relationship are suppressors and enhancers \hl{of the fitness defect associated with the query condition} respectively.
A list of gene names is ordered by $\delta_{l}\gamma_{cl}$ posterior means and those $\emph{orf}\Delta$s with $\hat{\delta}_{l}>0.5$ are classified and labelled as showing ``significant'' evidence of interaction.
\hl{The Bernoulli probability parameter $p$ is our prior estimate for the probability of a given $\emph{orf}\Delta$ showing evidence of genetic interaction. For a typical yeast QFA screen, $p$ is set to 0.05 as the experimenter's belief before the experiment is carried out is that $5\%$ of our $\emph{orf}\Delta$s exhibit genetic interactions.}
Observational noise is quantified by $\nu_{cl}$.
The $\nu_{cl}$ parameter accounts for differences in variation between conditions, i.e.\ the query and control data sets, and for differences in variation between $\emph{orf}\Delta$s.
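The resulting classification step is then straightforward; with hypothetical posterior summaries (the gene names and values below are invented for illustration, not IHM output):

```python
# Hypothetical posterior means (delta_hat, delta_gamma_hat) per orf-deletion;
# in practice these come from MCMC output for the IHM.
posterior = {
    "YKU70": (0.98, 0.61),   # strong evidence, positive interaction
    "RAD51": (0.71, -0.35),  # weaker evidence, negative interaction
    "HIS3":  (0.04, 0.01),   # no evidence of interaction
}

# Order gene names by the posterior mean of delta*gamma and flag
# those with delta_hat > 0.5 as showing significant interaction.
ordered = sorted(posterior, key=lambda g: posterior[g][1], reverse=True)
significant = [g for g in ordered if posterior[g][0] > 0.5]
print(ordered)
print(significant)
```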
\begin{figure}[t]
\centering
\makebox{\includegraphics[width=14cm]{lateximg/DAGIHM}}
\caption[Plate diagram for the interaction hierarchical model]{Plate diagram for the interaction hierarchical model, described in Section~\ref{two:IHM}.
This figure shows the four levels of hierarchy in the IHM model: population, $\emph{orf}\Delta$ ($l$), condition ($c$) and repeat ($m$).
Prior hyperparameters for population parameters are omitted.
Plate diagram notation as in Figure~\ref{fig:SHMDAG}.
}
\label{fig:IHMDAG}
\end{figure}
\FloatBarrier
\section{\label{cha:one_stage}One-stage Bayesian hierarchical approach}
Following from Section~\ref{cha:two_stage}, a one-stage approach for inferring fitnesses and genetic interaction probabilities simultaneously is presented.
All of the SHM and IHM modelling assumptions described in Section~\ref{cha:two_stage}, such as distributional choices and hierarchical structure, are inherited by the one-stage approach, known as the joint hierarchical model (JHM).
\subsection{\label{joi:JHM}Joint hierarchical model}
The JHM given in Table~\ref{tab:JHM} is an alternative, fully Bayesian \hl{version of} the two-stage approach described in Sections~\ref{two:SHM} and~\ref{two:IHM}.
\hl{The JHM} incorporates the key modelling ideas from both the SHM and the IHM \hl{with the considerable advantage that we can learn about logistic growth model, fitness and genetic interaction parameters simultaneously, thereby avoiding having to choose a fitness measure or point estimates for passing information between models}.
\hl{The JHM is} an extension of the SHM with \hl{the presence or absence of genetic interaction} being described by a Bernoulli indicator and an additional level of error to account for variation due to the query condition.
Genetic interaction is modelled in terms of the two logistic growth parameters $K$ and $r$ simultaneously.
Similar to the interaction model in Section~\ref{two:IHM} in Chapter~\ref{cha:two_stage}, linear relationships between control and query carrying capacity and growth rate (instead of fitness score) are assumed: $(e^{\alpha_{c}+K^{o}_{l}+\delta_{l}\gamma_{cl}},e^{\beta_{c}+r^{o}_{l}+\delta_{l}\omega_{cl}})$.
\begin{table}
\caption[Description of the joint hierarchical model]{Description of the joint hierarchical model (JHM).
The dependent variable $y_{clmn}$ (scaled cell density measurements) and independent variable $t_{clmn}$ (time since inoculation) are input to the JHM.
$c$~identifies the condition for a given $\emph{orf}\Delta$, $l$~identifies a particular $\emph{orf}\Delta$ from the gene deletion library, $m$ identifies a repeat for a given $\emph{orf}\Delta$ and $n$ identifies the time point for a given condition and $\emph{orf}\Delta$ repeat.\label{tab:JHM}}
\input{models/JHM}
\end{table}
\hl{By fitting a single JHM, we need only calculate posterior means, check model diagnostics and thin posteriors once. However, the CPU time taken to reach convergence for any given data set is roughly twice that of the two-stage approach for a genome-wide QFA.}
The flow of information within the model and how each parameter is related to the data can be seen \hl{from} the \hl{plate diagram} in Figure~\ref{fig:joi:JHMDAG}.
\begin{figure}
\centering
\makebox{\includegraphics[width=14cm]{lateximg/DAGJHM}}
\caption[Plate diagram for the joint hierarchical model]{Plate diagram for the joint hierarchical model, described in Section~\ref{joi:JHM}.
This figure shows the five levels of hierarchy in the JHM model, population, $\emph{orf}\Delta$ ($l$), condition ($c$), repeat ($m$) and time point ($n$).
Prior hyperparameters for the population parameters are omitted.
Plate diagram notation is given in Figure~\ref{fig:SHMDAG}.
}
\label{fig:joi:JHMDAG}
\end{figure}
\FloatBarrier
\section{\label{two:REM}Random effects model}
\hl{To improve on the \cite{QFA1} modelling approach whilst remaining within the frequentist paradigm, by accounting for the hierarchical structure of the data, a random effects model \citep{mixedeffects,nlme} can be used.
The random effects model (REM) given in Table~\ref{tab:REM} is used to model estimated fitness scores $F_{clm}$ from (\ref{eq:F}) and estimate evidence of interaction for each $\emph{orf}\Delta$ simultaneously with a single model fit.
Introducing a random effect $Z_l$ allows us to account for between subject variation by estimating a single ${\sigma_Z}^2$.
Unlike the \cite{QFA1} approach, observed values ${F}_{clm}$ are not scaled and instead a parameter to model a condition effect $\mu_c$ is introduced.}
$\gamma_{cl}$ represents the estimated strength of genetic interaction between an $\emph{orf}\Delta$ and its query mutation counterpart.
For a multiplicative model of epistasis, an additive model is used to describe the log transformed data $f_{clm}=\log(F_{clm}+1)$, where ${F}_{clm}$ are the observed fitnesses.
We use the Benjamini-Hochberg procedure to correct for multiple testing, in order to make a fair comparison with the \citet{QFA1} approach.
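For reference, the Benjamini-Hochberg step-up procedure can be sketched as follows (the standard algorithm with made-up p-values, not the thesis analysis code):

```python
def benjamini_hochberg(pvals, q=0.05):
    # Benjamini-Hochberg step-up: sort the p-values, find the largest
    # rank k with p_(k) <= k*q/n, and reject the k smallest hypotheses.
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / n:
            k_max = rank
    rejected = [False] * n
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))
```

Only the two smallest p-values survive correction in this example, even though five are below the nominal 0.05 level.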
Inference for a frequentist random effects model can be carried out most simply with the R package ``lme4'' \citep{lme4}.
For the R code to fit the REM see Section~\ref{app:remcode} of the Appendix.
In the frequentist paradigm some parameters cannot be modelled as random effects since computational difficulties associated with large matrix computations arise with multiple random effects and very large data sets.
Similarly, a more appropriate model with a log-link function in order to model repeat level variation with a normal distribution cannot be fit, due to computational difficulties that arise with non-linear model maximum likelihood algorithms and large data sets.
Such computational difficulties cause algorithms for parameter estimation to fail to converge.
\begin{table}
\caption[Description of the random effects model]{Description of the random effects model (REM).
$c$~identifies the condition for a given $\emph{orf}\Delta$, $l$~identifies a particular $\emph{orf}\Delta$ from the gene deletion library and $m$ identifies a repeat for a given $\emph{orf}\Delta$.\label{tab:REM}}
\begin{align*}
f_{clm}&= \mu_c+Z_l+\gamma_{cl}+\varepsilon_{clm}\\
\mu_{c}&=\begin{cases}
\mu+\alpha & \text{if } c=0;\\
\mu & \text{if } c=1.
\end{cases}\qquad
&\gamma_{cl}&=\begin{cases}
0 & \text{if } c=0;\\
\gamma_{l} & \text{if } c=1.
\end{cases}\\
Z_l&\sim \mathcal{N}(0,{\sigma_Z}^2)
&\varepsilon_{clm} &\sim \mathcal{N}(0,\sigma^2)
\end{align*}
\end{table}
\end{chapter}
\section{\label{sto:intro}Introduction}
In this Chapter, fast approximations to the stochastic logistic growth model (SLGM) \citep{capo_slgm} (see Section~\ref{int:stochastic_logistic_gro}) are presented.
The SLGM is given by the following diffusion equation:
\begin{align}
\label{eq_det_sde}
dX_t&=rX_t\left(1-\frac{X_t}{K}\right)dt+\sigma X_t dW_t,
\end{align}
where $X_{t_0}=P$ and is independent of $W_t$, $t\geq t_0$.
A deterministic logistic growth model (see Section~\ref{int:logistic_gro}) is unable to describe intrinsic error within stochastic logistic growth time course data.
Consequently a deterministic model may lead to less accurate estimates of logistic growth parameters than an SDE, which can describe intrinsic noise.
So that random fluctuations present within observed yeast QFA data (Section~\ref{int:QFA}) can be accounted for as intrinsic noise, instead of being confounded with our measurement error, we are interested in using the SLGM (\ref{eq_det_sde}) instead of its deterministic counterpart (\ref{eq_det}).
Alternative stochastic logistic growth equations exist (see Section~\ref{int:stochastic_logistic_gro}) but we find (\ref{eq_det_sde}) to be the most appropriate as intrinsic noise does not tend to zero with larger population sizes.
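Although no closed form is available, trajectories of (\ref{eq_det_sde}) are easy to simulate; a minimal Euler--Maruyama sketch with illustrative parameter values:

```python
import math, random

def simulate_slgm(K, r, sigma, P, t0=0.0, t_end=5.0, dt=0.001, seed=0):
    # Euler-Maruyama discretisation of
    #   dX_t = r X_t (1 - X_t / K) dt + sigma X_t dW_t,  X_{t0} = P.
    rng = random.Random(seed)
    x = P
    path = [x]
    for _ in range(round((t_end - t0) / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x += r * x * (1.0 - x / K) * dt + sigma * x * dw
        path.append(x)
    return path

path = simulate_slgm(K=0.15, r=2.5, sigma=0.05, P=0.0001)
print(path[-1])  # fluctuates around the carrying capacity K
```

Note that the multiplicative noise term $\sigma X_t\,dW_t$ keeps fluctuations proportional to the population size, so the noise does not vanish as the culture approaches $K$.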
The SLGM (\ref{eq_det_sde}) is analytically intractable and therefore inference requires relatively slow numerical simulation.
Where fast inference is of importance such as real-time analysis or big data problems, we can use model approximations which do have analytically tractable densities, enabling fast inference.
For large hierarchical Bayesian models (see Chapter~\ref{cha:modelling_den_int}), computational time for inference is typically long, ranging from one to two weeks using a deterministic logistic growth model.
Inference for large hierarchical Bayesian models using the SLGM with relatively slow numerical simulation approaches would increase computational time considerably (computational time is roughly proportional to the number of time points), and therefore we may be interested in using approximate models that allow us to carry out fast inference.
First an approximate model developed by \citet{roman} is introduced.
Two new approximate models are then presented using the linear noise approximation (LNA) \citep{LNA,komorowski} of the SLGM.
The model proposed by \citet{roman} is found to be a zero-order noise approximation.
The approximate models considered are compared against each other for both simulated and observed logistic growth data.
Finally, the approximate models are compared to ``exact'' approaches.
\input{sections_SDE/roman}
\input{sections_SDE/LNAM}
\input{sections_SDE/LNAA}
\input{sections_SDE/application}
\end{chapter}
\section{\label{app:LNAM_sol}Linear noise approximation of the stochastic logistic growth model with multiplicative intrinsic noise solution}
First we look to solve $dZ_t$, given in equation (\ref{eq:LNAM_dz}).
We define $f(t)=-be^{v_t}=-\frac{baPe^{aT}}{bP(e^{aT}-1)+a}$ to obtain the following,
\begin{equation*}
dZ_t=f(t)Z_tdt+\sigma dW_t.
\end{equation*}
In order to match our initial conditions correctly, $Z_0=0$.
Define a new process $U_t=e^{-\int^t_{t_0}f(s)ds}Z_t$ and solve the integral,
\begin{equation*}
\int^t_{t_0}f(s)ds=\int^t_{t_0}-\frac{baPe^{aS}}{bP(e^{aS}-1)+a}ds=\log\left(\frac{a}{bP(e^{aT}-1)+a}\right),
\end{equation*}
where, $S=s-{t_0}$ and $T=t-{t_0}$.
Apply the chain rule to $U_t$,
\begin{equation*}
dU_t=e^{-\int^t_{t_0}f(s)ds}dZ_t-f(t)e^{-\int^t_{t_0}f(s)ds}Z_tdt.
\end{equation*}
Now substitute in $dZ_t=f(t)Z_tdt+\sigma dW_t$ and simplify to give
\begin{equation*}
dU_t= e^{-\int^t_{t_0}f(s)ds}\sigma dW_t.
\end{equation*}
Apply the following notation
$\phi(t)=e^{\int^t_{t_0}f(s)ds}=\frac{a}{bP(e^{aT}-1)+a}$ and $\psi(t)=\sigma$ to give
\begin{equation*}
dU_t=\phi(t)^{-1}\psi(t) dW_t.
\end{equation*}
$U_t$, has the following solution,
\begin{equation*}
U_t=U_0+\int^t_{t_0} \phi(s)^{-1} \psi(s)dW_s.
\end{equation*}
As $U_t=\phi(t)^{-1}Z_t$, $Z_t$ then has the following solution \citep{arnold2013stochastic},
\begin{equation*}
Z_t=\phi(t)\left[Z_0+\int^t_{t_0}\phi(s)^{-1}\psi(s) dW_s\right].
\end{equation*}
Finally, the distribution at time $t$ is $Z_t|Z_0\sim N(M_t,E_t)$ \citep{arnold2013stochastic}, where \\
$M_t=\phi(t)Z_0$ and
$E_t=\phi(t)^2\int^t_{t_0}\left[{\phi(s)}^{-1}\psi(s)\right]^2ds$.
\\
Further, $M_t=\frac{a}{bP(e^{aT}-1)+a}Z_0$ and
$E_t=\sigma^2\left[\frac{a}{bP(e^{aT}-1)+a}\right]^2
\int^t_{t_0}\left[
\frac{a}{bP(e^{aS}-1)+a}
\right]^{-2}ds$.\\
As
$\int^t_{t_0}\left[
\frac{a}{bP(e^{aS}-1)+a}
\right]^{-2}ds=\frac{
b^2P^2(e^{2aT}-1)
+4bP(a-bP)(e^{aT}-1)
+2aT(a-bP)^2
}{2a^3}
$,
{\fontsize{11.5}{11.5}\selectfont
\begin{align*}
E_t=&\sigma^2\left[\frac{a}{bP(e^{aT}-1)+a}\right]^2
\left[
\frac{
b^2P^2(e^{2aT}-1)
+4bP(a-bP)(e^{aT}-1)
+2aT(a-bP)^2
}{2a^3}
\right]\\
=&\sigma^2\left[\frac{
b^2P^2(e^{2aT}-1)
+4bP(a-bP)(e^{aT}-1)
+2aT(a-bP)^2
}
{2a\left(bP(e^{aT}-1)+a\right)^2}\right].
\end{align*}
}
Taking our solutions for $v_t$ (\ref{eq:LNAM_det_sol}) and $Z_t$, we can now write our solution for the LNA to the log of the logistic growth process (\ref{eq:SDE2}).\\
As $Y_t=v_t+Z_t$,
\begin{equation*}
Y_t|Y_0\sim \mathcal{N}\left(\log\left[\frac{aPe^{aT}}{bP(e^{aT}-1)+a}\right]+M_t,E_t\right).
\end{equation*}
Note: $\frac{aPe^{aT}}{bP(e^{aT}-1)+a}$ has the same functional form as the solution to the deterministic part of the logistic growth process (\ref{eq_det_sde}) and is equivalent when $\sigma=0$ (such that $a=r-\frac{\sigma^2}{2}=r$).\\
\\
Further, as $Y_t$ is normally distributed, we know $X_t=e^{Y_t}$ will be log normally distributed and
\begin{equation*}
X_t|X_0\sim \log\:\mathcal{N}(\log\left(\frac{aPe^{aT}}{bP(e^{aT}-1)+a}\right)+M_t,E_t).
\end{equation*}
Alternatively set $Q=\left(\frac{\frac{a}{b}}{P}-1\right)e^{at_{0}}$,
\begin{equation*}
X_t|X_{0}\sim \log\:\mathcal{N}(\log\left(\frac{\frac{a}{b}}{1+Qe^{-at}}\right)+M_t,E_t).
\end{equation*}
\noindent From our solution to the log process we can obtain the following transition density
\begin{align*}
\begin{split}
(Y_{t_i}|Y_{t_{i-1}}&=y_{t_{i-1}})\sim\operatorname{N}\left(\mu_{t_i},\Xi_{t_i}\right),\\
\text{where } y_{t_{i-1}}&=v_{t_{i-1}}+z_{t_{i-1}}, Q=\left(\frac{\frac{a}{b}}{P}-1\right)e^{at_{0}},\\
\mu_{t_i}&=y_{t_{i-1}}+\log\left(\frac{1+Qe^{-at_{i-1}}}{1+Qe^{-at_i}}\right)+\left[e^{-a(t_i-t_{i-1})}\frac{1+Qe^{-at_{i-1}}}{1+Qe^{-at_i}}-1\right]z_{t_{i-1}} \text{ and}\\
\Xi_{t_i}&=\sigma^2\left[\frac{4Q(e^{at_i}-e^{at_{i-1}})+e^{2at_i}-e^{2at_{i-1}}+2aQ^2(t_i-t_{i-1})}{2a(Q+e^{at_i})^2}\right].
\end{split}
\end{align*}
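Writing the transition mean equivalently as $v_{t_i}+[\phi(t_i)/\phi(t_{i-1})]\,z_{t_{i-1}}$, the density translates directly into code; the sketch below (illustrative parameter values) checks the degenerate step $t_i=t_{i-1}$, which must return $y_{t_{i-1}}$ with zero variance.

```python
import math

def lnam_transition(y_prev, t_prev, t, K, r, sigma, P, t0=0.0):
    # Mean and variance of Y_t | Y_{t_prev} = y_prev for the LNAM,
    # with a = r - sigma^2/2 and b = r/K from the log-scale SDE, and
    # Q = (a/(b*P) - 1) * exp(a*t0) as in the text.
    a = r - 0.5 * sigma ** 2
    b = r / K
    Q = (a / (b * P) - 1.0) * math.exp(a * t0)

    def v(s):
        # Deterministic part on the log scale.
        return math.log((a / b) / (1.0 + Q * math.exp(-a * s)))

    ratio = (math.exp(a * t_prev) + Q) / (math.exp(a * t) + Q)  # phi(t)/phi(t_prev)
    mu = v(t) + ratio * (y_prev - v(t_prev))
    xi = sigma ** 2 * (4.0 * Q * (math.exp(a * t) - math.exp(a * t_prev))
                       + math.exp(2 * a * t) - math.exp(2 * a * t_prev)
                       + 2.0 * a * Q ** 2 * (t - t_prev)) / (2.0 * a * (Q + math.exp(a * t)) ** 2)
    return mu, xi

y0 = math.log(0.0002)
print(lnam_transition(y0, 1.0, 1.0, K=0.15, r=2.5, sigma=0.1, P=0.0001))  # mean y0, variance 0
print(lnam_transition(y0, 1.0, 1.5, K=0.15, r=2.5, sigma=0.1, P=0.0001))
```

At $t_i=t_{i-1}$ the ratio $\phi(t_i)/\phi(t_{i-1})$ equals one and $\Xi_{t_i}$ vanishes, so the transition correctly collapses to a point mass at $y_{t_{i-1}}$.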
\clearpage
\section{\label{app:zero_ord}Zero-order noise approximation of the stochastic logistic growth model}
After obtaining (\ref{eq:SDEV}) in Section~\ref{sec:LNAM}, we can derive the RRTR logistic growth diffusion process as follows.
First our expression for $dv_t$, given in (\ref{eq:SDEV}), is approximated by setting~$\sigma^2=0$,
\begin{equation*}
dv_t=\left(r-\frac{1}{2}\sigma^2-\frac{r}{K}e^{v_t}\right)dt=\left(r-\frac{r}{K}e^{v_t}\right)dt.
\end{equation*}
We now write down an expression for $dZ_t$, where $dY_t$ is given in (\ref{eq:SDE2}) and $dZ_t=dY_t-dv_t$,
\begin{equation*}
dZ_t=
\left(r-\frac{1}{2}\sigma^2-\frac{r}{K}e^{Y_t}\right)dt+\sigma dW_t-\left(r-\frac{r}{K}e^{v_t}\right)dt.
\end{equation*}
We can then rearrange and simplify to give the following,
\begin{equation*}
dZ_t=\left(\frac{r}{K}\left[e^{v_t}-e^{Y_t}\right]-\frac{1}{2}\sigma^2\right)dt+\sigma dW_t.
\end{equation*}
We now substitute in $Y_t=v_t+Z_t$,
\begin{equation*}
dZ_t=\left(\frac{r}{K}\left[e^{v_t}-e^{v_t+Z_t}\right]-\frac{1}{2}\sigma^2\right)dt+\sigma dW_t.
\end{equation*}
We now apply a zero-order LNA by setting $e^{Z_t}=1$ to obtain,
\begin{equation*}
dZ_t=\left(\frac{r}{K}\left[e^{v_t}-e^{v_t}\right]-\frac{1}{2}\sigma^2\right)dt+\sigma dW_t.
\end{equation*}
We can then simplify to give the following,
\begin{equation}\label{eq:RRTRa}
dZ_t=-\frac{1}{2}\sigma^2 dt+\sigma dW_t.
\end{equation}
Differentiating $v_t$, given in (\ref{eq:LNAM_det_sol}), with respect to t we can obtain an alternative expression for $dv_t$,
\begin{equation}\label{eq:RRTRb}
dv_t=\frac{a(a-bP)}{bP(e^{aT}-1)+a}dt=\frac{r(K-P)}{K+P(e^{rT}-1)}dt,
\end{equation}
where $T=t-t_0$. We now write down our new expression for $Y_t$, where $dY_t=dv_t+dZ_t$, given (\ref{eq:RRTRb}) and (\ref{eq:RRTRa}),
\begin{equation*}
dY_t=\left(\frac{r(K-P)}{K+P(e^{rT}-1)}-\frac{1}{2}\sigma^2\right)dt+\sigma dW_t
\end{equation*}
or alternatively by setting $Q=\left(\frac{K}{P}-1\right)e^{rt_{0}}$,
\begin{equation*}
dY_t=\left(\frac{Qr}{e^{rt}+Q}-\frac{1}{2}\sigma^2\right)dt+\sigma dW_t.
\end{equation*}
We can then apply It\^{o}'s lemma (\ref{eq_itolemma}) \citep{ito} with the transformation ${f(t,Y_t)\equiv X_t=e^{Y_t}}$.
After deriving the following partial derivatives:
\begin{equation*}
\frac{df}{dt}=0,\qquad\frac{df}{dx}=e^{Y_t}\quad\text{and}\quad\frac{d^2f}{dx^2}=e^{Y_t},
\end{equation*}
we can obtain the following It\^{o} drift-diffusion process:
\begin{equation*}
dX_t=\frac{Qr}{e^{rt}+Q}X_tdt+\sigma X_t dW_t,
\end{equation*}
which is exactly the RRTR logistic diffusion process presented by \cite{roman}.
\clearpage
\section{\label{app:LNAA_sol}Linear noise approximation of the stochastic logistic growth model with additive intrinsic noise solution}
First we look to solve $dZ_t$, given in (\ref{eq:LNAA_dz}).
We define $f(t)=a-2bv_t$ to obtain the following,
\begin{equation*}
dZ_t=f(t)Z_tdt+\sigma v_t dW_t.
\end{equation*}
In order to match our initial conditions correctly, $Z_0=0$.
Define a new process $U_t=e^{-\int^t_{t_0}f(s)ds}Z_t$ and solve the integral,
\begin{equation*}
\int^t_{t_0}f(s)ds=\int^t_{t_0}(a-2bv_s)ds=aT-2\log\left(\frac{bP(e^{aT}-1)+a}{a}\right),
\end{equation*}
as $\int^t_{t_0}v_sds=\frac{1}{b}\log \left(\frac{bP(e^{aT}-1)+a}{a}\right)$, where $S=s-{t_0}$ and $T=t-{t_0}$.
Apply the chain rule to $U_t$,
\begin{equation*}
dU_t=e^{-\int^t_{t_0}f(s)ds}dZ_t-f(t)e^{-\int^t_{t_0}f(s)ds}Z_tdt.
\end{equation*}
Now substitute in $dZ_t=f(t)Z_tdt+\sigma v_t dW_t$ and simplify to give,
\begin{equation*}
dU_t=e^{-\int^t_{t_0}f(s)ds}\sigma v_t dW_t.
\end{equation*}
Apply the following notation $\phi(t)=e^{\int^t_{t_0}f(s)ds}=e^{aT}\left(\frac{a}{bP(e^{aT}-1)+a}\right)^2$ and $\psi(t)=\sigma v_t$ to give,
\begin{equation*}
dU_t=\phi(t)^{-1}\psi(t) dW_t.
\end{equation*}
$U_t$ has the following solution,
\begin{equation*}
U_t=U_0+\int^t_{t_0} \phi(s)^{-1} \psi(s)dW_s.
\end{equation*}
As $U_t=\phi(t)^{-1}Z_t$, $Z_t$ has the following solution \citep{arnold2013stochastic},
\begin{equation*}
Z_t=\phi(t)\left[Z_0+\int^t_{t_0}\phi(s)^{-1}\psi(s) dW_s\right].
\end{equation*}
Finally, the distribution at time $t$ is $Z_t|Z_0\sim N(M_t,E_t)$ \citep{arnold2013stochastic}, where \\
{\fontsize{11}{11}\selectfont
$M_t=\phi(t)Z_0$ and
$E_t=\phi(t)^2\int^t_{t_0}\left[{\phi(s)}^{-1}\psi(s)\right]^2ds$.
}
\begin{equation*}
M_t=e^{aT}\left(\frac{a}{bP(e^{aT}-1)+a}\right)^2Z_0
\end{equation*}
and
\begin{align*}
E_t=&\left(e^{aT}\left(\frac{a}{bP(e^{aT}-1)+a}\right)^2\right)^2
\int^t_{t_0}\left[e^{aS}\left(\frac{a}{bP(e^{aS}-1)+a}\right)^2\right]^{-2} \sigma^2 v_s^2 ds
\\
=&\sigma^2\left(e^{aT}\left(\frac{a}{bP(e^{aT}-1)+a}\right)^2\right)^2
\\
&\times\int^t_{t_0}\left[
e^{aS}\left(\frac{a}{bP(e^{aS}-1)+a}\right)^2
\right]^{-2} \left[
\frac{aPe^{aS}}{bP(e^{aS}-1)+a}
\right]^{2} ds
\\
=&\sigma^2\left(e^{aT}\left(\frac{a}{bP(e^{aT}-1)+a}\right)^2\right)^2
\\
&\times\int^t_{t_0}\left[
e^{-2aS}\left(\frac{a}{bP(e^{aS}-1)+a}\right)^{-4}
\right] \left[
\frac{aPe^{aS}}{bP(e^{aS}-1)+a}
\right]^{2} ds
\\
=&\sigma^2\left(e^{aT}\left(\frac{1}{bP(e^{aT}-1)+a}\right)^2\right)^2
\int^t_{t_0}\left[a^2P^2
\left(\frac{1}{bP(e^{aS}-1)+a}\right)^{-2}
\right] ds,
\end{align*}
as $\int^{t}_{t_{0}}\left(\frac{1}{bP(e^{aS}-1)+a}\right)^{-2}ds
=\frac{
b^2P^2(e^{2aT}-1)
+4bP(a-bP)(e^{aT}-1)
+2aT(a-bP)^2
}{2a}$,
\begin{align*}
E_t=&\frac{1}{2}\sigma^2aP^2e^{2aT}\left(\frac{1}{bP(e^{aT}-1)+a}\right)^4\\
&\times
\left[
b^2P^2(e^{2aT}-1)
+4bP(a-bP)(e^{aT}-1)
+2aT(a-bP)^2
\right].
\end{align*}
\noindent Taking our solutions for $v_t$ (\ref{eq:LNAA_det_sol}) and $Z_t$, we can obtain the following transition density
\begin{align*}
\begin{split}
(X_{t_i}|X_{t_{i-1}}=&x_{t_{i-1}})\sim N(\mu_{t_i},\Xi_{t_i}),\\
\text{where }x_{t_{i-1}}=&v_{t_{i-1}}+z_{t_{i-1}},\\
\mu_{t_i}=&x_{t_{i-1}}+\left(\frac{aPe^{aT_i}}{bP(e^{aT_i}-1)+a}\right)-\left(\frac{aPe^{aT_{i-1}}}{bP(e^{aT_{i-1}}-1)+a}\right)\\
&+\left[e^{a(t_i-t_{i-1})}\left(\frac{bP(e^{aT_{i-1}}-1)+a}{bP(e^{aT_i}-1)+a}\right)^2-1\right]z_{t_{i-1}}\text{ and}\\
\Xi_{t_i}=&\frac{1}{2}\sigma^2aP^2e^{2aT_i}\left(\frac{1}{bP(e^{aT_i}-1)+a}\right)^4\\
&\times[
b^2P^2(e^{2aT_i}-e^{2aT_{i-1}})
+4bP(a-bP)(e^{aT_i}-e^{aT_{i-1}})\\
&\;\:\:\:\:+2a(t_i-t_{i-1})(a-bP)^2
].
\end{split}
\end{align*}
\clearpage
\section{\label{app:prior_hyp}Prior hyper-parameters for Bayesian state space models}
\input{tables/SDE_priors}
\clearpage
\section{\label{app:kalman_fil}Kalman filter for the linear noise approximation of the stochastic logistic growth model with additive intrinsic noise and Normal measurement error}
To find $\pi(y_{t_{1:N}})$ for the LNAA with Normal measurement error we can use the following Kalman Filter algorithm. First we assume the following:
\begin{align*}
\theta_{{t_{i}}}|y_{1:{{t_{i}}}}&\sim \operatorname{N}(m_{t_{i}},C_{t_{i}}),\\
m_{t_{i}}&=a_{t_{i}}+R_{t_{i}}F(F^{T}R_{{t_{i}}}F+U)^{-1}[y_{t_{i}}-F^{T}a_{t_{i}}],\\
C_{t_{i}}&=R_{t_{i}}-R_{t_{i}}F(F^TR_{t_{i}}F+U)^{-1}F^{T}R_{t_{i}}
\end{align*}
and initialise with $m_0=P$ and $C_0=0$. Now suppose that,
\begin{align*}
\theta_{t_{i}}|y_{1:{t_{i-1}}}&\sim \operatorname{N}(a_{t_{i}},R_{t_{i}}),\\
a_{t_{i}}&=G_{{t_{i}}}m_{{t_{i-1}}}\\
\text{and }R_{t_{i}}&=G_{{t_{i}}}C_{{t_{i-1}}}G_{t_{i}}^T+W_{t_{i}}.
\end{align*}
The transition density (see (\ref{eq:LNAA_tran})) is as follows:
\begin{align*}
\theta_{{t_{i}}}|\theta_{{t_{i-1}}}&\sim\operatorname{N}(G_{{t_{i}}}\theta_{{t_{i-1}}},W_{t_{i}})\\
\text{or equivalently }(X_{t_i}|X_{t_{i-1}}=x_{t_{i-1}})&\sim\operatorname{N}\left(\mu_{t_i},\Xi_{t_i}\right),\text{ where }x_{t_{i-1}}=v_{t_{i-1}}+z_{t_{i-1}},\\
\theta_{t}
&=
\begin{pmatrix}
1 \\
X_{t_{i}}
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 \\
H_{\alpha,t_{i}} & H_{\beta,t_{i}}
\end{pmatrix}
\begin{pmatrix}
1 \\
X_{t_{i-1}}
\end{pmatrix}
\\&=
G_{t_{i}}\theta_{{t_{i-1}}},\\
G_{t_{i}}&= \begin{pmatrix}
1 & 0\\
H_{\alpha,t_{i}} & H_{\beta,t_{i}}
\end{pmatrix}, \quad
W_{t_{i}}= \begin{pmatrix}
0 & 0 \\
0 & \Xi_{t_i}
\end{pmatrix} \\
\text{where }H_{\alpha,t_{i}}=H_\alpha({t_{i}},{t_{i-1}})=&v_{t_i}-v_{t_{i-1}}e^{a(t_i-t_{i-1})}\left(\frac{bP(e^{aT_{i-1}}-1)+a}{bP(e^{aT_i}-1)+a}\right)^2\\
\text{and }H_{\beta,t_{i}}=&H_\beta({t_{i}},{t_{i-1}})=e^{a(t_i-t_{i-1})}\left(\frac{bP(e^{aT_{i-1}}-1)+a}{bP(e^{aT_i}-1)+a}\right)^2.
\end{align*}
The measurement error distribution is as follows:
\begin{align*}
y_{t_{i}}|\theta_{{t_{i}}}{\sim}&\operatorname{N}(F^T\theta_{{t_{i}}},U)\\
\text{or equivalently }y_{t_{i}}|\theta_{{t_{i}}}{\sim}&\operatorname{N}(X_{{t_{i}}},\sigma_{\nu}^2),\\
\text{where }
F=& \begin{pmatrix}
0 \\
1
\end{pmatrix}\text{ and }
U= \sigma_{\nu}^2.
\end{align*}
Matrix Algebra:
\begin{align*}
a_{t_{i}}=&G_{{t_{i}}}m_{{t_{i-1}}}\\
=&\begin{pmatrix}
1 & 0\\
H_{\alpha,t_{i}} & H_{\beta,t_{i}}
\end{pmatrix}
\begin{pmatrix}
1\\
m_{{t_{i-1}}}
\end{pmatrix}
=\begin{pmatrix}
1\\
H_{\alpha,t_{i}}+H_{\beta,t_{i}}m_{{t_{i-1}}}
\end{pmatrix}
\end{align*}
\begin{align*}
R_{t_{i}}&=G_{{t_{i}}}C_{{t_{i-1}}}G_{t_{i}}^T+W_{t_{i}}
\\
&=
\begin{pmatrix}
0 & 0\\
0 & {H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2
\end{pmatrix}
+
\begin{pmatrix}
0 & 0\\
0 & \Xi_{t_i}
\end{pmatrix}
=\begin{pmatrix}
0 & 0\\
0 & {H_{\beta,t_{i}}}^2 c_{{t_{i-1}}}^2+\Xi_{t_i}
\end{pmatrix}
\end{align*}
\begin{align*}
C_{t_{i-1}}&=
\begin{pmatrix}
0 & 0\\
0 & c_{{t_{i-1}}}^2
\end{pmatrix}
\end{align*}
\begin{align*}
R_{t_{i}}F(F^{T}R_{{t_{i}}}F+U)^{-1}=&
\begin{pmatrix}
0 & 0\\
0 & {H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}
\end{pmatrix}
\begin{pmatrix}
0 \\
1
\end{pmatrix}
\\
&\times
\left[
\begin{pmatrix}
0 & 1\\
\end{pmatrix}
\begin{pmatrix}
0 & 0\\
0 & {H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}
\end{pmatrix}
\begin{pmatrix}
0 \\
1
\end{pmatrix}
+\sigma_{\nu}^2
\right]^{-1}
\\
=&\left[
\begin{pmatrix}
{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i} +\sigma_{\nu}^2
\end{pmatrix}
\right]^{-1}
\begin{pmatrix}
0 \\
{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}
\end{pmatrix}
\end{align*}
\begin{align*}
m_{t_{i}}=&a_{t_{i}}+R_{t_{i}}F(F^{T}R_{{t_{i}}}F+U)^{-1}[y_{t_{i}}-F^{T}a_{t_{i}}]\\
=&\begin{pmatrix}
1\\
H_{\alpha,t_{i}}+H_{\beta,t_{i}}m_{{t_{i-1}}}
\end{pmatrix}\\
&+
\left[
\begin{pmatrix}
{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i} +\sigma_{\nu}^2
\end{pmatrix}
\right]^{-1}\\
&\times
\begin{pmatrix}
0 \\
{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}
\end{pmatrix}
\left[
y_{t_{i}}-
\begin{pmatrix}
0 & 1 \\
\end{pmatrix}
\begin{pmatrix}
1\\
H_{\alpha,t_{i}}+H_{\beta,t_{i}}m_{{t_{i-1}}}
\end{pmatrix}
\right]\\
=&\begin{pmatrix}
1\\
H_{\alpha,t_{i}}+H_{\beta,t_{i}}m_{{t_{i-1}}}+\frac{{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}}{ {H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}+\sigma_{\nu}^2}\left[y_{t_{i}}-H_{\alpha,t_{i}}-H_{\beta,t_{i}}m_{{t_{i-1}}}\right]
\end{pmatrix}
\end{align*}
\begin{align*}
C_{t_{i}}=&R_{t_{i}}-R_{t_{i}}F(F^TR_{t_{i}}F+U)^{-1}F^{T}R_{t_{i}}\\
=&\begin{pmatrix}
0 & 0\\
0 & {H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}
\end{pmatrix}\\
&-
\left[
\begin{pmatrix}
{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i} +\sigma_{\nu}^2
\end{pmatrix}
\right]^{-1}\\
&\times
\begin{pmatrix}
0\\
{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}
\end{pmatrix}
\left[
\begin{pmatrix}
0 & 1
\end{pmatrix}
\begin{pmatrix}
0 & 0\\
0 & {H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}
\end{pmatrix}
\right]
\\
=&
\begin{pmatrix}
0 & 0 \\
0 & {H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}-\frac{\left({H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}\right)^2}{{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}+\sigma_{\nu}^2}
\end{pmatrix}
\end{align*}
\\
With $m_{t_{i}}$ and $C_{t_{i}}$ for $i=1:N$, we can evaluate $a_{t_{i}}$, $R_{t_{i}}$ and $\pi(x_{t_{i}}|y_{t_{1:(i-1)}})$ for each $i$.
We are interested in $\pi(y_{t_{1:N}})=\prod^N_{i=1}\pi(y_{t_{i}}|y_{t_{1:(i-1)}})$, where
$\pi(y_{t_{i}}|y_{t_{1:(i-1)}})=\int_{x}\pi(y_{t_{i}}|x_{t_{i}})\pi(x_{t_{i}}|y_{t_{1:(i-1)}})dx_{t_{i}}$ is a tractable Gaussian integral.
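This tractability rests on the standard Gaussian convolution identity $\int \operatorname{N}(y\mid x,\sigma_{\nu}^2)\operatorname{N}(x\mid a,R)\,dx=\operatorname{N}(y\mid a,R+\sigma_{\nu}^2)$, which can be checked numerically. The sketch below is plain Python with illustrative values (the helper names are ours, not from the implementation):

```python
from math import exp, pi, sqrt

def npdf(x, m, v):
    """Density of N(m, v) evaluated at x."""
    return exp(-(x - m) ** 2 / (2 * v)) / sqrt(2 * pi * v)

def predictive_numeric(y, a, R, s2, lo=-20.0, hi=20.0, n=200_000):
    """Midpoint-rule approximation of
    pi(y | y_{1:(i-1)}) = int N(y | x, s2) N(x | a, R) dx."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n):
        x = lo + (k + 0.5) * h
        total += npdf(y, x, s2) * npdf(x, a, R)
    return total * h
```

The quadrature agrees with the closed form $\operatorname{N}(y\mid a,R+\sigma_\nu^2)$ to high accuracy, which is exactly why the marginal likelihood recursion below stays Gaussian.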
Finally,
\begin{align*}
\log\pi(y_{{t_{1:N}}})&=\sum^N_{i=1}\log\pi(y_{t_{i}}|y_{{t_{1:(i-1)}}})\\
&=\sum^N_{i=1}\left[-\log\left({\sqrt{2\pi(\sigma_{f}^2+\sigma_{g}^2)}}\right){-\frac{(\mu_f-\mu_g)^2}{2(\sigma_{f}^2+\sigma_{g}^2)}}\right],
\end{align*}
\begin{align*}
\text{where }\mu_f-\mu_g=&y_{t_{i}}-a_{t_{i}}=y_{t_{i}}-H_{\alpha,t_{i}}-H_{\beta,t_{i}}m_{t_{i-1}}\\
\text{and }\sigma_{f}^2+\sigma_{g}^2=&\sigma_{\nu}^2+R_{t_{i}}=\sigma_{\nu}^2+{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}.
\end{align*}
\clearpage
\underline{Procedure}\\
\\
1. Set $i=1$. Initialize $m_0=P$ and $C_0=0$.\\
\\
2. Evaluate and store the following log likelihood term:
\begin{align*}
\log\pi(y_{t_{i}}|y_{t_{1:(i-1)}})=&\left[-\log\left({\sqrt{2\pi(\sigma_{f}^2+\sigma_{g}^2)}}\right){-\frac{(\mu_f-\mu_g)^2}{2(\sigma_{f}^2+\sigma_{g}^2)}}\right],\\
\text{where }\mu_f-\mu_g=&y_{t_{i}}-H_{\alpha,t_{i}}-H_{\beta,t_{i}}m_{{t_{i-1}}}
\text{ and }\sigma_{f}^2+\sigma_{g}^2=\sigma_{\nu}^2+{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}.
\end{align*}
3. Create and store both $m_{t_i}$ and $c_{t_{i}}^2$,
\begin{align*}
\text{where }m_{t_{i}}=&H_{\alpha,t_{i}}+H_{\beta,t_{i}}m_{t_{i-1}}+\frac{{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}}{ {H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}+\sigma_{\nu}^2}\left[y_{t_{i}}-H_{\alpha,t_{i}}-H_{\beta,t_{i}}m_{{t_{i-1}}}\right]\\
\text{and }c_{{t_{i}}}^2=&{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}-\frac{\left({H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}\right)^2}{{H_{\beta,t_{i}}}^2c_{{t_{i-1}}}^2+\Xi_{t_i}+\sigma_{\nu}^2}.
\end{align*}
\\
4. Increment $i$, setting $i=i+1$, and repeat steps 2-3 until $\log\pi(y_{t_{N}}|y_{t_{1:(N-1)}})$ is evaluated.\\
\\
5. Calculate the sum:
\begin{equation*}
\log\pi(y_{{t_{1:N}}})=\sum^N_{i=1}\log\pi(y_{t_{i}}|y_{{t_{1:(i-1)}}}).
\end{equation*}
\section{\label{sec:SDE_application}Simulation and Bayesian inference for the stochastic logistic growth model and approximations}
To compare the accuracies of each of the three approximate models in representing the SLGM, we first compare simulated forward trajectories from the RRTR, LNAM and LNAA with simulated forward trajectories from the SLGM (Figure~\ref{4nonu}).
We use the Euler-Maruyama method \citep{embook} (see Section~\ref{lit:em}) with very fine discretisation to give arbitrarily exact simulated trajectories from each SDE.
The LNAA and LNAM trajectories are visually indistinguishable from the SLGM (Figures~\ref{4nonu} A, C \& D).
On the other hand, population sizes simulated with the RRTR display large deviations from the mean as the population approaches its stationary phase (Figures~\ref{4nonu}A \& B).
Figure~\ref{4nonu}E further highlights the increase in variation as the population approaches stationary phase for simulated trajectories of the RRTR, in contrast to the SLGM and LNA models.
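A minimal Euler-Maruyama simulator for the SLGM, $dX_t=rX_t(1-X_t/K)dt+\sigma X_t dW_t$, can be sketched as follows (plain Python; the function name and parameter values are illustrative, not those of Table~\ref{app:sde_val_fur}):

```python
from math import sqrt
import random

def slgm_euler_maruyama(P, r, K, sigma, t_max, n_steps, seed=1):
    """Simulate one SLGM path with the Euler-Maruyama scheme:
    X_{k+1} = X_k + r X_k (1 - X_k/K) dt + sigma X_k dW,  dW ~ N(0, dt)."""
    rng = random.Random(seed)
    dt = t_max / n_steps
    x = P
    path = [x]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, sqrt(dt))
        x = x + r * x * (1 - x / K) * dt + sigma * x * dW
        path.append(x)
    return path
```

With $\sigma=0$ and a fine discretisation the path reproduces the deterministic logistic solution, which is a convenient correctness check before adding noise.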
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_sde/4nonuALT}
\caption[Forward trajectories for the stochastic logistic growth model and approximations]{\label{4nonu}
Forward trajectories (No. of simulations=100) for the stochastic logistic growth model and approximations.
See Table~\ref{app:sde_val_fur} for parameter values.
A) The stochastic logistic growth model (SLGM).
B) The \cite{roman} (RRTR) approximation.
C) The linear noise approximation with multiplicative intrinsic noise (LNAM).
D) The linear noise approximation with additive intrinsic noise (LNAA).
E) Standard deviations of simulated trajectories over time for the SLGM (black), RRTR (red), LNAM (green) and LNAA (blue).
}
\end{figure}
\clearpage
\subsection{\label{sec:simulation_stu}Bayesian parameter inference with approximate models}
To compare the quality of parameter inference using each of these approximations we simulated synthetic time-course data from the SLGM and combined this with either Log-normal or Normal measurement error.
Carrying out Bayesian inference with broad priors (see (\ref{app:LNAM_sta_spa_mod}) and (\ref{app:LNAA_sta_spa_mod})) we compared the parameters recovered using each approximation with those used to generate the synthetic dataset.
The synthetic time-course datasets consist of 27 time points generated using the Euler-Maruyama method with very fine intervals \citep{embook}.
We formulate our inference problem as a dynamic linear state space model \citep{dynamicmodels}.
The advantage of a state space formulation is that we are then able to build a Kalman filter to carry out fast parameter inference.
We can take advantage of a linear Gaussian structure and construct a Kalman filter recursion for marginal likelihood computation (Appendix~\ref{app:kalman_fil}).
By choosing to match the measurement error structure to the intrinsic error of our models we can build a linear Gaussian structure.
We therefore assume Log-normal (multiplicative) error for the RRTR and LNAM, and for the LNAA we assume Normal (additive) measurement error.
Dependent variable $y_{t_i}$ and independent variable $\{t_{i},i=1,...,N\}$ are data input to the model (where $t_i$ is the time at point $i$ and $N$ is the number of time points).
$X_t$ is the state process, describing the population size.
The state space model for the RRTR and LNAM is as follows:
\begin{align}
\log(y_{t_i}) &\sim \operatorname{N}(X_{t_i},{\nu}^{2} ),\notag\\
(X_{t_i}|X_{t_{i-1}}=x_{t_{i-1}})&\sim\operatorname{N}\left(\mu_{t_i},\Xi_{t_i}\right), \text{ where } x_{t_{i}}=v_{t_{i}}+z_{t_{i}},\label{app:LNAM_sta_spa_mod}
\end{align}
$\mu_{t_i}$ and $\Xi_{t_i}$ are given by (\ref{eq:RRTR_tran}) and (\ref{eq:LNAM_tran}) for the RRTR and LNAM respectively. Priors are as follows:
\begin{align*}
\log X_0 \equiv \log~P &\sim \operatorname{N}({\mu}_P,{\tau_P}^{-1}),& \quad
\log~K &\sim \operatorname{N}({\mu}_K,{\tau_K}^{-1}),& \quad
\log~r &\sim \operatorname{N}({\mu}_r,{\tau_r}^{-1}),\\
\log~\nu^{-2} &\sim \operatorname{N}({\mu}_\nu,{\tau_\nu}^{-1}),&
\log~{\sigma^{-2}} &\sim \operatorname{N}({\mu}_\sigma,{\tau_\sigma}^{-1})I_{[1,\infty)}.&
\end{align*}
Bayesian inference is carried out with broad priors such that estimated parameter values are not heavily influenced by our choice.
See Table~\ref{table:SDE_priors} for prior hyper-parameter values.
Log-normal prior distributions are chosen to ensure positive logistic growth parameters and precision parameters are strictly positive.
Our prior for $\log~{\sigma^{-2}}$ is truncated below 1 to avoid unnecessary exploration of extremely low probability regions (which could be caused by problems identifying $\nu$, for example when $\log~\nu^{-2}$ takes large values) and to ensure that intrinsic noise does not dominate the process.
Our choice of 1 for the truncation threshold is made by observing forward simulations from our processes and choosing a value for $\log~{\sigma^{-2}}$ where intrinsic noise is so large that the deterministic part of the process is masked, consequently making the LNA a bad approximation.
We also find that truncating $\log~{\sigma^{-2}}$ is preferable to truncating $\log~\nu^{-2}$, as the latter does not alleviate the identifiability problem without being very restrictive on the measurement error structure.
The state space model for the LNAA is as follows:
\begin{align}
y_{t_i} &\sim \operatorname{N}(X_{t_i},{\nu}^{2} ),\notag\\
(X_{t_i}|X_{t_{i-1}}=x_{t_{i-1}})&\sim\operatorname{N}\left(\mu_{t_i},\Xi_{t_i}\right)
, \text{ where } x_{t_{i}}=v_{t_{i}}+z_{t_{i}},\label{app:LNAA_sta_spa_mod}
\end{align}
$\mu_{t_i}$ and $\Xi_{t_i}$ are given by (\ref{eq:LNAA_tran}).
Priors are as in (\ref{app:LNAM_sta_spa_mod}).
Measurement error for the observed values is Normal so that we have a linear Gaussian structure.
The state space models in (\ref{app:LNAM_sta_spa_mod}) and (\ref{app:LNAA_sta_spa_mod}) have different measurement error structures.
So that a fair comparison can be made between (\ref{app:LNAM_sta_spa_mod}) and (\ref{app:LNAA_sta_spa_mod}), we choose our priors so that the marginal moments for the measurement error of the two models are not too dissimilar, particularly at the earliest stage, where most growth is observed.
To see how the inference from our approximate models compares with slower ``exact'' models, we consider Euler-Maruyama approximations \citep{euler_maruyama} of (\ref{eq_det_sde}) and of the log transformed process, using fine intervals.
We use the approach of \citet{darren2005} to carry out inference for our ``exact'' models.
A single site update algorithm is used to update model parameters and the Euler-Maruyama approximation of the latent process in turn.
Given these approximations we can construct a state space model for an ``exact'' SLGM with Log-normal measurement error (SLGM+L) and similarly for the SLGM with Normal measurement error (SLGM+N), priors are as in (\ref{app:LNAM_sta_spa_mod}).
Our inference makes use of a Kalman filter to integrate out the state process.
The Kalman filter allows for fast inference compared to slow numerical simulation approaches that impute all states.
The algorithm for our approximate models is the Metropolis-within-Gibbs sampler with a symmetric proposal \citep{gamerman}. Full-conditionals are sampled in turn to give samples from the joint posterior distribution:
\begin{equation*}
\pi{(K,r,P,\sigma,\nu,X_{t_{1:N}},y_{t_{1:N}})},
\end{equation*}
where $X_{t_{1:N}}$ is the latent process and $y_{t_{1:N}}$ is the observed data, for $N$ observed data points.
The Metropolis-within-Gibbs sampler algorithm is as follows:\\\\
1) Initialise counter $i=1$ and parameters $K_{(0)},r_{(0)},\sigma_{(0)},P_{(0)},\nu_{(0)}$\\
\\
2) Simulate $K_{(i)}$ from $K\sim\pi{(K|\nu_{(i-1)},r_{(i-1)},\sigma_{(i-1)},P_{(i-1)},y_{t_{1:N}})}$\\
\\
3) Simulate $r_{(i)}$ from $r\sim\pi{(r|\nu_{(i-1)},K_{(i)},\sigma_{(i-1)},P_{(i-1)},y_{t_{1:N}})}$\\
\\
4) Simulate $\sigma_{(i)}$ from $\sigma\sim\pi{(\sigma|\nu_{(i-1)},K_{(i)},r_{(i)},P_{(i-1)},y_{t_{1:N}})}$\\
\\
5) Simulate $P_{(i)}$ from $P\sim\pi{(P|\nu_{(i-1)},K_{(i)},r_{(i)},\sigma_{(i)},y_{t_{1:N}})}$\\
\\
6) Simulate $\nu_{(i)}$ from $\nu\sim\pi{(\nu|K_{(i)},r_{(i)},\sigma_{(i)},P_{(i)},y_{t_{1:N}})}$\\
\\
7) Repeat steps 2-6 until the sample size required is obtained.\\
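The single-site scheme above can be written generically; the sketch below uses a toy log-posterior in place of the Kalman-filter likelihood (plain Python; all names are illustrative):

```python
from math import exp
import random

def metropolis_within_gibbs(logpost, init, step, n_iter, seed=1):
    """Metropolis-within-Gibbs with symmetric Gaussian random-walk proposals,
    as in steps 1-7 above: each coordinate is updated in turn, accepting
    against the joint log-posterior (equivalent to targeting the full
    conditional, since the other coordinates are held fixed)."""
    rng = random.Random(seed)
    theta = list(init)
    lp = logpost(theta)
    samples = []
    for _ in range(n_iter):
        for j in range(len(theta)):              # steps 2-6, one site at a time
            prop = list(theta)
            prop[j] += rng.gauss(0.0, step[j])   # symmetric proposal
            lp_prop = logpost(prop)
            if rng.random() < exp(min(0.0, lp_prop - lp)):
                theta, lp = prop, lp_prop        # accept
        samples.append(list(theta))              # step 7: collect the sweep
    return samples
```

Because each proposal is symmetric, the acceptance probability reduces to the ratio of posterior densities, as in a standard Metropolis step.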
We find the mixing for our algorithm is improved when we have intermediate steps between sampling from the $\sigma_{(i)}$ and $\nu_{(i)}$ full conditionals.
Each update in our algorithm is accomplished by a Metropolis-Hastings step using a Kalman filter.
Acceptance ratios are calculated for each update during a burn-in period.
To improve the computational speed of our inference, further research may involve using an algorithm where we jointly update our parameters.
Posterior means are used to obtain point estimates and standard deviations for describing variation of inferred parameters.
The Heidelberger and Welch convergence diagnostic \citep{Heidelberger} is used to determine whether convergence has been achieved for all parameters.
Computational times for convergence of our MCMC schemes (code is available at \url{https://github.com/jhncl/LNA.git}) can be compared using estimates for the minimum effective sample size per second (ESS\textsubscript{min}/sec) \citep{coda}.
The average ESS\textsubscript{min}/sec of our approximate model (coded in C) is $\sim$100 and ``exact'' model $\sim$1 (coded in JAGS \citep{rjags} with 15 imputed states between time points, chosen to maximise ESS\textsubscript{min}/sec).
We find that our C code is typically twice as fast as the simple MCMC scheme used by JAGS, indicating that our inference is ${\sim}50\times$ faster than an ``exact'' approach.
A more efficient ``exact'' approach could speed this up further, say by another factor of 5, but our approximate approach would still be at least an order of magnitude faster.
We use a burn-in of 600,000 and a thinning of 4,000 to obtain a final posterior sample size of 1,000 for MCMC convergence of all our models.
To compare the approximate models' ability to recover parameters from the SLGM with simulated Log-normal measurement error, we simulate data and carry out Bayesian inference. Figure~\ref{simlog} shows that all three approximate models capture the synthetic time-course well, but that the RRTR is the least representative, with the largest amount of drift occurring at the saturation stage, a property not found in the SLGM or the two new LNA models.
Comparing forward trajectories with measurement error (Figure~\ref{simlog}), the ``exact'' model is visually similar to all our approximate models, but least similar to the RRTR.
Further, Table~\ref{app:sde_val_fur} demonstrates that parameter posterior means are close to the true values and that standard deviations are small for all models and each parameter set.
By comparing posterior means and standard deviations to the true values, Table~\ref{app:sde_val_fur} shows that all our models are able to recover the three different parameter sets considered.
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_sde/simlog}
\caption[Forward trajectories of logistic growth models and stochastic logistic data with Log-normal measurement error]{\label{simlog}
Forward trajectories with measurement error for the stochastic logistic growth model and approximations, simulated from parameter posterior samples (sample size=1000).
Model fitting is carried out on SLGM forward trajectories with Log-normal measurement error (black), for three different sets of parameters (see Table~\ref{app:sde_val_fur}). See (\ref{app:LNAM_sta_spa_mod}) or (\ref{app:LNAA_sta_spa_mod}) for model and Table~\ref{table:SDE_priors} for prior hyper-parameter values.
Each row of figures corresponds to a different time course data set, simulated from a different set of parameter values, see Table~\ref{app:sde_val_fur}.
Each column of figures corresponds to a different model fit:
A), E) \& I) SLGM+L (orange).
B), F) \& J) RRTR model with lognormal error (red).
C), G) \& K) LNAM model with lognormal error (green).
D), H) \& L) LNAA model with normal error (blue).
See Table~\ref{app:sde_val_fur} for parameter posterior means and true values.
}
\end{figure}
To compare the approximations to the SLGM with simulated Normal measurement error, we simulate data and carry out Bayesian inference.
Figure~\ref{sim} shows that, of our approximate models, only the LNAA can appropriately represent the simulated time-course: both models with Log-normal measurement error, the RRTR and LNAM, fail to bound the data closely.
Comparing forward trajectories with measurement error (Figure~\ref{sim}), the ``exact'' model is most visually similar to the LNAA, which shares the same measurement error structure.
Further, Table~\ref{app:sde_val_fur} demonstrates that only our models with Normal measurement error have posterior means close to the true values and that standard deviations are larger in the models with Log-normal measurement error.
Observing the posterior means for $K$ for each parameter set (Table~\ref{app:sde_val_fur}), we can see that the RRTR has the largest standard deviations and that, of the approximate models, its posterior means are furthest from both the true values and the ``exact'' model posterior means.
Comparing LNA models to the ``exact'' models with matching measurement error, we can see in Table~\ref{app:sde_val_fur} that they share similar posterior means and only slightly larger standard deviations.
Example posterior diagnostics given in Figure~\ref{app:diag_sim_n}, demonstrate that posteriors are distributed tightly around true values for our LNAA and data from the SLGM with Normal measurement error.
\begin{figure}[h!]
\centering
\includegraphics[width=13cm]{img_sde/diag_sim_n}
\caption[Convergence diagnostics for the linear noise approximation of the stochastic logistic growth model with additive intrinsic noise]{
Convergence diagnostics for the linear noise approximation of the stochastic logistic growth model with additive intrinsic noise (LNAA) fit to simulated stochastic logistic growth data with Normal measurement error, see Figure~\ref{sim}D.
Trace, auto-correlation and density plots for the (LNAA) parameter posteriors (sample size = 1000, thinning interval = 4000).
Posterior density (black), prior density (dashed blue) and true parameter values (red) are shown in the right hand column.\label{app:diag_sim_n}
}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_sde/sim}
\caption[Forward trajectories of logistic growth models and stochastic logistic data with Normal measurement error]{\label{sim}
Forward trajectories with measurement error, simulated from inferred parameter posterior samples (sample size=1000).
Model fitting is carried out on SLGM forward trajectories with Normal measurement error (black), for three different sets of parameters (see Table~\ref{app:sde_val_fur}).
See (\ref{app:LNAM_sta_spa_mod}) or (\ref{app:LNAA_sta_spa_mod}) for model and Table~\ref{table:SDE_priors} for prior hyper-parameter values.
Each row of figures corresponds to a different time course data set, simulated from a different set of parameter values, see Table~\ref{app:sde_val_fur}.
Each column of figures corresponds to a different model fit:
A), E) \& I) SLGM+N (pink).
B), F) \& J) RRTR model with lognormal error (red).
C), G) \& K) LNAM model with lognormal error (green).
D), H) \& L) LNAA model with normal error (blue).
See Table~\ref{app:sde_val_fur} for parameter posterior means and true values.
}
\end{figure}
\input{tables/SDE_values}
\subsection{\label{sec:application_obs}Application to observed yeast data}
We now consider which diffusion equation model can best represent observed microbial population growth curves taken from a Quantitative Fitness Analysis (QFA) experiment (Section~\ref{int:QFA}) \citep{QFA1,jove}, see Figure~\ref{real}.
The data consists of scaled cell density estimates over time for budding yeast \emph{Saccharomyces cerevisiae}.
Independent replicate cultures are inoculated on plates and photographed over a period of 5 days.
The images captured are then converted into estimates of integrated optical density (IOD, which we assume are proportional to cell population size), by the software package Colonyzer \citep{Colonyzer}.
The dataset chosen for our model fitting is a representative set of 10 time-courses, each with 27 time points.
Once we have chosen the most appropriate stochastic model we can then look to apply our chosen model to logistic growth data from the QFA screens used throughout Chapter~\ref{cha:case_stu} in the future.
As in Figure~\ref{sim}, we see that the LNAA model is the only approximation that can appropriately represent the time-course and that both the RRTR and LNAM fail to bound the data as tightly as the LNAA (Figure~\ref{real}).
Our two ``exact'' models are visually similar to our approximate models with the same measurement error, with the SLGM+N most similar to the LNAA and the SLGM+L to the RRTR and LNAM. This is as expected due to matching measurement error structures.
Table~\ref{app:sde_val_fur} summarises parameter estimates for the observed yeast data using each model. The variation in the LNAA model parameter posteriors is much smaller than the RRTR and LNAM, indicating a more appropriate model fit.
Comparing the LNA models and ``exact'' models with matching measurement error, we can see in Table~\ref{app:sde_val_fur} that they share similar posterior means and standard deviations for all parameters and in particular, they are very similar for both $K$ and $r$, which are important phenotypes for calculating fitness \citep{QFA1}.
\begin{figure}[h!]
\centering
\includegraphics[width=14cm]{img_sde/real}
\caption[Forward trajectories of logistic growth models and observed yeast data]{\label{real}
Forward trajectories with measurement error, simulated from inferred parameter posterior samples (sample size=1000).
Model fitting is carried out on observed yeast time-course data (black).
See (\ref{app:LNAM_sta_spa_mod}) or (\ref{app:LNAA_sta_spa_mod}) and Table~\ref{table:SDE_priors} for prior hyper-parameter values.
See Table~\ref{app:sde_val_fur} for parameter posterior means.
A) SLGM+N (pink).
B) SLGM+L (orange).
C) RRTR model with Log-normal error (red).
D) LNAM model with Log-normal error (green).
E) LNAA model with Normal error (blue).
}
\end{figure}
\input{tables/MSE}
Table~\ref{tab:MSE} compares the quality of parameter inference for 10 observed yeast time-courses with each approximate model. Mean squared errors (MSE) for 1000 posterior-sample forward simulations are calculated for each yeast time course and summed to give a total MSE for each model.
It is clear that the RRTR is the worst overall representation of the 10 yeast time courses, with the highest total MSE and a much larger total MSE than the ``exact'' SLGM+L. It is interesting to see there is a very similar total MSE for the SLGM+L and LNAM, and similarly for the SLGM+N and LNAA, demonstrating that our approximations perform well.
Once the most appropriate approximate stochastic model is chosen, we can incorporate the SDE within our Bayesian hierarchical models described in Section~\ref{cha:modelling_den_int}.
Currently the Bayesian hierarchical models described in Section~\ref{cha:modelling_den_int} have long computational times, $\sim$2 weeks for the joint hierarchical model (JHM) ($\sim$1 week with further optimisations) and so extending these models using slow numerical methods would lead to prohibitively slow computational times that we estimate to take $\sim$3-6 months (with 4294 \emph{orf}$\Delta$s, $\sim$8 repeats and $\sim$27 time points).
Inference using the Kalman filter will allow the Bayesian hierarchical models to carry out stochastic modelling at a greatly reduced computational time ($\sim$10$\times$ faster) compared to an arbitrarily exact approach.
\section{\label{sec:Introduction}Introduction}
To account for uncertainty about processes affecting population growth which are not explicitly described by the deterministic logistic model, we can include a term describing intrinsic noise and consider an SDE version of the model. Here we extend the ODE in (\ref{eq_det}) by adding a term representing multiplicative intrinsic noise (\ref{eq_det_sde_2}) to give a model which we refer to as the stochastic logistic growth model (SLGM), which was first introduced by \citet{capo_slgm},
\begin{align}
\label{eq_det_sde_2}
dX_t&=rX_t\left(1-\frac{X_t}{K}\right)dt+\sigma X_t dW_t,
\end{align}
where $X_{t_0}=P$ is independent of the Wiener process $W_t$, $t\geq t_0$.
The Wiener process (or standard Brownian motion) is a continuous-time stochastic process, see Section~\ref{lit:sde}.
The Kolmogorov forward equation has not been solved for (\ref{eq_det_sde_2}) (or for any similar formulation of a logistic SDE) and so no explicit expression for the transition density is available.
\cite{roman} introduce a diffusion process approximating the SLGM with a transition density that can be derived explicitly (see Section~\ref{sec:roman}).
Alternative stochastic logistic growth models to (\ref{eq_det_sde_2}) are available.
\citet{allen} derives the stochastic logistic growth models given in (\ref{eq_allen1})~and~(\ref{eq_allen2}) from Markov jump processes \citep{allen,wilkinson2012stochastic}.
Firstly,
\begin{align}
\label{eq_allen1}
dX_t&=rX_t\left(1-\frac{X_t}{K}\right)dt+\sqrt{rX_t}dW_t,
\end{align}
where $X_{t_0}=P$ and is independent of $W_t$, $t\geq t_0$. Secondly,
\begin{align}
\label{eq_allen2}
dX_t&=rX_t\left(1-\frac{X_t}{K}\right)dt+\sqrt{rX_t\left(1+\frac{X_t}{K}\right)}dW_t,
\end{align}
where $X_{t_0}=P$ and is independent of $W_t$, $t\geq t_0$.
Note that (\ref{eq_det_sde_2}), (\ref{eq_allen1}) and (\ref{eq_allen2}) are not equivalent to each other.
(\ref{eq_allen1})~and~(\ref{eq_allen2}) are able to describe the discreteness of the Markov jump processes that they approximate (or demographic noise).
Demographic noise becomes less significant for large population sizes, therefore (\ref{eq_allen1})~and~(\ref{eq_allen2}) describe more deterministic growth curves when population size is large (i.e. large carrying capacity $K$).
Equation~\ref{eq_det_sde_2} introduces an additional parameter $\sigma$, unlike (\ref{eq_allen1})~and~(\ref{eq_allen2}).
The additional parameter in (\ref{eq_det_sde_2}) allows us to tune the amount of noise in the system that is not directly associated with the noise due to the discreteness of the process (demographic noise).
The additional parameter also gives (\ref{eq_det_sde_2}) greater flexibility for modelling intrinsic noise than (\ref{eq_allen1})~and~(\ref{eq_allen2}).
As the diffusion terms of (\ref{eq_allen1})~and~(\ref{eq_allen2}) are functions of the logistic growth parameters, for large populations (\ref{eq_allen1})~and~(\ref{eq_allen2}) can confound intrinsic noise with estimates of logistic growth parameters $r$ and $K$.
For the above reasons, the SLGM in (\ref{eq_det_sde_2}) is the most appropriate model for estimating logistic growth parameters of large populations, as intrinsic noise does not tend to zero with larger population sizes, unlike (\ref{eq_allen1})~and~(\ref{eq_allen2}).
\section{\label{sec:LNAA}Linear noise approximation with additive noise}
As in Section~\ref{sec:LNAM}, we start from the SLGM, given in (\ref{eq_det_sde}).
Without first log transforming the process, the LNA will lead to a worse approximation to the diffusion term of the SLGM, but we will see in the coming sections that there are nevertheless advantages.
We separate the process $X_t$ into a deterministic part $v_t$ and a stochastic part $Z_t$ so that $X_t=v_t+Z_t$ and consequently $dX_t=dv_t+dZ_t$.
We choose $v_t$ to be the solution of the deterministic part of (\ref{eq_det_sde}):
\begin{equation}
dv_t=\left(rv_t-\frac{r}{K}v_t^2\right)dt.\label{eq:SDEV2}
\end{equation}
We now redefine our previous notation as follows: $a=r$ and $b=\frac{r}{K}$.
Equation~\ref{eq:SDEV2} is then solved for $v_t$:
\begin{equation}\label{eq:LNAA_det_sol}
v_t=\frac{aPe^{aT}}{bP(e^{aT}-1)+a}.
\end{equation}
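As a quick check, (\ref{eq:LNAA_det_sol}) can be verified numerically against (\ref{eq:SDEV2}) by comparing a central finite difference of $v_t$ with $av_t-bv_t^2$ (plain Python sketch, illustrative parameter values):

```python
from math import exp

def v_logistic(t, a, b, P, t0=0.0):
    """Closed-form logistic solution v_t = a P e^{aT} / (b P (e^{aT} - 1) + a),
    with T = t - t0 (here a = r, b = r/K)."""
    T = t - t0
    return a * P * exp(a * T) / (b * P * (exp(a * T) - 1) + a)
```

The finite-difference residual is at the level of numerical round-off, confirming that $v_t$ solves the deterministic ODE with $v_{t_0}=P$.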
We now write down an expression for $dZ_t$, where $dZ_t=dX_t-dv_t$:
\begin{equation*}
dZ_t=\left(aX_t-bX_t^2\right)dt+\sigma X_t dW_t-\left(av_t-bv_t^2\right)dt.
\end{equation*}
We then substitute in $X_t=v_t+Z_t$ and simplify the expression to give
\begin{equation*}
dZ_t=\left[(a-2bv_t)Z_t-bZ_t^2\right]dt+\left( \sigma v_t +\sigma Z_t\right) dW_t.
\end{equation*}
As the SDE for $Z_t$ is non-linear it cannot be solved explicitly, so we use the LNA (see Section~\ref{lit:LNA}) to obtain a linear SDE that we can solve explicitly.
We now apply the LNA, setting the higher-order terms $-bZ_t^2\,dt$ and $\sigma Z_t\, dW_t$ to zero, to obtain
\begin{equation}\label{eq:LNAA_dz}
dZ_t=(a-2bv_t)Z_tdt+\sigma v_t dW_t.
\end{equation}
This process is a particular case of the Ornstein-Uhlenbeck process, which can be solved.
The transition density for $X_t$ (derivation in Appendix~\ref{app:LNAA_sol}) is then
\begin{align}
\begin{split}\label{eq:LNAA_tran}
(X_{t_i}|X_{t_{i-1}}&=x_{t_{i-1}})\sim \operatorname{N}(\mu_{t_i},\Xi_{t_i}),\\
\text{where } x_{t_{i-1}}&=v_{t_{i-1}}+z_{t_{i-1}},\\
\mu_{t_i}&=x_{t_{i-1}}+\left(\frac{aPe^{aT_i}}{bP(e^{aT_i}-1)+a}\right)-\left(\frac{aPe^{aT_{i-1}}}{bP(e^{aT_{i-1}}-1)+a}\right)\\
&+e^{a(t_i-t_{i-1})}\left(\frac{bP(e^{aT_{i-1}}-1)+a}{bP(e^{aT_i}-1)+a}\right)^2Z_{t_{i-1}}\text{ and}\\
\Xi_{t_i}&=\frac{1}{2}\sigma^2aP^2e^{2aT_i}\left(\frac{1}{bP(e^{aT_i}-1)+a}\right)^4\\
&\times\big[
b^2P^2(e^{2aT_i}-e^{2aT_{i-1}})
+4bP(a-bP)(e^{aT_i}-e^{aT_{i-1}})\\
&\quad+2a(t_i-t_{i-1})(a-bP)^2
\big].
\end{split}
\end{align}
The LNA of the SLGM with additive intrinsic noise (LNAA) can then be written as
\begin{align*}
dX_t=\left[b{v_t}^2+\left(a-2bv_t\right)X_t\right]dt+\sigma v_t dW_t,
\end{align*}
where $P=X_{t_0}$ and is independent of $W_t$, $t\geq t_0$.
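A useful sanity check on the variance in (\ref{eq:LNAA_tran}): over a short interval, $\Xi_{t_i}$ should grow like $\sigma^2v_t^2(t_i-t_{i-1})$, the instantaneous variance of the noise term $\sigma v_t\,dW_t$ in (\ref{eq:LNAA_dz}). The sketch below transcribes the expression and checks this limit (plain Python, illustrative parameter values, $t_0=0$):

```python
from math import exp

def xi_lnaa(t_prev, t_cur, a, b, P, sigma, t0=0.0):
    """Transition variance Xi_{t_i} of the LNAA, transcribed from the
    expression above, with T_i = t_i - t0."""
    Ti, Tp = t_cur - t0, t_prev - t0
    d = b * P * (exp(a * Ti) - 1) + a
    bracket = (b * b * P * P * (exp(2 * a * Ti) - exp(2 * a * Tp))
               + 4 * b * P * (a - b * P) * (exp(a * Ti) - exp(a * Tp))
               + 2 * a * (t_cur - t_prev) * (a - b * P) ** 2)
    return 0.5 * sigma ** 2 * a * P ** 2 * exp(2 * a * Ti) / d ** 4 * bracket
```

The variance vanishes when $t_i=t_{i-1}$ and its leading-order growth matches $\sigma^2 v_t^2$, consistent with the Ornstein-Uhlenbeck solution.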
\section{\label{sec:LNAM}Linear noise approximation with multiplicative noise}
We now take a different approach to approximating the SLGM (\ref{eq_det_sde}), which will turn out to be closer to the exact solution of the SLGM than the RRTR (\ref{eq_sde}).
Starting from the original model (\ref{eq_det_sde}),
we apply It\^{o}'s lemma \citep{ito,sdebook}:
\begin{equation}\label{eq_itolemma}
df(t,X_t)=\frac{\partial f}{\partial t}dt+\mu\frac{\partial f}{\partial x}dt+\frac{1}{2}\sigma^2\frac{\partial^2f}{\partial x^2}dt+\sigma\frac{\partial f}{\partial x}dW_t,
\end{equation}
with the transformation $f(t,X_t)\equiv Y_t=\log X_t$.
After deriving the following partial derivatives:
\begin{equation*}
\frac{\partial f}{\partial t}=0,\qquad\frac{\partial f}{\partial x}=\frac{1}{X_t}\quad\text{and}\quad\frac{\partial^2f}{\partial x^2}=-\frac{1}{X_t^2},
\end{equation*}
we can obtain the following It\^{o} drift-diffusion process:
\begin{align}
\label{eq:SDE2}
dY_t=\left(r-\frac{1}{2}\sigma^2-\frac{r}{K}e^{Y_t}\right)dt+\sigma dW_t.
\end{align}
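The drift in (\ref{eq:SDE2}) follows mechanically from (\ref{eq_itolemma}) with drift $\mu=rX_t(1-X_t/K)$ and diffusion coefficient $\sigma X_t$, and can be confirmed numerically (plain Python sketch, illustrative values):

```python
def ito_log_drift(x, r, K, sigma):
    """Drift of Y = log X via Ito's lemma: mu/x + 0.5 * (sigma*x)^2 * (-1/x^2),
    using df/dx = 1/x and d2f/dx2 = -1/x^2 for f = log x."""
    mu = r * x * (1 - x / K)    # drift of X under the SLGM
    diff = sigma * x            # diffusion coefficient of X
    return mu / x + 0.5 * diff ** 2 * (-1.0 / x ** 2)
```

Evaluating this agrees with the stated drift $r-\tfrac{1}{2}\sigma^2-\tfrac{r}{K}e^{Y_t}$ once $e^{Y_t}$ is identified with $x$.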
The log transformation from multiplicative to additive noise gives a constant diffusion term, so that the LNA gives a good approximation to (\ref{eq_det_sde}).
The LNA reduces a non-linear SDE to a linear SDE with additive noise.
The LNA can be viewed as a first order Taylor expansion of an approximating SDE about a deterministic solution.
We now separate the process $Y_t$ into a deterministic part $v_t$ and a stochastic part $Z_t$ so that $Y_t=v_t+Z_t$ and consequently $dY_t=dv_t+dZ_t$.
We choose $v_t$ to be the solution of the deterministic part of (\ref{eq:SDE2}):
\begin{equation}\label{eq:SDEV}
dv_t=\left(r-\frac{1}{2}\sigma^2-\frac{r}{K}e^{v_t}\right)dt.
\end{equation}
We now redefine our notation as follows: $a=r-\frac{\sigma^2}{2}$ and $b=\frac{r}{K}$.
Equation~\ref{eq:SDEV} is then solved for $v_t$:
\begin{equation}\label{eq:LNAM_det_sol}
v_t=\log\left(\frac{aPe^{aT}}{bP(e^{aT}-1)+a}\right),
\end{equation}
where $T=t-t_0$. We now write down an expression for $dZ_t$, where $dZ_t=dY_t-dv_t$:
\begin{equation*}
dZ_t=\left(a-be^{Y_t}\right)dt+\sigma dW_t-\left(a-be^{v_t}\right)dt.
\end{equation*}
We then substitute in $Y_t=v_t+Z_t$ and simplify the expression to give
\begin{equation} \label{eq:zero_start}
dZ_t=b\left(e^{v_t}-e^{v_t+Z_t}\right)dt+\sigma dW_t.
\end{equation}
As the SDE for $Z_t$ is non-linear it cannot be solved explicitly, so we use the LNA (see Section~\ref{lit:LNA}) to obtain a linear SDE that we can solve explicitly.
We apply the LNA by making a first-order approximation of $e^{Z_t}\approx 1+Z_t$ and then simplify to give
\begin{equation}\label{eq:LNAM_dz}
dZ_t=-be^{v_t}Z_tdt+\sigma dW_t.
\end{equation}
This process is a particular case of the time-varying Ornstein--Uhlenbeck process, which can be solved explicitly.
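A standard integrating-factor argument sketches why: writing $\beta_s=be^{v_s}$ for the time-varying mean-reversion rate, the solution of (\ref{eq:LNAM_dz}) over $[t_{i-1},t_i]$ is
\begin{align*}
Z_{t_i}=\exp\left(-\int_{t_{i-1}}^{t_i}\beta_s\,ds\right)z_{t_{i-1}}+\sigma\int_{t_{i-1}}^{t_i}\exp\left(-\int_{s}^{t_i}\beta_u\,du\right)dW_s,
\end{align*}
so $Z_{t_i}$ given $z_{t_{i-1}}$ is Gaussian with mean $\exp\left(-\int_{t_{i-1}}^{t_i}\beta_s\,ds\right)z_{t_{i-1}}$ and variance $\sigma^2\int_{t_{i-1}}^{t_i}\exp\left(-2\int_{s}^{t_i}\beta_u\,du\right)ds$.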
The transition density for $Y_t$ (derivation in Appendix~\ref{app:LNAM_sol}) is then:
\begin{align}
\begin{split}\label{eq:LNAM_tran}
(Y_{t_i}|Y_{t_{i-1}}&=y_{t_{i-1}})\sim\operatorname{N}\left(\mu_{t_i},\Xi_{t_i}\right),\\
\text{where } y_{t_{i-1}}&=v_{t_{i-1}}+z_{t_{i-1}},\quad Q=\left(\frac{a}{bP}-1\right)e^{at_{0}},\\
\mu_{t_i}&=y_{t_{i-1}}+\log\left(\frac{1+Qe^{-at_{i-1}}}{1+Qe^{-at_i}}\right)+e^{-a(t_i-t_{i-1})}\frac{1+Qe^{-at_{i-1}}}{1+Qe^{-at_i}}z_{t_{i-1}} \text{ and}\\
\Xi_{t_i}&=\sigma^2\left[\frac{4Q(e^{at_i}-e^{at_{i-1}})+e^{2at_i}-e^{2at_{i-1}}+2aQ^2(t_i-t_{i-1})}{2a(Q+e^{at_i})^2}\right].
\end{split}
\end{align}
The LNA of the SLGM with multiplicative intrinsic noise (LNAM) can then be written as
\begin{align*}
d\log X_t=dv_t+be^{v_t}\left(v_t-\log X_t\right)dt+\sigma dW_t,
\end{align*}
where $P=X_{t_0}$ is independent of $W_t$ and $t\geq t_0$.
\\
Note that the RRTR given in (\ref{eq_sde}) can be similarly derived using a zero-order noise approximation ($e^{Z_t}\approx 1$) instead of the LNA.
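As a quick numerical sanity check of the expressions above (not part of the derivation; parameter values are arbitrary), the closed form (\ref{eq:LNAM_det_sol}) can be compared with a direct Euler integration of (\ref{eq:SDEV}), and over a short interval the variance $\Xi_{t_i}$ in (\ref{eq:LNAM_tran}) should be close to $\sigma^2(t_i-t_{i-1})$:

```python
import numpy as np

# Arbitrary illustrative parameter values: r = 0.5, sigma = 0.1, K = 10, P = 0.5.
r, sigma, K, P, t0 = 0.5, 0.1, 10.0, 0.5, 0.0
a, b = r - 0.5 * sigma**2, r / K
Q = (a / (b * P) - 1.0) * np.exp(a * t0)

def v_closed(t):
    """Closed-form solution (eq:LNAM_det_sol) of dv = (a - b exp(v)) dt, v(t0) = log(P)."""
    T = t - t0
    return np.log(a * P * np.exp(a * T) / (b * P * (np.exp(a * T) - 1.0) + a))

def v_euler(t, n=200000):
    """Direct Euler integration of dv/dt = a - b exp(v) for comparison."""
    v, h = np.log(P), (t - t0) / n
    for _ in range(n):
        v += h * (a - b * np.exp(v))
    return v

def xi(t_prev, t_cur):
    """Conditional variance Xi from eq:LNAM_tran."""
    num = (4.0 * Q * (np.exp(a * t_cur) - np.exp(a * t_prev))
           + np.exp(2.0 * a * t_cur) - np.exp(2.0 * a * t_prev)
           + 2.0 * a * Q**2 * (t_cur - t_prev))
    return sigma**2 * num / (2.0 * a * (Q + np.exp(a * t_cur))**2)

print(abs(v_closed(2.0) - v_euler(2.0)) < 1e-4)                      # closed form matches ODE
print(np.isclose(xi(1.0, 1.0 + 1e-4), sigma**2 * 1e-4, rtol=1e-2))   # short-interval variance
```

Both checks return \texttt{True}, confirming the deterministic solution and the short-interval behaviour of $\Xi_{t_i}$.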
\section{\label{sec:roman}The \cite{roman} diffusion process}
\cite{roman} present a logistic growth diffusion process (RRTR) whose transition density can be written explicitly, allowing inference for model parameter values from discretely sampled trajectories.
\\
The RRTR is derived from the following ODE:
\begin{align}
\label{eq_ode}
\frac{dx_t}{dt}&=\frac{Qr}{e^{rt}+Q}x_t,
\end{align}
where $Q=\left(\frac{K}{P}-1\right)e^{rt_0}$, $P=x_{t_0}$ and $t\geq t_0$.
The solution to (\ref{eq_ode}) is given in (\ref{eq:logistic}); it coincides with the solution of (\ref{eq_det}).
\cite{roman} view (\ref{eq_ode}) as a generalisation of the Malthusian growth model with a deterministic, time-dependent fertility $h(t)=\frac{Qr}{e^{rt}+Q}$, and replace this fertility with $\frac{Qr}{e^{rt}+Q}+\sigma\xi_t$, where $\xi_t$ is Gaussian white noise (formally $\xi_t\,dt=dW_t$), to obtain the following approximation to the SLGM:
\begin{align}
\label{eq_sde}
dX_t&=\frac{Qr}{e^{r{t}}+Q}X_td{t}+{\sigma}X_tdW_t,
\end{align}
where $Q=\left(\frac{K}{P}-1\right)e^{rt_0}$, $P=X_{t_0}$ is independent of $W_t$, and $t\geq t_0$.
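The deterministic skeleton of this construction can be checked numerically: the logistic solution of (\ref{eq_ode}), written via $Q$ as $x_t=\frac{K}{1+Qe^{-rt}}$, has log-derivative equal to the fertility $h(t)$. A minimal sketch, with arbitrary parameter values:

```python
import numpy as np

# Arbitrary illustrative values.
r, K, P, t0 = 0.5, 10.0, 0.5, 0.0
Q = (K / P - 1.0) * np.exp(r * t0)

def x(t):
    """Logistic solution of (eq_ode) with x(t0) = P, written via Q."""
    return K / (1.0 + Q * np.exp(-r * t))

def h(t):
    """Time-dependent fertility h(t) = Q r / (exp(r t) + Q)."""
    return Q * r / (np.exp(r * t) + Q)

# The initial condition holds, and dx/dt (central difference) matches h(t) * x(t).
t, eps = 2.0, 1e-6
dxdt = (x(t + eps) - x(t - eps)) / (2.0 * eps)
print(np.isclose(x(t0), P))                       # True
print(np.isclose(dxdt, h(t) * x(t), rtol=1e-6))   # True
```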
The process described in (\ref{eq_sde}) is a particular case of the Log-normal process with exogenous factors, therefore an exact transition density is available \citep{gutierrez}.
The transition density for $Y_t$, where $Y_t=\log(X_t)$, can be written:
\begin{align}
\begin{split}\label{eq:RRTR_tran}
(Y_{t_i}|Y_{t_{i-1}}&=y_{t_{i-1}})\sim\operatorname{N}\left(\mu_{t_i},\Xi_{t_i}\right),\\
\text{where } a&=r,\quad b=\frac{r}{K},\quad Q=\left(\frac{K}{P}-1\right)e^{at_0},\\
\mu_{t_i}&=y_{t_{i-1}}+\log\left(\frac{1+Qe^{-at_{i-1}}}{1+Qe^{-at_i}}\right)-\frac{\sigma^2}{2}(t_i-t_{i-1}) \text{ and}\\
\Xi_{t_i}&=\sigma^2(t_i-t_{i-1}).
\end{split}
\end{align}
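As a numerical cross-check (parameter values arbitrary), the exact conditional mean of $Y_{t_i}$ in log space is $y_{t_{i-1}}+\int_{t_{i-1}}^{t_i}h(s)\,ds-\frac{\sigma^2}{2}(t_i-t_{i-1})$, and the integral of the fertility has the closed form $\log\!\big[(1+Qe^{-rt_{i-1}})/(1+Qe^{-rt_i})\big]$; a sketch comparing quadrature with the closed form:

```python
import numpy as np

# Arbitrary illustrative values.
r, K, P, sigma, t0 = 0.5, 10.0, 0.5, 0.1, 0.0
Q = (K / P - 1.0) * np.exp(r * t0)
t_prev, t_cur = 1.0, 2.0

# Trapezoidal quadrature of the fertility h(s) = Q r / (exp(r s) + Q).
s = np.linspace(t_prev, t_cur, 200001)
hs = Q * r / (np.exp(r * s) + Q)
integral = np.sum(hs[:-1] + hs[1:]) * (s[1] - s[0]) / 2.0

# Closed-form value of the same integral.
closed = np.log((1.0 + Q * np.exp(-r * t_prev)) / (1.0 + Q * np.exp(-r * t_cur)))
print(np.isclose(integral, closed, atol=1e-8))  # True

# Conditional mean and variance of Y_{t_cur} given y_prev = log(P).
y_prev = np.log(P)
mu = y_prev + closed - 0.5 * sigma**2 * (t_cur - t_prev)
xi = sigma**2 * (t_cur - t_prev)
```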
|